Maya Ganesh

Contact

Maya Ganesh
Am Sande 5
21335 Lüneburg
lg069809@stud.leuphana.de

Maya works at the intersection of digital technologies, human rights, visual advocacy, cybersecurity and privacy as a researcher, writer and information-activist. She spent close to eight years at Tactical Technology Collective and left in late 2017 to focus on her doctoral work at Leuphana University investigating machine learning, ethics and accountability. She writes for Cyborgology, a technology and theory blog.

Research Project

Tests, testing and standards of machine intelligence

My research is a socio-technical investigation of how tests, testing, and standards of machine intelligence (or 'artificial intelligence', AI) relate to imaginations and practices of ethics and accountability in law, in computer science, and in the public discourse around AI. Taking the case of driverless cars and autonomous driving, my work responds to the question: "How do tests and testing of machine intelligence, and the development of measures and standards, enable the discursive, material and protocological regulation of these intelligences in automated decision-making contexts?" I develop this through three empirical sites: the mainstream and academic discourse on ethics tests as applied to machine learning in driverless car technology; an analysis of the material and epistemological outcomes of initiatives around 'algorithmic accountability'; and participation in standards-setting for algorithmic regulation.

Machine intelligence has been known through tests such as the Turing Test, and through proxies for human intelligence such as games of chess, Go, and Jeopardy; now, machine intelligence is supposed to drive a car. What do such tests imply for how similarities and differences between human and machine are produced, and what do they mean for how machines are held to account for the decisions they make?

The thought experiment known as the Trolley Problem has become a popular way to introduce the 'ethics of autonomous driving'; derivatives such as the Lin Problem (2013) and the Moral Machine project can be found amidst a growing body of legal and computer science literature. In these formulations, the question is framed as 'Is there an ethics of autonomous driving?' or 'Can ethics be programmed into driverless cars?' In material terms, 'programming ethics' comes down to verifying that machine learning can distinguish and assess different objects on the road and decide how to respond to an object in the case of a potential accident. And while these tests may not be entirely reliable or valid tests of automated decision-making in driverless cars, they have become powerful imaginaries shaping discourses about ethics and autonomous vehicles, including among engineers and auto manufacturers.

I read tests of machine learning and machine intelligence as Baradian apparatuses (2007). For Barad, the apparatus is not merely a laboratory device that measures things at human scale; it is a material-discursive practice that produces 'subjects' and 'objects' and engages in boundary-making, and apparatuses "are themselves phenomena" with "no distinct boundaries but are open ended practices" (p. 146). The test-as-apparatus is itself a construction, inextricably shaped by the very thing it seeks to measure. During my time at MECS I will unpack how tests work as apparatuses, and the scales they produce of what ethics, human intelligence, and machine intelligence are.

Barad, K. (2007) Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham and London: Duke University Press.

Lin, P. (2013) The ethics of autonomous cars. The Atlantic. www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/ Retrieved December 20, 2016.

Rahwan, I. (2016) The Social Dilemma of Driverless Cars. TEDxCambridge. moralmachine.mit.edu; www.youtube.com/watch Retrieved November 14, 2017.


Selected Publications