Artificial Paranoia

Today, information is filtered out of an ever-increasing data stream, and the algorithms required for this operation are not only increasingly automated and self-learning, but also largely hidden. In my research project, I take a closer look at the «sub-medial space» (Groys 2012), which is inhabited by a whole range of algorithmically generated and networked agents. The question «Who speaks?», central to the analysis of paranoia (cf. Lacan 2000), becomes paramount in a time of racist bots (Microsoft’s Tay), uncanny systems (Amazon’s Echo) and dreaming networks (Google’s Inceptionism). To better understand the technological condition of our networked world, I take up the idea of a «psychoanalysis of things» (Sartre 2001, 765) and apply it to Artificial Intelligence (A.I.), a field currently being rearranged around artificial neural networks. Rather than relying on a symbolic representation of the world, neural-network models simulate intelligence on digital computers and thereby generate an A.I. with its very own technological unconscious. Paranoia as an «information-processing technique» (Chun 2006, 257) therefore offers an analytical tool for focusing on the new aesthetics, epistemologies and politics by which a network of artificial neurons learns to discriminate (i.e. to filter) patterns in a deluge of data (cf. Apprich et al. 2019).


  • Prof. Dr. Clemens Apprich