Fellows, Summer Semester 2019

Rupert Gaderer

Shitstorm. Agitation, Simulation, and Conflict in Digital Lifeworlds

From a media and cultural studies perspective, the research project investigates the simulation and virtuality of digital agitation. Three problem areas stand at the center of the project. First, there is the question of the sociotechnical infrastructures of the digital simulation of conflict. Relevant here are network dispositifs that give rise to a sociotechnical assemblage and design virtual forums for the negotiation of difference. The second area concerns the computer history of digital conflicts, namely those hard-to-control disturbances that came to be called ›flame wars‹ in the 1970s. They were observed in mailing lists, newsgroups, and discussion forums and already then constituted a disturbance immanent to text-based communication. The third problem area touches on the simulation of the simulation of agitation. Since the 2010s, several consulting agencies have offered so-called »shitstorm simulators« that make it possible to test crisis scenarios of digital outrage. Corporations, political parties, and figures of celebrity culture deploy these simulations in order to observe group-psychological effects and to intensify the preventive crisis work of the entrepreneurial self. Under these three aspects, the project examines digital scenarios of agitation and conflict in light of current media and cultural theories of computer simulation.

Benjamin Peters

The Computer is Not a Brain: How Smart Tech Lost the Cold War, Outsmarted the West, and Risks Ruining an Intelligent World

My plan this summer is to complete a draft of the scholarly book project tentatively titled The Computer is Not a Brain: How Smart Tech Lost the Cold War, Outsmarted the West, and Risks Ruining an Intelligent World. This project will be the first nonfiction book to describe for a general scholarly audience how and why “smart media” have made idiots out of the West in the literal Greek sense of private persons. This history of smart media argues that the industrialized West has “smarts” upside down. It charts the rise and international diffusion of the cold war military research that ushered in “smart computing technology,” which attributed success to an individual’s capacity to outsmart another—the consequences of which have shipwrecked both our current media and natural environments.

Drawing on previously uncovered archival and scholarly sources in America, northern Europe, and the former Soviet Union, it charts a twentieth-century history of how research and public talk about intelligence and machinery have mistakenly translated the vices of yesterday’s cold war rationalist—strategic self-interest, networked nations, mental machismo, and even (trickily) open-mindedness—into the virtues of today’s online personas. It asks: What does the computer-brain analogy reveal about its maker, and why was the human brain held up as a model for computer processing (how did the “i” get in the iPhone)? How has “smart”—a near-cognate for the German for “pain” (Schmerz, as in “Ouch, that smarts”)—come to decorate our prized smartphones, smart cars, smart cities, smart algorithms, etc., and in turn shape the dreams and fears of modern life, and at what cost? This timely history and analysis of how the industrialized West has fashioned smart media brings to light timeless historical insight central to the humanities—about our changing sense of the mortal self, cooperative and competitive intelligences, and the sources of our current global environmental crises. This same narrative also lays a foundation for detoxifying our media environment and rebuilding a more humane future for our often foolishly smart species.

This book lays out a unique, longer narrative on-ramp to the broad general interest in “smart tech”—artificial intelligence, machine learning, learning algorithms, and feminist criticism of Silicon Valley’s toxic “brogrammer” culture. The project draws together diverse scholarly resources to backlight a global stage for the smart media drama, with scripts predating the cold war. In the process, it challenges the received history of technologized individual intelligence (the private brain, the talking head, the serial processor) by showing that it emerged out of the collaboration of research groups. In all, it aims to show how digital media became so smart and at once so toxic, while also reclaiming a foundation for a more humane and intelligent media environment.

Martin Woesler

Society 5.0 through China's "Digital System for Society-Management" and Its Computer-Simulation Aspects

The Chinese government is currently setting up a software system to govern the nation (the Digital System for Society-Management, DSSM, officially called “social management”).

This project compares the DSSM structurally with the old socialist system of the planned economy, which failed under really existing socialism, thereby targeting the difference between planning and simulating. It sketches the computer-simulative aspects of the program and makes Chinese sources accessible that have so far been unavailable to non-Chinese speakers. Simulation is also read as a narrative strategy, and in this way the project contributes to the duality of simulation and fiction. The investigation includes current Chinese science fiction writing, e.g. by Liu Cixin and Hao Jingfang.

The project identifies factors behind that failure (e.g. a mentality of plan fulfillment, sugarcoated figures). It discusses, speculatively, the chances of the DSSM succeeding and the possible consequences, both domestically and internationally. The main resource of the coming information economy is data. The project compares this societal system with Western systems and asks how far Western data companies will buy data from China and thereby support the Chinese system. It also asks how far Western societies may orient themselves towards the Chinese model.

Preliminary research shows that the DSSM consists of:

  1. a surveillance system and central data collection, covering movement profiles, identity recognition, payments, and communication (preferences, wishes, dreams, values, and ideally thoughts – brain-scanner experiments have started),
  2. algorithmic and big-data analysis,
  3. an information system to inform (and manipulate) citizens,
  4. a motivating system (subliminal advertisement, Sesame Credit points for regime loyalty), and
  5. a sanctioning system, including automatic censorship, detention, forced confessions/gag orders, and the death penalty (estimates of up to 10,000 executions per year).

The DSSM contains several simulations, especially to predict developments in the imminent future. It is a conscious advancement of the concept of “Industry 4.0” (or, in the field of communication, “Digitalization 4.0”; its economic component is called “Made in China 2025”), following the media epochs of oral communication (1.0), writing (2.0), and book print (3.0) (cf. Luhmann, Baecker). It has the following characteristics:

  1. it is artificially intelligent,
  2. it is (ideally) completely automatic: decisions are made by algorithms based mostly on correlations, less on causes,
  3. it optimizes itself through learning,
  4. it communicates with users only indirectly (the system works best if unknown to the user; e.g., illness probabilities are only discovered through correlation, not communicated), and
  5. it relies on non-explicit man-machine and machine-machine communication.

The legal framework in China and the centralization under party control accommodate massive data collection. The DSSM is supported by education (including ideological warfare/propaganda) and by the guidance of citizens from preschool to death, with 10 percent of school classes, university courses, and working time (at Party schools even for non-party members, starting from the rank of dean) devoted to ideological indoctrination. Citizens are externally controlled by digital means through pedagogy, psychological pressure, and group dynamics (patriotism, competition in collecting points).

Shane Denson

Discorrelated Images

My current book project, Discorrelated Images, explores the transitional spacetime between cinema and post-cinema. More precisely, it probes the transformational temporal and spatial articulations of contemporary moving images and our perceptual, actional, and affective interfaces with them as they migrate from conventional forms of cinema and enter the computational systems that now encompass every aspect of audiovisual mediation. While the generation, composition, distribution, and playback of images increasingly become a matter of algorithms, software, networks, and codecs, our sensory ratios (as McLuhan called them) are being reordered, our perceptual faculties are being reformed (or re-formed) in accordance with the new speeds and scales of imaging processes. In a post-cinematic media regime, that is, both the subjects and the objects of perception are radically transformed. Older relations—such as that between a human subject and a photographically fixed object—are dissolving, and new relations are being forged in the microtemporal intervals of algorithmic processing. With the new objects of computational images emerge new subjectivities, new affects, and uncertain potentials for perception and action.

At the heart of these transformations lie the generative dynamics of high-speed (often “real-time”) feedback and feed-forward processes, which introduce (and modulate) new contingencies at the heart of post-cinematic mediation. We glimpse such processes in digital glitches, for example, which derail perception and inject the microtemporal misfirings of the computer into our subjective awareness. The underlying contingencies, however, are beyond the purview of subjective perception; the algorithms and hardware operations responsible for the glitch are fundamentally “discorrelated” from phenomenological processes of noetic intentionality. Moreover, the glitch reveals a more general instability attaching to computationally mediated images, which are highly volatile and always in danger of dissolution. Processed on the fly in an interval that is inaccessible to human perception, the images that populate our world are themselves discorrelated from human subjectivity. Nevertheless, various forms and manifestations of contemporary audiovisual media mediate to us these processes, providing sensory complements to sub-perceptual events, helping us in a sense to negotiate the transition to a truly posthuman, post-perceptual media regime. These mediations and negotiations are the core focus of the book project.

Jeremiah Lasquety-Reyes

Simulating Ockham's Philosophy

This project investigates the potential of computer simulation to model the ideas of the philosopher William of Ockham. Ockham is most famous for "Ockham's razor," a principle of parsimony often invoked in scientific theorizing. However, he is also famous for his nominalism or conceptualism, the metaphysical position that insists there are no real universals in the world but only singulars (in contrast to the position dominant in his time). He also developed an original cognitive theory of “mental language” that serves as the foundation for written and spoken language. This project attempts to use current resources in machine learning and computer simulation (specifically, agent-based modeling) to represent Ockham’s ontology and psychology, and in the process to explore how computer simulations can help facilitate the understanding of philosophical ideas.

Ockham conceived of reality as made up only of singulars. Each tree, for example, is as different from another tree as a tree is from a cat or a man. There is nothing in reality that trees share that gives them all their ‘tree-ness.’ Rather, what makes them all trees is a pre-linguistic concept in our minds of TREE. We acquire this concept through the actual encounter with individual trees, and this concept signifies all the trees in the world. The project will simulate this ontology using a virtual world populated with unique singular objects and agents equipped with algorithms that convert the cognition of these singulars into universal concepts. Though the primary mechanism is simple, numerous elements in Ockham’s texts pose challenges for simulation. For example, he makes a distinction, common to Aristotelians, between the substance, which is the object per se (for example, the substance of a man), and the accidents that inhere in the substance (for example, being blue-eyed, left-handed, etc.).
Upon encountering a unique singular in the world, one derives not only the concept of the substance but also the concepts of all the accidents that inhere in it. In addition, these accidents taken together allow us to recognize substances as particular individuals (for example, this man is Socrates because of his bald head, his beard, etc.). What is the best way to implement all these factors in a computer simulation? The second stage of the project focuses on Ockham’s psychology of “mental language” as it applies to memory, recognition, and human communication. Concepts are combined and stored in memory as mental propositions, which are structured similarly to ordinary sentences. These mental propositions are used by a person in activities such as recognizing objects encountered before and, more importantly, in human communication. What happens in the mind when we recognize objects or when we learn new things from other people? In order to simulate this aspect of Ockham’s thought, the project will draw on Ockham’s Summa Logicae in conversation with contemporary philosophy of language.
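As an illustration of how such an agent-based model might be set up, the following is a minimal sketch (all class and function names here are hypothetical, not drawn from the project): agents encounter unique singulars, abstract a universal concept from each substance and each accident, and re-identify individuals by their bundle of accidents.

```python
from dataclasses import dataclass

@dataclass
class Singular:
    """A unique individual: only singulars exist in Ockham's ontology."""
    substance: str        # e.g. "tree", "man"
    accidents: frozenset  # e.g. {"bald", "bearded"} -- properties inhering in it

class Agent:
    """Abstracts universal concepts from encounters with singulars."""
    def __init__(self):
        self.concepts = {}  # concept -> number of encounters grounding it
        self.memory = []    # remembered (accident-bundle, substance) pairs

    def encounter(self, s: Singular):
        # Cognizing a singular yields the substance concept and one
        # concept per accident; the concept signifies all its instances.
        for c in {s.substance} | s.accidents:
            self.concepts[c] = self.concepts.get(c, 0) + 1
        self.memory.append((s.accidents, s.substance))

    def recognizes(self, s: Singular) -> bool:
        # Recognition: the accident bundle identifies the individual.
        return any(acc == s.accidents for acc, _ in self.memory)

world = [Singular("tree", frozenset({"tall"})),
         Singular("tree", frozenset({"short"})),
         Singular("man", frozenset({"bald", "bearded"}))]
agent = Agent()
for s in world:
    agent.encounter(s)

print(agent.concepts["tree"])        # 2 -- one concept TREE, grounded in both trees
print(agent.recognizes(world[2]))    # True -- bald + bearded picks out this man
```

The design choice mirrors the nominalist claim: there is no `Tree` class shared by the objects, only a concept in the agent's mind produced by repeated encounters with singulars.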

Fabrizio Li Vigni

Governing through Participatory Scenarios: The Case of Companion Modeling

This project seeks to understand governance by scenarios (Granjou & Mauz, 2011) through the study of Companion Modeling (Collectif ComMod, 2005). Founded by a group of French scientists from CIRAD, ComMod is a techno-political promise that offers to bring more democracy and justice to the management of public goods. It involves researchers, citizens, and decision makers, and it mobilizes sophisticated techno-scientific devices, called agent-based models (ABM), for urban planning and natural-resource management. The project will lean on two methodological approaches: a scientometric one, studying ABM sub-communities with the CorTexT software, and a sociological one, grounded in the ethnographic study of a ComMod project concerning a participatory simulation of mobility in the Île-de-France region for a car-free city. The proposal analyzes this case with a double theoretical eye, informed by Science & Technology Studies and pragmatist sociologies.

Keywords: Futures; Participation; Companion Modeling; Agent-based Models; Urbanism; Computational Sciences; Science & Technology Studies; Pragmatist sociologies.

Pablo Schneider

The Good Neighborhood of Images – Simulative Reconstruction and Historical Pictorial Evidence

An investigation of an exemplary pictorial theme of the KBW using the HyperImage software.

In 1928 and 1929, the Hamburg scholar of art, culture, and images Aby Warburg pushed forward the work on his planned picture atlas. Under the title Mnemosyne Atlas, central themes of his research, such as the processes of a visual afterlife and the significance of the pathos formula, were to be made accessible. For this he chose the medium of picture panels, which were to develop their arguments primarily from the visible evidence. Three versions of this project have been documented, of which only the last has so far been published.

The detailed description of panel 43 of the first series, which deals with the early modern afterlife of the theme of the conspiracy of Claudius Civilis, was able to work out structural aspects. It can be assumed that the panels were guided by the idea of functioning as an overall image. Starting from this point, compositional elements such as proportions, the spacing between individual images, and recurring details within the individual images proved to be important elements of insight that had not previously been described as such in the scholarship. These observations, gained from an art-historical and image-studies perspective, present a substantive as well as a technical challenge for the digital humanities. Following the text-based analysis, the plan is now to realize forms of digital visual argumentation using the HyperImage software and to make them accessible to the scholarly public.

The objective is the simulative reconstruction of the historical pictorial evidence condensed in the concept of good neighborhoods, making it tangible by means of the HyperImage software on the basis of a selected pictorial theme of the KBW.

Sarine Waltenspül

Cinematographic Model Worlds as Analog Simulations

In 1924, Joseph A. Ball, physicist and engineer at Technicolor, suggested the following formulas to calculate the exact frame rate when filming miniatures: »If f, m, l and t are the symbols representing the fundamental quantities force, mass, length and time as they are in the model, and if f’, m’, l’, and t’ are the corresponding quantities in the imaginary world on the screen, we can write the fundamental dimensional equations: f = ml/t² and f’ = m’l’/t’²«. (Ball 1924, 120) The formulas had actually been developed in the course of early model experiments in fluid dynamics in the 19th century. Originally, they served to translate calculations from model experiments – or simulations – to full-scale objects like ships or airplanes. Ball thus transferred a scaling technique from physics to the field of cinematography in order to visually scale up models. In the case of dynamic models – i.e. in combination with fluids like water or air – scaling is a challenge in physics as well as in film production, irrespective of whether models, computer simulations, or a combination of both are used. The interfaces between ‘analog’ and ‘digital’ scaling, however, are not just the problems that arise in dealing with them but also the corresponding solutions. For it is not only when working with material cinematographic models that methods from fluid dynamics, like these formulas, are used; they also form the basis for the dynamic simulations used in computer-generated imagery (CGI).
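The practical upshot of such dimensional equations can be sketched in a few lines. Assuming gravity-driven motion (so that time scales with the square root of length), a 1/n-scale miniature must be overcranked by a factor of √n for the motion to read as full-scale on playback. The function below is an illustrative sketch of this standard rule, not Ball's own notation.

```python
import math

def miniature_frame_rate(base_fps: float, scale: float) -> float:
    """Frame rate for filming a 1/scale miniature so that gravity-driven
    motion (water, falling debris) appears full-scale on playback.

    With gravity held fixed, dimensional analysis gives t ~ sqrt(l):
    time in the model runs sqrt(scale) times faster than at full scale,
    so the camera must be overcranked by that same factor.
    """
    return base_fps * math.sqrt(scale)

# A 1/16-scale model ship intended for 24 fps playback:
print(miniature_frame_rate(24, 16))  # 96.0 fps
```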
My dissertation, which I am preparing for publication during my fellowship at MECS, examines this issue and other cinematographic scaling techniques at the intersection of materiality and mediality, the analog and digital, illusion and intended fractures, as well as the techniques, dispositifs and contemporary aesthetics that were developed in the course of the work with scale models.

Daniela Zetti

How Computers Write History

The digital age is marked by the negotiation of non-deterministic narrations and figurations, such as those of “the user,” “the simulation,” or “the computer generation.” The history of digital societies simply cannot be pinned down to founding fathers, obligatory places, or defining moments. Epistemologies of computer simulation and the question of “how the world got into the computer” (David Gugerli) structure the field of investigation: the emergence of digital reality.

The publication project undertakes an interdisciplinary and methodologically informed stocktaking, with the aim of opening up a historical-methodological perspective on the question of how research findings on the history of the digital age can be presented. The projected edited volume is at once problem- and results-oriented: it discusses the instruments and concepts of the humanities in light of the findings of a history of technology informed by media studies and, conversely, of a media studies informed by the history of technology.