Supercomputing

Media Cultures Between Parallelism and Scalability

When Seymour Cray presented the Cray-1 to the public in 1976, a black monolith traversed by red illuminated surfaces suddenly became the icon of the computer and high-tech age; the machine also marked the beginning of a rapid development in which an avant-garde of ever-faster supercomputers constantly surpassed one another with higher clock speeds. Our project investigates the field of supercomputing from a media and cultural studies perspective under two guiding aspects. First, we examine the media-technological history of supercomputing both historically and systematically. Such a media history can be grouped around and theoretically grasped through one central concept: parallelism is not only the principle that describes the hardware architectures of supercomputers; its organization at the level of programming languages and software development is also at once a guiding paradigm and a central problem.
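To illustrate what it means for parallelism to be at once a guiding paradigm and a central problem at the level of programming languages, the following minimal sketch in C with OpenMP may be helpful (OpenMP is one common shared-memory programming model in HPC; the example is purely hypothetical and not drawn from the project's sources): a single directive suffices to distribute a loop across processor cores, yet as soon as the iterations share state, the programmer must explicitly organize that sharing, here via a reduction clause, or the result becomes non-deterministic.

    /* Hypothetical illustration: parallelism as paradigm and as problem.
     * Compile with: gcc -fopenmp sum.c -o sum
     */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        for (int i = 0; i < N; i++)
            a[i] = 1.0;

        double sum = 0.0;

        /* The paradigm: one directive distributes the loop across cores.
         * The problem: without the reduction clause, all threads would
         * update 'sum' concurrently and the result would be unreliable. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f (max threads: %d)\n", sum, omp_get_max_threads());
        return 0;
    }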

Second, our project gauges the epistemic effects of supercomputing by examining, on the basis of its application contexts, new knowledge formations that have become addressable only through high-performance computing (HPC). These effects must always be regarded in connection with the socioeconomic and sociopolitical contexts in which and for which supercomputing is applied. In our view, the concepts of scaling and, again, parallelism are pivotal for an epistemology of supercomputing. And while microelectronic miniaturization, understood as the withdrawal of tangibility and the distancing of the structures that determine existence (Kittler 1993; Siegert 2004; Dotzler 2006), marks a starting point of reflection in media studies, the epistemic and governmental implications of escalating and parallelizing computing capacities have not yet been interrogated (cf. also Pias 2011).

Using the media-material example of supercomputing, the central concepts of parallelism and scaling can be dovetailed and mutually informed on different levels: on the one hand, on the level of hardware proper, where problems of size become essential, from processors through the organization of storage and networking to energy and cooling topologies; on the other, on the level of operating and file systems, programming languages, and user interfaces, where the problems of size and scale have a discourse-generating effect. Finally, in HPC the problems and the approaches to them, as well as occasionally the attributions of responsibility, are organized via discourses on scale: large amounts of data (big data) can only be handled by supercomputers operating in parallel, and the processing of these data enables novel modes of cognition. The most extreme variant of these discourses is offered by Wired writer Chris Anderson, who claims that with big data, theory formation and scientific method have come to an end. Rather than assuming this postulated end of theory, however, we seek to demonstrate and reconstruct, in media-historical terms, the media-technological conditions of possibility of supercomputing on the basis of its genealogy and epistemology.