
Histories of Artificial Intelligence: A Genealogy of Power

 

Histories of Artificial Intelligence

Winter Symposium

14-15 December 2021

Homerton College, University of Cambridge


Building on twelve months of reading and discussion amongst the seminar participants, this symposium draws together a capacious range of geographic and temporal foci, methodological approaches, and kinds of histories. 

Four paper sessions will feature presentations of new research related to Revocations, Epistemologies, Ecologies, and Evolutions; broad themes that emerged in submissions to the Call for Papers for this event.

Four panel sessions will feature discussions between authors in our Special Issue for the British Journal for the History of Science (BJHS): Themes on Histories of AI, in preparation for publication. BJHS papers will be shared in advance for attendees to read and comment on (via Google Docs). Author panels will discuss connections between their papers and the aims of this Seminar, with the central aim of sharpening the final Themes publication.

Breakfast, lunch and dinner are provided at Homerton College. Speakers are asked to share their slides with organizers a week prior to the event.

 

14 DECEMBER 2021


9:00 - 10:45 - BJHS Special Issue Workshop 1 - Pre-1960s: Origins?

Chair: Richard Staley (University of Cambridge)

Sananda Sahoo (Western University)

        Multiple Lives of Mahalanobis’ Biometric Data Travel as Biovalue to India’s Welfare State   

Bo An (Yale University / Max Planck Institute for the History of Science)

        A Critical History of Early Artificial Intelligence in the People's Republic of China (1950s-1980s)  

Aaron Mendon-Plasek (Columbia University)

        How the idea of creativity in 1950s machine learning produces social and political possibility

Jonnie Penn (University of Cambridge)

        Animo Nullius: Locating AI’s Foundations in the Colonial Doctrine of Discovery

Flora Lysen (Maastricht University)

        Histories of AI in image-based medicine: diagnostic assemblages in radiology (1950s – present)  

10:45 - 11:15 - Tea

11:15 - 13:00 - BJHS Special Issue Workshop 2 - 1960-1970s: Interactions

Chair: Jonnie Penn (University of Cambridge)

Andrew Meade McGee (Carnegie Mellon University)

        Institutions and AI: Organizational Capacity, Structures of Power, and Sites of Discourse in the Development of Artificial Intelligence as a Field, 1960-2020

Rosamund Powell (University of Cambridge)

        Positivism, power and politics: an exploration of disingenuous rhetoric in 1970s social critiques of machine intelligence

Olessia Kirtchik (HSE University, Moscow)

        The Soviet experiences with AI: if a machine cannot “think”, can it “rule”?

Sam Schirvar (University of Pennsylvania)

        From Man-Machine to Human-Computer Interaction: Psychologists, Secretaries, and the Personal Computer at Xerox PARC, 1973-1983

13:00 - 14:00 - Lunch

14:00 - 15:45 - Revocations

Chair: Ranjini Raghavendra (Public Health Foundation of India)

Cheshta Arora (National Institute of Advanced Studies, Centre for Internet and Society)

        Of data, objects and machines: The making of an ML tool to mitigate online-gender based violence

The paper draws on the experience of participating in building a machine learning tool to detect and mitigate instances of online gender-based violence and hate speech in three Indian languages: Indian English, Tamil, and Hindi. The paper contends that the making of this tool involves far more than the small team of researchers, activists, and data scientists.

The immediate history of the present tool can be traced back to 2012, when, with the Nirbhaya rape case, we witnessed an upsurge in technological interventions addressing the problem of sexual violence, or to 2014, which announced the right-wing shift in national politics and normalized hate speech on social media, thereby proliferating interest in ML approaches to curb it.

The paper, however, traces this present moment back to earlier instances of engagement with data, objects, and machines in the history of the women's movement in India. It looks at the emergence of the postcolonial women’s movement in India in the 1970s and the success story of the Towards Equality report, the movement’s appropriation of the state machine and statistical knowledge that contributed to the postcolonial problematic of ‘women’, and its eventual encounter with the development machine that funded the NGOisation of the autonomous women’s movement in India in the 1990s. The narrative excavates the movement’s experiments with data, machines, and other objects (and the silences around these experiments that made and unmade its histories and stories) to narrate the making of an ML tool in the present.

While ‘appropriating’ technology continues to be central to the rhetorical strategies deployed by the team to explain itself to sceptics, narrating the history of the tool within the context of the women’s movement in India, and its experiments with data, objects, and machines, situates the story of this appropriation and its political viability.

Apolline Taillandier (University of Cambridge, Universität Bonn)

        AI in a different voice: rethinking computers, learning, and gender difference at MIT in the 1980s

This paper explores the “critical” AI projects developed around the Logo community at MIT in the mid-1980s. While a rich scholarship studies how programming and AI were made masculine, little has been said about those AI practitioners who drew on literary criticism and feminist epistemologies in the hope of overcoming the “technocentric stage of computer discourse” and undoing the gender hierarchies underlying computer cultures and programming experimental standards. At MIT, AI researcher Seymour Papert and sociologist Sherry Turkle argued that cognitive theories of AI and intelligent behavior as flexible, intuitive, and object- rather than task-oriented could help challenge the masculinist assumptions of formal AI and expert systems approaches, as well as the gendered division of labor in computer science. Taking inspiration from “emergent AI” and “mind as society” models, more than from the earlier philosophical critique of AI as instrumental reason by Dreyfus, Weizenbaum, and Hofstadter, they tied computer programming to Piaget’s theory of intellectual development, Keller’s critique of objectivity and dominant scientific epistemology, and Gilligan’s moral psychology. At a critical moment in the history of AI projects, but also in debates about the social and moral responsibility of machine intelligence scientists, they sought to shift the social perception of computers for fear of backlash. Intersecting political history, feminist theory, and the history of science, this chapter contributes to the “hidden history” of women and feminist activism in AI, to the material history of AI models and software, and to the history of AI as a human science located partly in the Harvard-MIT complex. This helps to historicize recent discussions of the whiteness and masculinity of algorithms, and to clarify the interweaving of discourses of gender difference with the sidelining of feminist agendas in the computing professions from the 1980s onwards.

Sanaa Khan (UC San Diego)

        Student Perceptions of Algorithmic Grading: Studying the Effects of the 2020 Ofqual A-Levels

Predictive analytics and algorithms have been shown to include racial bias in many different contexts, including education. Discrimination resulting from such systems in education only compounds existing issues of student equity. Most recently, this came to a head with the 2020 Ofqual grading algorithm for A-level exams, which left students in U.K. state schools, which often have higher numbers of BAME students, scored lower than students in private schools. The Ofqual algorithm was touted as a way to neutrally assign A-level scores in the light of pandemic precautions interrupting usual testing; instead it made students in lower-income and racialized communities vulnerable, putting their futures at risk. I use data collected from interviews conducted with students who received A-level scores by this method to posit that the use of this algorithm created additional precarity in racialized students’ lives. Findings indicate that the use of this technology exacerbated racial tensions that already exist in British schooling and led to greater uncertainty in students’ perceptions of their futures.

15:45 - 16:15 - Tea

16:15 - 18:00 - Epistemologies

Chair: Syed Mustafa Ali (The Open University)

Salwa Hoque (New York University)

        Law and Digitality: Tracing Modern Epistemologies and Power

Prevailing norms of modernization and secularization tend to idealize technologies and uncritically endorse the digitization of law and legal records without troubling to investigate the harmful effects on marginalized communities, especially in the Global South. To address this, I explore legal pluralism across state courts and non-state courts in Bangladesh. My first argument is that non-state courts in rural Bangladesh that practice informal law, which human rights and technochauvinist groups tend to regard as “too Islamic” and backwards, maintain practices that have more equitable outcomes for women in certain types of cases than formal, modern state courts. My second argument is that the digitization of law and legal records is not neutral; rather, it is closely tied to preexisting social biases, such as being generated in elite settings and centered solely on the modern rule of law, which exclude and/or distort the socio-legal realities and standpoints of women in rural Bangladesh. These biases and exclusions can generate automated results and outcomes that further harm, in unexpected ways, marginalized communities already living in the margins. Through the analysis of these two arguments, the goal is to demonstrate how the line between law and the digital sphere is blurred, and how the intersection of the two spheres is a site that connects older (neocolonialism, patriarchy, elitism) and newer (biased search engine results, selected catchwords/keywords, skewed AI judges) forms of power, which are enforced on marginalized communities such as rural women in Bangladesh, as well as on postcolonial states and the Global South more broadly.

Matteo Pasquinelli (University of Arts and Design Karlsruhe)

        The political epistemology of AI in between modern mechanical thinking and algorithmic thinking

What model of knowledge does the current form of AI (i.e. machine learning) represent? In the history of human civilizations, tools have always emerged together with a system of (technical) knowledge associated with them, but this aspect appears deeply confused in an artefact that is said to directly automate human “intelligence” and “learning.” This epistemological dimension, that is, the distinction between knowledge and tool, of course also exists in machine learning, as the distinction, for example, between programming languages and applications; but it seems to be continuously removed from the debate on AI, which remains fixated on an equation unique in the history of epistemology: machine = intelligence.

To criticize this assumption, the paper reads the idea of “machine intelligence” not as a novelty but as the latest stage in the history of algorithmic thinking, and this in turn as the confluence of the longer history of mechanical thinking with statistical thinking. Whereas the epistemology of mechanical and statistical thinking has flourished, the epistemology of machine learning is still fragmentary, and the paper attempts an overview of the different epistemological schools and methods that could help consolidate this field.

The paper is not concerned with technical knowledge per se but with the historical, economic, and social conditions that made it emerge, in particular the drive towards labour automation in industrial and post-industrial society. As a contribution to a historical and political epistemology of AI, the paper proposes to extend the analyses of Boris Hessen, Henryk Grossman and Peter Damerow on modern mechanical thinking to the algorithmic thinking of the 20th and 21st centuries.

Mazviita Chirimuuta (University of Edinburgh)

        Rules, Judgment, and Mechanisation

The question of mechanised judgment features in contemporary discussions about the “promise” and limitations of artificial intelligence amongst practitioners (Cantwell Smith 2019). This paper has two aims: first, to give an exposition of judgment as discussed in historical sources that oppose judgment to algorithmic rule following or data processing; second, to show how the development of AI has occurred within contexts that facilitate decision making independently of the need for judgment.

The paper begins with a discussion of Daston’s (forthcoming) work on the changing meaning of ‘rules’. She argues for the recent origin of the algorithmic notion of rules, i.e. ‘fool-proof’, step-by-step procedures. Daston’s “pre-modern rules” were models which could not be applied without an exercise of judgment over the fit between general rule and particular circumstances. This notion of judgment, which can be found in Kant’s discussion of general logic (1781/1787/1998: A133/B172–A134/B173), is contrasted with the account put forward by Cantwell Smith, in which ontological and ethical commitment are prerequisites for judgment, and judgment is contrasted with “reckoning” (proficiency in calculation). A common thread in these two accounts is that judgment is a capacity to navigate between the abstract and concrete particulars.

Claims for the self-sufficiency of algorithmic, rule-following intelligence have been important in historical arguments for the possibility of human-like AI (Turing 1950: 436). In Part 2 of the paper, drawing on the work of Collins and Kusch (1995), I argue that the appearance of self-sufficiency can only be maintained to the extent that designers of automated decision-making systems pre-determine the environment so that the computer does not have to exercise judgment over how rules are to be applied to particulars. Thus, the need for judgment is inversely related to the amount of stabilisation of the environment in which computers operate. I note that this stabilisation is part of the longer history of mechanisation which begins with the regularisation of human physical and mental labour (Schaffer 1994, Daston 2018).

18:00 - 18:30 - Pre-dinner Drinks

18:45 - Dinner at Homerton College

 

 

15 DECEMBER 2021


 

9:00 - 10:30 - BJHS Special Issue Workshop 3 - 1980-1990s: Disciplining ideologies

Chair: Richard Staley (University of Cambridge)

Michael Castelle (University of Warwick)

        Are Neural Networks Neoclassical? The Role of Economic Rationality in Artificial Intelligence

Luke Stark (Western University)

        Artificial Intelligence and the Conjectural Science

Harry Law (University of Cambridge)

        Reading Machines: Bell Labs and the Neural Network ‘Revolution’

10:30 - 11:00 - Tea

11:00 - 13:00 - BJHS Special Issue Workshop 4 - 2000-2020s: Search and monitor

Chair: Sarah Dillon (University of Cambridge)

Alexa Hagerty (University of Cambridge), Florencia Aranda, Diego Jemio

        Algorithmic Expectations: Foresight and Predestination in a Predictive Model for Teenage Pregnancy

Fernando Delgado (Cornell University)

        Domain Search: Prospecting the Profession

Simon Taylor (UNSW Sydney, Australia)

        Species Ex Machina: how agricultural labour shapes the optimisation of AI

Bruno Moreschi (Group on Artificial Intelligence and Art (GAIA) and Faculty of Architecture and Urbanism - University of São Paulo, Brazil)

        Beyond visual surfaces – What and how to see the images that train commercial Computer Vision

Sarah T. Hamid (Carceral Tech Resistance Network)

        TBC

13:00 - 14:00 - Lunch

14:00 - 15:45 - Ecologies 

Chair: Jonnie Penn (University of Cambridge)

Poornima Paidipaty (King's College London)

        Open Worlds and Postcolonial Futures: Norbert Wiener and the 'Grammar of the Semi-Exact Sciences’

In the mid-1950s, the MIT mathematician and father of cybernetics, Norbert Wiener, spent eight months as a visiting professor at the Indian Statistical Institute in Calcutta. In between classroom lectures, Wiener spent most of his time developing a new book manuscript, titled ‘The Grammar of the Semi-exact Sciences’, on problems of nonlinear prediction. He promised his publisher that the book would serve as a splashy follow-up to his 1948 publication, Cybernetics. Though he never completed or published the book, the manuscript offers an important glimpse of Wiener’s work on predictive analysis, especially when set in the context of India’s rapid and uncertain postcolonial economic development.

This essay examines the tensions in midcentury economic planning and prediction, between postcolonial technocratic desires to encapsulate and enclose the economy through formal scientific management and new statistical methods that embraced the complexities of open, nonlinear systems.  Rather than placing the two in strict opposition, this exploration revisits postwar statistics and cybernetics to reconsider easy claims of scientific “enclosure”, recognizing that these practices experimentally grappled with systems that were dynamic and interconnected.  In the process, the essay raises new questions about how to understand the political foundations of Cold War data science. Histories of AI and computing continue to rely on earlier accounts of militarized “closed worlds” (Edwards 1996; Galison 1994), as fantasies that underpinned 20th century US developments.  However, if we recenter the conversation around postcolonial, developmentalist visions, our historical and political genealogies of AI and machine learning produce murkier, more challenging political trajectories, in which prediction appears at best a “semi-exact” enterprise.

Paola Ricaurte Quijano (Tecnológico de Monterrey)

        Khipus as meaning-making systems: collective records and world taxonomies in Pre-Hispanic times

Pre-Hispanic computation and record-keeping techniques can help us analyze how ontological, epistemic, aesthetic, ethical, and political dimensions are embedded in the technologies we build. A khipu was a record, a material and semiotic artifact. Its structure was organized using a taxonomy that reflected the order of the Andean world. The khipu was a delicate arrangement of strings of different thicknesses, heights, colors, and knots. It integrated seven levels of information, using a binary code for the numerical system and semiotic codes for narratives. As a mathematical and narrative system, the khipu condensed collective practices, identities, and memories. This technology, a complex knowledge system, was of enormous value in the Andean world. It was carefully crafted and transported from one village to another. In each Andean village, the khipukamayucs, guardians of the khipus, preserved them. The Spanish invaders prohibited the use of khipus, since they did not understand their coding system and functioning. However, in distant communities the tradition remained for some time. Before the khipus were banned, the church had encouraged their use to keep track of personal sins. Khipus reflect how every technology emerges from a specific socio-cultural context in which it produces meaning and establishes a set of power relations. Today, hegemonic technologies, such as AI, help consolidate a universal history and world order. Pre-Hispanic khipus offer the possibility of imagining and building alternative socio-technical pluriverses, and alternative histories of artificial intelligence, based on our ontological, epistemic, ethical, aesthetic, and political differences. In this text, I will analyze the khipu as a pre-Hispanic system of meaning-making, the expression of a world order through numerical and narrative codes and records, and contrast it with the role of AI technologies in the construction of global narratives and universal history.

Matt Lukacz (Columbia University)

        AI against Extinction: Towards The History of Conservation by Algorithm

Vital for historians is the task of contextualizing how and why various forms of AI have been mobilized to alleviate environmental crises. In this vein, this paper examines the past and present of algorithms in conservation biology. The Cambridge geographer Bill Adams coined the term "conservation by algorithm" to capture the contemporary uptake of automation within various stages of the conservation pipeline. Adams (2018) writes: "[c]onservation by algorithm (…) tends to move the power to direct conservation policy into the places where data is managed," and adds, "the technology has a disturbing genealogy." I will unpack this genealogy as part of my ongoing larger project of situating the dynamics between ecology and information technology within the environmental history of computing. A concurrent reading of the historiographies of conservation and AI reveals both fields’ shared ancestry in rational choice theory. I build on Michael Castelle’s (2021) analysis of the “economic origins of contemporary AI” to explore how mathematical economics influenced not only later forms of AI but also decision making in conservation biology. I provide an alternative understanding of how conservation was both in conversation with, and a response to, operations research. By exploring the interactions and uneasy alliances between conservation and operations research, I show how operations research influenced an integral part of conservation: conservation preserve design. In sum, the production of space and the preservation of nature are influenced by the legacies of economics, decision theory, and operations research. I use this history to shed light on the work of contributors to the first volume on AI and conservation published by Cambridge UP. Based on content analysis and interviews with the contributors, I unpack, first, a clash between "conservation from above" and the shifting role of “conservation on the ground” (Adams, 2019), and second, issues related to AI-supported anti-poaching surveillance and security systems.

15:45 - 16:15 - Tea

16:15 - 18:00 - Evolutions 

Chair: Jenny Moran (University of Cambridge)

Tim Taylor (Monash University)

        Darwin and the Machine: Evolutionary influences in 19th and early 20th century visions of superintelligent AI, and their relevance today

Within a year of the publication of "On the Origin of Species", the American botanist Asa Gray set out to popularise Darwin's work in the cultural magazine The Atlantic. Gray drew an analogy between Darwin's ideas and the way in which boat design had improved, diversified, and become more specialised over time. Over the next twenty years at least three authors took the analogy one step further, to imagine machines that were not merely subject to artificial selection by humans but had the more "biological" capacity for self-reproduction and evolution quite independent of human guidance. Samuel Butler, Alfred Marshall and George Eliot all explored the possibility that this might lead to super-intelligent machines, and probed the implications for humankind.

In addition to imagining Darwinian mechanisms, these works, and others that followed in the early 20th century, also explored wider issues of machine evolution such as: machine-human co-evolution; different architectures for self-reproduction; and the possibility of self-designing machines. They also envisaged a spectrum of applications and outcomes of the technology, some beneficial for our species and others catastrophic.

In the mid-20th century, many of the pioneers of cybernetics and digital computers were interested in the possibilities of self-reproducing machines. Their implementation was listed as a grand challenge for AI as recently as 1988 in the AAAI Presidential Address. However, the topic is currently under the radar, overshadowed by interest in deep learning and related techniques. Nevertheless, the design of self-reproducing machines, both in hardware and in software, elicits a variety of contemporary concerns including the nature of agency and intrinsic teleology in AI systems, requirements for focused versus open-ended evolutionary dynamics, and the relationship between Darwinian and more general selectionist paradigms. The long history of thought about machine evolution therefore offers a rich resource of contributions to these current debates.

Jeff Nagy (Stanford University)

        Watching Feeling at Rockland State Hospital: Manfred Clynes and Emotion AI, 1956-1972

One day in 1968, a research scientist at Rockland State Hospital made a surprising discovery: he found himself “feeling unduly well!” Manfred Clynes had been hired to lead Rockland’s Dynamic Simulation Lab in 1956, with the brief of applying cybernetic feedback principles to the treatment of psychiatric disorders, “adapting missile control concepts” to the more nebulous realms of the brain and human behavior. Over the course of the 1960s, Clynes developed a cybernetic emotion science that he called sentics, which aimed at transforming emotions into computer-readable data. In experiments in the later 1960s, Clynes used analog computers and pressure transducers to collect subjects’ emotional responses and collate them into curves that he asserted were universally representative of basic emotions.

But as Clynes discovered in 1968 while experimenting on himself, sentics was not merely a science; it was also a therapy. He took the outputs of his emotional computers and fed them back into human subjects, in experiments that, he claimed, showed those subjects undergoing emotions corresponding to the datafied feelings used as input. Via sentics, not only might computers begin to register human feelings, but human feelings could be reprogrammed thanks to loops of man-machine feedback circulating emotional data.

In this paper, I draw on original archival research to show how cybernetics was fused at Rockland with psychological and psychiatric approaches to emotion. I argue that Clynes’ program of using computers to record and analyze emotion is a bridge between mid-century cybernetics and the kinds of emotion AI that are increasingly ubiquitous on digital platforms, and that the paradigms he created continue to shape contemporary AI systems aimed at computing human feeling. Finally, I consider how sentics was molded by its institutional environment, a state psychiatric hospital that repeatedly pioneered new media systems for patient surveillance.

Sam Franz (University of Pennsylvania)

        Adaptation as Learning: John H. Holland and Evolutionary Computing

Historians of artificial intelligence have focused on questions concerning the relationship between the mind/brain and computing. Beginning in the mid-1950s with the Logic of Computers Group at the University of Michigan, John Henry Holland and his colleagues working in the then-nascent discipline of automata studies explored adaptive approaches to computing, borrowing the language of evolution from biologists like R. A. Fisher and Sewall Wright. Collapsing the distinction between learning and adaptation, Holland also used models of brain activity from the psychologist Donald O. Hebb, adopting his framework of “cellular assemblies” in contrast to the McCulloch-Pitts model of neural nets typically emphasized in the history of AI. Holland would go on to become an important figure at the Santa Fe Institute and in the “complexity sciences” broadly, emphasizing adaptation as a central problem for computing and a host of other disciplines. In this way, this paper expands histories of the complexity sciences, investigating the link between evolutionary approaches to computing and AI. This paper makes two arguments: first, adaptive approaches to computing did not always draw on cybernetic theories of the mind/brain but rather emphasized evolution as a conceptual framework for understanding learning in machines. Second, this paper shows that in the context of ecological and adaptive approaches to computing, the model of a single, learning mind doesn’t fit—Holland and his colleagues centered multi-agent systems, expanding both “learning” and “adaptation” beyond an individual model.

Johan Fredrikzon (Stockholm University)

        The Making of Human Error in the Era of Artificial Intelligence, 1940–1990

I wish to present a three-year postdoc project that has recently started and is carried out in collaboration with KTH Royal Institute of Technology in Stockholm and UC Berkeley. The project aims to rewrite the history of artificial intelligence (AI) by studying the conception of errors. As objects of investigation, errors and mistakes provide a privileged track to understand AI not only from a design or engineering perspective, but also as a criticized undertaking. Hence, the project engages source materials that range from algorithm designs to philosophical arguments. The attention to errors is particularly congenial to AI because of the persistently ambiguous character of mistakes throughout AI history: now an indication of sophistication, now a sign of failure. Following Steven Jackson's notion of broken world thinking, the project asks: What were the relationships between error and intelligence? In what way did promoters of AI take into account human fallibility as a factor to either functionally counteract or creatively aspire to? To what extent did critics of AI base their idea of human uniqueness on the inversion of what was perceived as typical mistakes of computers? Did the conception of error change over time? If so, how? The project is designed as three case studies, two of which scrutinize American undertakings in Cold War cybernetics and expert system design as well as the criticism directed at them circa 1940–1975, and one which explores 1970s and 1980s AI work in Swedish academia and the military in relation to institutionalized critique from the so-called Dialog Seminar. The investigation brings a novel theoretical perspective – tentatively labeled "the intelligence of mistakes" – which proposes to significantly inform the manner in which AI is treated by the historical sciences. By its very framing, it carries the emancipatory ambitions that currently underpin the new critical histories of artificial intelligence.

18:00 - 18:30 - Creative Work

  • Bruno Moreschi (University of São Paulo, Brazil)

18:30 - End

18:45 - Dinner at Homerton College


Symposium Organizers

  • Syed Mustafa Ali, Computing, The Open University
  • Stephanie Dick, History and Sociology of Science and Medicine, Pennsylvania
  • Sarah Dillon, Faculty of English, University of Cambridge
  • Matthew L. Jones, History, Columbia; Big Data and Science Studies Cluster, Center for Science and Society
  • Jonnie Penn, History and Philosophy of Science, University of Cambridge
  • Richard Staley, History and Philosophy of Science, University of Cambridge

Join a listserv on the History of AI here: https://lists.cam.ac.uk/mailman/listinfo/hps-hoai.