
Histories of Artificial Intelligence: A Genealogy of Power


The critical historical work of the Seminar is organised around four themes that are strongly related but most often approached separately: hidden labour, encoded behaviour, disingenuous rhetoric and cognitive injustice. Addressing diverse aspects of AI as applied epistemology, in both its formation and propagation, these themes have all been raised in one form or another in recent critiques of AI. They highlight aspects of AI's promise that raise troubling questions about the social and power relations unsettled by its use. To understand them better, we devote significant attention to each theme, but also examine them both comparatively and historically to expose critical, formative relationships.

Our first theme, hidden labour, aims to shed light on the unacknowledged human tasks required to make AI-powered systems practical (e.g. data labelling, data structuring, content moderation). We will explore how the introduction of automated systems has tended to reconfigure the distribution of authority and responsibility in various periods and locales. The second and third themes, encoded behaviour and disingenuous rhetoric, direct attention to the unanticipated ways in which users, citizens and commercial audiences have received AI systems. We examine how those systems are imagined, described, taught, advertised or sold (as well as how they are defended after a crisis), and compare these portrayals to their actual use and effects. We also question the surreptitious ways in which AI systems, via reinforcement learning, have altered or framed the behaviour of those who encounter them, a process that stands in tension with the promise of open, transparent connectivity apparently embodied in such technologies. The final theme, cognitive injustice, points to the epistemic and ontological injustices entangled with AI in its prehistory and making, examining the ways in which the image and protocols of intelligence it deploys may actually narrow and delimit knowledge, particularly for marginalised groups.

In the following indicative sections on each theme, we outline critical perspectives raised by recent scholarship, introduce questions animating the work of the Seminar and offer examples illustrating the significance of recognising the historical relationships on which they depend:

Hidden labour

The introduction of forms of mechanisation – of labour and knowledge equally – is usually described as saving labour; in reality it shifts labour. In examining how the boundaries between human and machine have changed over time, we must track status and ask what is being hidden: how have autonomous systems redistributed tasks in ways that burden marginalised people while masking their contributions? In Ghost Work (2019), Mary L. Gray and Siddharth Suri demonstrate how the history of labour laws in the United States, manifested in a shift from piecework to outsourcing, hollowed out the decency of many entry-level jobs in the digital economy while simultaneously powering the AI revolution through low-pay click-work. Gray and Suri describe today's MTurk workforce as 'the AI revolution's unsung heroes' (Gray and Suri 2019). In her own scholarship and advocacy work, such as with Turkopticon, Lilly Irani has sought to illuminate additional hidden layers of human data work (Irani 2016).

In the Seminar, we will bring a comparative historical perspective to bear on contemporary conversations about which human faculties can be 'automated', and explore how AI and its implementation serve to value and devalue certain forms of cognitive labour and systems of recognition. As the previous work of Jones (2016) and Dick (forthcoming) has shown, as well as that of Lorraine Daston (1994), such equivocation over value has long been central to the human history of mathematical calculation, especially from the nineteenth century onwards. It is at the moment when 'calculation' is debased from genius to the 'merely mechanical' that it becomes the domain of human computing, to be executed by low-paid labourers and regendered as women's work (Hicks 2017).

Encoded behaviour

AI renders aspects of human behaviour legible, intelligible, formalisable, even programmable, in diverse ways. How have 'learning' systems of this variety tended to discipline the behaviour of their user base and surrounding communities? In what ways has the informational infrastructure required to make such systems 'learn' in fact pressured individuals and communities to act in the ways 'predicted' from supposedly analogous cases? One area in which this could be fruitfully examined concerns how notions of 'credit' and 'creditworthiness', historically developed over two centuries, were turned into a profitable source of knowledge-making for other purposes (Lauer 2017, Horan 2011). Today's data-broker behemoths, Experian, Equifax and TransUnion, use AI techniques to leverage these historical traditions, which Igo has critiqued as 'discrimination done properly', to gather extraordinary densities of data on individual citizens (Igo 2018).

Consumer-turned-citizen surveillance is not the only avenue by which AI-informed modes of automation and statistical analysis threaten to encode individual and group behaviour; workplace surveillance is another. Caitlin Rosenthal, a historian of eighteenth- and nineteenth-century management practices, has shown how plantation owners in this period went about automating the lives of enslaved people as 'cogs' in a system based on factory automation techniques (Rosenthal 2018). Parallels between the sophisticated actuarial practices of plantation owners and those of twenty-first-century business owners echo in the work of Karen Levy, whose recent research examines how electronic monitoring systems in the trucking industry are used to force compliance and 'automation' upon employees (Levy 2014).

With this platform to work from, the Seminar will explore how individuals and groups have resisted comparable pressures to conform, brought about by extensive and sophisticated informational infrastructures. What do the prehistories of data surveillance tell us about the likely consequences of introducing new statistical techniques such as machine and deep learning? Has the obscured materiality of these systems, so often developed to 'disappear from view' via design affordances that prioritise usability and convenience, caused them to be historicised as apolitical? (Dinnen 2017, Dick forthcoming)

Disingenuous rhetoric

What role has rhetoric played in shaping the imaginaries that surrounded past forms of automation and faux-mechanical sentience? Who controls the rhetoric around AI, and who benefits from certain framings of these technologies? Emerging work in the digital humanities attends to the rhetorical presentation of AI in the New York Times over a thirty-year period (Fast and Horvitz 2017), the dominant topics and frames around AI in five major American newspapers from 2009 to 2018 (Chuan et al. 2019), and the conceptual structures around the presentation of AI in contemporary American cinema, using computational analysis of the OPUS Open Subtitles Corpus (Recchia 2020). In dialogue with the other themes, the Seminar's comparative and critical analysis of the rhetorics of AI will historicise such work and focus attention specifically on the power dynamics, past and present, embedded in the creation and control of AI rhetoric.

The artist Astra Taylor has characterised the contemporary overselling or exaggeration of AI's abilities as 'fauxtomation' (Taylor 2019). The phenomenon has historical precedents. Simon Schaffer (1992) notes that in 1834 two models of Charles Babbage's government-funded Difference Engine were made by the instrument designer Francis Watkins, an electrician and showman at the Adelaide Gallery, a leading London showcase for new engineering. Even after the Engine had been abandoned, Babbage insisted 'it should be placed where the public can see it', and it was put on display in King's College London's museum. The pre-AI history of automation therefore reveals a structural relationship between government, industry, invention and entertainment. Current phenomena such as Hanson Robotics' humanoid robot, Sophia, offer a comparable contemporary instantiation. Created by an artist who previously worked as a sculptor and researcher in Disney's Imagineering Lab, Sophia has no artificial intelligence, and yet the UN has named it an Innovation Champion and Saudi Arabia has awarded it citizenship. The rhetoric used to sell Sophia has obfuscated its actual technological capabilities: an inanimate machine is now afforded rights that some human citizens of Saudi Arabia, women for instance, are not.

Both contemporary and historical narratives of AI, fictional and non-fictional, tend to focus on embodied AI – robots. In doing so, AI also becomes raced and gendered. Comparative analysis of this phenomenon can reveal the effects of such a limited focus: embodied AI represents only a small subset of contemporary AI research and technologies, or technologies that are decades if not centuries away from becoming reality. For what economic, political or ideological ends are anthropomorphic rhetoric and representation employed, and with what consequences for the perpetuation of societal prejudices and inequities? In what ways does this limited focus distract attention from the reality of contemporary AI technology and its implementation – more often than not invisible and distributed – and thereby divert critique from the more immediate differential effects of such technologies on women and people of colour?

Cognitive injustice

In developing and implementing theories of intelligence, the discipline of AI and its many derivative applications stand to radically remake concepts of knowledge and the knower. Cognition – the 'act of knowing' – encompasses both epistemological and phenomenological experiences of the world. Injustices committed in these dimensions can take the form of subjectivities, practices, cultures and/or materialities imposed, in some cases, violently (Ricaurte 2019). We take these oppressive dynamics seriously in our analysis of the history of AI and the theories of epistemology embedded therein. We aim to connect new scholarship on conceptions of data colonialism (Couldry and Mejias 2018, Ricaurte 2019) to broader historical narratives of coloniality and the statistical and informational tools it has pressed into use (Yale 2015).

The asymmetries in access to information and power outlined above have bled through global culture and politics in insidious ways, refiguring individual, group, national and ethnic identities as they have moved. Often these transformative tools have been propagated outwards from a geographical core (i.e. the West) through academia and industry, restructuring semi-peripheral and peripheral sites as they spread, in ways both explicit and unconscious. Those who study the epistemology of peoples who have been systematically oppressed and colonised are deeply familiar with these circuitous patterns and with the roles that sophisticated knowledge tools have historically played in their perceived legitimacy. The Seminar will contribute to raising awareness of unrecognised ignorance and hitherto unmarked violence in order to generate histories of AI that offer a comprehensive guide to the societal implications of the field.

The Seminar will explore how a comparative historical approach can inform exploration of the ways in which the injustices posed by AI are not 'merely' epistemic but also ontological, insofar as they involve the marginalisation of bodies and their dehumanisation. It will create historical context for contemporary work in this area, such as Os Keyes' (2018) examination of Automatic Gender Recognition (AGR), a subfield of facial recognition that aims to algorithmically identify the gender of individuals from photographs or videos. Keyes exposes how AGR's proposed applications in physical access control, data analytics and advertising would have differential consequences for gender non-conforming individuals. We will examine what kind of new picture is created if this work on contemporary AI is compared, for instance, to D.D. Mahendran's research showing how post-Cartesian computationalism is entangled with a legacy of dehumanisation and antiblack racism (Mahendran 2011).

In summary, the comparative and critical interdisciplinary work of the Seminar will generate new knowledge by exploring the interrelationship between these four threads as they are manifested in specific historical case studies. Consider, for example, the figure of the automaton, which history tells us is entangled with patronising Enlightenment-era Orientalist discourses (Ates 2012). In the eighteenth century, the Mechanical Turk was a theatrical device used to fool onlookers into believing that a puppet in Ottoman dress could play chess; in fact, a human player hid inside. In the early 2000s, Amazon.com Inc. named its marketplace for unregulated micro-task, micro-pay labour 'Mechanical Turk' (MTurk), indicative of the enduring legacy of labour fulfilled by exotic 'others' under the slick guise of 'automation'. Today, Amazon markets MTurk as 'artificial artificial intelligence'. Human workers are stripped of their identity; they participate via a Worker ID number, connected to payment through an API and digital interface. This low-pay 'click' workforce has proved 'vital' to the success of contemporary AI because it has drastically reduced the cost of labelling training data (Gray and Suri 2019). The result is a culturally degenerative feedback loop in which hidden labour, packaged and sold under the guise of the 'other', is used to re-normalise profound injustices from centuries past. Meanwhile, new disingenuous rhetoric is coined to equate AI with autonomy, a wager that pays off when citizens' behaviour is steered en masse toward predictability and conformity by optimising widely used reinforcement-learning systems (e.g. News Feeds, sales and search recommendation engines) to maximise profit.

Join our Mailing List!

To receive further information on all our activities (and links to join them online), please subscribe to the HoAI mailing list.

Email us at: hoai@hermes.cam.ac.uk

Join our Slack Channel!

To participate in conversations with scholars on topics related to your interests, please join our HoAI Slack Channel.