Histories of Artificial Intelligence: A Genealogy of Power


Discriminating Data

Time: Wednesday May 19 @ 15:00-17:00 BST

Co-facilitated by: Stephanie Dick (University of Pennsylvania), Jonnie Penn (University of Cambridge)

Summary:

In her forthcoming book, Discriminating Data, Wendy Hui Kyong Chun reveals how polarization is a goal—not an error—within Big Data and machine learning. These methods, she argues, encode segregation, eugenics, and identity politics through their default assumptions and conditions. Correlation, which grounds Big Data's predictive potential, stems from twentieth-century eugenic attempts to 'breed' a better future. Recommender systems foster angry clusters of sameness through homophily. Users are 'trained' to become authentically predictable via a politics and technology of recognition. Machine learning and data analytics thus seek to disrupt the future by making disruption impossible.

Chun, who has a background in systems design engineering as well as media studies and cultural theory, explains that although machine learning algorithms may not officially include race as a category, they embed whiteness as a default. Facial recognition technology, for example, relies on the faces of Hollywood celebrities and university undergraduates—groups not famous for their diversity. Homophily emerged as a concept to describe white U.S. resident attitudes to living in biracial yet segregated public housing. Predictive policing technology deploys models trained on studies of predominantly underserved neighborhoods. Trained on selected and often discriminatory or dirty data, these algorithms are only validated if they mirror this data.

How can we release ourselves from the vice-like grip of discriminatory data? Chun calls for alternative algorithms, defaults, and interdisciplinary coalitions to desegregate networks and foster a more democratic Big Data.

In this session, we will read excerpts of a working draft of the book alongside various primary source materials, listed below.

Assigned Readings:

  • Chun, Wendy Hui Kyong, and Alex Barnett. Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Cambridge, Massachusetts: The MIT Press, 2021.

  • Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. ‘Learning Representations by Back-Propagating Errors’. Nature 323, no. 6088 (October 1986): 533–36. https://doi.org/10.1038/323533a0.

  • Wang, Yilun, and Michal Kosinski. ‘Deep Neural Networks Are More Accurate than Humans at Detecting Sexual Orientation from Facial Images.’ Journal of Personality and Social Psychology 114, no. 2 (February 2018): 246–57. https://doi.org/10.1037/pspa0000098.

  • Winner, Langdon. ‘Upon Opening the Black Box and Finding It Empty: Social Constructivism and the Philosophy of Technology’. Science, Technology, & Human Values 18, no. 3 (1993): 362–78.

Suggested Readings:

  • Arendt, Hannah. ‘The Crisis in Education’. In Between Past and Future: Eight Exercises in Political Thought. New York: Penguin, 1968.

  • Jones, Matthew L. ‘How We Became Instrumentalists (Again): Data Positivism since World War II’. Historical Studies in the Natural Sciences 48, no. 5 (November 2018): 673–84. https://doi.org/10.1525/hsns.2018.48.5.673.

Join our Mailing List!

To receive further information on all our activities (and learn their online coordinates), please subscribe to the HoAI mailing list.

Email us at: hoai@hermes.cam.ac.uk

Join our Slack Channel!

To participate in conversations with scholars on topics related to your interests, please join our HoAI Slack Channel.