
Histories of Artificial Intelligence: A Genealogy of Power

 

Reading Group #3 on 'Introduction to Seminar Theme: Cognitive Injustice'

Wednesday 15 July 2020, 15:00–17:00 BST

Overview

Co-facilitator: Prof. Jason Edward Lewis (Concordia University)

Discussants: Apolline Taillandier (Sciences Po), Sananda Sahoo (Western University)

Moderator: Dr Richard Staley (University of Cambridge)

Readings (in their suggested reading order):

  1. Little Bear, Leroy, and Ryan Heavy Head. 2004. 'A Conceptual Anatomy of the Blackfoot World.' ReVision 26 (3): 31–38.
  2. Todd, Zoe. 2016. 'An Indigenous Feminist's Take on the Ontological Turn: "Ontology" Is Just Another Word For Colonialism.' Journal of Historical Sociology 29 (1).
  3. Lewis, Jason Edward, Noelani Arista, Archer Pechawis, and Suzanne Kite. 2018. 'Making Kin with the Machines.' Journal of Design and Science, no. 3.5 (July).
  4. Benesiinaabandan, Scott. 2020. 'Gwiizens, the Old Lady and the Octopus Bag Device.' In Indigenous Protocol and Artificial Intelligence Position Paper, edited by Jason Edward Lewis, 45-57. Honolulu, HI: Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR).

Co-facilitator: Prof. Jason Edward Lewis. Prof. Lewis is the University Research Chair in Computational Media and the Indigenous Future Imaginary as well as Professor of Computation Arts at Concordia University, Montreal.

A summary of the Cognitive Injustice theme is available here. Prof. Lewis provides the following overview of the session:

Indigenous epistemologies are many and varied. What many (not all!) of them center is language, storytelling, territory and protocol as means of knowledge generation, preservation and dissemination. This is often (not always!) expressed in terms of kinship relations with humans and non-humans, and the reciprocal responsibilities that energize and maintain the mesh of relationships. This session on cognitive justice will explore how Indigenous epistemologies and cosmologies can inform historical and contemporary perspectives on how AI systems are and have been designed, implemented, and brought into ethical and functional engagement with humans and other non-humans.

The discussion will be grounded in four texts: 'A Conceptual Anatomy of the Blackfoot World' (L Little Bear and R Heavy Head, 2004), '"Ontology" Is Just Another Word For Colonialism' (Z Todd, 2016), 'Making Kin with the Machines' (J Lewis et al., 2018), and selections from the 'Indigenous Protocol and Artificial Intelligence Position Paper' (various, 2020). The Little Bear text illuminates the Blackfoot language-world so the reader can glimpse what it might be like to perceive existence as a field of ever-changing, always-in-relation knots of space-time rather than a collection of objects. Todd critiques the 'ontological turn' as well as various species of new materialism for claims of pioneering new intellectual ground while ignoring, dismissing and overwriting long histories of Indigenous philosophy that address the same issues—often more clearly and from firmer conceptual footing. These two texts helped set the foundation for the 'Making Kin with the Machines' essay and the subsequent Indigenous Protocol and Artificial Intelligence Position Paper, which bring Indigenous relational epistemologies to bear on the question of what kind of relative AI might be and become, and what that means for how we should design, implement and deploy such systems. Both texts also critique how concepts of 'equity, diversity and inclusion' in the AI conversation (and in tech in general) are used to banish Indigenous (and other non-Cartesian) epistemologies from the techno-scientific center and relegate them to the representational margins.

At the moment, here at the beginning of this research trajectory, I am interested in five main questions:

  1. How do we understand Indigenous epistemologies in situ?
  2. Should we and can we formalize such knowledge in computational terms?
  3. Should we and can we implement intelligent machine systems based on those epistemologies?
  4. Would those systems be better at promoting Indigenous flourishing than those currently being built?
  5. Would such approaches be better at capturing what it means to be humanly intelligent than those based on current computational models?

Summary of event

Prof. Lewis's stated goal is to understand computational technology and intelligence outside of the western intellectual canon, whose hegemony can cloud understandings of other scholarly traditions. He approaches these subjects by exploring the foundations of various Indigenous world views. Rather than position these knowledge systems as a 'response' to western epistemic colonialism, which could give the false impression that they emerged solely from, and/or have been sustained only in reaction to, such forces, Lewis aims to honour the origins of these traditions on their own terms: as sovereign and original. Thus, his intention was (and is) 'to speak rather than to respond'; he called on participants to recognize Indigenous world views as active knowledge systems.

For the purpose of making this endeavour manifest, Prof. Lewis assigned four readings in preparation for the session. His presentation explained his epistemological approach to these texts, and the reasons for their assignment.

1. A Conceptual Anatomy of the Blackfoot World
The purpose of this text was to offer entry into an Indigenous world view. Blackfoot is a language of event-happenings rather than a language organised around objects within objects, in which only humans have agency. Comparing a language like English with Blackfoot demonstrates how a world view, or culturally informed imagination, is structured by language frameworks. Prof. Lewis made the case that a flow-language like Blackfoot might be more suitable for accurately discussing quantum physics, intelligence, and consciousness.

2. An Indigenous Feminist's Take on the Ontological Turn: 'Ontology' Is Just Another Word For Colonialism
The purpose of this text was to understand how Indigenous knowledges have been not only erased but overwritten, and how scientific progress and historical understandings of intelligence (broadly construed) have been hindered by this erasure. This is not just a colonial violence but an intellectual violence against the pursuit of knowledge, because scientific inquiry becomes trapped in a series of confirmation-bias errors owing to a limited understanding of what it means to be human and in relation to the world.

3. Making Kin with the Machines
The purpose of this text was to connect the deep relationality at the heart of many Indigenous ways of knowing to new ways of thinking about machines and intelligence. It makes the case that encoded racism is not a bug of AI or machine intelligence but a feature of white supremacy; structures therefore need to change to stop this logic from being reproduced. Indigenous epistemologies are offered as a way of doing this (not the solution, but perhaps the central solution for some Indigenous knowledge-holders), one that helps in understanding the diversity of Indigenous thinking and relationality with the non-human. Prof. Lewis made the case that these epistemologies are fundamentally grounded in specific languages and territories, not to be abstracted and appropriated; thus the AI systems that may work best for the flourishing of everyone should be grounded in this local situatedness.

4. Gwiizens, the Old Lady and the Octopus Bag Device
The purpose of assigning this new telling of an Anishinaabe parable was to enter into an Indigenous teaching story and to pose the challenge of how one might formalise the knowledge therein to make it computable. The repetition, cyclicality, intergenerational knowledge-passing, assistance from non-human kin, and presence of Wisakedjak tend to fall on western ears as inconclusive, non-linear, and/or filled with bizarre causation. The question for Prof. Lewis is how to formalise this in a way that will serve Indigenous communities, which will be the next stage of his research.

In addition to the five questions outlined above, his provocations for the cohort were as follows:

  1. What would the history of machine intelligence look like when written from these different foundations?
  2. What would this mean for the concept and reality of how we engage with machine intelligence?

Apolline Taillandier offered three provocations for Prof. Lewis:

  1. Why focus on imagined super-intelligent AI entities rather than on existing AI systems that do emphasise relationality, blur boundaries, or unfix cultural meaning?
  2. Could there be an analogy between climate politics and AI politics in relation to Indigenous ways of thinking?
  3. How advantageous is the prospect of making human experience central (or of a central evaluation of human flourishing) through Indigenous AI? Would this amount to a liberal strategy of inclusion and recognition within a particularly white public space, through the formalisation of affective Indigenous cultures and languages in the context of capitalisation?

Sananda Sahoo, beginning with a land acknowledgement of the traditional territories of the Anishinaabek, Haudenosaunee, Lūnaapéewak, and Attawandaron peoples in London, Ontario (where she is based), offered the following two provocations for Prof. Lewis:

  1. Explaining the challenge of the AI-fication of narrative and the narrative production of colonial data ordering, Sananda asked how cultural imperialism can be challenged when talking about cognitive injustice from within existing power structures:
    (i) What kind of methodology should be used to foster this condition?
    (ii) Should one advocate for a teleological suspension of methodology?
  2. In recognition of the existence of multiple heterogeneous knowledges, what tendencies are immanent in the system that could lead to political challenges to changing the status quo?

Addressing Apolline, Prof. Lewis made the point that Indigenous peoples do not have the luxury of disengaging from that which is epistemologically corrupt, as it is already affecting them in profound ways. In relation to climate, he noted that very few papers from an Indigenous perspective directly engage with artificial/machine intelligence, so there does not seem to be an easy translation from one to the other; what does relate to both, however, is the long-running analogy of kinship with non-humans, which can be used to understand a new machinic non-human on the scene. To the third question, Prof. Lewis noted that this was indeed a danger, and also an exciting new conversation, which he intends to explore in future research.

Addressing Sananda, Prof. Lewis noted the importance of re-storying narrative: the kind of narratology critiqued by Le Guin at the beginning of Scott's story is an inaccurate depiction of humanity. He made the case that fixing narratives is also about fixing what we expect from them, and more specifically about unfixing the notion that the world is computable in the fitting of narratives. He noted that the political challenges to this approach are huge, in terms of being dismissed in both industry and academia, owing to the perception that alternative narratives such as those offered by Scott are non-rigorous, not data, not computable. The problem is that what they will compute then has an effect on everything else. He proposed that ethics must approach the individual and the collective equally, rather than treating the individual as atomised; otherwise AI ethics will maintain business as usual.


