AI as extraction (part 2)
Some further thoughts on Kate Crawford's recent book Atlas of AI, this time on myths and definitions
In a previous post I looked at materiality and power in Kate Crawford’s recent book Atlas of AI. In that piece I explored how the book focused on the extensive materialities of artificial intelligence. The focus on materiality, and particularly the idea of extraction, allowed the book to bring out the environmental and social aspects of these systems¹. There is a further dimension to the book’s arguments that I didn’t fully explore in that previous piece: the way that AI are defined and mythologised.
Entangled in those materialities are the myths shaping AI futures. In one important passage Crawford claims that ‘we can see two distinct mythologies at work’. These myths both deal with how AI are understood to lift intelligence out of what Katherine Hayles has called ‘cognitive assemblages’. The first myth, Crawford notes,
‘is that nonhuman systems (be it computers or horses) are analogues for human minds. This perspective assumes that with sufficient training, or enough resources, humanlike intelligence can be created from scratch, without addressing the fundamental ways in which humans are embodied, relational, and set within wider ecologies’.
As we see here, Crawford focuses upon two myths that are part of the development of artificial intelligence in its broadest terms. The first myth is that machines will think in ways comparable with human thinking. This myth, or set of myths, rests on the idea that artificial intelligence is being developed in ways that seek to replicate human thinking. It is an important myth for Crawford because it frames how AI are understood in the first instance and then shapes the direction they take. Such a myth of humanlike intelligence also permeates how AI are approached and accepted. What this myth does, Crawford points out, is to imagine the human as discrete and without context in their modes of thinking. This forges a connection between the two myths Crawford identifies.
The second of these myths links into this decontextualised approach to understanding thinking. Crawford suggests that ‘the second myth is that intelligence is something that exists independently, as though it were natural and distinct from social, cultural, historical, and political forces’. The myth here is that thinking can occur outside of its social context. In the case of artificial intelligence, this is to imagine the systems as operating outside of the environmental, corporate or infrastructural features that define them. This second myth, Crawford observes, lifts the thinker out of the surroundings and out of the conditions in which that thinking occurs. These myths matter because they have become part of artificial intelligence itself. Indeed, Crawford claims that ‘these mythologies are particularly strong in the field of artificial intelligence, where the belief that human intelligence can be formalized and reproduced by machines has been axiomatic since the mid-twentieth century’. They are two existing myths that are particularly animated by AI².
These myths not only intersect with the materialities of AI but also impact on how AI is understood and even defined. There is a framing of AI going on. Crawford argues that ‘each way of defining artificial intelligence is doing work setting a frame for how it will be understood, measured, valued, and governed’. For Crawford the definitions of AI are themselves to be explored in order to grasp how they are part of strategies and agendas. As Crawford puts it, ‘the task is to remain sensitive to the terrain and to watch the shifting and plastic meanings of the term “artificial intelligence” – like a container into which various things are placed and then removed – because that, too, is part of the story’. The definitions of AI are part of the story; they are not something to be approached as if they are fixed or natural in their form.
One of Crawford’s points here is about the importance of what might be left out of these definitions. There are, it is argued:
‘significant reasons why the field has been focused so much on the technical – algorithmic breakthroughs, incremental product improvements, and greater convenience. The structures of power at the intersection of technology, capital, and governance are well served by this narrow, abstracted analysis’.
In this sense, AI is something upon which things are projected in those definitions. It is ‘a two word phrase onto which is mapped a complex set of expectations, ideologies, desires, and fears’. The definitions incorporate these factors and so might also be used to reveal them³. AI is full of ‘discourses that support its aura of immateriality and inevitability’. The myths of AI support these kinds of auratic properties.
The myths of AI spring up in different places in Atlas of AI. For instance, in one dedicated chapter, data gathering gets caught up with the idea that it is serving the development of new forms of intelligence: the more data gathered, the better refined the AI. The idea here is that machine learning requires such data in order to learn from a big enough trough of ‘training data’. Crawford explains that:
‘data has become a driving force in the success of AI and its mythos and how everything that can be readily captured is being acquired. But the deeper implications of this standard approach are rarely addressed, even as it propels further asymmetries of power. The AI industry has fostered a kind of ruthless pragmatism, with minimal context, caution, or consent-driven data practices while promoting the idea that the mass harvesting of data is necessary and justified for creating systems of profitable computational “intelligence.”’
The result of this is that data harvesting can be justified through the need for more data that can then improve the machine learning. Crawford adds that training data are ‘used to assess how they perform over time’. This creates, Crawford identifies, a ‘demand for data’. The ideals of AI then come to drive data harvesting. It is almost as if AI carry a ‘moral imperative’ to gather data simply because those data serve to improve these systems. The emergence of certain types and myths of AI, Crawford goes on to observe, ‘produced a kind of moral imperative to collect data in order to make systems better, regardless of the negative impacts the data collection might cause at any future point’⁴.
If everything goes to plan, my new book on algorithms will be published later this year. I’ll post updates here once the publication date, cover and webpage are finalised. If you aren’t already a subscriber to this newsletter/blog, you can sign up below to get future emails about my upcoming book and other things.
1. This type of position is captured in claims such as this from Crawford’s framing of the book: ‘AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures’.
2. The myths that Crawford points toward turn up in other places too. There is, for instance, ‘the myth of clean tech’ (Crawford, 2021: 41).
3. On this point, Crawford talks in more hauntological terms, adding that ‘AI can seem like a spectral force – as disembodied computation – but these systems are anything but abstract. They are physical infrastructures that are reshaping the Earth, while simultaneously shifting how the world is seen and understood’.
4. On this point Crawford returns to two key aspects of the kind of data revolution that has occurred, with objectivity and scale bundled together, adding that ‘behind the questionable belief that “more is better” is the idea that individuals can be completely knowable, once enough disparate pieces of data are collected.’