What I found particularly compelling about Kate Crawford’s recent book Atlas of AI was its interest in thinking in very material and connected terms about the problems that artificial intelligence poses. Rather than allowing it to be otherworldly, Crawford effectively seeks to explore how AI is a deeply integrated part of the world itself, in terms of social structures but also in terms of environmental and ecological factors. The book starts with a story about a performing horse, and this is just one of many instances in which nature is foregrounded in surprising and often revealing ways. In Crawford’s account AI are embedded in all sorts of connections. This has the effect of blowing away any sense of immateriality that might have persisted thus far.
To bring about this radical materiality agenda, as we might think of it, the book’s chapters start with the dirt of earth and the grit of labor before moving into the heat and environmental degradation of data and the reductionism of classification, and then turn inwards to the tracking of affect and outwards to the structures of the state and the forces of power. Having grounded AI, the concluding chapter is followed by a coda that takes us into space.
As the chapter list suggests, this may be a book about AI and materiality, but it is also a book about AI and power. The point is that this focus on materiality is needed to bring about a stronger sense of how AI are part of power relations and social forces. As Crawford tellingly concludes, the book explores ‘the planetary infrastructures of AI as an extractive industry: from its material genesis to the political economy of its operations to the discourses that support its aura of immateriality and inevitability’. This closing reflection is indicative of the type of approach the book develops, bringing infrastructures together with operations and discourses.
The notion of extraction is a powerful one that does a significant amount of work in literally grounding the discussion of AI. AI is an industry, it is argued, that ‘depends on exploiting energy and mineral resources’. There is, inevitably, a spatial element to thinking of AI as an ‘extractive industry’. The result of such a phrase is that location becomes a key part of the analysis. Data centres, we are told, like the mining sector before them, ‘are far removed from major populations’. The materiality of AI brings with it an attention to the mapping of AI. This is more than an implicit point in the book: Crawford’s method is informed by the notion of the atlas. This is a methodological point that is explained in some detail, especially with regard to what it means to think in terms of the atlas and how an eye for topography enables AI to be incorporated into the world in more direct and specific ways.
For Crawford, an understanding of AI needs also to turn back upon the very label within which it is contained. There is, Crawford concludes, ‘much at stake in how we define AI, what its boundaries are, and who determines them: it shapes what can be seen and contested’. There is a strong sense in the book of Crawford’s concern with avoiding the distractions of myth-making and techno-posturing and trying, instead, to think about the agendas behind them, especially concerning the what and who of AI. The argument made here runs across the multi-scalar chapters of the book: there is a concentration of power when it comes to the development and ownership of AI. Crawford goes as far as to refer to ‘the empires of AI’. Such a concentration has consequences for the future and for the way that these technologies develop, their objectives and the type of power associated with such developments. Crawford is pushing toward the connection of ‘issues of power and justice’.
Thinking in material terms such as these brings to the fore the substances of such connections. At the same time, trying to think in terms of the atlas is itself extractive of the relations that are often left in the substrate. The question this poses then is whether something as vast as AI can be thought of in these wide-ranging terms or if it simply becomes too hard to maintain this combination of materialities in the analysis. What this book does is to bring out the materiality of AI in ways that could well change understandings of the conditions from which AI emerges and may even, with its arguments about extraction, change what we understand AI to be in the first place.
Entangled in those materialities are the myths shaping AI futures. In one important passage Crawford claims that ‘we can see two distinct mythologies at work’. These myths both deal with how AI are understood to lift intelligence out of what Katherine Hayles has called ‘cognitive assemblages’. The first myth, Crawford notes,
‘is that nonhuman systems (be it computers or horses) are analogues for human minds. This perspective assumes that with sufficient training, or enough resources, humanlike intelligence can be created from scratch, without addressing the fundamental ways in which humans are embodied, relational, and set within wider ecologies’.
As we see here, Crawford focuses upon two myths that are part of the development of artificial intelligence in its broadest terms. The first myth is that machines will think in ways that are comparable with human thinking. Crawford points out that this myth, or set of myths, is based upon the idea that artificial intelligence is being progressed in ways that seek to replicate human thinking. This is an important myth for Crawford because it frames how AI are understood in the first instance and then impacts on the direction they take. Such a myth of humanlike intelligence also permeates how AI are approached and accepted. What this myth does, Crawford points out, is also to imagine that the human is discrete and without context in its modes of thinking. This forges a connection between the two myths Crawford identifies.
The second of these myths links into this decontextualised approach toward understanding thinking. Crawford suggests that ‘the second myth is that intelligence is something that exists independently, as though it were natural and distinct from social, cultural, historical, and political forces’. The myth here is that thinking can occur outside of its social context. In the case of artificial intelligence, this is to imagine the systems as operating outside of the environmental, corporate or infrastructural features that define them. This second myth, Crawford observes, lifts the thinker out of the surroundings and out of the conditions in which that thinking occurs. These myths matter because they have become part of artificial intelligence itself. Indeed, Crawford claims that ‘these mythologies are particularly strong in the field of artificial intelligence, where the belief that human intelligence can be formalized and reproduced by machines has been axiomatic since the mid-twentieth century’. These, then, are two existing myths that AI particularly animates.
These myths not only intersect with the materialities of AI, they also impact on how AI is understood and even defined. There is a framing of AI going on. Crawford argues that ‘each way of defining artificial intelligence is doing work, setting a frame for how it will be understood, measured, valued, and governed’. For Crawford the definitions of AI are themselves to be explored in order to grasp how they are part of strategies and agendas. As Crawford puts it, ‘the task is to remain sensitive to the terrain and to watch the shifting and plastic meanings of the term “artificial intelligence” – like a container into which various things are placed and then removed – because that, too, is part of the story’. The definitions of AI are part of the story; they are not something to be approached as if they are fixed or natural in their form.
One of Crawford’s points here is about the importance of what might be left out of these definitions. There are, it is argued:
‘significant reasons why the field has been focused so much on the technical – algorithmic breakthroughs, incremental product improvements, and greater convenience. The structures of power at the intersection of technology, capital, and governance are well served by this narrow, abstracted analysis’.
In this sense, AI is something upon which things are projected in those definitions. It is ‘a two word phrase onto which is mapped a complex set of expectations, ideologies, desires, and fears’. The definitions incorporate these factors and so might be used to also reveal these things. AI is full of ‘discourses that support its aura of immateriality and inevitability’. The myths of AI support these kinds of auratic properties.
The myths of AI spring up in different places in Atlas of AI. For instance, in one dedicated chapter, data gathering gets caught up with the idea that it serves the development of new forms of intelligence. More and more data gathered leads to ever more refined AI. The idea here is that machine learning requires such data in order to learn from a big enough trough of ‘training data’. Crawford explains that:
‘data has become a driving force in the success of AI and its mythos and how everything that can be readily captured is being acquired. But the deeper implications of this standard approach are rarely addressed, even as it propels further asymmetries of power. The AI industry has fostered a kind of ruthless pragmatism, with minimal context, caution, or consent-driven data practices while promoting the idea that the mass harvesting of data is necessary and justified for creating systems of profitable computational “intelligence.”’
The result of this is that data harvesting can be justified through the need for more data that can then improve the machine learning. Crawford adds that training data are ‘used to assess how they perform over time’. This creates, Crawford identifies, a ‘demand for data’. The ideals of AI then come to drive data harvesting. It is almost as if, as Crawford puts it, AI carries a ‘moral imperative’ to gather data simply because those data serve to improve these systems. The emergence of certain types and myths of AI, Crawford goes on to observe, ‘produced a kind of moral imperative to collect data in order to make systems better, regardless of the negative impacts the data collection might cause at any future point’.