After the algorithm?
130 years ago Emile Durkheim wrote of how the nature of work had the capacity to cause a ‘greater fatigue of the nervous system’. He was developing the concept of anomie in The Division of Labour in Society, which was originally published in 1893. When I arrived at York in 2008 I was asked to do four introductory lectures on Durkheim. I delivered them for a couple of years and then the slides were left to gather dust on a memory stick. When I noticed the book’s anniversary I went back to the relic of my old PowerPoint slides to see what I’d said about it. I found a one-hour lecture that I can hardly remember giving.
Looking at the slides now, there are some passages I pulled out back then that still seem to echo. It made me want to go back to the book again and see what else was in there. As well as the section on fatigue and the abnormal and pathological aspects of the way labour is organised, there were a couple of other things that stood out in Durkheim's analysis.
One thing was the question he was asking. It seems simple whilst also being deceptively tricky. Durkheim asks ‘What explains the fact that, while becoming more autonomous, the individual becomes more closely dependent on society?’. His point was that labour is specialised in a way that means we become more reliant on everyone doing that specialised labour for the whole thing to function. There is a subtext here about autonomy and reliance, though, that seems to go beyond that. Durkheim's question leaves us to think about what dependence can mean, especially where we seem to be operating outside of eroding limits.
Of course, famously, the changing form of social solidarity is central to the conceptual framing Durkheim develops. The binding features of shared experience fall away with specialisation; as he puts it, ‘the conscience collective became weaker and vaguer as the division of labour developed’. What binds us, Durkheim wonders, when labour is so varied and we have less direct integration and less of a shared frame of reference? He observes, for instance, that:
‘Nowadays, the phenomenon has developed so generally it is obvious to all. We need have no further illusions about the tendencies of modern history; it advances steadily towards powerful machines, towards great concentrations of forces and capital, and consequently to the extreme division of labour. Occupations are infinitely separated and specialized, not only inside the factories, but each product is itself a speciality dependent on others’
It is not just work but also the products of that work that are dependent on the separation and specialisation of heightened divisions of labour. Each thing produced represents and encapsulates that division. It's hard to resist the temptation to simply apply it to today’s circumstances, perhaps because it looks like a neat fit. That would probably be a mistake.
Durkheim notes that the direction is toward ‘powerful machines’. Along with this comes, he contends, ‘great concentrations of forces and capital’. That's the direction. With those things will come more extreme divisions of labour - leading to greater separation and specialisation too. And presumably also to a greater scope for their pathologies, such as that nervous fatigue he mentioned. Whatever we might make of Durkheim's arguments, the ambition to capture an overarching sense of the direction of things remains eye-catching. It’s a book about social change, after all.
On the topic of accounting for social change, in a recent post on his One Great Read Substack (see below) Harry Lambert looked at the top 25 best-selling writers of history. It’s an interesting list that reveals something of the stories that are most prominent. The post breaks things down and adds a little context for each author.
As I try, and struggle, to move on from completing my book on The Tensions of Algorithmic Thinking, I’ve been wondering what is next. Where will automated social ordering go? Perhaps a better way to put it is that I’ve been trying to think where to look in order to find what is next. I’ve been asking myself what comes after the algorithm - or, rather, where to look to find what comes after the algorithm. Algorithms are so embedded that there is no after as such, but there might be additions, developments, imbrications and maybe even some blindspots. As a rough starting point, and it is only a starting point, I’ve been thinking of three possible focal points for thinking after the algorithm. Some of these are already being tackled by researchers and writers in the field - I’m not suggesting these are entirely fresh areas, but that there is scope for more. I thought I'd share a very brief sketch of the three here.
Focal point 1: Looking beyond the algorithm
The most obvious place to start would be to focus upon emergent and developing technologies. This is a literal consideration of what comes after the algorithm in terms of technical specifications and innovation. The question then will be whether the algorithm will be usurped by another technology or perhaps remediated into new forms. It may be that something different, another mode of computational decision-making and thinking, will emerge that is in some way distinct from the algorithm. Such a break might take us beyond the algorithm but, of course, it will be rooted in the history of algorithms even if its technical form is quite different. Some speculation might be needed. As might a look at things like patents and startups, or the activities in labs or workshops. This first focal point is on the horizon.
Focal point 2: Looking inside the algorithm
The second option is to press zoom. It might mean considering the smaller scale and wondering about the bits and components that make up algorithms. Instead of the algorithm itself being the object of the analysis, it might be something smaller, more atomised, something that is a key part of the algorithm.
This is not just about opening the black box; it is about thinking of what makes up any algorithm and considering a particular component’s role in the broader social power of algorithms and automation. It is to think on different scales within the algorithmic structure to find what influences outcomes and so on.
Rather than being the central concept, the algorithm might instead act as an umbrella within which different components are examined for their social relevance. The relations of the component parts within the algorithm's code might also be important, rather than them being treated in isolation.
Focal point 3: Looking around the algorithm (at what resides alongside)
The final option is to open the lens slightly and look for close connections. This would be to look for the other bits of automation and system that surround or complement the algorithm. It would require looking at the relations of algorithms with other parts of the system (and then prioritising those other parts). Initially, the algorithm might be an opening for finding other potential objects of study that are within its orbit.
This focus would ask what works in conjunction with an algorithm, and what the relative influences might be on each other and on outside forces. The algorithm would be placed in context, and then the other parts of that context might take centre stage.
There is no after…
Like I said, this is just a brief sketch and some of these focal points are already being explored by researchers. I've not offered any answers to my question, though I might try in future posts. This might end up being the first in a series of posts on the topic of ‘after the algorithm’. My first thought is where to look rather than what will be there. All of this is not to suggest that the analysis of algorithms should be abandoned. It's more about analytical focus than a sudden social rupture. When thinking of the algorithm, the after is not a clean break from what has gone before. Exploring the possibility of there being an ‘after the algorithm’ is not to imagine a social world that suddenly exists without their presence. Instead it is to think in conceptual and analytic terms about where the analytical 'problem spaces' - as Celia Lury has recently described them - might be (for me, that is; others will have already spotted them). This is partially about where the imagination can be exercised now and partially about where it might come to be exercised in the future. This short sketch has suggested that we could look beyond, inside and around the algorithm for potential angles, issues, spaces and ideas.

I’m not quite in a writing gap at the moment. I know what I’m doing in the immediate term and it involves algorithms. We are halfway through the Nuffield-funded project Code Encounters: Algorithmic Risk Profiling in Housing, led by Alison Wallace. We held an event last week on the topic of ‘Algorithmic Dwelling’ and some little videos will be out soon. There is also a literature review of the field available through the above link.
I’m now wondering if it is possible to think outwards (rather than backwards) from the middle of a genealogy.
There are many other potential angles that I’ve left out here. The human dimensions and experiences, for a start - I’m not sure if they would be developed by the three focal points I've outlined, but it could be worth thinking of experience after the algorithm too. Then there is also the possibility of governance and regulation after the algorithm. And organisations and institutional structures after the algorithm would also need to be factored in.