The Hidden Human Labor Behind AI
A discussion with Craig Gent and James Muldoon about the colonial history of tech infrastructure, its human and environmental costs, and how workers around the world are fighting back.
Sarah Jaffe
We’re deep in the throes of another technology bubble, this one driven by so-called artificial intelligence, which very often is neither artificial nor intelligent. But what do AI and other forms of supposedly smart tech mean for working people?
To answer some of these questions, I spoke with the authors of two new books about tech and the workplace. Craig Gent is the author of Cyberboss: The Rise of Algorithmic Management and the New Struggle for Control at Work [full disclosure: I provided a blurb for this book, and co-host a podcast with Gent], which is an in-depth study of algorithmic management from the point of view of the workers who are hired, fired, and directed through their workday by computers. James Muldoon is the co-author, along with Callum Cant and Mark Graham, of Feeding the Machine: The Hidden Human Labor Powering AI, a journey around the world of the AI supply chain, from data annotators in Uganda to data centers in Iceland to venture capitalists in Silicon Valley.
We talk about the colonial histories shaping tech infrastructures, the forms of resistance that workers are employing day by day on the shop floor, and why unions too often miss the mark when it comes to negotiating around workplace technology.
SARAH JAFFE: To start, can each of you tell me a little bit about yourself and your book? How did you come to write it?
CRAIG GENT: I would describe myself as a researcher, writer, and editor. I was involved with Novara Media for 11 years until recently, and I’m the author of this book, Cyberboss. Like many activists of a certain generation, I got interested in the relationship between the organization of work and the political forms available to workers.
It became clear to me that algorithmic management was an area that was quickly overflowing the bounds of the gig economy, reaching into other, we might say more conventional, forms of work.
I focus in the book on logistical types of work, the reason being that in some ways they represent not just an apex of algorithms as a form of organization but also a nexus of the anxieties we have around the degradation of work, which obviously includes technology but also things like contracts and so on.
JAMES MULDOON: I am an associate professor in management at the University of Essex, at Essex Business School, and I decided to write this book after conducting fieldwork at research sites in East Africa where data annotators were doing work outsourced from some of the major tech companies. We went there as part of the Fairwork research project, which is designed to ameliorate conditions in digital work by rating platforms out of 10.
There was a new company we were visiting in East Africa, in Kenya and Uganda, and the conditions at this data annotation center were so appalling, so akin to a digital sweatshop, that when we heard the stories of the workers, we thought, “This is a story that everyone has to hear.” People are not necessarily aware of the supply chains that go into making their technology products, and artificial intelligence in particular, possible. We wanted to show the hidden human labor behind AI and share that story with other people, particularly from the perspective of the workers who were creating these products.
SJ: There’s a tendency to talk about technology as if it’s inevitable, just a product of human evolution, coming whether we like it or not. But both of your books disrupt this narrative in different ways. So how should we, as workers ourselves and as people who care about worker power, talk about so-called AI, algorithmic management, these fancy smart computers that are supposedly smarter than us?
JM: Silicon Valley executives would like us to see AI as akin to magic: a creation machine that makes our dreams come true, that can do the superhuman, that opens new frontiers of creativity. The perspective we adopt in the book, and what we want to show readers, is that it’s more accurate to see AI as an extraction machine.
What we mean by this is that when we see the outputs of AI, we can see them as the result of a very sophisticated process of extracting raw materials, labor, and our collective intelligence, repackaging them, and selling them for a profit.
If you think across the different inputs that AI requires, from the environmental ones like electricity and water, to the copious amounts of manual human labor actually required to create this seemingly automated design, to all the human creative labor in the articles, the books, the paintings, the songs, these are very raw processes of extraction. Things are often taken from the commons, taken for free, without remuneration or consent. They allow companies to exert greater control over workers and over users, and they’re designed to make profits for the shareholders and investors of these companies.
CG: Unfortunately the labor movement is also sometimes guilty of taking technological development as this kind of objective process.
In recent years, quite well-meaning people have almost set out with a logic that we might be able to isolate some section of code that would have the explanatory power to account for inequitable work. What I’m trying to do in the book, and what I also think Feeding the Machine does very well is to look at the chains of labor that make the technology possible.
SJ: James, since you’ve actually been to some of these horrific outsourced factories, what did you learn from talking to these workers about why it’s so important to capital to conceal their existence?
JM: Firstly, there is an element of toxic content that big tech wants to conceal. When it comes to things like social networks or even large language models, there are horrific scenes, videos, images and texts that people have to observe and to moderate. There’s an entire industry of content moderation, and to a lesser extent, data annotation that requires this horrendous side of the internet to be purged from tech products, whether that’s social media networks or chatbots.
The motto of the tech industry is really “out of sight, out of mind.” It serves their purposes to outsource this work physically to other countries, both places where the media are less likely to report on these stories and places where they can get the work done much more cheaply.
SJ: Reminds me of Eileen Boris’s work on domestic work and home care, where she talks about intimacy and dirt as things that make people uncomfortable and they have to be hidden away and also paid very badly and outsourced to people that we associate with those things.
There’s this idea that tech is somehow unbiased, that by being not human it’s therefore better than humans. Specifically when we’re thinking about management decisions, Craig, you write, “Workers are variously unpredictable, fallible, collaborative, creative, lazy, cooperative, uncooperative, and much else besides. In a word, they are free, and by their most basic and human capacity for autonomy, introduce uncertainty into a finely calibrated calculus for making money.”
CG: I remember seeing an advert by a company that was selling business dashboards, saying that data doesn’t lie, that having these graphical representations of data in the workplace would minimize conflict and disagreement, because somehow there’s less to disagree about if you can see a number in front of you.
For the workers I spoke to, it was unclear, to them and to their supervisors, how targets were calculated. A common theme was that workers would be compelled by their managers to just trust the system, as if the workers and even the human managers can get it wrong sometimes, but the system is beyond doubt.
JM: The real question with algorithmically determined decisions is whose interest do they serve? How is the algorithm designed and what structures of power is it upholding? Because I think you can get data that feels objective and it feels like it has this neutrality, but you always need to remember that algorithms are designed by particular people to serve a particular purpose, and they’re deployed in the world to achieve those aims. When you show that something has a history and it is constructed by certain people at a certain point in time, I think it’s easier to unveil how those relations of power impact the kinds of decisions that algorithms are producing.
SJ: Both of your books take us through the long history behind this supposedly futuristic technology: the history of labor control and time and motion studies, but also colonial geographies, the way that warehouses, high-tech businesses, and underwater cables follow old routes of colonial power and take advantage of cheap land after deindustrialization.
All of that also made me think about the underwater cables that were used to dominate colonies from a distance, and how the algorithm works the same way with Uber drivers, Deliveroo riders. I wonder if we could talk a little bit about that history and why it’s so important to note that these fancy technologies come to us from very, very old structures of power and domination?
JM: One of the main theses of Feeding the Machine is that we can see the same dynamics of power that operated in explicitly colonial times shaping how this technology is developed today. You can understand that at different levels, but one of the most explicit is the international division of labor: the kinds of work that were deemed fit for certain bodies were outsourced out of the wealthy North American and European markets and sent to former colonies, places that often had English language skills because of histories of colonization, that often suffered histories of underdevelopment, and that had widespread poverty or corruption where opportunities were scarcer, so this work could be outsourced to populations that simply didn’t have better options available to them.
When we looked even within the structure of a single company, we could see an upper echelon often based in Silicon Valley or San Francisco, then a kind of middle management structure of marketing and IT people elsewhere around America. But the people who were actually doing all of the work, all of the labor that made profit for the company, were often on temporary contracts, often in precarious forms of work under really strict regimes of labor discipline, and that work was outsourced to the Philippines, to India, to Kenya, to countries that often had histories with the United States and with England, where you could see echoes of these colonial histories in the way advanced technology is produced today.
SJ: Craig, you write in the book that Walter Reuther was warned about the cybernetic revolution, this coming of tech, before the United Auto Workers negotiated the 1950 contract known as the Treaty of Detroit. Can you explain a little bit about what he was warned about, and why he essentially did the exact opposite?
CG: This is a tale I first heard told by the media studies scholar Nick Dyer-Witheford, but it’s essentially that in, I think, 1949, the so-called father of cybernetics, Norbert Wiener, wrote to Walter Reuther, who at the time was head of the United Auto Workers, and warned him that he needed to make cybernetic technologies the union’s business or else campaign for their suppression.
Shortly afterwards, Reuther did not do that, but [negotiated] the Treaty of Detroit, which initially was with one of the Big Three car manufacturers and then spread out; the power of the UAW set the bar across the labor movement. The treaty settled certain things around pay and conditions, which were very good, but it explicitly jettisoned the union’s right to intervene in the construction of the labor process itself and recognized the company’s right to manage.
This is a norm that’s been upheld by the labor movement the world over. Although unions recognize that many of the forms of work we’re concerned about in the present have something to do with the deployment of technologies at work (and I should say that, even since writing the book, efforts on that front are improving), I still think that when it actually comes to strategies and tangible demands, they have much less to say.
Once upon a time, many unions would talk about things like workers’ control or the self-management of industry and so on, and that wouldn’t be fanciful. But since that time, operating on the grounds of things like health and safety, for example, has become the norm.
SJ: The tech that both of you are writing about often doesn’t replace workers, but is rather used to intensify work. The data point that always sticks in my mind is that Amazon warehouses with the robots have higher injury rates than those without. A thing I say in my recent book is that capitalism doesn’t care if it replaces us with robots or just makes us behave more like robots. I’d like both of you to talk a bit more about this.
CG: There’s been automation anxiety about robots coming to take our jobs for a number of years; it’s actually a very old discourse that comes around time and again, since at least the 1970s, or even earlier. I maintain that it’s cheaper and easier to exploit human workers than it is to build and maintain costly robots. That’s not to say that experiments with robotics aren’t happening, but these technologies can instead be deployed on humans, who are much cheaper to exploit.
Part of my interest is also in how tech is used to break down things like sociality in the workplace and to reconfigure workers’ sense of time and space within workspaces in a way that takes away some of the basic prerequisites for political organization. It’s not just what the technologies are doing to work, although I think that a shift where you are absolutely squeezed for as much labor as possible within the time you’re there is no good thing for anyone, on purely old-fashioned humanistic principles. More than that, the technologies are deployed in such a way that it becomes harder for those same workers ever to collectively advocate for themselves.
SJ: It’s common these days, particularly after the Shoshana Zuboff book, to talk about surveillance capitalism, but both your books really bring home the point that capitalism has always been about surveillance, in the workplace and in the broader world.
CG: One helpful thing that book does is make a distinction between automating and what she calls informating, the production of data about the work as it’s being performed. But that informating is happening within a fundamentally Taylorist paradigm.
JM: I always found that the issue with the surveillance capitalism paradigm was not so much the surveillance but the capitalism. Zuboff’s thesis is that there is a healthier, more consumer-friendly kind of capitalism to which we can return.
She holds a company like Apple up as a possible exemplar of a tech company that could do it right, by creating really nifty consumer products that are very responsive to consumer feedback. The problem that I see with that framework is that there’s no understanding of the fact that this is just the most recent mutation of tendencies that are inherent in capitalism itself towards extraction and exploiting workers and the environment.
SJ: If you’re thinking about your nifty Apple gadget as a user rather than as the worker in the Foxconn factory that has suicide nets outside because the work is so miserable, you’re missing something fundamental about capitalism.
One of the things that I think both your books make clear is the relentless materiality of the “cloud,” of all this supposedly weightless technology. There’s a recent piece in The Times documenting how much water AI is using. There are real vulnerabilities in the supply chain. Hurricane Helene wiped out a lot of western North Carolina, including the town of Spruce Pine, the source of the high-purity quartz used to make semiconductor chips. Describing the thing as the cloud masks all the ways it is built on a very, very physical system that requires a lot of energy and a lot of water, as well as a lot of human labor.
JM: Around 2018 and 2019, all of the tech companies began to talk really big about what they would do on the climate, and a lot of them pledged in 2019 to be carbon neutral or carbon negative. But the reality over the past five years has been the exact opposite: Google’s emissions have increased by 50% over that period, and Microsoft’s by 30%.
All of this is a direct result of AI, because it is so intensive in electricity and water. The data centers required to take on AI workloads need to increase the amount of electricity they use by a considerable margin, and people are now estimating that the AI industry could consume as much electricity as a country the size of the Netherlands within a couple of years.
Take somewhere like Ireland: people are saying that the data centers there are going to require about a quarter of the country’s total electricity. It is an enormous burden on an already destabilized and exhausted planet, and you get these ridiculous ideas from tech executives like Sam Altman, that they’re going to build a $7 trillion nuclear fusion reactor, pie-in-the-sky ideas that really are just designed to distract from the energy consumption being created by AI.
A lot of the proponents of AI will talk about the energy savings that will come through greater efficiencies at the level of the firm, people being able to run their heating and cooling appliances more efficiently, and so on. But the vast majority of those savings will be completely offset by just how energy intensive AI is. The exact numbers are still being debated, and you do see overestimates and underestimates, but we know it’s a seismic increase over the workloads people were running just on the internet, Google search, or social media storage.
CG: I’m looking at this almost at a micro level. It involves a number of things. Wearable technology is a big one, in particular handheld scan guns, which direct the worker in their work but also track and monitor them and ultimately feed the calculations that are going to determine the future of their employment.
Within these distribution centers and warehouses, the calculations organize workers spatially across the work. Ideally, for example, the algorithm is distributing workers across the workplace such that they’re not running into each other down the fairly narrow aisles of an Amazon pick tower, because that would hold up production. But it also means that working in these workplaces is often quite a lonely experience, yet again minimizing the possibilities for building any kind of sociality in the workplace, which would be a prerequisite for organizing.
Many pickers barely see other people, because you’re operating inside these very large stacks, items piled high on shelves, and you can look in any direction and not see anyone for meters and meters.
SJ: That brings me nicely to where we’re going to spend a few minutes before we wrap up, which is resistance. I want to start with the forms of low-grade or medium-grade resistance that happen day to day within the apps. Craig, you write about some fun ways that workers are figuring out how to get around the algorithmic boss.
CG: In the book, I detail one more organized action and then a number of ways that people are trying to circumvent, or in their own way negotiate with, the algorithm. The more organized one is a slowdown. The workers are agency workers; they’re paid 70% of what the in-house workers are paid, and they’re not represented by the union.
They get together and decide that they’re going to work at 70%. And the way they know what their speed is going to be is that, day in, day out, they do their work and then wake up the following morning to a text message telling them whether their shift for the day is confirmed or canceled based on the previous day’s productivity score. So they get quite a good idea of what it feels like, within their bodies, to meet the target. They go to work and aim for 70%, and sure enough they find this borne out in the data.
I explore a number of tactics people have found, particularly to get breaks within work, because the work is quite unrelenting and breaks become something to aim toward if you can. Whether it’s fiddling with devices so they log you out without affecting your productivity score, or passing around codes that take you out of the productivity calculation so you can go to the bathroom for a breather, or logging into supervisors’ computers behind their backs to check how you’re doing.
I think it’s an important political point, when speaking about these technologies, to demonstrate that workers’ agency is not simply whatever is left to them by an inevitable technological development, but is actually a factor within the enactment of technological power itself.
JM: In the case of outsourcing centers for digital labor, the fact that most of these workers are on short-term contracts, and that the work being distributed to them can essentially go anywhere in the world, gives them very little bargaining power in the traditional struggle between capital and labor. Tech companies are very willing to move their centers elsewhere, or to send their contracts to another country, if they think there is some disruption in the supply chain or any sign of resistance. And this puts pressure on middle management to discipline workers in order to keep securing contracts for the company, often in the name of worker wellbeing.
SJ: In your book, James, you talk about the two forms of resistance available to the workers as blocking the flow and sounding the alarm. Can you, for our readers who haven’t read your book yet, explain briefly what each of those are and maybe give us examples?
JM: Blocking the flow was most used when workers found themselves at an obligatory passage point in the supply chains of AI. This could be workers who were required to monitor a particular product or a particular workplace and were needed to perform actions within a certain window of five or 10 seconds. One example might be workers manually verifying the IDs of people trying to log onto apps. Sometimes it looks like this process is automated, but actually, when the app has problems recognizing IDs or faces, you need a worker there to do it. Workers like this could block the flow. They could basically go on strike, engaging in the more traditional tactics of labor unions to create points of pressure where tech companies have to listen to worker demands and potentially improve working conditions throughout the supply chain.
The second tactic, sounding the alarm, was more apparent in ethical struggles, ones not necessarily directly related to day-to-day conditions in the workplace but more about placing demands on tech companies for a set of minimum standards in the contracts they enter and the business practices they engage in. An example might be the No Tech for Apartheid movement, which seeks to stop tech companies from being used by states like Israel to conduct an unjust war, and to draw attention to the forms of complicity that tech workers and tech companies engage in when Israel is paying billions of dollars to use Amazon’s data centers and other tech products to prosecute its war.
SJ: And since we’re speaking on October 7th, we’re thinking about that and how unsuccessful we’ve been.
I want to talk about the Hollywood writers strike and also the port workers striking over automation. We have seen, at least in recent years, unions taking up various issues around tech. What was significant about the Hollywood writers strike and its wins around large language models, and what can we learn from it for the future?
CG: It was notable that the WGA’s demand around AI was, in a way, the simplest: essentially to demand its suppression.
In some ways it was forced into this by the context of that industry: it’s not that large language models are a health and safety risk, for example, and greater transparency or explainability is not going to make them more equitable. Similarly, one of the phrases we often hear is that unions want workers to be able to share the benefits of these technologies; in the context of AI and screenwriting, this is quickly undermined by the reality of existing pay disparities between studios and writers. That put the WGA a few steps ahead of where a lot of other unions are. And moreover, they won on that. Obviously it’s a time-limited agreement, and perhaps it’s a stay of execution, but it does mean that the bar is now set where we would want it to be, and that in the future any changes are happening from a position favorable to our side rather than theirs.
Sarah Jaffe is a writer and reporter living in New Orleans and on the road. She is the author of Work Won’t Love You Back: How Devotion To Our Jobs Keeps Us Exploited, Exhausted, and Alone and Necessary Trouble: Americans in Revolt; her latest book is From the Ashes: Grief and Revolution in a World on Fire, all from Bold Type Books. Her journalism covers the politics of power, from the workplace to the streets, and her writing has been published in The Nation, The Washington Post, The Guardian, The New Republic, the New York Review of Books, and many other outlets. She is a columnist at The Progressive and In These Times. She also co-hosts the Belabored podcast, with Michelle Chen, covering today’s labor movement, and Heart Reacts, with Craig Gent, an advice podcast for the collapse of late capitalism. Sarah has been a waitress, a bicycle mechanic, and a social media consultant, and has cleaned up trash, scooped ice cream, and explained Soviet communism to middle schoolers. Journalism pays better than some of these. You can follow her on Twitter @sarahljaffe.