What's Really Artificial About AI is AI Itself

“The phrase artificial intelligence is a marketing term that is used to sprinkle some magic fairy dust that brings the venture capital dollars.”

Sarah Jaffe

WASHINGTON, DC - JANUARY 21: OpenAI CEO Sam Altman (R), accompanied by U.S. President Donald Trump, speaks during a news conference in the Roosevelt Room of the White House on January 21, 2025. (Photo by Andrew Harnik / Getty Images)

We are in the middle of a massive hype-bubble around so-called artificial intelligence. A series of the world’s richest men (and they are nearly all men) have told us that “AI” is soon going to outpace human intelligence, that it can replace workers at any number of jobs in any number of industries, and that its “progress” is inevitable. But what do these profiteers actually mean by AI?

Emily M. Bender and Alex Hanna explain what this tech is — and, most importantly, what it is not — in their new, mythbusting book The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. The two scholars note that calling things “AI” is a marketing tool. In fact, they write, “The set of technologies that get sold as AI is diverse, in both application and construction — in fact, we wouldn’t be surprised if some of the tech being sold this way is actually just a fancy wrapper around some spreadsheets.”

Rather than thinking machines, they point out, large language models like ChatGPT are “extensive information about what sets of words are similar and what words are likely to appear in what contexts.” Or, in other words, nothing more than “souped-up autocomplete,” relying on “massive data theft and profligate energy use.”

All this manufactured hype around tech serves the purposes of capital accumulation: scaring workers into compliance, promising to save businesses lots of money (that is, if you don’t look at the prodigious costs of using all that computing power), and directing vast swaths of venture capital into any company that can attach “AI” to its name.

I spoke to Bender and Hanna about why, rather than using the term “artificial intelligence,” we should be describing Large Language Models (LLMs) and other such programs as nothing more than a new form of automation, Taylorism for the twenty-first century, where the $100 billion problem that the system purports to solve is, well, humans.

(A note: because we had limited time, we did not get into the climate impacts of all this computing, but they are legion and if you’re not already up on them, I recommend the work of Paris Marx and Kate Aronoff, among others.)

Emily M. Bender is a professor of Linguistics and an affiliate professor in the Information School and the Paul G. Allen School of Computer Science and Engineering at the University of Washington. She completed her PhD in Linguistics at Stanford University in 2001.

Alex Hanna is Director of Research at the Distributed AI Research Institute and a former member of Google’s Ethical AI team. She completed her PhD in Sociology at the University of Wisconsin-Madison in 2016.

This interview has been edited for length and clarity. 

SJ: Your book draws on your expertise as scholars in two different fields. How did each of you get into studying AI, and then why choose to collaborate on this particular book?

EB: I continually resist being called an AI researcher, and it’s sort of a losing battle because I keep getting characterized that way. I’m a computational linguist, which means I use computers to study linguistics. Linguistics is the study of how language works and how we work with language. I was minding my own business doing grammar engineering, and all of this hype started coming in about language models, so I started working on pushing back against that in my field. Also, starting in late 2016, I got interested in the societal impacts of language technology. I really think it’s important that computational linguistics as a field is not subservient to the research project of artificial intelligence. 

AH: I’m a sociologist and I’ve been studying the intersection of technology and society for the past 15, 16 years. Originally I’d been working in this area of computational social science, looking at machine learning and social movements. I started to pay a lot more attention to the ways in which technological systems were being used in different areas, like automated decision making, and started looking at the data behind these systems and the ways in which data really screws people over, depending on how people are classified and how categorization works. 

Emily and I met online talking about this issue of data, these issues around datasets. The weird thing, what confused me as a social scientist, is the way that computer scientists take data as given; they don’t really question it at all. That data also has a lot of labor behind it. We wrote a pair of papers around that and set up a group chat with the co-authors of those papers. One [of the papers] is called “AI and the Everything in the Whole Wide World Benchmark,” and the other was “Data and its (dis)contents.” [Their podcast, Mystery AI Hype Theater 3000, began as a Twitch-streamed takedown of a Medium post by a tech executive.]

We were approached by our agent, Ian Bonaparte, asking if we had considered writing a book.

EB: And the punchline to all of this is after two academic papers, a couple of op-eds that Alex didn’t mention, 50+ episodes of a podcast and a whole book, we finally met in person this March.

The interdisciplinary collaboration, I think, has been really important to the book and especially important to making it accessible. We have enough distinctions in our expertise that we were able to be very effective editors for each other in the form of, “I don’t understand this, what’s going on?” And then also making the judgment call of what was too far in the weeds for the overall narrative of the book.

SJ: My running joke, which is showing my age and the era of Saturday Night Live I grew up on, is that artificial intelligence is neither artificial nor intelligent. I feel very validated in my refusal to use the term by your book. For our readers, what are the things that are currently being referred to as AI and why shouldn’t we call them that?

EB: The phrase artificial intelligence is a marketing term that is used to sprinkle some magic fairy dust that brings the venture capital and in theory brings the consumer and [business to business] dollars, although we’re not seeing too much of that, and it gets applied across a broad range of things. The ones that are getting the most attention right now are the media synthesis machines, and that includes synthetic text-extruding machines like ChatGPT and Gemini and Claude. 

SJ: Don’t forget Grok and its newfound Holocaust denial.

EB: The whole thing about Grok was that it was going to be “truth seeking,” which makes no sense. Because the heart of all these things is a large language model, which is already a bad name, because “language model” suggests that it’s a model of language, and it’s not; language is much more than the form of words. These large language models are actually just models of the distributions of word forms in large, stolen, undocumented and undisclosed training data.

It’s not the only type of so-called AI system that we talk about in the book. We do some stuff on image generation, but we also talk about the UnitedHealth example, where they had this automated decision-making system that was effectively automating the denial of healthcare.

It’s important not to call it artificial intelligence because that suggests that these systems have functionalities that they don’t. Also, lumping everything together [as AI] leads people to believe that if we’ve got something that can assist scientists in modeling protein folding and can also create images and produce seemingly useful text, then that lends credibility across the different software. But nobody would say, “My spreadsheet program does arithmetic effectively, and so therefore my spellchecker is also really good.” It’s separate pieces of software.

AH: I would add, it’s helpful to distinguish what unique harms are being done. There are automated decision-making systems that are just classification systems: who is allowed to [leave jail] without bail being set vs. who has bail set on them, who has their kids snatched from them vs. who doesn’t. Those could be logistic regressions; the machinery behind them is pretty basic, versus large language models, which are designed to predict words from other words.

NEW YORK, UNITED STATES - Members of Writers Guild of America rally on picket line at Amazon offices highlighting reproductive rights and lack thereof at Amazon Studios. (Photo by Lev Radin / Pacific Press / LightRocket via Getty Images)

SJ: When ChatGPT first dropped, I referred to it as like Google, but in full sentences. But you point out that it isn’t even a search engine, it’s just predicting words.

EB: Exactly. And it keeps getting marketed as a search engine. That’s part of this fantasy: people sometimes wish we could live in a world where, anytime we had a question, we had a system that could give us the answer to it. Which is not how the world works. If you’re doing information access, it’s about figuring out how to refine your question, figuring out what sources speak to your question, figuring out how those sources are positioned with respect to each other and the broader information landscape, and so on.

People want to live in this world, which basically hinges on what I think is a plot device from science fiction novels: it’s convenient to have an onboard computer that can give the characters information about the planet they’re approaching as they approach it.

SJ: That just makes me think of the computer in Alien, which deems the crew expendable.

You write pretty definitively in the book, too, about how AI is a bubble. I’m wondering how your observations of previous tech bubbles and tech doomerism have shaped your thoughts on the one we’re currently in.

AH: The hype bubbles that we pay attention to are the ones that are more recent. I know less about the dotcom bubble, but know more about the things that crypto or NFTs or blockchain were supposed to do to the world. And we still have crypto.

You can formulate these bubbles in terms of, X is meant to solve Y. So crypto is meant to solve currency, and in particular fiat currency and the instability of central banking, which is very silly because crypto is the most volatile thing you could ever think of. 

SJ: Crypto is not money any more than ChatGPT is a search engine.

AH: And then blockchain is supposed to solve trust. I don’t know what NFTs are supposed to solve, art, I guess. AI is supposed to solve labor, effectively. It’s promising to be a way of solving the high cost of labor. 

And so in terms of the cycle, it’s hype hype hype hype hype. This bubble is getting very, very big, and it probably has some of the largest valuations that we’ve seen. The most recent funding round that OpenAI got, led by SoftBank, was $40 billion.

But to me it’s sort of the make-or-break one, because this stuff is not turning a profit. David Cahn at Sequoia Capital had this piece from last year, writing that AI has a $600 billion question, where it needs to basically produce that much in revenue (in revenue, not in profit) to even fill the gaping hole that’s been left by all the infrastructure investments. And David Cahn, wildly enough, thinks that that’s possible. He was like, well, the railroads, they had to build all the rails first, but once they got on the rails, it was on.

What a terrible metaphor! It’s not like you build this bigger and bigger and then you put the trains on it. The thing is out there and you’re already selling it, and it’s only getting you about $10 to $20 billion in revenue per year at the most optimistic. You’re running at a huge loss. How is this not a bubble?

EB: A big difference now is the way the sales pitch for the doomerism relates to the technology. What is being promoted with the doomerism? Maybe preppers were sold stuff for the Y2K apocalypse, but it wasn’t sort of locked into the sales of technology in the same way that’s happening with AI.

SJ: Management consultants were selling something.

EB: For the AI doomers versus boosters, the AI boosters say AI is a thing, it’s inevitable, it’s going to be here really soon, it’s going to be super powerful and it’s going to solve all of our problems. And the doomers say, AI is a thing, it’s inevitable, it’s going to be here really soon, it’s going to be super powerful and it’s going to kill us all. 

This is the same position. Especially when it makes people think, “oh, well, those products must be good because they’re so powerful.” Or with policymakers, I think it’s part of pulling the wool over their eyes: “this is too complicated and too fast-moving and you can’t possibly understand.”

SJ: The “too complicated” stuff just reminds me of the financial crisis, where the derivatives were the things that were too complicated for us to understand.

AH: It’s kind of funny because from what I understand of the housing crisis, there was still predictive risk modeling in the mix. I think that would be called AI now, if you were using the same kind of thing. You could say, retrospectively, AI caused the 2008 financial crisis. 

SJ: Can you talk about general intelligence, and the way that you connect the notion to Silicon Valley’s, let’s say, problematic history with eugenics? 

AH: There’s this idea that AI is a thing, and then it goes further and says that AGI is a thing, artificial general intelligence, and that certainly is not a thing. That’s a boogeyman that gets waved around. It’s the thing that’s in the OpenAI charter, and they define it very vaguely as something that’s going to have many different capabilities that are economically valuable.

They don’t really define what those capabilities are. What is economically valuable? You could pick apart all of those in a particular way. There was also a leak between Microsoft and OpenAI, where they basically said it’s going to be the tool that generates $100 billion in profit. Okay, well, you just gave away the game.

There isn’t even really a notion of intelligence that is well-defined within AI research. This very indicative thing happened with this paper called “Sparks of AGI.” The lead author was Sébastien Bubeck, who Emily had a debate with in March. There originally had been a definition of intelligence that was cited in the paper that referenced this op-ed from Linda Gottfredson. It had been called something like “Mainstream Science on Intelligence: An Editorial With 52 Signatories, History, and Bibliography.”

It was a position that was wholly white supremacist and fringe, that espoused many different white supremacist beliefs, including a rank order of innate intelligences. It suggested that any variation in intelligence that, say, black people had was due to admixtures of white blood.

This eugenicist tendency is well established in Silicon Valley. Malcolm Harris talks about this at length in Palo Alto. It goes back to the origins of the IQ test in the work of Alfred Binet, the French psychologist who developed the original IQ test not as a singular score, but as a way to aid students who were falling behind in class. Then people in the UK took it and said, “we’re going to use this as a way to rank order people,” especially people who are disabled, and use that as a means for eugenicist policies, including sterilization and institutionalization. Americans, of course, took that and ran with it to the racial dimension, and they’re like, “we’re going to use this as a means of ranking people by race.”

Intelligence is very multifaceted. There’s no singular score of intelligence. It’s something that’s still debated by psychologists and cognitive scientists, but this obsession with intelligence as a singular numeric metric is very much the Silicon Valley obsession that has these eugenicist origins.

Microsoft Vice-Chair and President Brad Smith speaks during a US Senate Commerce Committee hearing on artificial intelligence (AI) on Capitol Hill in Washington, DC, on May 8, 2025. (Photo by BRENDAN SMIALOWSKI / AFP via Getty Images)

SJ: Now I do want to switch gears and talk about automation, and how you put AI in the tradition of machines that make people work faster and harder for less money. Why is it important to understand this stuff in the tradition of automation and of worker resistance to automation?

EB: When you talk about things as artificial intelligence, it hides the fact that this is automation, which obscures what’s happening to the workers and also takes you away from what you would otherwise be asking: “is this a sensible thing to automate in the first place?” We have some recommendations in the book that say, think about it in terms of automation. What’s the input? What’s the output? How was it evaluated? How was it built? Whose labor is being exploited? How’s it being used? Who’s benefiting and who’s being harmed? And a really important class of who’s being harmed is workers.

AH: What we did try to do with the book is historicize many of these struggles. We tied it to the original Luddites, folks who were not anti-technology but were against technology making their jobs worse by forcing work to be sped up, to be moved from the private sphere into the public sphere, into an arena in which people were being forced to sell their labor power on an open market, on a factory line, rather than in the traditional crafts. We traced that through to Fordism and the coal miners’ strike of the 1930s, and how automation was a major factor in that.

There had been a notion that we’re going to have fully automated luxury communism because automation is going to solve everything. What’s happening here is that automation is a means of breaking the backs of labor. A Ford executive coined the term [automation]. Prior to that, it was called automatic control or automatization, which ties even the change in language to the change in the lording over particular workers.

The big promise of the things that are marketed as AI is taking humans out of the equation. 

I don’t know if you also saw this headline where the CEO of Duolingo said AI is a better teacher than humans, but schools will still exist because you need childcare.

Across the board it’s like, what is the problem to be solved? And someone put it very succinctly on Bluesky: the trillion dollar problem that AI is supposed to solve is wages. How can you do more with the fewest people possible? The hype around it is that technology doesn’t get tired, it doesn’t have to be given breaks, it doesn’t have a union, so many different things.

SJ: There’s also what Astra Taylor and Joanne McNeil called “fauxtomation” happening. Humans are actually still required to make these things work. Why is it so useful to companies to pretend that there are no humans, and particularly to hide where those humans are located?

EB: It’s a way to reduce costs and therefore boost the share price. They can fire the people who had relatively stable jobs, bring in the fauxtomation, and then it’s not going to work. You end up with either those same people hired back in a gigified role without benefits, or you get offshoring, outsourcing. One of the funniest examples of this was the supposedly self-driving cars that were being monitored by a workforce in Mexico, and also the Amazon Go store that was supposedly using AI to watch what you were buying. That was being monitored by people in India.

AH: It’s a continuation of the 90s and 2000s trend of business-process outsourcing. We have these well-worn outsourcing patterns to certain workers, typically in the majority world. So Venezuela, Colombia, Kenya, India, places where there are people who are proficient enough in English, or who have some understanding of what things in the Western world look like if they have to do video annotation.

One project that we have at DAIR is called the Data Workers’ Inquiry, which has been facilitated by Adio-Adet Dinika, Milagros Miceli and Krystal Kauffman. Different workers are telling their own stories about what this automation — or what this fauxtomation — looks like.

One that I think they’re releasing pretty soon is about somebody who was working on a chatbot for romantic something or other. This thing doesn’t work as a chatbot, so he is literally interacting with people, pretending to be a romantic partner, which is really a mindfuck.

You have people doing all kinds of tasks, from content moderation to driving the cars to driving these little food delivery things that are common on college campuses. The promise that these companies make is, we just need enough data and then it’s going to be able to do this on its own. But that’s never actually going to happen. If the past is any indication, they’ll get 70% to 80% of the way there, but they’ll still need that last 20%, in the same way self-driving cars self-drive about 95% of the time. You still need people that are at the ready, always ready to take over. This is what Mary Gray calls the last mile of automation.

There’s always some kind of labor embedded in it. The training itself is dead labor that gets reused the same way in which Marx talks about dead labor embedded in [capital]. But the labor, it’s never fully automated. You still have hundreds of thousands of people around the world that jump in when the automation cannot handle something or makes a mistake, which is often.

EB: My first research position was one of these jobs. The goal was to collect samples of how people would talk to a computer, to use in the context of building automatic transcription. People were expected to speak differently to a computer than if they thought a person was listening. I was in the back room listening; the context was restaurant recommendations. I had a database that I was driving, and I had to decide whether to be cooperative or uncooperative depending on whether they had asked clearly enough. At the end of the experiment, I always came out and introduced myself to let people know (this is like 1992 or 1993) that there was not in fact a computer that was capable of doing this.

WASHINGTON, DC - APRIL 30: Jensen Huang (R), President and CEO of Nvidia, speaks on AI and the return of American manufacturing on a panel at the U.S. Capitol on April 30, 2025. (Photo by Kevin Dietsch / Getty Images)

SJ: The gig economy goes hand in hand with automation. First you destroy labor standards in the industry and get people used to thinking of tech as the solution, when it’s really just deregulation, like Uber, or BetterHelp for therapy. And then you supposedly automate away the humans entirely. 

This pattern also showed up in charter schools or Teach for America, where the thing isn’t tech so much as it is this idea of intelligence, that what you really need is smart people who can do a year in a charter school in New Orleans after graduating from Yale or whatever. That still has set up these tech takeovers, and connects back to general intelligence.

EB: There are two themes there. One is, as you’re saying, the gigification and commodification: if the work can be broken down into little replicable units that people can slot into, then you don’t need to see workers as people with expertise, with careers, with relationships. They are just providers of content or annotations or labeling or, in big scare quotes, “intelligence.”

The second theme comes back to this notion of intelligence. Because we’re talking about synthetic text extruding machines, and because we use language in so many different areas of activity, it seems like we have this one-to-one replacement for anything a person might do that involves producing language.

There’s this process of reducing the work of someone whose observable work products involve linguistic artifacts to just those artifacts. Coming back to your idea about Teach for America: if you take one of these people who brings a lot of “intelligence,” and you put them somewhere for a year, well, now can you put them somewhere for an hour?

SJ: I did a recent interview with Craig Gent and James Muldoon, and Craig wrote about algorithmic management as another form of de-skilling. We discussed the way that the algorithm is spoken about as being unbiased, less biased than human managers. This has only been amplified with the advent of LLMs. You have a great line in the book about “mathy maths that are capable of producing art, science and journalism.” And you note the way that our own biases and devaluing of certain things have brought us to this moment.

EB: There’s a kind of bias called automation bias, which names our tendency to give credence to an automated system because we imagine it as being objective. But there’s also this desire for a view from nowhere. People will talk about, “well, [LLMs have] been trained on the whole internet.” But the internet is not a thing that you can go and download. It’s distributed.

This is something that we get into in the stochastic parrots paper, the idea that if you have a very large dataset, it must therefore be diverse enough to represent all the points of view. But in fact, when you dig into these datasets, they have a particular history, and a sort of laissez-faire, “we’re not going to try to curate it” [attitude] is still a set of decisions and still leads to representing particular points of view more than others. It’s always going to be the hegemonic points of view that are overrepresented.

AH: There’s no such thing as an unbiased dataset or machine. One of the very basic insights from science and technology studies is that, to quote Langdon Winner, artifacts have politics.

The argument goes, well, humans are biased, and so we can get as least-biased as we can with this machine. People think it is somehow going to be value-free, but it is very value-full.

SJ: You wrote about automation in public services and therapy, and the way that the people who are getting rich off of these things would never use an AI therapist themselves or send their kids to a school that was taught by ChatGPT. Alex, you and I met because of Scott Walker’s austerity and attacks on public sector unions. You note that inequality is just going to expand the more we use this stuff.

EB: It’s got this particularly pious, “we are helping everybody” wrapping on it.

SJ: We’re “democratizing” it.

EB: That’s not what democracy means. They’ll be talking about democratizing art or democratizing AI because now everybody has access supposedly to this thing called AI. 

People will identify problems: there are not enough resources going into our education system; there are not enough trained mental health professionals; and then say, this can be a band-aid over that problem. Just because you’ve identified a genuine problem doesn’t mean a large language model is a good solution for it.

AH: We quote Greg Corrado [in the book], who is the principal scientist on Google AI’s healthcare team, talking about this medical LLM, and he is quoted saying that he would not want this to be “part of his family’s healthcare journey.”

It’s very much “we are bringing this thing to the poors.” You see this in all quarters. You see it in the schools: something else that Adrienne talks about is how private schools have very little technology in the classroom, much more face-to-face analog instruction. Or if technology is used, it’s used in a very different way. We talk about the work of Matt Rafalow: there might be smart boards in classrooms in a poorer school or a richer school, but in the poorer school they become an axis of surveillance, in which teachers are then trying to track everything that the students are doing. In the richer school, it doesn’t become this thing where all the attention is driven into this technology.

It becomes this axis of inequality because it’s like, well, it’s better than nothing. But when you say it’s better than nothing, what kinds of things is that foreclosing in terms of teacher hiring? 

EB: And why is the other option nothing? What are the social systems that put us in this corner? 

Members of the Alphabet Workers Union (CWA) hold a rally outside the Google office in response to recent layoffs, in New York on February 2, 2023. (Photo by ED JONES / AFP via Getty Images)

SJ: I did want to discuss the New York Magazine “Everyone is cheating their way through college” piece. In the book, you point out that this fear is overblown. But I wanted to ask you about both sides, in a way: what should we be afraid of if people are relying on these things that make stuff up rather than reading and writing themselves? And also, how much is this playing into, once again, the doom hype?

EB: I haven’t done the studies to know to what extent students are using this. And it’s true that the particular thing we cite is from 2022 or 2023, so probably there is more usage. 

One [response] is “we have to make sure no one cheats.” And the other one is, “but it’s okay for us to use it to design our curriculum,” which makes no sense at all.

I don’t want to be policing what my students are doing. My goal is not to make sure that there’s no ChatGPT use. Anything I ask students to do, I do my best so that they understand why I’m asking them to do it. When a student turns to ChatGPT instead of doing homework, that’s a missed learning opportunity. The students are basically not getting the benefit of doing that homework. I think the job of university administrators is to look at the allocation of resources. And I like the idea of ChatGPT as a contrast dye test. Anywhere that ChatGPT is getting used is a place where there weren’t enough resources, so we ended up with a better-than-nothing scenario.

AH: It’s not that the students are doing this because they are trying to put one over on the teacher and they don’t want to learn. I mean, some students don’t want to learn. But there are also cases where many students are faced with difficult demands on their time, like care responsibilities. What can we as instructors do to make it a conducive learning environment that is creative and attentive?

The universities have, on the one hand, taken this AI-is-cheating view, and on the other hand, bought into these massive contracts with many of these companies. At Cal State, for instance, there’s a $16.9 million contract that Cal State made with OpenAI for a site license for every student in that system. Why don’t you hire more faculty to teach these classes, ensure that we can think critically about this, and redesign curriculums to not force people to use this? Then we won’t be in this mess, and you won’t waste $16 million.

SJ: Last question, which is of course, the big one. We are not getting the tech that we want. We’re getting the tech that these people want to use to automate us out of existence. But I still want fully automated luxury communism. What would it take to get the tech that would improve our lives? 

AH: In a word, I think the communism comes first. This is because I’m not a technological determinist. But that said, under capitalism there’s a few different models that have been helpful to look to in thinking about ways to improve life under the current system. 

One of the examples in the book is Te Hiku Media, an organization that’s built machine translation and automatic speech recognition tools for the Māori language in Aotearoa, also called New Zealand. They have data that belongs to the people, using indigenous sovereignty principles. They own the servers, and they have differential access to the data. Only certain types of data can be used to train the systems. That’s a good example of a system that is working for a community.

If you want to talk about interstitial political experimentation, there are opportunities to think about what it would look like to have locally controlled media and well-scoped technological systems. That’s definitely something that gives us hope.

Fully automated luxury communism is the dream, but I don’t think we’re going to get to it by a tool that is designed to break the back of labor. LLMs as they’re currently constructed are one of these hegemonic, authoritarian technologies.

EB: I think that under current conditions, there is value in regulation. The tech companies would like to say that regulation stifles innovation, and that sentence only makes sense if you read “innovation” as “the ability of a few people to amass all the wealth.” What regulation does is channel innovation toward whatever it is that the people collectively creating that regulation, if it is well constructed, have been trying to channel it towards.

Sarah Jaffe is a writer and reporter living in New Orleans and on the road. She is the author of Work Won’t Love You Back: How Devotion To Our Jobs Keeps Us Exploited, Exhausted, and Alone; Necessary Trouble: Americans in Revolt, and her latest book is From the Ashes: Grief and Revolution in a World on Fire, all from Bold Type Books. Her journalism covers the politics of power, from the workplace to the streets, and her writing has been published in The Nation, The Washington Post, The Guardian, The New Republic, the New York Review of Books, and many other outlets. She is a columnist at The Progressive and In These Times. She also co-hosts the Belabored podcast, with Michelle Chen, covering today’s labor movement, and Heart Reacts, with Craig Gent, an advice podcast for the collapse of late capitalism. Sarah has been a waitress, a bicycle mechanic, and a social media consultant, cleaned up trash and scooped ice cream and explained Soviet communism to middle schoolers. Journalism pays better than some of these. You can follow her on Twitter @sarahljaffe.
