How to Stop AI From Eating Journalism

Ethical standards can also save a lot of human jobs.

Hamilton Nolan

(Photo by Jakub Porzycki/NurPhoto via Getty Images)

All at once, it seems, the many dire warnings about artificial intelligence are coalescing into reality. Earlier generations of automation came for vast swaths of America’s manufacturing and service sector jobs, but now AI is coming for the “creatives” — the white-collar workers who always imagined themselves insulated from such humdrum threats. No industry is under greater immediate threat than journalism. But there is a way to get ahead of this menace now, before the algorithms eat us all alive.

ChatGPT, the OpenAI program that can spit out coherent (if not poetic) pieces of writing, was publicly released less than three months ago. But already, media companies are rushing to experiment with ways to use the program and similar technology to replace humans on the payroll. BuzzFeed saw its wilted stock price jump after it announced it would use ChatGPT to write quizzes and listicles. Men’s Journal is using AI to generate articles that rewrite old material from its archives. And CNET was quietly using AI to write stories for months before saying in late January that it would “pause” the operation, after a number of the articles were found to contain errors and plagiarism.

It’s safe to say that all of this is only the beginning. Currently available AI can write (crappy) stories, draw illustrations and even replicate your voice to read text. Google is set to roll out 20 separate AI products this year. The algorithms are growing more refined by the day. Everyone working on the editorial side of journalism — writers, artists and radio reporters alike — is now competing with a computer that can produce a simulacrum of our work, for less than it costs a company to pay us. The threat to many thousands of jobs is, potentially, existential.

Is this an urgent labor issue? Certainly. But the way for media workers and their unions to fight back may not be through picket lines. The basic premise here (“new technology decimates entire existing industry at breathtaking speed”) is a familiar one. It happened to matchbook salesmen and telegraph operators and assembly line workers, and there is nothing surprising about the possibility of it happening to journalists — except for our own healthy sense of self-regard. When these stories are passed down as conventional wisdom, they are usually framed as lessons about not falling behind the fast-moving modern world; the specter of the proverbial horse-and-buggy driver is a familiar cautionary tale in American culture. The distinct lack of sympathy for the left-behind workers is baked into these capitalist parables. Those horse-and-buggy drivers should have become auto mechanics! And laid-off journalists should learn to code! Etcetera, etcetera.

We in the journalism industry have some slight advantages over many other fields of employment. We have strong and ubiquitous unions, and we have a widely accepted code of ethics that dictates how far standards can be pushed before something no longer counts as journalism. These are the primary tools we have in our looming fight with AI. Instead of pretending that we can hold back a tidal wave of technological change by arguing that it would be bad for us, we need to focus on the more salient fact that it could be apocalyptic for journalism itself.

It’s important to note here that, for the most part, there are no agreed-upon or well-established rules around AI and the ethics of journalism. The technology simply hasn’t existed long enough for those rules to develop. We had better hurry up with that, or a lot of bad things are guaranteed to happen in the absence of industry standards. Let me suggest one bedrock rule to start with: Journalism is the product of a human mind. If something did not come from a human mind, it is not journalism. Not because AI cannot spit out a convincing replica of the thing, but because journalism — unlike art or entertainment — requires accountability to be legitimate.

News outlets do not just publish stories. They can also, if necessary, explain exactly how a story came about and why. Why is this news? Who were the sources? How did you draw your conclusions? How did you ensure that conflicting points of view were presented fairly? How did you determine that the headline and the lede and the anecdotes and the quotes in the story were the appropriate ones to produce the fairest and most accurate and engaging story possible? Did you leave anything out that might have gone against your thesis? Is the story improperly slanted? These are not just aesthetic questions. They are questions that news outlets must be able to answer in order for us to all agree that their journalism is justified and ethical. It is taken for granted that real journalists can answer these questions, and can make a case for their answers in the event of conflict. And one thing that all of these fundamental questions have in common is that they cannot be coherently answered by appealing to AI.

Yes, AI can spit out a sentence in response to any of these questions. But does this constitute actual transparency? When you tell an AI program to write a story, can you definitively say whether it left anything out? Can you definitively describe the process that it used to reach its conclusions? Can you definitively vouch for the fact that it was fair and accurate, and that its work is not the flawed product of any number of latent biases? No, you cannot. You don’t actually know how the AI did what it did. You don’t know the process it used to produce its work. Nor can you accurately describe or assess that process. It’s very likely that many publications will rush to use AI to churn out low-cost content, have a human editor look it over before it’s published, and use that human glance as justification for publication. But that process is an illusion — the human editor does not and cannot ever know how the AI produced the story. The technology is, effectively, a black box. And that makes it fatally flawed in our particular field.

Human journalists are flawed too. But we are accountable. That’s the difference. Institutions in journalism live on credibility, and that credibility is granted as a direct result of the accountability that accompanies every story. When stories have errors or biases or leave things out or misstate things or bend the truth, they can be credibly challenged, and credible institutions are obligated to demonstrate how and why the story is the way it is, and to acknowledge and fix any deep flaws in their reporting, writing and publishing processes on an ongoing basis. If they don’t do that, they lose their credibility. When they lose that, they lose everything. This process of accountability is the foundation of journalism. Without it, you may be doing something, but you ain’t doing journalism.

You don’t have to convince me that the media is often lazy, stupid, sensationalistic, or full of clueless Ivy League hacks making stupendously ignorant pronouncements about the world. That is why there has arisen, over the past century, a body of journalism ethics that broadly aims to make the industry accountable, and therefore credible. Accountability requires a human mind that can answer all of these questions. Because AI can never truly be accountable for its work, its work is not journalism. Because of that, publishing such work as journalism is unethical. And because of that, we must, as an industry, collectively agree to standards ensuring that no news outlet publishes work produced directly by AI as journalism. The technology can be a tool to assist humans in news gathering, but it should never replace any humans in a newsroom.

We are entering an era of media that will be populated by swamps full of videos and audio and photos and pieces of writing that are completely computer-generated and designed to mislead people. If you thought all the cries of “fake news” during the Trump era were bad, just wait. The public is about to have a very, very hard time distinguishing what is real from what is fake. It is more important than ever that credible news outlets exist, and remain credible. In order to do that, we need to hold the line against AI taking over the work of human journalists. We need to unify around the idea that such a thing is not ethical. If we don’t, you can bet that companies will move as fast as possible to save a dollar — and utterly destroy journalism along the way.


Hamilton Nolan is a labor writer for In These Times. He has spent the past decade writing about labor and politics for Gawker, Splinter, The Guardian, and elsewhere. More of his work is on Substack.
