AI Is Already Being Used to Kill Palestinians in Gaza
Artificial intelligence is currently being deployed by the Israeli military in its brutal assault. The tech isn’t a future threat—it’s here.
Saqib Bhatti
There is widespread concern about the ways artificial intelligence could impact the future. Yet somehow that concern seems detached from the fact that AI presents an immediate threat to millions of people in the present. Since last year, whistleblowers and investigative reporters have provided some insight into how AI is being used in the context of the ongoing genocidal campaign perpetrated by the Israeli military against Palestinians in Gaza, a campaign which Israel’s own leaders have repeatedly described as aimed at the annihilation of a people. Artificial intelligence and those selling it to Israel (including U.S. tech companies) are helping.
The thought leaders and CEOs who warned us all last year about the “existential threat” posed by an imaginary “artificial general intelligence” are notably silent on the use of AI, as it currently exists, to kill and maim.
In early April, Israeli publications +972 and Local Call released a groundbreaking report about "Lavender," an AI system that generates kill lists for the Israeli military, using algorithms to identify individuals whose behavioral patterns supposedly resemble those of Hamas operatives.
In a New Yorker article published on April 12, +972 reporter Yuval Abraham explained the rationale behind Lavender’s deployment: “When [high-ranking Israeli officials] decided to expand the list to so many people, it became impossible, and that’s why they decided to rely on all of these sophisticated automation and artificial-intelligence tools.” As long as the individuals Lavender identifies are male, the AI-generated targets are effectively considered final, even though, by the military’s own estimates, the AI makes a mistake in approximately 10% of cases.
According to the +972 report, “During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based.” In other words, AI is being deployed precisely because it can kill on a scale and at a speed that exceeds human capacity.
AI is being used to decide which bloodlines will continue to exist in Palestine. In just the first four weeks of bombing, Israel killed all of the members of more than 600 families in Gaza. According to Abraham’s reporting, once the machine puts a person on the kill list, the Israeli military uses a surveillance program called “Where’s Daddy?” to pinpoint the moment that person enters their home; the home is then marked to be bombed, killing everyone inside.
Here in the United States, the tech industry is continuing a legacy that goes back to IBM supplying punch card data technology to the Nazis. The New York Times has reported that the Israeli military has matched targets using Google Photos — a consumer product integrated with the company’s AI that has a well-documented track record of misidentifying Black and brown faces.
Recently, Time published a leaked contract addendum from late March documenting that Google negotiated an expansion of the AI and Cloud services it provides to Israel’s Ministry of Defense in the midst of the ongoing genocide in Gaza, which has seen more than 34,000 Palestinians killed, more than a million facing famine and more than 80% of the population displaced. Amazon and Palantir are also known to maintain largely classified contracts with the Israeli government and military.
This reporting has elicited no direct public comment from the AI thought leaders who in the past year have been raising concerns about the “existential threat” posed by an imaginary “artificial general intelligence.” In the meantime, AI as it currently exists is being integrated into U.S. military and intelligence operations and offered for sale to U.S. allies. Again, most of what is publicly known about such contracts arises from leaks and dogged paper-chasing by investigative reporters.
While the details remain largely classified, much of the AI being integrated into weapons systems is provided and supported by familiar Silicon Valley corporations that compete for funding, market share and technological prestige in the consumer market. This is not new. Well before they offered their AI products to the public, companies such as Google, Amazon and Microsoft inked deals with the Department of Defense and an array of three-letter intelligence agencies to provide artificial intelligence “solutions” they promised could enhance U.S. military capabilities.
When news broke in 2018 that Google, Amazon and Microsoft were all involved in Project Maven, a joint venture between the nation’s largest technology corporations and the U.S. military, it aroused controversy and backlash. Workers at Google demanded that executives cancel the contract, which Google had signed to develop AI for U.S. military drones, and after months of sustained pressure, the executives caved. Google backed out of the $250 million annual deal and renounced future contracts with the Department of Defense. This was rightly hailed as a victory for worker empowerment and corporate accountability.
But that was five years ago. In the time since, Microsoft and Amazon have continued their Project Maven work. Google has returned to the defense contractor fold, working with the Department of Defense to support its Joint Warfighting Cloud Capability initiative, and relative newcomers like Anduril, OpenAI and Palantir have joined them.
Introducing AI into the decades-long global war on terror only exacerbates its underlying injustices and racism. It means less accountability, less transparency, less civilian oversight and more bloodshed. When Google workers stood up and forced an end to the company’s involvement in Project Maven, it appeared as if we were on the cusp of something new, a world in which worker power might chasten the U.S. war machine by refusing to build its weaponry. That hasn’t happened. Not yet, at least. But it is still possible and more urgent than ever.
In recent days, Google workers organizing as part of the group No Tech for Apartheid led sit-ins at Google campuses across the country to protest the company’s Cloud contract with the Israeli government and military. Google responded rapidly, firing around 50 people and having nine arrested. It’s time to stop ignoring the fact that the tech industry is currently deeply entwined with the U.S. war machine, and is profiting from the slaughter of civilians in Palestine. The introduction of AI into the arsenal of the War on Terror should be a wake-up call for the developers building these tools to find their voice — and the power to refuse.
A breathtaking and unprecedented wave of organizing continues to hold our elected officials’ feet to the fire over their support for the Israeli assault on Gaza. Workers, consumers, and shareholders could be a crucial part of the effort to bring the suffering of Gaza to an end and hold the CEOs of big tech firms like Google, Amazon, and Microsoft accountable for the death and destruction their AI has already caused.
The next time we hear tech leaders warning that AI could present an existential threat to humanity, we need to remind them that it already does — and that we have a say in our future.
Saqib Bhatti is the Executive Director of the Action Center on Race & The Economy.