Concern Over AI Use in Gaza War

UN Secretary-General Antonio Guterres has expressed serious concern over reports that Israel has been using artificial intelligence (AI) to identify targets in Gaza, resulting in many civilian deaths.

According to a report in independent Israeli-Palestinian magazine +972, Israel has used AI to identify targets in Gaza – in some cases with as little as 20 seconds of human oversight.

Guterres said that he was “deeply troubled by reports that the Israeli military’s bombing campaign includes Artificial Intelligence as a tool in the identification of targets, particularly in densely populated residential areas, resulting in a high level of civilian casualties.”

“No part of life and death decisions which impact entire families should be delegated to the cold calculation of algorithms,” he said.

The +972 report claims that “the Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties”.

The report said that, according to “six Israeli intelligence officers”, a system dubbed Lavender had “played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war”.

“According to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine ‘as if it were a human decision’,” +972 reported.

Two sources said “the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians”.

The Israeli army, known as the IDF, on Friday rejected the claims.

“The IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist,” it said.

Instead it has a “database whose purpose is to cross-reference intelligence sources… on the military operatives of terrorist organisations” to be used as a tool for analysts, it added.

“The IDF does not carry out strikes when the expected collateral damage from the strike is excessive,” it said, using a term that includes civilian casualties.

Israel began hyping AI-powered targeting after an 11-day conflict in Gaza during May 2021, which commanders branded the world’s “first AI war”.

The military chief during the 2021 war, Aviv Kochavi, told Israeli news website Ynet last year the force had used AI systems to identify “100 new targets every day”, instead of 50 a year previously.

Weeks into the latest Gaza war, a blog entry on the Israeli military’s website said its AI-enhanced “targeting directorate” had identified more than 12,000 targets in just 27 days.

An unnamed Israeli official was quoted as saying the AI system, called Gospel, produced targets “for precise attacks on infrastructure associated with Hamas, inflicting great damage on the enemy and minimal harm to those not involved”.

But an anonymous former Israeli intelligence officer, quoted in November by +972, described Gospel’s work as creating a “mass assassination factory”.

In a rare confession of wrongdoing, Israel on Friday admitted a series of errors and violations of its rules in the killing of seven aid workers in Gaza, saying it had mistakenly believed it was “targeting armed Hamas operatives”.

Alessandro Accorsi, a senior analyst at Crisis Group, said the +972 report was “very concerning”.

“It feels very apocalyptic. It’s clear… the degree of human control is very low,” he said.

“There are a thousand questions around this obviously – how moral it is to use it – but it is hardly surprising it is used,” he said. — AFP