The IDF used AI to rapidly refill its "target bank," a list of Hamas and Hezbollah terrorists to be killed during military operations, along with details about their whereabouts and routines, according to a report published in The Washington Post on Sunday.
Some experts consider the target bank the most advanced military AI initiative ever deployed.
One AI tool cited in the report, called Habsora, or "the Gospel," could quickly generate hundreds of additional targets.
The Washington Post report, however, discusses previously unreported details of the machine-learning program's inner workings, along with the secretive, decade-long history of its development.
The report also revealed a debate within the IDF's senior echelons about the quality of intelligence gathered by AI, whether the technologies' recommendations received sufficient scrutiny, and whether the focus on AI weakened the IDF's intelligence capabilities.
Some critics argue the AI program has been a behind-the-scenes force accelerating the death toll in Gaza.
"What's happening in Gaza is a forerunner of a broader shift in how war is being fought," said Steven Feldstein, senior fellow at the Carnegie Endowment, who researches the use of AI in war. "Combine that with the acceleration these systems offer — as well as the questions of accuracy — and the end result is a higher death count than was previously imagined in war."
The IDF said claims that its use of AI endangers lives are "off the mark."
"The more ability you have to compile pieces of information effectively, the more accurate the process is," the IDF said in a statement to The Washington Post. "If anything, these tools have minimized collateral damage and raised the accuracy of the human-led process."
No autonomous AI
The Gospel and other AI tools do not make decisions autonomously, according to an Israeli intelligence official who spoke with The Washington Post.
Reviewing reams of data from intercepted communications, satellite footage, and social networks, the algorithms spit out the coordinates of tunnels, rockets, and other military targets. Recommendations that survive vetting by an intelligence analyst are placed in the "target bank" by a senior officer.
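To make that workflow concrete, here is a minimal sketch of a human-in-the-loop pipeline. Every field, name, and function below is invented for illustration and does not come from the Post's reporting; it only mirrors the idea that the algorithm proposes and humans must approve.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateTarget:
    """Hypothetical record for an algorithm-proposed target (illustrative only)."""
    coordinates: tuple                           # (lat, lon) proposed by the model
    category: str                                # e.g. "tunnel" or "rocket launcher"
    sources: list = field(default_factory=list)  # intercepts, imagery, social media
    analyst_vetted: bool = False                 # human check #1
    approved_by_senior_officer: bool = False     # human check #2

def add_to_target_bank(candidate: CandidateTarget, target_bank: list) -> bool:
    """The model only proposes; both human sign-offs must pass before banking."""
    if candidate.analyst_vetted and candidate.approved_by_senior_officer:
        target_bank.append(candidate)
        return True
    return False
```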
Using the software's image recognition, soldiers could unearth subtle patterns, including minuscule changes across years of satellite footage of Gaza suggesting that Hamas had buried a rocket launcher or dug a new tunnel on agricultural land. That compressed a week's worth of work into 30 minutes, a former military leader who worked on the systems told The Washington Post.
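The change-detection task described above can be illustrated with a toy example that simply differences two co-registered satellite images and flags grid cells whose pixels changed beyond a threshold. The cell size and threshold are arbitrary placeholders; the IDF's actual models are not public.

```python
import numpy as np

def flag_changed_cells(before: np.ndarray, after: np.ndarray,
                       cell: int = 32, threshold: float = 12.0):
    """Toy change detector: compare two co-registered grayscale images
    taken at different times and return grid cells with large average change."""
    diff = np.abs(after.astype(float) - before.astype(float))
    flagged = []
    for y in range(0, diff.shape[0] - cell + 1, cell):
        for x in range(0, diff.shape[1] - cell + 1, cell):
            if diff[y:y + cell, x:x + cell].mean() > threshold:
                flagged.append((y, x))   # candidate site for analyst review
    return flagged
```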
In the Israel-Hamas war, estimates of how many civilians might be harmed in a bombing raid are derived through data-mining software that uses image recognition to analyze drone footage and counts smartphones pinging cell towers to tally the number of people in an area, two of the people who spoke to The Washington Post said.
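A back-of-the-envelope version of such an estimate might blend an imagery-based head count with a phone-derived count, as in the sketch below. The weighting and the phones-per-person ratio are invented purely for illustration; the real methodology has not been disclosed.

```python
def estimate_civilians(drone_count: int, phones_pinging: int,
                       phones_per_person: float = 0.8,
                       imagery_weight: float = 0.5) -> int:
    """Blend an imagery-based head count with a phone-derived estimate.
    All parameters are hypothetical placeholders."""
    phone_estimate = phones_pinging / phones_per_person
    blended = imagery_weight * drone_count + (1 - imagery_weight) * phone_estimate
    return round(blended)

# Example: 40 people visible in drone footage, 36 phones pinging nearby towers
print(estimate_civilians(drone_count=40, phones_pinging=36))  # -> 42
```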
The IDF says its assessments of collateral damage adhere to the Law of Armed Conflict, which mandates nations differentiate between civilians and combatants and take precautions to protect lives.
Some proponents of Israel's use of the technology argue that aggressively deploying innovations such as AI is essential for the survival of a small country facing determined and powerful enemies.
"Technological superiority is what keeps Israel safe," said Blaise Misztal, vice president for policy at the Jewish Institute for National Security of America, who was briefed by the IDF's intelligence division on its AI capabilities in 2021. "The faster Israel is able to identify enemy capabilities and take them off the battlefield, the shorter a war is going to be, and it will have fewer casualties."
The "human bottleneck"
However, the technologies, while widely recognized as promising, had limitations. Sometimes, the sheer volume of intercepts overwhelmed Unit 8200's analysts. For example, Hamas operatives often used the word "batikh," or watermelon, as code for a bomb, one of the people familiar with the efforts told The Washington Post.
But an internal audit found that the system wasn't smart enough to distinguish a conversation about an actual watermelon from a coded exchange among terrorists, and similar issues surfaced with other key slang words and phrases. "If you pick up a thousand conversations a day, do I really want to hear about every watermelon in Gaza?" the person told The Washington Post.
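The "watermelon" problem is essentially the gap between keyword spotting and understanding context. A naive filter like the hypothetical one below flags every transcript containing the code word, which is exactly the flood of false positives the audit reportedly described.

```python
CODE_WORDS = {"batikh", "watermelon"}   # reported example of Hamas slang for a bomb

def naive_flag(transcript: str) -> bool:
    """Flags any conversation mentioning a code word, with no sense of context."""
    words = transcript.lower().split()
    return any(w.strip(".,!?") in CODE_WORDS for w in words)

# Both are flagged, even though only one plausibly refers to a weapon:
print(naive_flag("Bring a watermelon to the market stall tomorrow"))        # True
print(naive_flag("The watermelon is ready, move it to the tunnel tonight")) # True
```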
In preparation for an anticipated conflict with Hezbollah on Israel's northern border, the military invested in new cloud technologies that could run its algorithms more quickly.
Lavender, an algorithmic program developed in 2020, pored over data to produce lists of potential Hamas and Islamic Jihad terrorists, giving each person a score that estimated the likelihood they were a member, three people familiar with the systems told The Washington Post.
Factors that could raise a person's score included being in a WhatsApp group with a known militant, frequently switching addresses and phone numbers, or being named in Hamas files, the people said.
Estimates from the various algorithms fed into the umbrella system, the Gospel, which intelligence analysts could query.
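Taken together, the reporting describes feature-based scores for individuals feeding a database analysts can query. The toy scorer below uses only the three factors named in the report, with made-up weights; nothing about the real model's features, weights, or thresholds is public.

```python
def score_person(in_militant_whatsapp_group: bool,
                 address_or_phone_changes: int,
                 named_in_seized_files: bool) -> float:
    """Toy likelihood score built from the factors named in the report.
    The weights are invented for illustration; the real model is not public."""
    points = 0
    if in_militant_whatsapp_group:
        points += 40
    points += min(address_or_phone_changes, 5) * 5    # frequent switching, capped
    if named_in_seized_files:
        points += 35
    return min(points, 100) / 100.0

# Scores from this and other algorithms would feed a store analysts can query.
person_scores = {"person_123": score_person(True, 4, False)}
print(person_scores)   # {'person_123': 0.6}
```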