The future of AI in warfare and counterterrorism

From a ‘Minority Report’ world and killer robots to avoiding battlefield misfires on civilians and catching lone-wolf terrorists.

Artificial intelligence (photo credit: INGIMAGE)
It was 1983 and the world almost ended.
Not since the Cuban Missile Crisis of 1962 had the US and the Soviet Union come so close to all-out global nuclear war.
Only one officer, Russian Col. Stanislav Petrov, saved the world from an apocalyptic ending: he exercised his human judgment and overruled the early warning center he commanded, whose sensors were reporting that a US nuclear strike on the USSR was in progress.
Fast-forward 37 years into 2020 and the same situation may soon involve artificial intelligence technology, which moves so fast that no Petrov or anyone else would be able to intervene in time to stop a nuclear war based on a false alarm or computer error.
Artificial intelligence (AI) is reshaping every area of our lives, but two areas where its impact carries massive and paradoxical potential upsides and downsides are warfare and counterterrorism.
In fairness, the above scenario is the worst-case potential usage of AI; it has not happened yet, and there are a variety of extremely positive potential uses of AI in warfare and in counterterrorism.
AT A recent Interdisciplinary Center Herzliya conference on AI and warfare, Dr. Daphné Richemond-Barak discussed AI’s ability to increase conventional battle speed and accuracy.
While this means that armies could be more lethal against adversaries, it also means they could be less likely to make mistakes, such as hitting civilian targets.
Richemond-Barak also discussed how AI can be used to assign specific military units more appropriately to specific duties in real time to avoid waste and mismatches which are routine in the fog of war.
All of this rests on the idea that AI can collect far more intelligence, more accurately and in real time, and deliver it to decision-makers much faster.

Another issue discussed by Richemond-Barak was the possibility of AI-enhanced soldiers.
At least some soldiers are expected in the future to be given suits and gear that can help them withstand blood loss, extreme fatigue and extreme temperatures.
AI in war can also be used to help civilians directly, posited Richemond-Barak.
She said that AI can be used to more speedily identify where civilians are in a war zone, how much danger they face from the side effects of the fighting, and who is best placed to rescue those in need.
Moreover, AI can be used to better calculate and strategize humanitarian needs for civilians stranded in or near war zones, in order to more efficiently and accurately provide the food, water and other resources civilians need to survive until order is restored.
AI could also be used to identify flooded areas and impassable roads to help direct civilians to avoid such situations, she told the Magazine.
However, the most controversial aspect of war that Richemond-Barak discussed with the Magazine was likely the question of incorporating AI into the military chain of command.
Once this issue is opened up, the question becomes how to balance AI capabilities, which outstrip human ones in speed and data processing, with human judgment, which, at least to date, remains superior to AI in addressing unexpected scenarios and improvising.
She asked whether AI could be put in direct command of small or large groups of troops or vehicles, whether AI could serve as a deputy to human commanders, or if AI would remain just a technical resource used by human commanders when they saw fit.
Will militaries issue directives that declare areas of operations where AI can take over?
Detractors of AI worry that a mistake in AI judgment, a hack or a technical error could send AI-directed weapons on a massive and fast killing spree against civilians; Richemond-Barak suggested that AI could be programmed never to attack certain flagged targets.
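What might such a safeguard look like in practice? The following is a minimal, purely hypothetical sketch in Python: a hard-coded no-strike list is checked before any engagement recommendation is produced, and the final decision is left to a human. The target identifiers, confidence threshold and function names are invented for illustration and do not describe any real military system.

# Hypothetical illustration only: a no-strike safeguard of the kind
# Richemond-Barak describes. All identifiers and thresholds are invented.

NO_STRIKE_LIST = {"hospital_17", "school_04", "un_compound_02"}  # flagged targets

def engagement_recommendation(target_id: str, ai_confidence: float) -> str:
    """Return a recommendation only; the final decision stays with a human."""
    if target_id in NO_STRIKE_LIST:
        return "PROHIBITED: target is on the no-strike list"
    if ai_confidence < 0.95:
        return "HOLD: model confidence too low"
    return "REFER: pass to the human commander for a decision"

print(engagement_recommendation("hospital_17", 0.99))   # PROHIBITED: on the no-strike list
print(engagement_recommendation("vehicle_231", 0.97))   # REFER: human decides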
MOVING INTO the arena of counterterrorism, IDC International Institute for Counter-Terrorism director Boaz Ganor discussed at the same conference the new capabilities that the combination of AI and big data is opening up for thwarting terrorist plots.
Ganor quoted M. from the Shin Bet (Israel Security Agency), who said:
“In the world of intelligence, the key to creating relevant research, using the methods and tools of big data, is being aware of the possibility of asking new questions.”
A striking point was the idea that “data not only produce quantitative differences that enable us to answer old questions using new tools, but actually create a new reality in which totally new questions can be asked.”
“The response to the questions is given by an intelligence agent using a much more sophisticated and complete picture of the enemy and of the environment in which he operates,” said M.
In other words, Ganor explained, AI was not just helping to get intelligence to thwart terrorist plots on a quantitative basis. Rather, the radical increase in quantity of available intelligence was also creating new qualitative ways to analyze intelligence in greater depth.
Next, Ganor cited Shin Bet statistics on thwarting close to 500 terrorist plots, which were presumably further along toward completion, and over 1,000 potential attacks, a category that could include arresting or visiting individuals exhibiting the early signs of a likely attacker on social media.
It was during the 2015-2016 “knife intifada” that the Shin Bet started to systematically use Facebook and other social media platforms to anticipate potential lone-wolf attackers.
Lone-wolf attackers, often one-time spontaneous terrorists with no connection to terrorist groups, had been impossible to stop: there was no trail of logistical planning, weapons purchases or communications with co-conspirators to follow.
Into that vacuum, the Shin Bet pioneered using a variety of algorithms, tracking social media posts and other information it collected about certain individuals (for example, individuals who had family members involved in terrorism or killed by the IDF might be viewed as greater risks) to anticipate attacks before they occurred.
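To make the approach concrete, here is a minimal, purely hypothetical sketch of the kind of weighted risk-scoring logic such a system might use. The indicators, weights and review threshold below are invented for illustration and do not reflect the Shin Bet’s actual algorithms.

# Toy risk-scoring sketch; every indicator, weight and threshold is invented.

RISK_WEIGHTS = {
    "posted_content_praising_attacks": 3.0,
    "family_member_involved_in_terrorism": 2.0,
    "family_member_killed_in_clashes": 1.5,
    "sudden_spike_in_posting_activity": 1.0,
}
REVIEW_THRESHOLD = 4.0  # arbitrary cutoff for referral to a human analyst

def risk_score(profile: dict) -> float:
    """Sum the weights of whichever indicators are present in the profile."""
    return sum(w for indicator, w in RISK_WEIGHTS.items() if profile.get(indicator))

def flag_for_review(profile: dict) -> bool:
    """Flag for human review only; no automatic action is taken."""
    return risk_score(profile) >= REVIEW_THRESHOLD

example = {"posted_content_praising_attacks": True, "sudden_spike_in_posting_activity": True}
print(risk_score(example), flag_for_review(example))  # 4.0 True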
Some arrested attackers admitted that they were going to, or likely to, attack.
But Ganor then asked a crucial follow-up question about this seeming success: Are thousands of suspects and a very large number of detainees a suitable measure of success and intelligence effectiveness?
Alternatively, he warned that the Shin Bet’s high numbers might actually indicate “the use of a filter that is too broad and has too many holes.” Essentially, Ganor was concerned about the slippery slope of “over-foiling” attacks by falsely labeling certain persons “potential attackers.”
Instead of throwing out a dragnet that would set off all sorts of false alarms, AI could be channeled to nail the actual terrorists, Ganor said, by working hard to make sure the premises and questions of the AI intelligence gathering are not overbroad.
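Ganor’s “filter that is too broad” warning can be illustrated with simple base-rate arithmetic: when genuine attackers are extremely rare, even an apparently accurate screen flags far more innocent people than real threats. The numbers below are invented purely to show the effect.

# Illustrative base-rate arithmetic; every figure here is invented.

population = 1_000_000       # people whose public posts are screened
true_attackers = 50          # hypothetical genuine would-be attackers
sensitivity = 0.90           # share of genuine attackers the filter catches
false_positive_rate = 0.01   # share of innocent people wrongly flagged

true_alerts = true_attackers * sensitivity
false_alerts = (population - true_attackers) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"true alerts:  {true_alerts:.0f}")    # 45
print(f"false alerts: {false_alerts:.0f}")   # roughly 10,000
print(f"precision:    {precision:.2%}")      # under half a percent of flags are real threats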
He quoted Col. Y. from Israeli intelligence, who said, “According to the traditional intelligence, for any good question, relevant accessibility can be created in order to expose the adversary’s secrets.”
“In the era characterized by a flood of information, one must assume that there is no question whose answer cannot be found in data. The trick is to know how to ask the right question from the data... and to know that when we do not get the answer, we must assume that we have asked the wrong question,” Y. continued.
A major concern that Ganor warned about was a scenario where scientists and programmers “are unable to explain the guiding principles, work processes and decisions of AI, which are made via machine learning and the use of big data.”
He said that this could occur when, “in order to optimize their work process, these [AI] systems are likely to change the guidelines that they were given.”
Big data and machine learning could lead to catching “a terrorist prior to his carrying out an attack, but it is not possible to explain how they got to him.”
He quoted M. again from the Shin Bet, who said, “In a world of vast data, there is no point and no need to try to investigate and characterize the activity model of the research object... even if we cannot explain the activity model of the object under examination.”
“Even if we cannot prove that a certain phenomenon stems from it, it is sufficient that the algorithm finds a correlation between the two phenomena for us to use this connection effectively,” added M.
In contrast, Ganor quoted Yoelle Maarek, a vice president at Amazon, who said, “It is not responsible for the scientist to say that the reason he got certain results is because that is what the machine decided... Of course, this is even more important in the case of security and intelligence agencies that use algorithms to make life-and-death decisions.”
Ganor said that “despite the success of the use of AI and big data in the field of counterterrorism, and in light of the huge number of arrests and foiled attacks that have taken place in recent years in Israel, there is a need to develop guidelines for the use of AI in counterterrorism.”
He said the guidelines should be developed by a combination of computer scientists, security experts, terrorism experts, strategists, jurists and philosophers.
Conceptually and in terms of placing limitations on it, he said that AI technology “combined with big data, should be treated as a means of mass surveillance and tapping... The use of databases that involve compromising people’s privacy should be conditioned on the prior approval of a judge and on the scope and nature of the terrorist threat at the time.”
Another safeguard that he urged is to perform regular and separate assessments of each different kind of database.
In other words, merely checking how an AI platform functions with a database when it is launched, without regular follow-up checks, or running only generic checks that ignore the specific database’s character, would be insufficient.
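A minimal sketch of what such recurring, per-database checks could look like is below. The database names, audit figures and precision floor are invented; the point is only that each data source gets its own periodic pass-or-fail review rather than a single generic check at launch.

# Hypothetical per-database audit loop; names, figures and thresholds are invented.

from datetime import date

AUDIT_RESULTS = {   # flags raised by the system vs. flags later confirmed by analysts
    "social_media_posts":  {"flags": 1200, "confirmed": 90},
    "license_plate_reads": {"flags": 400,  "confirmed": 2},
}
MIN_PRECISION = 0.05  # arbitrary floor; below it, the data source needs review

def audit(results: dict) -> None:
    """Print a dated pass/fail line for each database, based on its own numbers."""
    for db, r in results.items():
        precision = r["confirmed"] / r["flags"] if r["flags"] else 0.0
        status = "OK" if precision >= MIN_PRECISION else "REVIEW REQUIRED"
        print(f"{date.today()} {db}: precision={precision:.1%} -> {status}")

audit(AUDIT_RESULTS)  # social_media_posts passes, license_plate_reads does not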
Next, he said that “the incrimination of ‘potential terrorists’ identified by using big data technology should be considered only when there is additional supporting incriminatory evidence.”
Regarding both the use of AI and developing that additional evidence, Ganor warned that safeguards need to be introduced to combat cultural and ethnic bias.
Maybe most importantly, Ganor said that “the use of AI and big data technology to prevent terrorism should be avoided when the results of the algorithms cannot be explained.”
He added that the number of attacks allegedly foiled as a result of using AI and big data should not be used as a measure of the success and effectiveness of the security forces.
WHILE GANOR highlighted both the positives and negatives of AI in counterterrorism efforts, and Richemond-Barak highlighted its benefits on the battlefield, there are other concerns about its use in war.
In fact, in some ways Richemond-Barak’s presentation was an attempt to balance a discussion of AI and war that has mostly revolved around the worldwide civil society Campaign to Stop Killer Robots.
The campaign was launched in 2013, and by 2016 a meeting of the UN Convention on Certain Conventional Weapons (CCW) had established a special working group to try to reach a consensus on banning or limiting autonomous weapons systems.
These autonomous weapons systems are all or mostly expected to incorporate AI.
In a mid-December article in the Bulletin of the Atomic Scientists, Neil Renic of the Institute for Peace Research and Security Policy analyzed whether the campaign has been a total failure.
Renic and some other campaigners blamed the US, Russia, the UK and other major powers for blocking any major initiatives, since the CCW requires consensus.
In fact, Renic said that since many countries perceive themselves to be in an AI arms race, the trend among countries working on AI and autonomous weapons systems is to invest more funds in such systems, not less.
One reason Renic said that the campaign has not succeeded to date is that “autonomous weapons systems that most trouble humanitarians have yet to emerge. There is no egregious incident to cite, no cautionary tale to draw upon to make the case for reform.”
At the same time, Renic praised the campaign for bringing an awareness of some of the dangers of incorporating AI and autonomous weapons systems into militaries’ arsenals.
He suggested this awareness will lead to some self-regulation by militaries, even if it is limited and less enforceable than a solid multilateral convention.
According to Renic, as of October, 30 countries supported a full ban on autonomous weapons systems. Furthermore, even if the CCW is a dead end for the campaign, a French-German declaration in early December to develop “a normative framework” on autonomous weapons received the support of dozens of foreign ministers.
He also noted that Brazil offered to host a symposium on the ban in February 2020.
So it can be said that the campaign has influenced aspects of world opinion to be concerned about autonomous weapons systems, but that the main countries developing these systems do not want binding limits.
Into this conversation, Richemond-Barak and others like her point out that these new systems are not only about more efficient killing; they can save lives, both by avoiding errors and through being focused directly on saving civilians.
HOWEVER ONE views AI and autonomous weapons systems, it does seem that they will be deeply ingrained in many countries’ conventional weapons systems not too long from now.
All of this comes back full circle to our hero Petrov, who saved the world from nuclear holocaust in 1983.
Apparently, Petrov was not the only savior. There are documented cases of false nuclear alarms from US technical systems in 1979-1980 in which, eventually, the human operators ignored what their technological systems were telling them.
So there is a real risk that even the best and most expensive technologies, and in our era the newest AI-run systems, will malfunction or get trigger-happy.
There is also a danger that once operators grow too used to relying on AI, then even with humans in the loop, they may start to undervalue their own intuition and base too much of their final decision on trusting a generic AI checklist.
In other words, AI and checklists connected to it, if too automatic, can disincentivize creative human problem-solving.
In 1983, top Soviet leader Yuri Andropov as well as the general staff were predisposed to believe the worst about the US and were ready to jump on even a small amount of seemingly “objective” intelligence provided by their emergency warning systems.
In Petrov’s case, it turned out that the reason for the false alarm was simply that the computer’s algorithm was too sensitive to the sun’s reflection off clouds.
Humans in the field were needed to figure this out and to reset the computer with a higher detection threshold.
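That fix, raising the detection threshold, captures a trade-off that applies just as much to today’s AI systems: a higher threshold suppresses false alarms caused by noise such as sunlight glinting off clouds, but it also risks reacting later, or not at all, to a genuine signal. A toy illustration with invented numbers:

# Toy illustration of the detection-threshold trade-off; all readings are invented.

readings = [0.20, 0.30, 0.90, 0.40, 0.35, 0.25]   # 0.90 is glare off clouds, not a launch

def alarms(readings: list[float], threshold: float) -> list[float]:
    """Return the readings that would trigger an alert at the given threshold."""
    return [r for r in readings if r >= threshold]

print("threshold 0.80:", alarms(readings, 0.80))   # the glare still triggers an alert
print("threshold 0.95:", alarms(readings, 0.95))   # quieter, but a weak real signal could now be missed too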
If all of the above questions are dangerous with conventional weapons, one lesson could be not to rush to move AI from the conventional weapons field into the nuclear realm.
As problematic as a malfunctioning autonomous conventional weapons system might be down the road, there is a decent chance that damage by such a system could be contained relatively quickly.
The same is not true for nuclear weapons.
As Ganor suggested, the key – whether in counterterrorism or war – may be to realize that AI will always be only as good as the questions and parameters set for any new system, and will require maintaining a constant human ability to second-guess the system where needed.
Those who want to simply ban AI in war or counterterrorism are likely to have a rude awakening, as the field has a momentum of its own, and they are ignoring the significant benefits it can deliver.
Yet those who ignore the risks and fail to set their own limits and safeguards do so at their own – and the world’s – peril.