The age of the ‘Terminator’

Starting Tuesday, the UN is set to hold a conference that will debate whether to ban the use of lethal autonomous weapons systems – so-called killer robots.

Protesters against the use of killer robots jump-start their campaign during a demonstration outside the Frontline media club in London on April 23. (photo credit: STOPKILLERROBOTS.ORG)
Are we just around the corner from Terminators, Cylons or other sci-fi robots turning against humanity, transforming their human “masters” into hunted prey?
From Tuesday this week, the UN is set to hold a conference that will debate whether to ban the use of lethal autonomous weapon systems, or as they are more affectionately known, killer robots.
What makes killer robots different from any other weapon system is that there is no human telling the machine whether or not to pull the trigger.
The process of considering a ban began with UN special rapporteur Christof Heyns’s April 9, 2013, presentation, which expressed concern over the prospect of robots violating the laws of war.
It continued with a November 2013 UN group agreement to hold this week's conference.
The ban also has organized support from a coalition of at least 45 nongovernmental organizations in 22 countries, with Human Rights Watch taking a leading role.
Human Rights Watch said it hoped that the May conference would eventually lead to a complete ban on killer robots.
The conference is being held under the Convention on Certain Conventional Weapons (CCW), which covers a variety of weapons, from incendiary bombs to blinding lasers.
The CCW has been ratified by 117 countries, including those Human Rights Watch said are known to be advanced in autonomous weaponry – the United States, China, Israel, Russia, South Korea and the United Kingdom.
Human Rights Watch is pushing for Protocol VI, banning killer robots, and has said that such a ban would be the crowning achievement of the CCW.

Those supporting a ban said there is no guarantee that robots will respect the laws of war, adding that robots lack the human judgment necessary to distinguish between combatants and civilians – perhaps the foremost humanitarian principle of the laws of war.
Furthermore, supporters of a ban said that there is a major risk that robots could go beyond what their human programmers would want to allow.
This risk could lead to the automation and exponential escalation of war, without a safety switch.
On a separate front, ban supporters say the use of robots undermines the fundamental principle of culpability – in other words, there is no way to hold any human responsible for a war crime committed by an autonomous robot.
With no responsibility, humans could use robots to commit mass atrocities with impunity.
However, even among those who admit there could be issues with robots’ observance of the laws of war, there are those who oppose the ban.
Many argue that instead of a ban there should be a temporary moratorium on using robots – until their risks are better understood.
Others take a wait-and-see approach, saying robots should be used, but carefully and under close monitoring.
How do they answer the critics who support a ban? Regarding the concerns about distinction, the first response is that robots may actually be better than humans at distinguishing combatants from civilians in the fog of war.
The reason would be that they can process more sophisticated information, as well as a greater volume of it, about nearby persons – including potential facial recognition capabilities.
In judging a person’s manner, speed, aggressive or passive movements and speech, robots would not act out of fear, anger or vengeance, as a human might.
Also, in comparison to even the best human aim, robots’ precision may prove superior.
The expectation is that such robots would function like precision-guided munitions – bombs dropped from airplanes that cause far fewer casualties than less computerized bombs.
Those against a ban said there is no need for a special convention to outlaw killer robots: existing law already bans the use of any weapon that breaks the laws of war, so any robot whose inherent function did not comport with those laws would be illegal anyway.
If there is an issue with particular robots lacking the ability to distinguish combatant from civilian, that would merely mean they should be limited to being deployed in a clear battlefield context.
For example, they might not be usable in urban warfare, but could be used to fight a tank formation, where all persons on the battlefield are combatants.
Regarding the problem of whom to charge with a war crime committed by a robot, it has been argued that the US formally accepts the principle that whoever programmed, ordered the use of or released a killer robot into battle could be held responsible.
In other words, while countries may theoretically try to avoid responsibility for a robot’s war crimes by claiming distance from the action, there are principles that can be invoked to block such an abuse of the laws of war.
One of the strongest objections by those against a ban is that it would be preemptive: while many countries are getting closer to fielding killer robots, no one has yet used them.
The argument goes that it is against logic and common sense to ban their use before their advantages and disadvantages can be determined by experimental use.
However, those opposing a ban admit that there are unique difficulties with robot killers that mean that once they are used, things could get out of hand.
While dismissing the scenario in which robots rise up to destroy mankind, as depicted in the Terminator movies or Battlestar Galactica, they acknowledge that robots could be hacked by adversaries or could malfunction and attack friendly forces or civilians.
However, they said, any weapon, including a missile or a bullet, can malfunction and most computerized weapons today can be hacked on some level.
But whereas most computerized weapons systems can be manually disabled, there is no real answer for how to stop a hacked killer robot, or how to gauge whether a robot exercised reasonable judgment, since reasonableness is an inherently human trait.
One thing is certain: after this week’s high-profile conference, the stakes for killer robot supporters and opponents have been raised to a new level.