In the early hours of Friday, July 8, US police used a remote-controlled robot to kill a person for the first time: Micah Xavier Johnson, the 25-year-old ex-soldier who murdered five police officers in Dallas, Texas. The incident raises a difficult question: is it right to use robots to kill?
Someone correct me if I'm wrong, but have we seen the first use of a lethal robot in American policing? — Elizabeth Joh (@elizabeth_joh) July 8, 2016
The perpetrator of the Dallas killings, Micah Xavier Johnson, was 25 years old and had served in the Army Reserve between 2009 and 2015. He appears to have been motivated to commit the murders by anger over police shootings of black men.
He was killed by the detonation of a C4 explosive attached to an F5 model tactical robot made by Northrop Grumman's subsidiary Remotec, after he shot dead five policemen during a stand-off at the El Centro College garage in Dallas, Texas.
American police said all attempts to negotiate with Johnson failed during an exchange of gunfire, leaving them no choice but to use extraordinary means to neutralise the suspect.
"We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was," Dallas Police Chief David Brown said in a news conference held at City Hall after the incident.
"Other options would have exposed our officers to grave danger."
Until a few days ago, the type of robot used to kill Johnson had been used only for non-lethal purposes such as dismantling and disposing of explosives. Last year, an Andros F6A – a police robot similar to the one used by Dallas police – helped Californian officers talk a man out of suicide.
"Negotiators attempted to talk to him via a remote PA system, but that is not the optimal way of talking to him, so the decision was made to deliver a phone to him. The safest way to do that and not cause confrontation was to use a robot," Chris Sciba, a sergeant with the San Jose Police Department's Mobile Emergency Response Group and Equipment Unit told IEEE Spectrum.
"[Because] delivering food is a way of encouraging someone to do something we want them to do, we sent pizza with phone. We [instructed the subject] that if he wanted the pizza released, to pick up the phone. The robot was holding the pizza, it released the pizza once the subject picked up phone to talk to negotiators."
Even before Johnson's death, an ethical debate had arisen over the use of robots as weapons.
Last year, hundreds of scientists including Stephen Hawking and Max Tegmark sent an open letter to the United Nations calling for an international ban on the development and use of weapons with artificial intelligence.
Several months later, the UN held a meeting on the issue of "killer robots" which failed to bear fruit.
The scientists feared that lethal autonomous weapons systems would "require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce."
Although the robot used by Dallas police was guided by a human, rather than artificial intelligence, people still have reasons to worry.
Some law enforcement experts believe that using human-operated robots to kill suspects could set a precedent for using robots without human operators to kill in the future.
Even if there is no such danger, using robots in law enforcement operations could lead to excessive force being normalised.
This is the view of nonresident Senior Associate at the Center for Strategic and International Studies (CSIS), Rick Nelson, who said: "The further we remove the officer from the use of force and the consequences that come with it, the easier it becomes to use that tactic."
"It's what we have done with drones in warfare. Yet in war, your object is always to kill. Law enforcement has a different mission."
The point is that robots make killing easier and less dangerous for police. This removes the incentive to negotiate in order to keep officers safe, and may endanger the suspect's right to a fair and public hearing by an independent and impartial tribunal.
Even when there is no doubt that a suspect has committed a crime, they have a right to a fair trial under Article 10 of the Universal Declaration of Human Rights adopted by the United Nations.
On the other hand, there are also supporters of the decision to use a robot to kill Johnson for exactly the same reason – because it eliminated the element of danger faced by Dallas police officers.
Dallas Mayor Mike Rawlings praised the police for making the "right" decision, saying he would have no doubts about following the same strategy in the future.
"When there's no other way, I think this is a good example," he said. "The key thing is to keep our police out of harm's way."
Ryan Calo, an assistant professor at the University of Washington School of Law, stated that the use of the robot was not illegal:
"No court would find a legal problem here […] When someone is an ongoing lethal danger, there isn't an obligation on the part of officers to put themselves in harm's way."
In addition, police officers sometimes shoot people by accident because they feel stressed and in danger. Operating a robot from a safe distance may allow them to make better decisions.
But aside from the ethical questions above, there are legitimate concerns about the practical consequences of weaponising robots that have other, non-lethal uses. If suspects begin to view robots as threats, police may no longer be able to use them to negotiate or conduct other operations.
Whatever the case may be, it appears that a comprehensive legal framework is needed to regulate the uses of such robots in the future.
As Elizabeth Joh, a law professor at the University of California, says: "This surely won't be the last instance we see police robots."