Autonomous weapons systems: Boys' toys or killer robots?

Weapons systems are far from having a mind of their own. But the debate around them has already begun.

They are not autonomous, but getting there, step by step.
Getty Images

They are not autonomous, but getting there, step by step.

Thousands of AI researchers and organisations signed a pledge earlier this month, saying they won’t help build killer robots. The signatories include many prominent names such as Elon Musk and the founders of Google's DeepMind.

Recently, there has been a lot of talk of an "AI apocalypse" and of "impending doom."

But before we delve further into this debate, let's take a look at what we're potentially dealing with.  

Lethal autonomous weapons (LAWS)

Lethal autonomous weapons can identify and engage a target based on the particular descriptions or limitations programmed into them. To the best of our knowledge, all LAWS currently require a human hand, a person to give the final approval to “pull the trigger.”
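To make the "human in the loop" idea concrete, here is a minimal, purely illustrative Python sketch of an engagement gate in which the machine can only recommend and a person must give the final approval. Every name in it (Target, matches_engagement_criteria, operator_approves) is hypothetical and does not describe any real system.

```python
# Purely illustrative sketch of a "human in the loop" engagement gate.
# All names (Target, matches_engagement_criteria, operator_approves) are
# hypothetical and do not describe any real weapons system.

from dataclasses import dataclass


@dataclass
class Target:
    identifier: str
    category: str      # e.g. "tank", "truck", "unknown"
    confidence: float  # classifier confidence between 0 and 1


def matches_engagement_criteria(target: Target) -> bool:
    """The machine's part: match a detection against the descriptions
    and limitations programmed into the system."""
    return target.category == "tank" and target.confidence >= 0.95


def operator_approves(target: Target) -> bool:
    """The human's part: a person gives the final approval to 'pull the trigger'."""
    answer = input(f"Engage {target.identifier} ({target.category})? [y/N] ")
    return answer.strip().lower() == "y"


def decide(target: Target) -> str:
    if not matches_engagement_criteria(target):
        return "ignore"
    if not operator_approves(target):
        return "hold"    # the human vetoes the machine's recommendation
    return "engage"      # only reachable with explicit human approval
```

The point of the sketch is that "engage" is unreachable without the human step; remove that check and the accountability question below changes entirely.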

Who to punish?

This “final human hand” concept is crucial.

If machines, without a human operator, are being used to kill people, who will be responsible when things go wrong? The engineers who built them? The funders who paid for the project? The governments that allowed them to be created? Or the robots themselves? And how would you even punish a robot?

We could also argue that LAWS can be treated the same way we treat animals when they harm people. We judge animals according to our "moral codes" and they are simply put down or locked away. 

But given the current state of artificial intelligence, it is hard to foresee a future in which robots, however smart they get, become conscious enough to take genuinely autonomous action.

Are 'autonomous' weapons really autonomous?

Autonomous is an interesting word, and it carries a different meaning in every field. In engineering, it may refer to a machine that can operate without a human continuously monitoring or controlling it. In politics it connotes sovereignty, while in philosophy, perhaps most relevant to our topic, it suggests a kind of moral independence that would require accountability.

Deep learning is developing so fast, and in such a way, that it sometimes looks as though machines make autonomous decisions. When DeepMind built a program to play the board game Go, they did not teach it every single move. They simply programmed it to "learn" how to play by studying thousands of games, and it then "decided" on its own which move to make at any given moment.

This has the look and feel of autonomy, even though it was engineered. So how do we distinguish real moral autonomy from engineered autonomy, when the behaviour is many steps removed from its original source, the engineer?
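
To see the distinction in the simplest possible terms, here is a toy Python sketch, assuming nothing about how DeepMind's program actually works, that contrasts behaviour an engineer wrote by hand with behaviour "learned" from past games. The moves, positions and function names are all invented for illustration.

```python
# Toy contrast between hard-coded behaviour and "engineered autonomy".
# This is not how DeepMind's Go program works; it only illustrates that a
# learned policy's choices come from data, not from rules an engineer wrote.

import random
from collections import Counter

MOVES = ["attack", "defend", "expand"]


def hard_coded_policy(position: str) -> str:
    """Every response was written by the engineer in advance."""
    rules = {"threatened": "defend", "open": "expand"}
    return rules.get(position, "attack")


def learn_policy(game_records: list[tuple[str, str]]):
    """'Study' past games: for each position, remember the move that was
    played most often, then reuse it."""
    seen: dict[str, Counter] = {}
    for position, move in game_records:
        seen.setdefault(position, Counter())[move] += 1

    def learned_policy(position: str) -> str:
        if position in seen:
            return seen[position].most_common(1)[0][0]
        return random.choice(MOVES)  # never seen before: improvise

    return learned_policy


games = [("threatened", "defend"), ("threatened", "defend"), ("open", "expand")]
policy = learn_policy(games)
print(policy("threatened"))  # "defend", learned rather than hard-coded
```

The learned policy's choices can surprise even its author, yet every one of them still traces back to the data and the code the engineer supplied, which is the "engineered autonomy" described above.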

Getty Images

What does it take to make a weapons system autonomous?

What about emotions?

One of the points of concern in this debate is emotion.

Humans have empathy (well, at least most of them do), so they are better equipped to handle the decision to kill than "heartless metal" is, some argue.

There are a number of problems with this argument. First, people are capable of the vilest things. Second, these machines might make it much easier to kill.

Aerial bombing, and even the pilots who dropped the atomic bombs, are often cited as examples of this "detached" way of killing.

So the pledge? Does it actually mean anything?

Yes and no.

Yes, it is important to raise collective global awareness of the dangers that await us in the near future. Prominent scientists coming together, putting the issue in the spotlight and declaring that they do not want the knowledge they create used to kill and destroy is a meaningful gesture.

But it is only that, a gesture. Science and technology have always in large part been driven by military money and the human need to conquer, destroy and dominate.

Then US Deputy Defense Secretary Robert Work said that America would "not delegate lethal authority to a machine to make a decision", but might have to reconsider if "authoritarian regimes" did so.

This was in 2016.

The arms race is going strong. No country will sit and watch while others, allies or rivals, develop incredibly powerful yet morally questionable weapons.

Nuclear proliferation is a case in point. 

The killer robot case is an interesting one, because some argue it could lead to an arms race with less collateral damage than before.

With improvements in guided bombs, armies are now far more efficient at hitting their intended targets, reducing civilian casualties, especially when compared with World War II and the Vietnam War, where carpet bombing was common practice.

LAWS could potentially be even more precise, as they would know exactly what to destroy or whom to kill. In a way, there would be scarier devices on the battlefield, but they would be focused on a specific target set.

Take the swarm drone concept as an example, or guided missiles that only strike if armored units (tanks, armored personnel carriers, etc.) are present. The precision these tools offer could mean fewer lives lost.
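
As a rough illustration of what "focused on a specific target set" could mean in practice, here is a hypothetical Python filter that only ever passes detections belonging to an allowed category above a confidence threshold. The categories, fields and numbers are invented, not drawn from any real weapon.

```python
# Hypothetical sketch of a narrow "target set" filter, as in the guided-missile
# example above: only armored vehicles above a confidence threshold are ever
# considered; everything else is passed over. Names and thresholds are invented.

ALLOWED_CATEGORIES = {"tank", "armored_personnel_carrier"}
MIN_CONFIDENCE = 0.9


def filter_targets(detections: list[dict]) -> list[dict]:
    """Keep only detections that fall inside the programmed target set."""
    return [
        d for d in detections
        if d["category"] in ALLOWED_CATEGORIES and d["confidence"] >= MIN_CONFIDENCE
    ]


detections = [
    {"id": 1, "category": "tank", "confidence": 0.97},
    {"id": 2, "category": "civilian_vehicle", "confidence": 0.99},
    {"id": 3, "category": "tank", "confidence": 0.62},  # too uncertain
]
print(filter_targets(detections))  # only detection 1 survives the filter
```

The narrower the allowed set and the higher the threshold, the fewer things the system will ever consider engaging, which is the precision argument in a nutshell.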

The pledge is not a new move. In 2015, the non-profit Campaign to Stop Killer Robots gathered 1,000 signatures on an open letter warning against the dangers of LAWS. In 2017, another open letter, published by the Future of Life Institute, the same institute behind this most recent pledge, was signed by many scientists.

What about hacking?

Arguments against LAWS also point out that the technology could end up on black markets, or in the hands of armed non-governmental organisations, that is, terrorist groups. Even if such groups did not acquire the hardware, they could still hack into existing systems.

The damage done by hacking would be even worse if a state actor managed to hack another country's LAWS. In 2004, the UN began a process to draw up a set of rules for cyberspace that would apply to all countries, but there has been little consensus in the sessions of the UN's Group of Governmental Experts since then.

With no ground rules or legal system to deal with it, the issue of accountability would be even more blurred. The hacked state would have to first stop the threat, then try to understand what happened, and finally work out who is to blame.

That’s all assuming the hackers didn’t cover their tracks. If you think that would never happen, Stuxnet is a good reminder that it can. So far, however, no armed drone is known to have been hacked.


A future with lethal autonomous weapons will be one with many moral dilemmas. 

Things like accountability, an arms race and hacking will need to be thought through.

The current use of AI in the military has mostly been in the optimisation of logistics and other relatively dry areas.

Self-driving cars still do not outperform human drivers, and their crash rate remains higher than that of humans.

So there is still a lot of time until LAWS become a serious concern for us.
