The future of war and deterrence in an age of autonomous weapons

Artificial intelligence and autonomous systems will significantly alter the future battlefield and challenge strategists to devise new models of deterrence.

Innovation in the field of emerging technologies – broadly encompassing developments such as artificial intelligence (AI), robotics, drones, quantum computing, 3D printing and biotechnology – is evolving at breakneck speed, with the potential to have far-reaching consequences for everything from governance and commerce to geopolitics.

When it comes to warfare, many of these critical technologies possess the power to completely upend the terms of human conflict and alter future battlefields.

“AI and robotics will smash the status quo that exists in the world today,” geopolitical futurist Abishur Prakash told TRT World, adding that new technologies will “reduce the gap between advanced military powers and the rest of the world”.

With traditional concepts of state power becoming gradually intertwined with national expertise and investment in AI, a global arms race is already underway, with the US and China at the forefront.

As wider adoption accelerates, conventional notions around deterrence are set to come into question too. What happens to deterrence and escalation when decisions can be made at machine speeds and are carried out by forces that do not risk human lives?

“We will need to rethink the central tenets of deterrence. AI and autonomous systems challenge the way that nuclear and non-nuclear operations are conducted, as well as the way these systems can be held vulnerable to attack,” says Mikhail Sebastian, a London-based political risk analyst specialising in cybersecurity and digital diplomacy.

“At the same time, they offer a new suite of options for deterring nuclear attacks.”

Prakash warns we’ve now reached a point of no return.

“We are exiting the era where the most damaging behaviour could be deterred. Now, as technology gives nations and organisations new capabilities, governments are faced with threats they cannot stop or limit,” he says.

“They can only be managed.”

Autonomous battlefields

If there is one military technology that has proven to be a game changer thus far, it’s drones.

Many have referred to autonomous killer robots as the “third revolution in warfare”, after gunpowder and nuclear weapons.

Late last year, amid the pandemic, the Second Nagorno-Karabakh War between Azerbaijan and Armenia amounted to a showcase for autonomous weapons – and provided a glimpse of the battlefield of the future.

Azerbaijan deployed a range of drones, purchased from Israel and Turkey, to rout the otherwise conventionally superior Armenian army in a short space of time. Azerbaijani forces used Israeli-made ‘Harop’ loitering munitions to devastating effect – designed to hover high above the battlefield while waiting to be assigned a target to crash their explosive payload into, they have earned the moniker “kamikaze drones”.

Azerbaijan spent years investing in loitering munitions, accumulating a stock of over 200, while Armenia had only one domestically made model with a limited range. As the first war won by autonomous weapons, the conflict was swiftly followed by an uptick in interest from national armies in acquiring unmanned aerial systems.

In the US, a new report from the National Security Commission on AI discusses how autonomous technologies are enabling a new paradigm in warfighting and urges massive investment in the field.

Countries are intensely competing to build or purchase cutting-edge drone systems: China and Russia intend to pursue the development of autonomous weapons and are investing heavily in R&D. The UK’s new defence strategy puts AI front and centre, as does Israel.

And a much more transformative drone technology could be just on the horizon.

Advances in Li-ion batteries have given rise to cheaply made miniature quadcopters. Multiple air forces are now beginning to test networked swarms of drones that can overwhelm radar systems.

Sebastian points out that while on its own a single unmanned and autonomous unit is no match for a fighter jet, when algorithmically linked together a fleet of thousands can conceivably overwhelm larger platforms.

“Once refined, low-cost autonomous drones coordinating their actions at machine speed provide a unique coercive tool that undermines high-cost legacy weapon systems, while potentially augmenting the feasibility of an offensive attack,” he told TRT World.


During a live demonstration to celebrate India's 73rd Army Day in New Delhi on January 15, 2021, the Indian military showed off a swarm of 75 drones destroying a variety of simulated targets in explosive kamikaze attacks.

Possibly the scariest development is the autonomous quadcopter equipped with computer vision technology that can recognise and kill a specific target – the so-called assassination drone.

“As opposed to other military drone applications, assassin drones don’t have to be confined to the battlefield. They can lurk as an omnipresent threat outside of wartime,” says Sebastian.

Until now, deterrence has primarily involved humans attempting to affect the decision calculus and perceptions of other humans. But what happens when decision-making processes are no longer fully under human control?

‘How does one deter an event that has not happened yet?’

What sets the new technology arms race apart from those of the past is AI’s dual-use nature.

During the Cold War, the development of nuclear weapons was driven by governments and the defence industry. Beyond power generation, there wasn’t much commercial use for nuclear technology.

But that model doesn’t apply anymore.

“The creeping ubiquity of AI means developments in technologies cannot be contained, and they are bound to bleed across the civilian and military realms,” Sebastian notes.

In an article published last year, James Johnson, an assistant professor in the School of Law and Government at Dublin City University, argued that the dual-use and diffused nature of AI, compared to nuclear technology, will make arms control efforts problematic.

“When nuclear and non-nuclear capabilities and war-faring are blurred, strategic competition and arms racing are more likely to emerge, complicating arms control efforts,” he wrote.

“In short, legacy arms control frameworks, norms, and even the notion of strategic stability itself will increasingly struggle to assimilate and respond to these fluid and interconnected trends.”

Johnson underscores that what is now referred to as the nascent “fifth wave” of modern deterrence (the “fourth wave” followed the Cold War and continues to the present, coinciding with multipolarity, asymmetric threats and non-state actors) is defined by a conceptual break: the inclusion of non-human agents in deterrence.

It then follows that asymmetric AI capabilities will inform deterrence strategies. To fight autonomous weapons, you need those same weapons – driving actors to adopt these technologies to shore up their defence against autonomous attacks.

The mix of human and artificial agents could affect escalation between actors in the process. In a RAND report, researchers emphasise how widespread AI and autonomous systems could make inadvertent escalation more likely because of “how quickly decisions may be made and actions taken if more is being done at machine, rather than human, speeds.”

Two conflicting sides might each find it necessary to use autonomous capabilities early to gain a coercive and military advantage before an opponent gains the upper hand, raising the possibility of first-strike instability.

These dynamics could have fateful consequences for how wars begin.

“Because of the speed of autonomous systems having to be countered by other autonomous systems, we could find ourselves in a situation where these systems react to each other in a way that’s not predictable,” Sebastian says.

“Before you know it, a rapid escalation leads to a military conflict that wasn’t desirable in the first place.”

Prakash, who is the author of The Age of Killer Robots, believes governments are going to have to rethink deterrence in an era when AI is making military decisions.

“Deterrence has so far revolved around stopping a nation or actor from doing something today. But as nations use technology to predict future events on the world stage – or what I call ‘Algorithmic Foreign Policy’ – a new challenge emerges,” he says.

“How does one deter an event that has not happened yet?”


People take part in a 'Stop killer robots' campaign at Brandenburg gate in Berlin, Germany, Thursday, March 21, 2019. Presently there are no barriers to countries wishing to develop autonomous systems that can decide on their own when and whom to kill. Though the technology is still in its infancy, militaries and manufacturers are working to develop and test weapons that could one day be deployed to fight on their own.

Prakash adds that because of how integrated and fragile global systems are today, the world is shifting from the threat of being annihilated (nuclear weapons) to the threat of having critical infrastructure targeted.

“Today, a cyber attack that cripples energy, water and supply chains will create as much, if not more, damage,” he argues.

Can a new consensus be achieved?

Given the unpredictability of a new era of armed conflict and AI’s inevitable ubiquity in military applications, what actions could policymakers pursue to control the risk of unwanted escalation?

The UN Convention on Certain Conventional Weapons, launched in the 1980s to regulate the use of non-nuclear weapons, has been one avenue. But an effort by the body to ban lethal autonomous weapons systems fell apart in 2019, when resistance from the US, Russia, South Korea, Australia, and Israel thwarted any consensus that could have led to a binding decision.

“The old approach of arms control and treaties doesn’t apply anymore to these systems. We’re talking about software, not hardware,” says Sebastian. “Before, it was about allocating a certain number of systems. You can’t do that with AI-enabled systems.”

Much as was done for nuclear weapons, fresh international treaties will need to be forged for new weapons technologies.

“We might end up with rules and norms that are more focused on specific use-cases than systems or technologies. For example, there might be an agreement to use certain capabilities only in a specific context or only against machines.”

But powerful states are often sceptical of multilateral forums regulating technologies and narrowing their ability to gain strategic advantage. For now, the prospect of any transnational solution is remote.

“Agreeing to or implementing any framework will not be easy, especially when there’s a lack of trust between great powers,” adds Sebastian.

Furthermore, the attempt to achieve consensus around AI is likely to highlight moral asymmetries and introduce several dilemmas that could determine the future of deterrence.

In their paper New Technologies and Deterrence: Artificial Intelligence and Adversarial Behaviour, Alex Wilner and Casey Babb argue that while some states might oppose giving AI the right to kill individuals without human intervention, others might not be so hamstrung by such concerns.

According to Wilner and Babb, ethical concerns might end up playing a pivotal role in influencing the development of AI and the nature of alliance politics.

“Allies with asymmetric AI capabilities, uneven AI governance structures, or different AI rules of engagement, may find it difficult to work together towards a common coercive goal,” they wrote.

“Allies who differ on AI ethics might be unwilling to share useful training data or to make use of shared intelligence derived from AI. Without broader consensus, then, AI may weaken political cohesion within alliances, making them less effective”.
