The hidden race for military artificial intelligence is already here

Militaries working hand in hand with technology start-ups are taking the cork out of the genie’s bottle.

Artificial intelligence operates in a black box, meaning human operators are unable to piece together how an AI makes its decisions.
Getty Images

A month ago, the US Air Force quietly announced that it had let an artificial intelligence act as a co-pilot on a U-2 spy plane, effectively marking the first time an AI has directly controlled a US military system. The AI was charged with finding an unspecified enemy’s missile sites after a simulated launch. The announcement didn’t make the headlines.

The AI was endearingly named Artuµ, after R2-D2, the beeping droid co-pilot of the Star Wars franchise, while its “µ” gives a nod to what effectively counts as its grandfather: the µZero algorithm created by DeepMind, the AI lab renowned for defeating world Go champion Lee Sedol in 2016.

The ancient game relies as much on intuition as it does on calculation. By its second game against Sedol, DeepMind’s program was already rewriting the playbook. Five years later, DeepMind’s algorithms have evolved to the point where they can master a game without ever being told its rules.

Enter Artuµ, built on algorithms originally designed to win at chess and Go, much like its predecessor. In five weeks, the AI learned to operate the spy plane’s radar to perfection. A million virtual missions later, Artuµ wasn’t just the plane’s radar operator, but the mission’s effective commander.

Artuµ’s next educational experience is, to many, the realization of a dystopian nightmare: mastering electronic warfare, then testing it in the field.

With that, a new era of algorithmic warfare begins, on a battlefield that spans the world, made possible by innocuous tech start-ups far removed from the consequences of their work: the gamification of warfare.

For AI systems like DeepMind’s µZero and Artuµ, whose sole purpose is to discover all the rules, master them and win, that’s problematic.

But let’s take a step back. The link between warfare and games is nothing new; an entire wargaming industry is built upon it. And in many senses, games have come to resemble reality.

Take flying a spy plane, for instance. You’re rewarded for finding enemy positions. Being shot down or caught are penalties. The rules? Physics.

Artificial intelligences, often referred to as deep learning machines, are given the same set of carrots and sticks and told to figure it out, and they do. Their reasoning, however, remains a mystery, lost in a black box of millions of permutations and correlations too dense for any human mind to grasp.
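That carrot-and-stick setup is, at heart, a reward function. Here is a minimal sketch in Python; the scenario and point values are invented purely for illustration, not drawn from any actual Air Force training setup:

```python
# Toy reward function of the kind a reinforcement-learning agent
# is trained against. All point values are illustrative assumptions.

def mission_reward(found_missile_site: bool,
                   shot_down: bool,
                   detected: bool) -> int:
    """Score one simulated sortie: carrots for finding targets,
    sticks for being shot down or caught."""
    reward = 0
    if found_missile_site:
        reward += 100   # the carrot: locate the enemy launcher
    if shot_down:
        reward -= 500   # the big stick: lose the aircraft
    if detected:
        reward -= 50    # the small stick: give away your position
    return reward

# A sortie that finds the site but gets spotted still scores positive.
print(mission_reward(found_missile_site=True, shot_down=False, detected=True))
```

An AI trained against millions of sorties scored this way never sees the “rules” spelled out; it only sees which behaviours raise the score.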

Traditional AIs are told the rules up front and forced to endure millions of simulations running at incredible speeds, in which they survive or die. If they survive, their little discoveries are passed on to their own next generation, and so the AI continues to evolve, much as biological beings are shaped by natural selection.
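That survival-of-the-fittest loop can be sketched in a few lines of Python. Here the “agents” are just numbers scored by how close they land to a target; real systems evolve millions of network weights, but the selection logic is the same in spirit. Everything here, target, population size, mutation rate, is an illustrative assumption:

```python
import random

random.seed(0)
TARGET = 42.0

def fitness(agent: float) -> float:
    return -abs(agent - TARGET)          # closer to the target = fitter

# Generation zero: twenty random candidates.
population = [random.uniform(0, 100) for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]           # only the fittest survive
    # Survivors pass on slightly mutated copies of themselves.
    population = [s + random.gauss(0, 1.0) for s in survivors for _ in range(4)]

best = max(population, key=fitness)
print(round(best, 1))                    # converges close to TARGET
```

Each generation keeps what worked and perturbs it slightly, which is why, over enough iterations, the population drifts toward the behaviour that scores best.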

The difference here is that this all occurs at lightning-fast speeds. An industry-standard artificial intelligence can carry out around 10,000 algorithmic permutations per second, per processor core. An average computer has between two and five cores; computers running AIs aim for dozens, if not hundreds.

That’s several orders of magnitude too fast for humans to catch up and figure out what just happened.
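Using the article’s own figures, the back-of-the-envelope arithmetic looks like this; the exact core count and the human decision rate are assumed ballparks, not measured values:

```python
import math

evals_per_sec_per_core = 10_000   # figure quoted above
cores = 48                        # "dozens" of cores (assumption)
human_decisions_per_sec = 1       # rough ballpark for a person (assumption)

machine_rate = evals_per_sec_per_core * cores
gap = machine_rate / human_decisions_per_sec

print(f"{machine_rate:,} machine evaluations per second")
print(f"roughly {int(math.log10(gap))} orders of magnitude faster than a human")
```

Even with these conservative numbers, the machine runs hundreds of thousands of evaluations in the time a person makes one considered judgement.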

Artuµ is a different kind of AI, though. It had to learn things the hard way. No one told it, for instance, that enemy air defenses won’t fire on their own forces. It figured that out on its own.

The young AI can only improve for so long before it hits a wall, having mastered every simulation thrown at it. So it will face a new kind of mission: facing off against itself.

This won’t mean countering an exact clone. If two identical AIs faced off, they’d end in a stalemate, much as you can’t meaningfully play a board game against yourself. Instead, another AI is being created at the Air Force’s U-2 Federal Laboratory.

Artuµ and its opponent will go through millions of learning simulations to master sensing and jamming, the bread and butter of electronic warfare.
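A toy version of that sense-versus-jam duel can be written as two adapting opponents: a “sensor” picks a radio channel, a “jammer” tries to block it, and each side favours whatever has worked for it recently. Real electronic warfare is vastly more complicated, and the learning rule here is a deliberately crude stand-in:

```python
import random

random.seed(1)
CHANNELS = 5
ROUNDS = 10_000

def pick(counts: list) -> int:
    # Exploit what worked before, but explore a random channel 10% of the time.
    if random.random() < 0.1:
        return random.randrange(CHANNELS)
    return max(range(CHANNELS), key=lambda c: counts[c])

sensor_wins = 0
sensor_counts = [0] * CHANNELS   # channels where the sensor got through
jammer_counts = [0] * CHANNELS   # channels where jamming paid off

for _ in range(ROUNDS):
    s = pick(sensor_counts)
    j = pick(jammer_counts)
    if s != j:
        sensor_wins += 1
        sensor_counts[s] += 1    # sensor: remember this channel worked
    else:
        jammer_counts[j] += 1    # jammer: remember blocking here paid off

print(f"sensor got through in {sensor_wins} of {ROUNDS} rounds")
```

Each side’s memory is the other side’s problem: as soon as one strategy becomes predictable, the opponent’s counts catch up to it, which is exactly the cat-and-mouse dynamic self-play is meant to produce at scale.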

With the future already here, artificial intelligence seems poised to make more inroads into military systems, and perhaps most importantly, into decision-making that directly impacts the world. 

The future of warfare promises human-machine teams facing off against each other, as militaries try their best to minimize the vulnerabilities and exposure of their carbon-silicon operators.

It could end there, but it likely won’t.

For one, most transformative technological leaps have piggy-backed off prohibitively expensive military funding. Yet Artuµ’s ancestor, µZero, came out of a commercial lab, DeepMind, and that lab is hardly alone.

Sprinkled throughout the globe and across nearly every commercial technological field, small but hard-hitting firms are pushing the known frontiers of autonomous design, quantum computing, space exploration and machine biology. It’s safe to say the boom in big tech is only just beginning.

That makes the life of a military planner or strategist incredibly difficult. How do you prepare to fight tomorrow’s war, instead of fighting yesterday’s? You can’t. But you can shape what it looks like.

Consequently, technological innovation has become the latest battleground. Militaries still matter, but harnessing the potential of military-private partnerships is the only investment that gives any nation a semblance of a grip on its future.

For many, there’s a troubling premise at the heart of the mad race for electronic wizardry: The notion that survival requires releasing the Djinni from the bottle before anyone else, and letting the cards fall where they may.
