Why the Pentagon suddenly woke up to artificial intelligence

The United States fears that China’s rapid adoption of artificial intelligence (AI) to revolutionise warfare could accelerate America’s decline in global influence. The Biden administration is now moving to close the gap.

The Pentagon's heightened focus on AI is, in part, a response to international competition in emerging technologies, especially as it perceives China to be making considerable strides on this front. / Photo: AP

As the US government’s primary goal is to contain China’s tech ascendancy, the Biden administration has been exploring ways to leverage AI to enhance military planning, decision-making, and intelligence gathering.

Dr Kathleen Hicks, Deputy Secretary of Defense, announced the launch of Task Force Lima on August 10 to investigate the use of generative AI technologies in the military domain and throughout the Pentagon.

Currently, the Department of Defense (DoD) is running more than 600 AI projects, many of which are weapons-related.

Generative AI, a subfield of artificial intelligence, is characterised by its ability to produce a wide array of content, ranging from text and audio to code, images, and videos, based on the data it has been trained on and the prompts it receives.

According to Hicks’ memo, Task Force Lima will monitor new generative AI applications and ensure the Pentagon uses the technology effectively across a range of missions and tasks, including doctrine development and training.

"The tremendous pace at which China closed the technological gap has genuinely alarmed US officials," Murat Akca, a researcher focusing on US-China tech rivalry at the National Defense University in Istanbul, told TRT World.

"Emerging technologies allow US competitors to even the playing field and bridge the gap”.

Although the Defense Department has exercised caution, citing the likelihood of these models producing inaccurate information, the Pentagon has taken a keen interest in embedding AI in its work culture and getting ahead of America’s rivals.

“We must also consider the extent to which our adversaries will employ this technology and seek to disrupt our own use of AI-based solutions,” said Dr Craig Martell, the DoD’s Chief Digital and Artificial Intelligence Officer, while announcing the department’s foray into generative AI.

Competition with China

The Pentagon's heightened focus on AI is, in part, a response to international competition in emerging technologies, especially as it perceives China to be making considerable strides on this front.

A report published by the Australian Strategic Policy Institute (ASPI) validates such concerns, finding that Chinese researchers have surpassed their US counterparts in 37 of the 44 technologies surveyed, spanning sectors such as robotics, biotechnology, artificial intelligence, advanced materials, and quantum technologies.

"In the long term, China’s leading research position means that it has set itself up to excel not just in current technological development in almost all sectors, but in future technologies that don’t yet exist," the report states.

Alexandr Wang, CEO of Scale AI, a Silicon Valley-based company, expressed a similar view while testifying before a House Armed Services subcommittee. Wang said: "The country that is able to most rapidly and effectively integrate new technology into war-fighting wins."

He noted that China has recognised the potential of AI in warfare and is putting in far greater efforts than the US in the field of AI.

In his testimony, Wang drew an analogy between China's investment in AI and the Apollo program of the 1960s, arguing that, much as Apollo marked a turning point in the US's space exploration endeavours, China's investments in AI signal its commitment to asserting dominance in the field.

Schuyler Moore, Chief Technology Officer of US Central Command, emphasised the critical role of AI in modern military strategy at a Center for Strategic and International Studies event on AI in May, stating, “We are fully bought into the idea that data-centric warfare is the only way to conduct business going forward.”

Margie Palmieri, Deputy Chief Digital and Artificial Intelligence Officer at the Pentagon, has articulated the department's intention to integrate commercial AI practices into its military operations. She said the question, "How do we bring these commercial practices into the defence department?", remains a persistent focus of the department's strategic discussions.

Turning to Silicon Valley

To answer this question, the Pentagon has increasingly turned to Silicon Valley for assistance in its AI endeavours. This collaboration became even more pronounced last year when the Pentagon appointed Dr Craig Martell as its first Chief Digital and Artificial Intelligence Officer. Before joining the Pentagon, Martell had led machine learning initiatives at Lyft and Dropbox.

Previously, collaborations between Silicon Valley firms and the Pentagon faced hurdles. A prime example came in 2018, when a significant number of Google employees raised objections to Project Maven, a Pentagon initiative that harnessed the tech giant's AI prowess to analyse drone surveillance data. The internal unrest led Google to withdraw from the project.

However, the tide appears to have turned; just three years later, the company secured a multi-billion-dollar contract with the Pentagon.

Istanbul-based military technology analyst Akca noted that the US defence establishment has successfully shifted the narrative, securing much of Silicon Valley’s support.

"Silicon Valley now seems more amenable to aiding the US government in its rivalry with competitors, echoing the 'free world versus evil empire' rhetoric reminiscent of the Cold War era," Akca said.

One example of Silicon Valley's growing involvement with the Pentagon is Saildrone, a company that initially set out to develop autonomous vessels for civilian oceanic exploration. Now, these AI- and machine learning-equipped vessels have become essential assets for the US Navy, aiding intelligence collection and maritime surveillance.

Another Silicon Valley company the Pentagon has partnered with is DeepMedia. The company helps the Pentagon develop technology designed to detect deepfakes and other forms of manipulated media, which have become increasingly prevalent with the widespread adoption of AI tools.

This alliance between the tech sector and the Pentagon has very vocal proponents in Silicon Valley. Palantir CEO Alex Karp has made it clear that his company is committed to supporting the Pentagon's efforts to counter "adversaries to the West." Addressing employees who might not share this mission, he unambiguously declared, "Don't work here."

Palantir has collaborated with the Pentagon and the US intelligence community to manage and analyse their data for nearly twenty years. Palantir’s software platforms, such as Gotham, support DoD functions from strategic planning to operational decision-making. The CIA was among the firm’s earliest backers, and the agency, along with the FBI and NSA, utilises its services.

The promise of large language models

In line with its ambitions, Palantir has recently introduced a large language model (LLM)-based technology designed to integrate seamlessly with the Pentagon's systems. LLMs, trained on vast amounts of internet data, predict and generate text that closely resembles human communication in response to user prompts.
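As a rough illustration of that next-token mechanism, the short Python sketch below uses the open-source Hugging Face transformers library and the publicly available GPT-2 model (both chosen purely for illustration, and unrelated to any Pentagon or Palantir system) to show how an LLM extends a prompt into human-like text:

```python
# Illustrative sketch only: a small open model (GPT-2) generating text
# by repeatedly predicting the most likely next token. It has no
# connection to the Pentagon's internal models or data.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models are trained to"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: at each step the model appends its single most
# probable next token, until 40 new tokens have been produced.
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```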

Prominent examples of LLMs include OpenAI's ChatGPT and Google's Bard. However, the Pentagon has shown no intention of using the existing models. Instead, it aims to train its own models on its internal data.

The Pentagon's immediate focus is on improving decision-making by analysing sensor data and formulating possible courses of action.

To achieve this, the Defense Department is actively experimenting with five LLMs to investigate their potential for integrating data and digital platforms in a military context. The experiments involve collaboration between the Pentagon's digital and AI office, top military officials, and select US allies.

In light of all the AI-related moves the US defence establishment has made in the recent past, alarm bells have been raised over the prospect of AI-enabled warfare.

Recent demonstrations from Palantir and Scale AI—technology companies working with the Pentagon to explore AI's potential in the military—illustrate the allure of removing the fog of war. Palantir's Artificial Intelligence Platform (AIP) promises to link AI-enabled chat-based functionality with intelligence collection and battle planning, while the US Army is working with Scale AI on its Donovan platform to experiment with similar LLM applications.

However, the use of AI in the military raises significant ethical and legal considerations. Military personnel have expressed scepticism about deploying AI-enabled autonomous weapons due to concerns over safety and reliability.

Concerns regarding the Pentagon's adoption of AI solutions like Palantir's AIP and Scale AI's Donovan highlight the need for a careful evaluation of AI's role in military planning and decision-making.
