POLITICS
Trump orders US government to cease using Anthropic following Pentagon's feud with AI firm
Washington gave AI startup until Friday afternoon to agree to unconditional military use of its technology — even where that clashes with the company's own ethical standards.
Anthropic has refused to allow its Claude models to be used for mass surveillance of US citizens or deployed in fully autonomous weapons systems. / Reuters

US President Donald Trump has said he is directing every federal agency to immediately cease all use of Anthropic's technology, adding there would be a six-month phase-out for agencies, such as the Defense Department, that use the company's products.

"I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!" Trump said in a post on Truth Social on Friday.

Trump's directive comes amid a feud between the Pentagon and top artificial intelligence lab Anthropic over concerns about how the military could use AI in warfare.

Spokespeople for Anthropic did not immediately respond to a request for comment.

The dispute intensified over whether the US military can use the company's AI system, Claude, in life-and-death scenarios, including a hypothetical nuclear missile attack, according to a report by The Washington Post.

The report, citing a defence official, said tensions flared during a meeting last month when a senior Pentagon technology official raised a life-and-death scenario involving a possible intercontinental ballistic missile strike on the US and asked whether Claude could be used in such a situation.

According to the official's account, Anthropic CEO Dario Amodei responded in a way the Pentagon viewed as hesitant, with the official characterising his reply as, "You could call us, and we'd work it out."

The exchange was described as a key moment in the deepening standoff. An Anthropic spokesperson rejected that version of events as "patently false" and said the company has agreed to allow Claude to be used for missile defence.


'Critical military operations'

At the heart of the standoff is the Pentagon's demand that Anthropic permit the use of its AI for "all lawful purposes," according to the report.

In a post on the US social media platform X, Pentagon spokesman Sean Parnell said the agency has "no interest in conducting mass domestic surveillance nor deploying autonomous weapons," but wants to ensure AI can support lawful military operations without restrictions that could "jeopardise critical military operations."

Anthropic has resisted lifting limits related to autonomous weapons and large-scale surveillance. In comments cited by the Washington Post, Amodei said, "In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."

The Pentagon has reportedly given Anthropic a deadline to drop its objections or risk being excluded from future defence contracts.

Experts told the Washington Post the outcome could shape how AI companies engage with militaries, raising broader ethical and political questions about the future role of artificial intelligence in warfare.


Rivals support Anthropic

Hundreds of employees at AI giants Google DeepMind and OpenAI have urged their companies to set aside their bitter rivalries and rally behind Anthropic in its standoff with the Pentagon.

An open letter titled "We Will Not Be Divided," signed as of Friday by 336 Google DeepMind staffers and 68 from OpenAI, called on tech leaders to hold the line together.

"We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight," the letter said.

"They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand," it added.

OpenAI CEO Sam Altman told employees on Thursday that he too was seeking an agreement with the Pentagon that would include red lines similar to Anthropic's, and that he hoped to help broker a resolution, the Wall Street Journal first reported.

"We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions," he wrote.

SOURCE: TRT World and Agencies