AI company Anthropic rejects Pentagon's request to loosen safeguards
The company refuses to remove safeguards from its AI model over concerns about surveillance and autonomous weapons.
Artificial intelligence company Anthropic has said it will not comply with a US Defence Department request to relax safeguards on its AI systems, citing concerns over mass surveillance and autonomous weapons.
In a statement on Thursday, CEO Dario Amodei said the company opposes allowing its AI model, Claude, to be used for “mass domestic surveillance” or “fully autonomous weapons.”
He said advanced AI systems are not reliable enough to operate such weapons without human oversight and require safeguards that “don’t exist today.”
He also said AI can support national security but warned that large-scale, AI-driven surveillance could pose risks to civil liberties.
Anthropic and the Pentagon have been negotiating for weeks.
The Trump administration has threatened to invoke the Defense Production Act, which allows the government to compel companies to prioritise national defence needs, and has considered labelling Anthropic a "supply chain risk". Such a designation would bar Defence Department contractors from using the company's software.
Axios reported that the Pentagon has begun steps towards that designation and asked Boeing and Lockheed Martin to detail their reliance on Claude.
Pentagon spokesperson Sean Parnell denied the department intends to use AI for unlawful surveillance or fully autonomous weapons without human involvement.
In a post on X, he said the Pentagon is seeking to use Anthropic’s model for “all lawful purposes” and would not allow a private company to dictate operational decisions.