Public debate on artificial intelligence is increasingly shaped by two main narratives.
On one hand, there is concern that AI will replace human labour, leading to mass unemployment. On the other, there is a techno-utopian view that presents AI as a solution to everything from economic stagnation to climate change.
Yet amid this debate, we may overlook a quieter shift. The Pentagon-Anthropic crisis provides a vital chance to reconsider how this technology is viewed. AI is not just a civilian tool but a military and strategic instrument of power.
To understand AI’s military focus, it is useful to examine the origins of modern communication technologies.
Contrary to popular belief, many of these technologies did not begin as civilian projects.
The internet’s predecessor, ARPANET, was developed by the Pentagon as a defence network designed to maintain communication during a nuclear attack.
Today’s AI systems carry traces of this same technological lineage.
From this perspective, the Pentagon’s interest in artificial intelligence is not new. It reflects the technology’s historical roots.
The history of AI research reinforces this point. During the so-called “AI winters,” when private investment slowed and many academics questioned the field’s future, military funding continued to support research.
In the 1980s, for instance, the Strategic Computing Initiative launched by DARPA under the US Department of Defence played a key role in advancing AI’s computing power and targeting capabilities.
The Pentagon-Anthropic tension
This long-standing relationship between artificial intelligence and defence institutions has recently resurfaced.
The tensions between the Pentagon and Anthropic reveal how fragile the technology sector’s rhetoric around “safe and ethical AI” can be.
Sharp criticism from the Trump administration and Secretary of War Pete Hegseth has reopened the debate over whether technology is truly neutral.
Trump’s description of the company as “radical left” on Truth Social, followed by further attacks, should not be dismissed as simple political rhetoric.
The statements can also be read as a signal from Washington that the state remains the ultimate authority over strategic technologies.
Anthropic’s designation as a “supply chain risk” and its legal response exposed the delicate balance of power between Silicon Valley and Washington.
Support from major tech companies such as Google, Amazon, Microsoft and Apple reflected not only corporate solidarity but also concern that similar political pressure could eventually target them.
At this critical moment, Sam Altman’s OpenAI moved quickly to fill the strategic vacuum created by the crisis.
While Anthropic distanced itself from government institutions due to disagreements over ethical “red lines,” OpenAI entered into a new partnership with the Pentagon.
Founded in 2015 with a non-profit, idealistic vision, OpenAI now finds itself at the heart of the national security architecture with a market valuation exceeding $700 billion.
Yet the most significant reaction to this move emerged among users of AI platforms themselves. After the agreement was announced, the #QuitGPT campaign spread across social media, reportedly driving a record spike of up to 300 percent in deletions of the ChatGPT app.
In response to the backlash, OpenAI was forced to emphasise that the agreement included additional safeguards, such as restrictions preventing the surveillance of US citizens and requirements for human accountability in autonomous weapons systems.
Despite this, Anthropic’s Claude model rapidly climbed the app store rankings. Many users began to view Claude as an ethical alternative to OpenAI’s perceived alignment with the military.
Ethics as a reputation shield
However, this picture is not as simple as it appears. While Anthropic strongly emphasises its ethical discourse, it is one of the first AI companies to receive clearance to work within the Pentagon’s classified networks.
The active use of Claude in targeting analyses that led to the capture of former President Maduro during the Venezuela operation in January, as well as in strikes carried out in Iran, clearly exposes the limits of this “safe AI” rhetoric.
The fact that Anthropic’s founder and CEO, Dario Amodei, opposes the surveillance of US citizens while leaving the door open to mass surveillance and operational analysis abroad shows how ethical boundaries can shift with geography and citizenship.
This situation reveals that the ethical and human-centric brand identities of AI companies have turned into tools for reputation management.
AI giants are carefully trying to manage the global backlash that would arise from being directly associated with fields like autonomous weapons.
This effort can be explained not by a moral stance but by the desire to create a “reputation shield” to protect the brand and market share.
For AI giants, ethics often amounts to little more than “ethics washing” that decorates the corporate storefront, rather than a deep responsibility. As Anthropic’s claim of centering the human is crushed under the commercial gravity of its $380 billion market value, the chasm between discourse and reality deepens.
The architecture of mass assassination
The integration of AI into military systems does more than provide a strategic advantage. It is reshaping the nature of warfare and creating what legal scholars call an “accountability gap”.
Today, conflict zones have become massive laboratories for tech giants and militaries.
Some of the most terrifying examples can be seen in AI-driven systems such as “Habsora” and “Lavender,” reportedly used in Israeli attacks on Gaza. These systems treat civilian casualties as a statistical margin of error. As journalist Yuval Abraham has argued, such technologies risk turning into a “mass assassination factory.”
Human-in-the-loop oversight is often presented as an ethical safeguard. In practice, however, it often amounts to little more than symbolic approval.
When algorithms generate thousands of targets within minutes, it becomes physically impossible for the operator on the ground to scrutinise the resulting data in detail.
Faced with limited time and overwhelming information, soldiers increasingly rely on AI recommendations. Human oversight then shifts from genuine decision-making to a process that merely legitimises automated outcomes.
The evaporation of responsibility
This situation turns AI into an attack tool where responsibility evaporates. The systems used by Israel target the homes of suspects, directly endangering civilians, children, and the elderly in those homes.
AI no longer just processes data; it reduces humans to statistical deviations in a “target bank.”
This problem is not unique to a single conflict zone; it is evident in other recent events. Initial reports suggest that the United States’ attack on a school in Iran on February 28, killing 168 people, including over 100 children, stemmed from faulty intelligence.
As investigations continue, it is possible that outdated intelligence or an AI tool reporting a wrong target was responsible for the deaths of these civilians.
When flawed data or algorithmic bias leads to loss of life, a troubling question arises: who is responsible? Is it the engineer who designed the algorithm, the company that deployed the system, or the soldier who pressed the approval button?
This uncertainty has become one of the darkest aspects of modern warfare.
Research by Professor Kenneth Payne of King’s College London highlights another dimension of the danger. His work suggests that AI models are 95 percent more likely than humans to resort to nuclear weapons during moments of crisis.
Humanity has so far refrained from pressing the nuclear trigger, learning from the painful memories of Hiroshima and Nagasaki. However, we cannot say that algorithms possess either this historical memory or a moral conscience.
At this stage, the Skynet nightmare from the Terminator films is no longer just a sci-fi dystopia; it is a new realpolitik forged by uncontrolled technological determinism.
Algorithmic siege
AI is evolving beyond a tool of efficiency into a central element of global security architecture. The handful of companies that control this infrastructure are not simply writing code.
They are also constructing digital borders that define the sphere of movement for individuals and society.
For languages and cultures that lack representation in training datasets, the danger goes beyond technological dependence.
It risks a deeper form of cultural imperialism. AI models carry the cultural codes, value judgments, and ideological biases of the datasets they are trained on, presenting them as universal truths.
Without diverse datasets, societies risk sacrificing their collective memory and unique cultural identities to the homogenised logic of global tech giants.
In this context, the issue is not just a technology race; it is a matter of protecting society and the individual from being coded as anonymous coordinates on algorithmic terror lists or as “statistical margins of error” in a software’s target bank.
For users, the most unsettling reality may be how limited their choices actually are. Switching from one model to another often amounts to choosing between different branches of a massive tree rooted in the same soil.
The names of the branches change, but the roots remain the same: strategic control, military networks, and a global surveillance infrastructure.
In the final analysis, AI is no longer an office assistant promising efficiency; it is turning into a digital weapon that encircles human existence and societal boundaries from every direction.