US confirms use of 'advanced AI tools' amid debate over whether AI error led to deadly attack on Iran school
CENTCOM confirms AI use in Iran war as investigators examine intelligence failure behind school bombing.
The United States military has confirmed for the first time that it is deploying a 'variety' of advanced artificial intelligence (AI) tools in its war against Iran.
The admission by Admiral Brad Cooper, head of US Central Command (CENTCOM), comes after a preliminary government investigation found American forces responsible for one of the most devastating military errors in recent decades: the bombing of an elementary school.
Speaking in a video message on Wednesday, Admiral Cooper defended the use of the technology, asserting that it allows "warfighters" to navigate the complexities of modern battlefields.
"Our warfighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react," Cooper stated.
He stressed that whilst the technology turns processes that used to take days into seconds, "Humans will always make final decisions on what to shoot and what not to shoot."
The Minab Targeting Fiasco
Despite Cooper's assurances of human oversight, The New York Times has reported that a preliminary probe into the 28 February strike on the Shajarah Tayyebeh elementary school in Minab points to a "targeting fiasco."
The Tomahawk missile strike killed 175 people, including 150 schoolgirls and members of staff.
Investigators believe that officers at CENTCOM generated the strike coordinates using outdated intelligence provided by the Defence Intelligence Agency.
The school building—painted in bright blue and pink with sports fields clearly visible on the asphalt—had been partitioned off from an adjacent military base in 2016.
However, the site remained in military databases as an active target.
This has triggered an intense debate over whether AI tools failed to identify the school's civilian status or if the "fatal chain of assumptions" was entirely human-driven.
Dr Craig Jones of Newcastle University told The Times: "At this point, we can't rule out that AI may have failed to identify the school as a school and instead identified it as a military target."
Fears of AI Taking Lives Without Oversight
The confirmation of AI usage has sparked global alarm, with critics arguing that the speed of the "kill chain" has eroded ethical restraints.
The Iranian Red Crescent Society reported on Wednesday that nearly 20,000 civilian buildings and 77 healthcare facilities have been damaged.
Beijing has joined the criticism, with the Chinese Defence Ministry warning that giving algorithms the power to determine life and death risks a "technological runaway."
The Trump administration, however, remains defiant.
Following a legal battle with the tech firm Anthropic over the ethical use of AI, Pentagon spokeswoman Kingsley Wilson stated that US forces would not be "held hostage by Silicon Valley ideology."
As investigators continue to piece together how such a "picture-perfect" precision strike could hit a building full of children, the incident has exposed the lethal risks of marrying high-speed AI processing with flawed, outdated intelligence and a lack of meaningful human oversight.