The Rise of AI in Military Operations: What It Means
2026-03-30
On February 28, 2026, the United States and Israel launched Operation Epic Fury — striking over 1,000 targets inside Iran within the first 24 hours, nearly double the pace of the 2003 shock-and-awe campaign in Iraq.
The difference, according to the Pentagon, was AI warfare. The Maven Smart System, built on Palantir's data infrastructure and powered in part by Anthropic's Claude, compressed targeting timelines from hours to seconds.
Admiral Brad Cooper, head of US Central Command, put it plainly: advanced AI tools allow leaders to "cut through the noise and make smarter decisions faster than the enemy can react."
But hours into the same operation, a Tomahawk cruise missile struck the Shajareh Tayyebeh girls' elementary school in Minab, southern Iran — killing at least 168 people, more than 100 of them children under 12.
The school had been physically separated from the adjacent IRGC naval base since 2016, and its civilian character was plainly visible: walls repainted in blue and pink, sports fields marked on the asphalt, and an active social media presence. The US military's own preliminary investigation found that the strike was likely based on outdated targeting data.
That single incident has since become the defining case study for what artificial intelligence in warfare actually means — both its capacity and its catastrophic failure modes.
Key Takeaways
- The US military used AI targeting tools including the Maven Smart System to strike over 1,000 targets inside Iran within 24 hours, nearly double the operational pace of the 2003 Iraq campaign.
- A preliminary Pentagon investigation found that the February 28 Minab school strike, which killed at least 168 civilians, most of them children, was likely caused by stale, human-curated intelligence fed into the targeting system, not an AI malfunction.
- A UN resolution passed in December 2025 and a three-day multilateral meeting planned for June 2026 are the first formal steps toward international governance of AI in armed conflict, but a binding framework remains unlikely in the near term.
How AI Is Actually Being Used on the Battlefield
The public picture of AI in modern warfare tends to oscillate between science fiction and denial. The reality is more specific. The Pentagon's Maven Smart System uses AI to fuse data from satellite imagery, sensor feeds, and signals intelligence into rapid targeting recommendations.
According to Daniel Rothenberg, co-director of Arizona State University's Center on the Future of War, what once required hours of human analysis now takes minutes.
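Neither the Pentagon nor Palantir has published Maven's internals, so any code can only illustrate the general pattern. Below is a minimal Python sketch of multi-source target scoring, assuming a noisy-OR fusion rule and a human-review threshold; `SensorReport`, `fuse_reports`, and every number here are hypothetical, not drawn from the actual system.

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str        # e.g. "satellite", "sigint", "drone_feed"
    target_id: str
    confidence: float  # 0.0-1.0, as scored upstream

def fuse_reports(reports: list[SensorReport]) -> dict[str, float]:
    """Combine per-source confidences into one score per candidate target.

    Uses a noisy-OR rule: the fused score rises as independent sources
    agree, and no single source can saturate it on its own.
    """
    fused: dict[str, float] = {}
    for r in reports:
        prior = fused.get(r.target_id, 0.0)
        fused[r.target_id] = 1.0 - (1.0 - prior) * (1.0 - r.confidence)
    return fused

def rank_candidates(reports, review_threshold=0.85):
    """Sort candidates by fused confidence; anything below the threshold
    is flagged for mandatory human review rather than silently dropped."""
    ranked = sorted(fuse_reports(reports).items(),
                    key=lambda kv: kv[1], reverse=True)
    return [(tid, score, score >= review_threshold) for tid, score in ranked]

reports = [
    SensorReport("satellite", "site-42", 0.7),
    SensorReport("sigint", "site-42", 0.6),
    SensorReport("drone_feed", "site-17", 0.5),
]
for tid, score, auto_ok in rank_candidates(reports):
    print(f"{tid}: fused={score:.2f}",
          "auto-queue" if auto_ok else "needs human review")
```

The point the sketch makes is structural: the fusion rule and the review threshold, not any individual sensor, determine which candidates a human analyst ever sees.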
Drone operations benefit differently — AI enables autonomous navigation when electronic jamming makes remote human control impossible, and it enables swarm coordination that one operator could never manage manually.
Space Force orbital sensors now detect Iranian ballistic missile launches within milliseconds using infrared signature recognition, feeding interception calculations to automated defense systems before a human analyst has read the first alert.
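The real detection pipeline is classified; what can be sketched is the generic mechanism. The toy example below assumes a normalized infrared intensity feed and a consecutive-frame debounce, both invented for illustration (operational systems match against known booster signatures rather than a single scalar threshold).

```python
IR_INTENSITY_THRESHOLD = 0.9   # hypothetical normalized sensor reading
CONSECUTIVE_FRAMES = 3         # debounce: require N hot frames in a row

def detect_launch(frame_stream):
    """Flag a launch once N consecutive frames exceed the IR threshold.

    The debounce trades a few milliseconds of latency for a lower
    false-alarm rate, the core tension in automated early warning.
    """
    hot_streak = 0
    for ts, intensity in frame_stream:
        hot_streak = hot_streak + 1 if intensity > IR_INTENSITY_THRESHOLD else 0
        if hot_streak == CONSECUTIVE_FRAMES:
            yield ts  # hand off to interception logic and alert a human

def simulated_feed():
    # Simulated 1 kHz IR feed: quiet background, then a hot booster plume.
    readings = [0.10, 0.20, 0.15, 0.95, 0.97, 0.99, 0.98]
    for i, r in enumerate(readings):
        yield i * 0.001, r  # 1 ms per frame

for ts in detect_launch(simulated_feed()):
    print(f"launch flagged at t = {ts * 1000:.0f} ms")  # fires on 3rd hot frame
```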
These are not experimental deployments. They are operational realities confirmed by US military officials during active combat operations in February and March 2026.
The Minab School Strike: What the Data Says
The facts around Minab have been reconstructed by multiple independent investigations. The Shajareh Tayyebeh school sat less than 100 yards from an IRGC naval installation, a site that had been part of the same compound until a wall was erected between 2013 and 2016.
By February 2026, the building's civilian status was clearly visible in open-source satellite imagery: painted walls, a sports field, three public entrances, and years of documented school activity online.

The Defense Intelligence Agency had never updated its classification of the site. When CENTCOM generated strike coordinates, it drew on that stale record.
Former military officials confirmed to Semafor that the error was human in origin: outdated data fed into the Maven platform. The AI processed the information it was given and executed on it with precision.
Former CENTCOM director of intelligence Lt. Gen. Karen Gibson framed the accountability principle directly: "A commander somewhere will ultimately be held responsible — not a machine or a software engineer." But Human Rights Watch argued that framing misses the structural problem.
Faster workflows and AI-assisted target generation create compressed review windows. The speed that makes AI valuable in warfare is the same quality that can turn a stale database entry into a catastrophic strike before any human intervention catches the error.
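One mitigation that follows directly from the Minab failure mode is a freshness gate: refuse to build a strike package on a record nobody has re-verified recently. The sketch below is hypothetical, not a described Pentagon control; the 365-day window, the record fields, and the `freshness_gate` function are all assumptions made for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical policy: any target record whose classification has not been
# re-verified inside this window is blocked until a human re-reviews it.
MAX_RECORD_AGE = timedelta(days=365)

def freshness_gate(record: dict, now: datetime) -> tuple[bool, str]:
    """Return (cleared, reason), blocking strike packages built on records
    whose last human verification predates the freshness window."""
    age = now - record["last_verified"]
    if age > MAX_RECORD_AGE:
        return False, (f"record '{record['site_id']}' last verified "
                       f"{age.days} days ago; re-verification required")
    return True, "record within freshness window"

stale_record = {
    "site_id": "compound-007",              # illustrative ID only
    "classification": "military",
    "last_verified": datetime(2016, 6, 1),  # never updated after separation
}
cleared, reason = freshness_gate(stale_record, datetime(2026, 2, 28))
print(cleared, "-", reason)  # False - the record is nearly a decade stale
```

A check this simple cannot judge whether a record is right, only whether anyone has looked at it recently; but that is precisely the guardrail the compressed review window was missing.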
The Governance Gap Nobody Has Solved Yet
The Minab case surfaced an accountability structure that has not kept pace with the technology it governs. Over 120 House Democrats wrote to Defense Secretary Pete Hegseth demanding answers on AI's role in target selection.
China's Defense Ministry publicly warned against "unrestricted application of AI by the military," calling it a risk of "technological runaway."
The tension inside the US government became visible when the Pentagon sidelined Anthropic — one of its primary AI suppliers — just one day before Operation Epic Fury launched, over a disagreement about restrictions on autonomous systems.
A UN resolution passed in December 2025 on AI in the military domain is the most concrete international response so far. It opens a multilateral process, with a three-day stakeholder meeting set for June 2026 to develop shared best practices.
Chatham House's assessment is frank: a binding international framework is unlikely in the short term, but developing internal rules is in militaries' own interest.
The core problem, as TNGlobal's governance analysis identified it, is that "capability has advanced faster than accountability." Faster processing does not fix stale intelligence, better models do not resolve weak verification, and a human approval step is not sufficient if that human is working from flawed inputs at machine speed.
Conclusion
The 2026 Iran conflict has done something that years of academic debate could not: it made the risks of AI in modern war impossible to treat as abstract. A system that enabled 1,000 strikes in 24 hours also directed a strike at a school that had been visibly civilian for a decade. The AI did not malfunction in any technical sense.
The data it worked with was wrong. That distinction matters enormously for how militaries, governments, and the public think about what "human oversight" actually means in an AI-enabled targeting chain.
Oversight that occurs at the final approval step, but after AI has already shaped what targets are visible and which are filtered out, is structurally incomplete. The June 2026 UN meeting will not produce a binding treaty.
But the Minab investigation, the congressional letters, the Anthropic-Pentagon dispute, and China's public warnings have collectively shifted the center of gravity of this debate. AI battlefield decision making is no longer a future concern. The governance question is whether accountability can catch up to deployment — and the current evidence suggests it is running behind.
FAQ
What is Project Maven and how is it used in combat operations?
Project Maven is the Pentagon's flagship military AI program, initially launched in 2017 to use machine learning for processing drone surveillance footage. By 2026 it had evolved into the Maven Smart System — a broader targeting and intelligence fusion platform that integrates satellite imagery, sensor data, and signals intelligence to generate and prioritize strike packages at speed.
Two anonymous sources confirmed to NBC News that Palantir's implementation of Maven, incorporating Anthropic's Claude, was actively used for target identification during Operation Epic Fury in Iran.
Did AI cause the Minab school strike that killed 168 people?
Based on the preliminary US military investigation and multiple independent analyses, AI did not independently select the school as a target. The likely cause was that the site remained classified as an IRGC military target in Defense Intelligence Agency databases — a classification that was never updated after the school was physically separated from the adjacent base in 2016.
The AI system processed and acted on that outdated human-curated data. Former military officials confirmed to Semafor that "humans — not AI — are to blame," but critics including Human Rights Watch argue the compressed AI-assisted workflow left insufficient time for human review to catch the error.
Is AI making autonomous life-or-death decisions in warfare?
Not yet, according to official statements. Both CENTCOM's Admiral Cooper and Pentagon chief spokesperson Sean Parnell have publicly stated that humans make all final decisions on lethal strikes. Lauren Kahn of the Center for Security and Emerging Technology confirmed to NPR that "AI is not making decisions about who lives and who dies at this moment."
However, the speed and volume at which AI-assisted targeting operates — 1,000 targets in 24 hours — raises structural questions about how meaningful human review can be at that pace, a concern shared by members of the Senate Armed Services Committee.
What international rules govern the use of AI in war?
The primary international framework is the UN resolution on "Artificial intelligence in the military domain and its implications for international peace and security," passed in December 2025. It encourages multilateral discussion and opens a formal stakeholder process, with a three-day meeting scheduled for June 2026. However, it is non-binding.
The laws of armed conflict — including International Humanitarian Law and the Geneva Conventions — technically apply to all AI-assisted operations, but as Chatham House noted, there is a growing debate about whether AI introduces dimensions that require additional rules specifically governing autonomous targeting.
How should AI governance in the military catch up to current deployment?
Defense scholars and independent experts are converging on several specific requirements:
- Mandatory data-freshness protocols, so stale intelligence cannot be processed at operational speed.
- Workflow auditability, so post-incident investigations can trace exactly which AI outputs shaped which human decisions.
- Clear escalation thresholds defining when AI recommendations require additional human review.
- Post-incident traceability standards.
TNGlobal's governance analysis argued that "the presence of a formal approval step" is not sufficient — the quality of the entire process, from intelligence input to final authorization, determines actual accountability, not just the last human signature in the chain.
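What might those requirements look like in software? Purely as a sketch, the hash-chained audit log and escalation rule below illustrate two of them: workflow auditability and escalation thresholds. The `TargetingAuditLog` class and the specific rule (confidence below 0.9, or a record older than a year) are invented for illustration, not taken from any deployed system.

```python
import hashlib
import json
from datetime import datetime, timezone

class TargetingAuditLog:
    """Append-only audit trail: each entry is hash-chained to the previous
    one, so post-incident investigators can detect gaps or tampering."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "genesis"

    def record(self, event: str, detail: dict) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "detail": detail,
            "prev": self._prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        self._prev_hash = hashlib.sha256(payload).hexdigest()
        entry["hash"] = self._prev_hash
        self.entries.append(entry)

def requires_escalation(fused_confidence: float, record_age_days: int) -> bool:
    # Hypothetical rule: low model confidence OR stale intelligence forces
    # an additional senior human review before authorization.
    return fused_confidence < 0.9 or record_age_days > 365

log = TargetingAuditLog()
log.record("ai_recommendation", {"site_id": "site-42", "confidence": 0.88})
if requires_escalation(0.88, record_age_days=3500):
    log.record("escalated_for_review", {"site_id": "site-42"})
print(json.dumps(log.entries, indent=2))
```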
Disclaimer:
The views expressed belong exclusively to the author and do not reflect the views of this platform. This platform and its affiliates disclaim any responsibility for the accuracy or suitability of the information provided. It is for informational purposes only and not intended as financial or investment advice.





