National Security & AI Policy
Pentagon Uses Claude in Iran Campaign Despite Blacklisting Anthropic
The U.S. military continued deploying Claude AI during strikes on Iran over the weekend — even as the Trump administration officially blacklisted Anthropic as a “supply chain risk.”
The U.S. military continued to deploy Anthropic’s Claude AI in operational planning during its campaign of strikes against Iran over the weekend, according to three defense officials with direct knowledge of the systems in use, even as the Trump administration’s executive order designating Anthropic a “supply chain risk” remained in effect. The contradiction underscores the gap between political directives and operational reality: military commanders have come to depend on Claude’s analytical capabilities and have no near-term alternative that matches its performance in the intelligence-synthesis tasks for which it was deployed.
The blacklisting was triggered by Anthropic CEO Dario Amodei’s refusal of Defense Secretary Pete Hegseth’s demands to remove safeguards preventing mass domestic surveillance and fully autonomous weapons targeting. The refusal, which the company framed as a matter of principle rather than a negotiating position, prompted the administration to move swiftly to cut Anthropic out of the federal technology stack, a process that is proving far more difficult in practice than on paper.
Meanwhile, OpenAI has revised its own Pentagon contract in the wake of the controversy. CEO Sam Altman conceded in a CNBC interview that the deal, struck within 72 hours of Anthropic’s standoff, “looked opportunistic and sloppy.” The amended agreement now includes explicit prohibitions on domestic surveillance of U.S. persons, though critics warn that significant loopholes remain, particularly around commercially acquired geolocation data and financial records, which do not fall under the same legal protections as communications intercepts.