Federal Response
Anthropic Refuses Pentagon Demand to Drop AI Safeguards; Administration Blacklists Company
Defense Secretary labels Anthropic a “supply chain risk” after company declines to enable autonomous weapons and mass surveillance. Claude surges to No. 1 in App Store.
The Trump administration on Thursday ordered all federal agencies to immediately cease business with Anthropic, the San Francisco-based artificial intelligence company, after the firm refused a Pentagon directive to remove safety restrictions that prohibited the use of its Claude model for autonomous weapons targeting and warrantless mass surveillance of American citizens. The executive action, which took effect at midnight Eastern time, marks the first time a sitting administration has attempted to blacklist a major domestic technology company over its refusal to comply with military demands.
The confrontation had been building for weeks. According to documents reviewed by CNN, the Department of Defense in early February sent Anthropic a formal letter demanding that the company modify Claude’s acceptable use policy to permit “unrestricted deployment in national security contexts,” including autonomous lethal targeting systems and bulk communications monitoring. Anthropic, led by Chief Executive Dario Amodei, declined in a written response that cited both constitutional concerns and the company’s long-standing position that current AI systems lack the reliability required for life-or-death autonomous decisions.
Defense Secretary Pete Hegseth escalated the dispute by publicly designating Anthropic a “supply chain risk” to national security — a classification normally reserved for foreign adversaries and companies such as Huawei or Kaspersky Lab. The designation empowers the administration to bar federal contractors from using Anthropic products and could jeopardize the company’s eligibility for contracting work across the intelligence community. Anthropic said late Friday that it would challenge the designation in federal court, calling it “an unprecedented and unconstitutional attempt to coerce a private company into abandoning safety principles.”
The public backlash to the administration’s move was swift and commercially significant. Within 48 hours of the blacklist announcement, Anthropic’s Claude chatbot rocketed to the No. 1 position on Apple’s App Store, surpassing both ChatGPT and TikTok, with the company reporting record-breaking signups. Technology executives, constitutional scholars, and civil liberties organizations issued a flurry of statements in support of Anthropic, with several warning that the government’s actions set a dangerous precedent for the regulation of private AI development.