Legal Battle
Anthropic Sues Trump Administration After Pentagon Labels Company a “Supply Chain Risk”
The AI safety company filed two federal lawsuits after the Defense Department applied a designation historically reserved for foreign adversaries like Huawei — a label that could cost hundreds of millions in revenue.
Anthropic filed two federal lawsuits against the Trump administration on Monday after the Pentagon designated the company a “supply chain risk” — a classification historically reserved for foreign adversaries like Huawei and ZTE. The designation came after negotiations broke down over Anthropic’s two red lines: that Claude would not be used for autonomous weapons or mass surveillance of U.S. citizens. The Pentagon insisted on using Claude for “all lawful purposes,” a framing the company found unacceptable.
The lawsuits, filed in the Northern District of California and the D.C. federal appeals court, allege First Amendment violations and argue that the supply chain risk label could jeopardize “hundreds of millions of dollars” in revenue, since defense contractors must now certify that they do not use Claude. The practical effect is immediate: any company doing business with the Department of Defense must demonstrate that Anthropic’s technology is not embedded in its products or services, a requirement whose chilling effect extends far beyond direct military contracts.
The confrontation marks the most significant clash to date between a frontier AI company and the U.S. government over the ethical boundaries of military AI deployment. While other AI companies — including OpenAI, whose own robotics leader resigned over Pentagon concerns last week — have quietly accepted broad military use terms, Anthropic’s choice to litigate rather than comply breaks with that pattern and could reshape how the defense establishment procures AI technology. The outcome may determine whether AI companies can maintain ethical red lines when the government decides those lines are inconvenient.