Pentagon threatens to cut ties with AI firm Anthropic over military use restrictions

The US Defense Department is considering designating artificial intelligence company Anthropic a "supply chain risk" amid a dispute over restrictions on military use of its Claude AI model. An anonymous Pentagon official warned that the firm would "pay a price" if it continues resisting demands for expanded military applications.
Defense Secretary Pete Hegseth is close to severing the Pentagon's relationship with Anthropic following months of contentious negotiations, Axios reported Monday. If the supply chain risk designation is imposed, all Defense Department contractors would be required to either cease doing business with Anthropic or lose their Pentagon ties. "It will be an enormous pain in the * to disentangle, and we are going to make sure they pay a price for forcing our hand like this," an anonymous Defense Department official told the online news outlet.
Claude's role in classified operations
Claude is currently the only AI model authorized for use within the Defense Department's classified systems and has been deployed in sensitive military operations, including the January mission that captured former Venezuelan President Nicolas Maduro. The AI tool was reportedly used through Anthropic's partnership with data firm Palantir Technologies, whose platforms are widely used across the Pentagon and federal law enforcement. The deployment marked the first time a commercial AI model was integrated into classified military operations.
Core disagreements over AI safeguards
Anthropic has drawn firm limits on military applications, refusing to allow its software to be used for mass surveillance of Americans or for developing weapons capable of firing without human oversight. The company's usage policies explicitly prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance. The Pentagon, meanwhile, is seeking assurances that it can use software from Anthropic and three other major tech firms—OpenAI, Google and xAI—for what it describes as "all lawful purposes."
Pentagon's position and industry response
"The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight," Pentagon spokesperson Sean Parnell said in a statement. He added that "ultimately, this is about our troops and the safety of the American people." Anthropic told Axios it is "having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right." Meanwhile, OpenAI, Google and xAI have reportedly agreed to lift some internal safeguards for Pentagon use, though only for unclassified activities.
Broader implications for AI and defense
The standoff carries significant implications for the future of AI in military applications. A contract with Anthropic worth up to $200 million—awarded last summer—could be at risk. The dispute highlights growing tensions between technology companies' ethical safeguards and military demands for unrestricted AI capabilities on future battlefields increasingly dominated by autonomous systems. For international observers including Türkiye, which develops its own defense technologies and maintains AI partnerships, the outcome could influence global standards for military AI deployment and the balance between innovation, ethics, and national security requirements.