Anthropic rejects Pentagon demand to loosen AI safeguards over surveillance fears

The company refuses to allow its Claude model to be used for mass domestic surveillance or autonomous weapons, as the Trump administration threatens to invoke the Defense Production Act.
Artificial intelligence company Anthropic announced Thursday that it will not comply with a US Defense Department request to relax safeguards on its AI systems, citing ethical concerns about mass surveillance and fully autonomous weapons. CEO Dario Amodei said the company opposes allowing its Claude AI model to be used for "mass domestic surveillance" or "fully autonomous weapons," warning that current AI systems lack the reliability such applications would demand.
Ethical Stance and National Security
Amodei acknowledged that AI can support national security but cautioned that large-scale, AI-driven surveillance could threaten civil liberties. He emphasized that operating autonomous weapons without human oversight would require safeguards that "don't exist today." The statement comes after weeks of negotiations between Anthropic and the Pentagon, with the Trump administration signaling it may take coercive action.
Government Pressure and Threats
The administration has threatened to invoke the Defense Production Act, which would compel the company to prioritize national defense needs, and has considered labeling Anthropic a "supply chain risk"—a designation that would prevent Defense Department contractors from using its software. Axios reported that the Pentagon has initiated steps toward that designation and has asked Boeing and Lockheed Martin to document their reliance on Claude.
Pentagon Response
Pentagon spokesperson Sean Parnell denied the department intends to use AI for unlawful surveillance or fully autonomous weapons without human involvement. In a post on X, he stated the Pentagon seeks to use Anthropic's model for "all lawful purposes" and asserted that no private company should dictate operational decisions. The confrontation highlights growing tensions between the defense establishment and tech companies over the ethical boundaries of artificial intelligence in military applications.