US military used Anthropic's Claude AI in Maduro capture operation

The US military utilized Anthropic's artificial intelligence model Claude during the operation to capture Venezuelan President Nicolás Maduro in January, marking the first known instance of a commercial AI system being deployed in a classified Pentagon mission, according to The Wall Street Journal. The AI tool was accessed through Anthropic's partnership with defense contractor Palantir Technologies, though specific details about how Claude assisted in the raid remain undisclosed as the company faces tension with the Pentagon over usage restrictions.
The United States military employed Anthropic's artificial intelligence model Claude during the operation that led to the capture of Venezuelan President Nicolás Maduro in early January, The Wall Street Journal reported Saturday. The revelation marks the first confirmed use of a commercial AI developer's technology in a classified Department of Defense mission, signaling a new frontier in military applications of artificial intelligence.
According to anonymous sources cited in the report, Claude was deployed through Anthropic's existing partnership with Palantir Technologies, a data analytics firm that serves as a contractor for US defense and federal law enforcement agencies. Following the January 3 raid on Venezuela's capital Caracas, which involved strikes on multiple locations and resulted in Maduro's apprehension, an Anthropic employee reportedly contacted a counterpart at Palantir to inquire about how Claude had been utilized during the operation.
Pentagon-Contractor Partnership
The operation to capture Maduro and his wife Cilia Flores involved US special operations forces striking targets across Caracas, including the massive Fuerte Tiuna military complex, before transporting the Venezuelan leader to New York to face narcoterrorism charges. Witness accounts described the use of an "advanced, mysterious weapon" that reportedly incapacitated Venezuelan soldiers.
Anthropic's Claude, a large language model capable of tasks ranging from document analysis to potentially guiding autonomous drones, was accessed through Palantir's platforms, which are widely integrated into Pentagon and federal law enforcement operations. The exact role Claude played in the mission remains unclear, though the military has previously employed similar AI tools for analyzing satellite imagery and intelligence data.
Company's Cautious Stance
Anthropic's usage policies explicitly prohibit deploying Claude for activities that "facilitate violence, develop weapons, or conduct surveillance." When approached for comment, an Anthropic spokesperson declined to confirm or deny Claude's involvement in the operation, stating: "Any use of Claude—whether in the private sector or across government—is required to comply with our Usage Policies, which govern how Claude can be deployed. We work closely with our partners to ensure compliance."
The revelation comes amid reported tensions between Anthropic and the Pentagon. Defense Secretary Pete Hegseth has publicly stated the department will not "employ AI models that won't allow you to fight wars," a remark reportedly directed at safety-conscious AI developers. The Pentagon is currently reassessing its $200 million contract with Anthropic, with officials expressing frustration over the company's restrictions on using AI for autonomous weapons and mass surveillance.
The United States and other militaries are increasingly incorporating artificial intelligence into their arsenals. Israel has employed autonomous drones in Gaza and extensively used AI for targeting purposes, while US forces have utilized AI-assisted targeting technology in airstrikes across Iraq and Syria in recent years.