Report: Pentagon weighs cutting ties with Anthropic over military AI limits

Dispute said to center on Anthropic’s refusal to allow use of Claude for mass surveillance or fully autonomous weapons, as Pentagon pushes for access to AI tools for 'any lawful purpose,' including classified operations

The Pentagon is considering ending its partnership with artificial intelligence company Anthropic over the firm’s insistence on limiting certain military uses of its AI models, Axios reported Sunday, citing a senior U.S. administration official.
According to the report, the Defense Department has been in talks with four major AI companies to enable the use of their tools for “any lawful purpose,” including sensitive areas such as weapons development, intelligence gathering and battlefield operations.
Dario Amodei at a Senate hearing
(Photo: SAUL LOEB / AFP)
AI systems are capable of processing large volumes of data and generating real-time insights, a capability viewed as critical in military operations.
Anthropic, the maker of the chatbot Claude, has declined to accept the broad “lawful purpose” definition. The company maintains restrictions in two areas: mass surveillance of U.S. citizens and the development or deployment of fully autonomous weapons.
After months of inconclusive negotiations, Pentagon officials are increasingly inclined to sever ties with the company rather than negotiate case by case or accept preemptive limits on certain uses, Axios reported.
The senior official cited by the publication said the administration views the two restricted areas as containing significant “gray zones,” where it may be difficult to determine what qualifies under the definitions. “Everything is on the table, including scaling back the partnership with Anthropic or cutting it off entirely,” the official said.
An Anthropic spokesperson said the company remains “committed to the responsible use of AI in support of U.S. national security.”
Tensions reportedly intensified following the U.S. military’s alleged use of Claude during an operation to capture Venezuelan President Nicolás Maduro. The software was accessed through a product developed in collaboration with Palantir, an AI company with extensive Pentagon contracts that integrates Claude into its platforms.
Pentagon officials said an Anthropic executive contacted a Palantir counterpart to inquire whether Claude had been used in the raid and suggested such use might not be approved in the future. Defense officials viewed the exchange as raising concerns about reliance on the software for operational missions.
“Any company that would jeopardize the operational success of our warfighters is a company whose partnership we need to reassess,” the official told Axios.
The Claude chatbot
(Photo: Getty Images)
While Claude has previously been used for satellite imagery analysis and other intelligence tasks, sources cited in the report said it was also used during active operations in the Venezuela mission, in which dozens of Venezuelan security personnel were reportedly killed.
Anthropic denied that it formally contacted Palantir or the Defense Department regarding the specific operation. “Claude is used across a wide range of intelligence-related applications throughout the government, including at the Department of Defense, consistent with our usage policies,” the spokesperson said. “The company has not had discussions with the Department of Defense regarding the use of Claude in specific operations.”
The spokesperson added that discussions with the Pentagon have focused on Anthropic’s strict policies regarding fully autonomous weapons and domestic mass surveillance, and that neither issue was related to the Venezuela operation.
The administration official acknowledged that replacing Claude could prove difficult, as competing AI models lag behind in certain specialized government applications. Anthropic signed a contract with the Pentagon last summer worth approximately $200 million, and Claude was the first AI model integrated into classified Defense Department networks.
The makers of rival systems, including OpenAI's ChatGPT, Google's Gemini and xAI's Grok, have agreed to lift certain restrictions for Pentagon use and are negotiating to expand into classified environments. According to Axios, at least one company has already accepted the Defense Department's terms, while the others have shown more flexibility than Anthropic.
Anthropic has positioned itself within the AI industry as a company focused on safety and user privacy. Chief Executive Dario Amodei has frequently warned of the risks of unregulated AI development. The company is also reportedly facing internal debate among its engineers over cooperation with the Pentagon.