The War Department is "close" to severing business ties with Anthropic and designating the AI company a "supply chain risk," Axios reported Monday.
Such a move would require any U.S. military contractor to drop Anthropic's technology or be shut out of Pentagon business.
War Secretary Pete Hegseth and senior Pentagon officials are nearing a decision to downgrade the military's relationship with Anthropic after months of bitter negotiations, according to the report.
A senior defense official told the outlet bluntly, "It will be an enormous pain in the *ss to disentangle, and we are going to make sure they pay a price for forcing our hand like this."
A "supply chain risk" designation is normally reserved for foreign adversaries and hostile actors, making the potential penalty for Anthropic's stance unusually severe for a domestic tech firm.
A chief Pentagon spokesman told Axios that the War Department's AI partnerships are under review and stressed, "Our nation requires that our partners be willing to help our warfighters win in any fight."
The friction centers on Anthropic's refusal to let the Pentagon use its Claude AI model for all lawful purposes — specifically limiting its use in mass domestic surveillance and fully autonomous weapon systems.
Anthropic says these restrictions protect Americans' privacy and prevent unchecked AI from targeting civilians, but the Pentagon argues the limits are too restrictive and could hamper battlefield effectiveness.
Reuters confirmed the Pentagon's ultimatum came after months of talks with Anthropic and other AI labs, including OpenAI, Google, and xAI.
Anthropic's Claude is the only AI model currently cleared for classified military systems — even reportedly assisting during the January operation to capture Venezuela's Nicolás Maduro, according to The Wall Street Journal.
But the company's insistence on safety guardrails has frustrated senior defense officials and sparked public disagreement between the Pentagon and Anthropic leadership.
If the Pentagon follows through on stripping Anthropic of its military status, companies that do business with the War Department would have to certify that they do not use Claude in their workflows — a difficult requirement given the tool's popularity across the private sector.
Axios noted that eight of the 10 largest U.S. firms reportedly use Claude in some capacity.
While OpenAI, Google, and xAI have agreed to remove guardrails for military use on unclassified systems and are negotiating access to classified networks, sources told Axios that Anthropic remains the most resistant to loosening its policies.
Senior administration officials believe the others are more likely to comply with the "all lawful use" standard.
Anthropic officials counter that current U.S. law already forbids domestic mass surveillance and that AI's rapid capabilities simply outpace outdated statutes.
They say ongoing talks with the War Department are conducted in good faith to resolve complex policy issues.
A contract with Anthropic worth up to $200 million — a small portion of its $14 billion annual revenue — could soon be at risk if the Pentagon proceeds with the supply chain risk designation, Axios reported.
Charlie McCarthy, a writer/editor at Newsmax, has nearly 40 years of experience covering news, sports, and politics.
© 2026 Newsmax. All rights reserved.