A federal judge is hearing Anthropic's bid to temporarily block the Pentagon's designation of the company as a supply chain risk, a fast-moving case that could shape how far the Trump administration can go in punishing a developer of artificial intelligence over limits on military use of its tools.
Anthropic sued on March 9 in California federal court, arguing that Secretary of War Pete Hegseth unlawfully branded the company a national security supply chain risk after it refused to remove restrictions barring the use of Claude for domestic surveillance or for fully autonomous weapons.
The designation was the first public use of that label against a U.S. company under a procurement statute intended to shield military systems from sabotage, and Anthropic says the move violates its rights to free speech and due process.
The hearing came after President Donald Trump and Hegseth publicly escalated the fight on social media.
Trump wrote on Truth Social that he was directing every federal agency to "IMMEDIATELY CEASE" using Anthropic's technology, subject to a six-month phaseout for agencies already using its products.
Hegseth separately announced on X that he was directing the Pentagon to designate Anthropic a supply chain risk.
The court moved quickly.
A March 12 order from U.S. District Judge Rita Lin granted several amicus motions and set an accelerated briefing schedule tied to Anthropic's preliminary injunction request, with support briefs due March 13 and opposition briefs due March 17.
On Tuesday, Lin held a hearing in San Francisco on Anthropic's request for a preliminary injunction to temporarily block the Pentagon's "supply chain risk" designation while the lawsuit proceeds.
The proceeding focused on whether Anthropic could win emergency relief from a designation that the company says unlawfully bars War Department components and contractors from using its technology.
The Pentagon has argued, according to Reuters' account of its court filing, that Anthropic's contractual restrictions could create uncertainty about how Claude may be used and could endanger operations if the company constrains or alters the model during military use.
Anthropic denies that it controls deployed models in that way and says the government's actions threaten billions of dollars in revenue and broader reputational damage.
Axios, which reported from Tuesday's hearing, said Lin called the government's three-part campaign against Anthropic "troubling" and said it appeared aimed at crippling the company rather than narrowly addressing the Pentagon's stated security concern. The hearing ended without an immediate ruling, but Lin signaled sharp skepticism toward the Pentagon's position, suggesting the designation was meant to punish Anthropic over its public stance on AI safety.
Reuters contributed to this report.
Jim Thomas
Jim Thomas is a writer based in Indiana. He holds a bachelor's degree in Political Science, a law degree from U.I.C. Law School, and has practiced law for more than 20 years.
© 2026 Newsmax. All rights reserved.