A San Francisco federal judge on Thursday blocked the Trump administration from enforcing a sweeping ban on artificial intelligence firm Anthropic, intervening in a high-stakes clash over who controls the military's use of emerging AI technology.
U.S. District Judge Rita Lin, a Biden appointee, granted Anthropic's request for a preliminary injunction, halting a Feb. 27 directive by President Donald Trump ordering all federal agencies to stop using the company's technology, along with related actions by the Department of War.
The ruling also prevents the Department of War and Secretary Pete Hegseth from enforcing a designation labeling Anthropic a "Supply-Chain Risk to National Security," a move that had cut the company off from defense contractors and key federal work.
Lin said the order restores the status quo while the case proceeds but does not require the government to continue using Anthropic's products or bar it from switching to other AI providers. A final ruling in the case could still be months away.
"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin wrote in a 43-page ruling.
Anthropic signed a $200 million contract with the Pentagon in July. But as the company began negotiating deployment of its AI model, Claude, on the War Department's AI platform in September, talks stalled, according to CNBC.
The War Department wanted Anthropic to grant it unfettered access to its models across all lawful purposes, while Anthropic wanted assurance that its technology would not be used for fully autonomous weapons or domestic mass surveillance. The sides could not reach an agreement.
"Everyone, including Anthropic, agrees that the Department of [War] is free to stop using Claude and look for a more permissive AI vendor," Lin said during a hearing Tuesday. "I don't see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law."
In her ruling, Lin said Anthropic is likely to succeed on claims the government's actions violated the First Amendment, finding evidence the measures were aimed at punishing the company for publicly opposing the administration's position.
She wrote the actions appeared driven by objections to Anthropic's "rhetoric" and "ideology," rather than a clearly established national security threat.
The court also raised concerns about applying a "supply chain risk" label — typically used for foreign adversaries — to a U.S. company without detailed justification or standard procedures.
The administration had argued it must retain full authority over how AI is deployed in defense operations, while Anthropic maintained its restrictions reflect safety limits of current technology.
The clash highlights a growing divide between Silicon Valley firms and federal policymakers over control of rapidly advancing AI systems, particularly in military and surveillance contexts.
Newsmax reached out to Anthropic and the Department of Justice for comment.
Michael Katz
Michael Katz is a Newsmax reporter with more than 30 years of experience reporting and editing on news, culture, and politics.
© 2026 Newsmax. All rights reserved.