
Anthropic Disputes Pentagon Claims of Its AI Tech in Court

Wednesday, 22 April 2026 08:40 PM EDT

Anthropic on Wednesday told an appeals court that it can't manipulate its artificial intelligence tool Claude once it is deployed in classified Pentagon military networks — an assertion aimed at debunking the Trump administration's attempt to brand the rapidly growing technology company as a supply chain risk.

The statement, made as part of a 96-page filing with the U.S. Court of Appeals in Washington, D.C., provided a glimpse at the arguments that Anthropic's attorneys intend to make in a lawsuit filed last month in the fallout of a contract dispute over how AI technology can be used in fully autonomous weapons and in potential surveillance of Americans.

San Francisco-based Anthropic contends the Pentagon is illegally retaliating against it by stigmatizing it with a designation meant to protect against sabotage of national security systems by foreign adversaries.

Earlier this month, the appeals court rejected Anthropic's request for an order that would have blocked the Pentagon's actions while the panel continues to collect evidence in the case.

Anthropic's new filing is meant to directly address some of the court's questions ahead of oral arguments scheduled for May 19. The Trump administration will have an opportunity to file its response before that hearing.

Anthropic's temporary setback came after it had prevailed in a separate case focused on the same issues in San Francisco federal court. That decision prompted the Trump administration to remove the stigmatizing labels from Anthropic, according to court filings.

But the lack of a similar order in the Washington case continues to cast a cloud over Anthropic, whose AI tools have turned it into a rising tech star along with rival OpenAI. After the Pentagon canceled a $200 million contract with Anthropic in the wake of their disagreement, OpenAI struck a deal to provide its technology to the U.S. military.

Copyright 2026 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
