A federal judge on Thursday temporarily blocked the Trump administration from labeling Anthropic a “supply chain risk” and cutting off the artificial intelligence firm’s access to federal contracts.
US District Judge Rita Lin granted Anthropic’s request for a preliminary injunction, finding that the Trump administration’s “broad punitive measures” against the company “were likely unlawful” and could “cripple Anthropic.”
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,” Lin wrote in her ruling.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The dispute centers on the Pentagon’s demand that it be able to use Anthropic’s Claude AI for “all lawful purposes,” while Anthropic sought to bar the military from using it for mass domestic surveillance or fully autonomous weapons systems. After Anthropic refused to accept those terms, President Donald Trump and Secretary of Defense Pete Hegseth said they would declare the company a “supply chain risk,” prohibiting the use of its products in defense contract work.
Anthropic responded with a lawsuit filed earlier this month in federal court challenging the designation, calling it an “unprecedented and unlawful” attack on the company’s right to free speech.
Lin wrote that the administration’s measures don’t appear to reflect the government’s national security interests but rather seem punitive in nature.
“If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic,” Lin wrote.
Lin also delayed her order for one week to allow the Pentagon to seek a stay of the order.
Anthropic said in a statement that it was “grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
The White House and Pentagon didn’t immediately respond to a request for comment.