A federal judge has halted the Pentagon's attempt to sideline artificial intelligence firm Anthropic, delivering a sharp rebuke to a controversial national security designation that threatened to cut the company off from lucrative government work.
In a detailed ruling issued Thursday, US District Judge Rita Lin blocked the Defense Department from enforcing its decision to classify Anthropic as a supply chain risk, a label typically reserved for entities suspected of exposing military systems to foreign threats.
Lin wrote that the government's move appeared less about protecting national security and more about penalizing the company for its public stance on how artificial intelligence should be used in warfare and surveillance.
The judge said the designation raised serious constitutional concerns, pointing to potential violations of free speech protections and due process rights. She added that Anthropic had not been given a meaningful opportunity to challenge the label before it was imposed.
Although the order blocks the restrictions for now, Lin stayed its effect for seven days to give federal officials time to appeal.
The dispute stems from Anthropic's refusal to loosen safeguards on its Claude AI system. The company has maintained firm limits against deploying its technology in autonomous weapons or domestic mass surveillance, positions that put it at odds with Pentagon officials seeking broader operational flexibility.
Defense leaders argued that such restrictions could hinder military readiness, warning that privately imposed constraints might interfere with critical systems during conflict. Government lawyers also contended that the designation was based on contractual disagreements, not retaliation.
But Lin rejected that framing, writing that the evidence suggested the action was intended to punish dissent. She emphasized that allowing the government to blacklist a company over its expressed views would set a troubling precedent.
Anthropic welcomed the decision, saying it reinforced the importance of safeguarding both innovation and constitutional rights. The company added that it remains committed to working with government partners while upholding its safety principles.
The case is ongoing, with additional legal challenges still pending in Washington, leaving the broader clash between AI governance and military authority far from settled.

