
A Court Just Blocked the Pentagon’s Anthropic Ban. Here’s What That Means.

A district court judge in California issued a preliminary injunction blocking the Pentagon’s declaration of Anthropic as a “supply chain risk”—but the company is still in deep trouble.

On Thursday, Judge Rita F. Lin of the District Court for the Northern District of California temporarily halted a directive from President Trump that banned federal agencies from using artificial intelligence technology developed by Anthropic. The federal judge in San Francisco further ordered a preliminary injunction against the United States Department of Defense (DoD) after it labeled the company a “supply chain risk.”

The Pentagon’s move came in late February after the AI developer refused to loosen its restrictions on the use of its technology, with Anthropic insisting its Claude AI model be excluded from mass domestic surveillance and use in fully autonomous weapons. The designation, made under a rarely used procurement authority, would have forced contractors working with the DoD—including nearly all of America’s major tech companies—to avoid using Anthropic’s tools.

Anthropic subsequently filed a lawsuit arguing that the designation violated its First Amendment and due process rights and caused significant harm to its business—claims the court found likely to succeed at this initial stage of litigation.

“At bottom, Anthropic has shown that these broad punitive measures were likely unlawful and that it is suffering irreparable harm from them. Numerous amici have also described wide-ranging harm to the public interest,” Lin wrote in a 43-page ruling.

The ruling will pause the government’s ban until the court decides the merits of the underlying case. Lin’s opinion noted that the supply chain risk designation is typically reserved for foreign intelligence agencies, companies with ties to those agencies, and terrorist groups, rather than American companies.

“These broad measures do not appear to be directed at the government’s stated national security interests,” Lin wrote. “If the concern is the integrity of the operational chain of command, the Department of [Defense] could just stop using Claude.”

The federal court judge added that the “measures appear designed to punish Anthropic” for openly disagreeing with the Pentagon.

The DoD’s lawyers had argued that Anthropic’s actions made it untrustworthy and that the supply chain risk designation was intended to limit how the military could employ its AI models. The Pentagon’s team also suggested that Anthropic might update Claude to endanger national security, an argument Lin rejected in finding the designation “both contrary to law and arbitrary and capricious.”

Anthropic Won Its Court Case—but Won’t Come Out Ahead

Anthropic welcomed the ruling.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” the company wrote in a media release. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

Experts at AI security provider Suzu Labs said the finer details of the ruling deserve close examination.

“The Pentagon lost the court case and the procurement tool,” said Michael Bell, founder and CEO at Suzu Labs, in an email to The National Interest. “A federal judge found the supply chain risk designation was retaliation, not a security determination, and cited the Pentagon’s own records. That precedent constrains every future use of the same authority against domestic companies. The tool was built for foreign adversaries. Now it’s been tested in court against a domestic company, and it failed.”

However, it is hard to see a real winner in the Anthropic-Pentagon fracas.

“The real issue isn’t who won the lawsuit. It’s that frontier AI capability on classified networks was disrupted for over a month while this played out. From an operational standpoint, that’s the failure worth examining,” Aaron Colclough, VP of operations at Suzu Labs, told The National Interest via email.

“Single-vendor dependency in critical systems is a known risk in security, and this case shows it applies to defense AI procurement too,” Colclough added. “When one contract dispute can degrade classified network capability for weeks, the architecture is wrong. The fix isn’t better contracts with one provider or clearer legal precedent. It’s qualified redundancy: multiple providers, interoperable where possible, with clear use terms negotiated before anyone gets access to classified infrastructure.”

Bell suggested that even as headlines call this a victory for Anthropic, it isn’t one. Instead, both sides made mistakes, and military AI readiness will suffer as a result.

“Anthropic got the injunction but lost the business relationship. The $200 million contract is functionally dead,” warned Bell.

Despite Anthropic’s court victory, it is unlikely that anyone at the Pentagon will champion working with a company that sued it—certainly not until there is a change at the top. It is certainly possible that the company could find a way back with a future presidential administration and leadership within the Pentagon. However, that won’t be for two years at the earliest—several lifetimes for a fast-moving technology like AI.

The US Armed Forces Just Want AI Tools That Work

The feud isn’t limited to one department or agency. President Donald Trump issued an executive order that phased out the company’s AI technology across all federal agencies. Bell warned that this was a separate action, and that Lin’s narrow ruling doesn’t touch it.

“Meanwhile, the warfighter lost capability. For over a month, frontier AI on classified networks was degraded or disrupted while both sides fought in court and on cable news,” Bell explained. “The rush to onboard replacement providers is happening, but classified infrastructure accreditation takes months, not weeks. The warfighter doesn’t care who won the argument. They care whether the tools work.”

Anthropic likely won’t be the only AI developer to put guardrails on its technology and how it may be used. That could be both a problem and an opportunity.

“Competition solves this. Court rulings don’t,” said Bell. “Get more qualified providers on classified networks, negotiate terms clearly before deployment, and stop letting single-vendor dependency turn contract disputes into national security crises.”

About the Author: Peter Suciu

Peter Suciu has contributed to dozens of newspapers, magazines and websites over a 30-year career in journalism. He regularly writes about military hardware, firearms history, cybersecurity, politics, and international affairs. Peter is also a contributing writer for Forbes and Clearance Jobs. He is based in Michigan. You can follow him on Twitter: @PeterSuciu. You can email the author: [email protected].


