A federal judge in San Francisco granted a preliminary injunction to Anthropic in its ongoing legal battle against the Trump administration.
Anthropic's lawsuit was filed to overturn the Defense Department's decision to blacklist the company following unsuccessful contract negotiations.
During a hearing on Tuesday, Judge Rita Lin pressed government lawyers over what she characterized as an "attempt to cripple" the AI startup.
The decision came after legal teams for both Anthropic and the U.S. government presented their arguments in court. A final ruling on the matter could take several months.
The lawsuit seeks to nullify the blacklisting imposed by the Pentagon and President Donald Trump’s directive that banned federal agencies from utilizing Anthropic’s Claude AI models. Anthropic requested the injunction to suspend these actions, aiming to mitigate further financial and reputational damage while the case is in progress.
In the Tuesday hearing, Judge Lin challenged government lawyers about the rationale behind Anthropic's blacklisting, indicating concerns that the company may be experiencing unjust repercussions from the administration.
“One of the amicus briefs used the term ‘attempted corporate murder.’ I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic,” Judge Lin remarked.
In late February, Defense Secretary Pete Hegseth labeled Anthropic as a potential supply chain risk, suggesting that the use of its technology could pose a threat to U.S. national security. The Department of Defense (DOD) formally notified Anthropic of this designation in a letter earlier this month.
Anthropic holds the distinction of being the first American company publicly classified as a supply chain risk, a designation typically reserved for foreign adversaries. This classification mandates that defense contractors, including major firms such as Amazon, Microsoft, and Palantir, certify they do not employ Claude in their military operations.
The Trump administration justified the supply chain risk designation based on two separate classifications, requiring challenges in two different courts. Consequently, Anthropic has initiated an additional lawsuit in the U.S. Court of Appeals in Washington, D.C., seeking a formal review of the DOD’s decision.
Prior to the supply chain risk designation, President Trump published a post on Truth Social, instructing federal agencies to “immediately cease” all utilization of Anthropic's technology and endorsing a six-month phase-out period for entities such as the DOD.
The administration's actions caught many officials in Washington off guard, particularly those who had admired and come to rely on Anthropic's innovations. The company notably became the first to implement its models across the DOD’s classified networks and was recognized for its seamless integration with existing Defense contractors like Palantir.
In July, Anthropic entered into a $200 million contract with the Pentagon, but negotiations over deploying Claude on the DOD’s GenAI.mil platform failed in September. The DOD sought unrestricted access to Anthropic's models for any lawful purpose, whereas Anthropic aimed to secure guarantees that its technology would not be used for fully autonomous weapon systems or domestic mass surveillance.
The negotiations ultimately reached an impasse, leading to the present legal conflict. "Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and seek out a more permissive AI vendor,” Judge Lin stated during the hearing on Tuesday. “However, I believe the essence of this case is fundamentally different.”