
Anthropic sues Trump admin over 'supply chain risk' designation, claims 'unlawful campaign of retaliation'

Trump said Anthropic was trying to "strong-arm" the War Department.

Katie Daviscourt, Seattle, WA
Anthropic, an artificial intelligence (AI) company, has filed a federal lawsuit against the Trump administration after being designated a "supply chain risk" by the Pentagon. The complaint, filed on Monday in the US District Court for the Central District of California, argues that President Donald Trump's order directing all federal agencies to "immediately cease" using the company's technology was an "unlawful campaign of retaliation."

The AI company asked the court to reverse the order, citing "enormous consequences," including economic damages. The supply chain risk designation was issued in response to Anthropic's refusal to allow the Department of War (DoW) to use its technology to expand its supply of lethal autonomous weapons. Trump said Anthropic was trying to "strong-arm" the War Department.

"These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the lawsuit states. "No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation."

Defense contractors are prohibited from using companies deemed a supply chain risk, a designation typically reserved for foreign adversaries. Secretary of War Pete Hegseth told Anthropic that its refusal was a "national security risk." 



Anthropic, founded with a focus on safety and basic guardrails, was awarded a $200 million DoW contract in 2025. Negotiations fell through last month after Anthropic CEO Dario Amodei informed Secretary Hegseth that the company's products would be restricted from being used for either autonomous weapons or mass surveillance. The Pentagon rejected the argument, insisting that Anthropic's technology be available for any use classified as a "lawful purpose."

Most notably, Anthropic is the creator of the large language model Claude, which was recently used by the US military to capture Venezuelan dictator Nicolas Maduro in Caracas. The AI firm asked for assurances that its products wouldn't be used to spy on Americans "en masse" or to develop weapons meant to kill without a human pulling the trigger.

White House spokesperson Liz Huston told The Hill in a statement that the president refuses to "allow a radical left, woke company to jeopardize" the nation's security. "The President and Secretary of War are ensuring America's courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders," she said.