Trump Administration Blacklists Anthropic Over AI Dispute

Feb 28, 2026, 2:31 AM

The Trump administration has blacklisted the artificial intelligence company Anthropic after the company refused to comply with Pentagon demands regarding the use of its AI technology, a standoff that has raised concerns about national security and operational control.
President Donald Trump announced that he has ordered all federal agencies to "immediately cease" using Anthropic's technology. The ban includes a six-month phase-out period for departments such as the Defense Department, which had been using Anthropic's products. In a post on Truth Social, Trump denounced the company as a "RADICAL LEFT, WOKE COMPANY," asserting that its actions jeopardize American lives and national security.
Defense Secretary Pete Hegseth echoed this sentiment, stating that Anthropic's refusal to grant full access to its AI model, Claude, fundamentally conflicts with American principles. He indicated that the relationship between Anthropic and the US military has been permanently altered as a result. Hegseth also expressed intentions to designate Anthropic as a supply chain risk, effectively blacklisting it from future government contracts.
The conflict centers on Anthropic's insistence on maintaining safeguards against the use of its technology for fully autonomous weapons and domestic mass surveillance. Anthropic CEO Dario Amodei has called such applications "illegitimate" and "prone to abuse," reaffirming the company's commitment to its ethical AI standards. Despite strong Pentagon pressure to drop these safeguards, Anthropic has held firm, saying it cannot in good conscience comply with the military's requests.
The Pentagon's ultimatum, which Hegseth delivered during a meeting with Amodei, set a deadline for compliance regarding the use of Claude. If Anthropic did not agree to the military's terms, the Defense Department would consider invoking the Defense Production Act (DPA) to compel compliance. The act has historically been used to prioritize defense contracts, and its invocation here raises questions about the ethics of forcing a tech company to relinquish control over its own technology.
Anthropic's refusal has drawn heightened scrutiny to the company, particularly as it prepares for a potential initial public offering (IPO). A supply chain risk designation could severely undermine Anthropic's business and investor confidence. The firm has attracted significant investment, including over $8 billion from Amazon, and is positioned as a leader in the AI landscape, with its technology among the few approved for classified military settings.
Critics of the Trump administration's actions have voiced concerns that political motivations are driving national security decisions. Senator Mark Warner, a Democrat from Virginia, stated that halting the use of a prominent American AI company raises serious questions about the government's priorities and the potential for favoritism in defense contracting.
Meanwhile, the administration's rhetoric has been characterized as inflammatory, with Hegseth and other officials branding Anthropic's policies as "woke." This label has drawn criticism from experts who argue that the term lacks a clear definition and serves as a catch-all for any safety measures that might limit the military's operational flexibility.
As the situation develops, Anthropic has indicated that it will challenge any supply chain risk designation in court, arguing that such a designation would be legally unsound and set a dangerous precedent. The coming weeks will be critical for both Anthropic and the Pentagon as they navigate this contentious dispute over the future of AI technology in military applications.
The blacklisting of Anthropic underscores the tension between national security demands, ethical commitments in AI, and the political landscape. As both sides prepare for potential legal battles, the outcome may reverberate throughout the tech industry and beyond.
