Trump Orders Federal Ban on Anthropic AI Amid Pentagon's Security Blacklist

President Donald Trump ordered every federal agency to stop using Anthropic’s artificial intelligence systems on Friday, escalating a high‑stakes confrontation over whether a private tech firm can refuse U.S. military demands to loosen AI safety limits (NPR). The directive followed the Pentagon’s move to label Anthropic a “supply‑chain risk,” effectively blacklisting the company across the defense industrial base and putting a contract worth up to $200 million at risk (The Washington Post).

The clash stemmed from Anthropic’s refusal to drop two core safeguards that block its Claude model from supporting domestic mass surveillance or fully autonomous weapons — restrictions the Pentagon argued would improperly let a contractor veto lawful military uses (Tech Policy Press). Negotiations collapsed when Anthropic CEO Dario Amodei publicly rejected a Friday 5:01 p.m. ET deadline to accept new terms, saying the company “cannot in good conscience accede to their request” even under threat of blacklisting (Tech Policy Press).

How the Pentagon–Anthropic Standoff Escalated

Over the past week, defense officials pressed Anthropic to alter contract language so the department could use Claude “for all lawful purposes,” warning they were prepared to cancel the contract, brand the firm a national‑security risk and even invoke the Defense Production Act to compel changes (Tech Policy Press). Such a label is typically reserved for companies tied to foreign adversaries and can bar other contractors from doing business with the targeted firm (The Washington Post).

Anthropic held its line, arguing that frontier models remain too unreliable for fully autonomous lethal systems and that enabling large‑scale domestic surveillance would cross an ethical red line (Reuters). After the deadline passed, Defense Secretary Pete Hegseth announced Anthropic would be treated as a supply‑chain risk and said the Pentagon and its contractors must phase out use of Claude, while allowing a short transition window to avoid immediate operational disruptions (The Washington Post).

A Test Case for AI Governance — and 2026 Politics

Trump amplified the confrontation by ordering a government‑wide phase‑out of Anthropic technology, denouncing the company’s leaders on Truth Social and framing the dispute as a battle against “leftwing” tech elites he portrayed as obstructing national security (NPR). The order could take six months to fully implement and may complicate work in agencies that have quietly integrated Claude into analysis and back‑office systems, defense and policy experts said (Axios). Some warned it could take months for the Pentagon alone to re‑engineer classified systems that rely on Anthropic’s tools (AP).

The move also split Silicon Valley. OpenAI CEO Sam Altman said his company shares Anthropic’s “red lines” on mass surveillance and autonomous weapons, signaling that at least some major AI providers want room to impose ethical limits on military customers (The Hill). Industry and legal analysts said the outcome of this fight will shape whether Washington can use emergency powers like the Defense Production Act to override such safeguards — and how central AI governance becomes in the 2026 election, where campaigns are already leaning into themes of national security, tech power and “woke” corporations (The New York Times).

The Bigger Picture

The Anthropic blacklist marked one of the first times a U.S. administration moved to punish a domestic AI firm over its internal safety rules, turning an abstract debate over “responsible AI” into a concrete battle over contracts, code and command authority. How courts, Congress and rival tech firms respond will determine whether this episode becomes an outlier born of a single showdown — or a template for how governments confront AI companies that try to draw their own red lines in matters of war and surveillance.