OpenAI Secures Pentagon AI Deal as Anthropic Is Blacklisted Over Security Risks


OpenAI reached a deal late Friday to deploy its artificial intelligence models on the Pentagon’s classified networks, just hours after President Donald Trump ordered all federal agencies to stop using Anthropic’s technology and the Defense Department labeled the rival AI firm a national security “supply‑chain risk.” [cnn +1] The decision abruptly ended talks over a Pentagon contract with Anthropic reportedly worth up to $200 million and signaled a major shift in how Washington will balance military needs against corporate AI safety guardrails. [fortune +1]

Trump’s directive on February 27 ordered agencies to “immediately cease” using Anthropic tools, while Defense Secretary Pete Hegseth’s designation barred federal contractors from relying on the company and set a six‑month phaseout for its systems already embedded in defense platforms. [nbcnews +1] Anthropic, which had refused to allow its models to be used for domestic mass surveillance or fully autonomous lethal weapons, said it would challenge the designation in court, calling it legally unsound and a dangerous precedent. [ft]

How OpenAI’s Deal Differs From Anthropic’s Collapse

OpenAI said its agreement allows the Defense Department to use its models inside classified systems but includes explicit bans on domestic mass surveillance and fully autonomous weapons, along with technical safeguards and dedicated oversight staff. [cnn +1] CEO Sam Altman argued the Pentagon “agrees with these principles” and urged officials to extend the same terms to all AI providers, an implicit defense of Anthropic’s rejected red lines. [politico +1]

By contrast, Pentagon negotiators had pressed Anthropic for a blanket commitment to support “all lawful purposes,” a demand that would have overridden its internal prohibitions and allowed uses such as wide‑area U.S. surveillance and autonomous targeting so long as they complied with U.S. law. [fortune +1] When Anthropic refused to drop those limits, talks broke down and Hegseth moved to blacklist the company as a risk to the defense supply chain, a tool typically reserved for foreign or hardware suppliers rather than domestic software firms. [wired +1]

Industry Stakes and a New AI Security Doctrine

The confrontation immediately reshaped the race for Pentagon AI business. With Anthropic sidelined, OpenAI joins other major players — including Google and Elon Musk’s xAI — that have secured access to deploy large models on classified networks, positioning themselves for billions of dollars in future defense work. [fortune +1] Investors and analysts said Anthropic’s exclusion could weigh on its valuation and IPO prospects, while signaling to the wider sector that insisting on strong usage guardrails may jeopardize access to federal contracts. [ft +1]

Legal and policy experts warned that using supply‑chain risk powers against a U.S. AI lab stretches existing tools and could chill companies from pushing back on government demands, especially in sensitive domains like surveillance and weapons. [wired +1] Civil‑liberties and human‑rights groups, meanwhile, cautioned that if firms are pressured to relax safeguards, the result could be broader domestic monitoring and more automated targeting capabilities embedded into U.S. military doctrine. [aljazeera +1]

The Bigger Picture

The twin moves to ban Anthropic and embrace OpenAI crystallized a larger struggle over who sets the limits on frontier AI: elected officials and national‑security planners, or the tech companies building the systems. The outcome of Anthropic’s planned legal challenge and the durability of OpenAI’s promised safeguards are likely to shape not only future Pentagon procurement rules but also how much control private labs retain over how their most powerful models are ultimately used.