OpenAI and Anthropic Reject Pentagon’s AI Use in Mass Surveillance, Lethal Weapons


OpenAI CEO Sam Altman told staff this week that the company will refuse to let its artificial intelligence be used for mass surveillance or fully autonomous lethal weapons, aligning its “red lines” with those at the center of a high‑stakes standoff between rival Anthropic and the Pentagon (Axios). The move came as the Defense Department pressed Anthropic to accept contract language allowing “all lawful use” of its AI — or face being cut out of the military’s supply chain (CNBC).

Altman, in an internal memo and subsequent interviews, said OpenAI wanted to “help de‑escalate” the row while still pursuing its own deal to deploy models in classified military environments, so long as U.S. officials accept carve‑outs for domestic mass surveillance and autonomous offensive weapons (Axios). Anthropic has rejected similar Pentagon terms, saying it “cannot in good conscience accede” to demands that would strip safeguards from its Claude model (The Hill).

Anthropic’s Red Lines Collide With Pentagon’s “All Lawful Use” Push

Anthropic’s dispute with the Defense Department became public after Defense Secretary Pete Hegseth gave the company a deadline — reported as 5:01 p.m. ET on Friday — to accept new contract language allowing the military to use its systems for any lawful purpose, including on classified networks (CNBC). Anthropic’s leadership said that while it supports national security work, it will not allow its tools to be used for domestic mass surveillance or to make fully autonomous kill decisions, arguing that humans must remain “in the loop” for lethal force (The Hill).

Pentagon officials have countered that a private contractor should not unilaterally veto lawful military uses, with one senior figure quoted as saying companies must “trust your military to do the right thing” (KTEN). Hegseth has threatened to label Anthropic a “supply chain risk” and terminate the firm’s contract, worth up to $200 million, if it holds the line (CNBC). The clash has drawn in policymakers and think‑tank analysts, some of whom warn that forcing vendors to drop guardrails could remove an important layer of accountability around emerging weapons technology (Yahoo).

OpenAI’s Evolving Defense Role Draws Employee and Industry Scrutiny

Altman’s memo also underscored how far OpenAI has moved from its earlier public distance from military work. The company quietly removed an explicit ban on “military and warfare” uses from its usage policies in 2024, and in 2025 it launched “OpenAI for Government,” securing a Pentagon contract reportedly worth up to $200 million to develop AI for national security missions (Inc.). More recently, OpenAI tools, including ChatGPT‑based systems, have been tapped for a Pentagon generative‑AI platform and even a voice‑controlled drone‑swarm trial (Politico).

Those moves have fueled internal unease. Around 70 OpenAI employees signed an internal letter titled “We Will Not Be Divided,” backing Anthropic’s stance and urging strict limits on Pentagon use of their models (BBC). Staff at Google and other tech firms have issued similar public statements, signaling a broader labor push inside the industry to set hard boundaries on how frontier AI is used in warfare and surveillance (BBC). Altman has tried to thread the needle, telling CNBC he “mostly” trusts Anthropic and supports red lines while still seeking a framework that allows OpenAI to “support our war fighters” within those constraints (CNN).

The Bigger Picture

The showdown over Anthropic’s contract and Altman’s decision to publicly match its red lines marks a pivotal moment in how AI will be integrated into U.S. military power. With the Pentagon intent on fielding advanced AI “for all lawful use cases,” and with at least two leading labs now insisting on carve‑outs for mass surveillance and autonomous lethal weapons, the outcome will help define who sets the rules for battlefield AI — elected governments or the companies building the systems (CNN). If Google and other major vendors follow OpenAI’s lead, the Defense Department could be forced to codify limits it has so far resisted; if not, the episode may instead accelerate a search for more compliant suppliers, reshaping both the AI industry and the future conduct of war.