
Trump orders U.S. to halt use of Anthropic tech as Pentagon clash over AI ‘red lines’ escalates

President Donald Trump ordered federal agencies to “immediately” stop using Anthropic’s technology after the San Francisco-based AI start-up rejected a Pentagon demand for unconditional military use of its Claude models, igniting a high-stakes confrontation over mass surveillance and autonomous weapons.

Anthropic accused the Pentagon of “intimidation” and vowed to sue, saying it would not allow its systems to be used for “mass domestic surveillance” or in “fully autonomous weapons systems.” “No amount of intimidation or punishment from the Department of War will change our position,” the company said in a statement.

The Pentagon countered that it operates within the law and that contracted suppliers cannot dictate how their products are employed. It set a deadline of 5:01 p.m. today — 10:01 p.m. Irish time — for Anthropic to comply or face compulsion under the Defense Production Act, a Cold War–era law last invoked broadly during the COVID pandemic to steer private industry toward national priorities.

Trump sharpened the pressure, warning the company to cooperate. “Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” he said.

Defense Secretary Pete Hegseth said he is directing the Pentagon to move ahead with a separate punishment: designating Anthropic a supply chain risk. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote on X, calling the company’s stance “a master class in arrogance and betrayal.” Such a designation is typically reserved for entities from adversary nations.

Anthropic said it will challenge any supply chain risk listing in court. “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” the company said, adding that it remains “ready to continue our work to support the national security of the United States” under clear guardrails.

The showdown has triggered swift political and industry reactions. House Democratic leader Hakeem Jeffries praised the company’s stand, calling Hegseth “the least qualified Secretary of Defense in our nation’s history” and adding, “Mass surveillance of American citizens is unacceptable.”

Hundreds of employees from Google DeepMind and OpenAI urged their companies to back Anthropic in an open letter titled “We Will Not Be Divided,” warning against Pentagon demands to use commercial AI models for domestic mass surveillance and in systems that “autonomously” kill without human oversight. “They’re trying to divide each company with fear that the other will give in,” the letter said.

OpenAI CEO Sam Altman told employees he is pursuing an agreement with the Pentagon that includes “red lines” similar to Anthropic’s and hopes to help broker a resolution. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions,” he said in a memo, according to U.S. media.

The dispute spotlights a core fault line in national security AI: how to harness rapidly advancing models for defense while limiting uses that many AI researchers argue violate civil liberties or increase the risk of accidental escalation. For now, the U.S. government faces an abrupt phaseout of Anthropic’s tools across agencies, while the Pentagon weighs legal levers and the start-up prepares for courtroom challenges that could set precedent on the extent of Washington’s authority over dual-use AI.

By Abdiwahab Ahmed
Axadle Times International Monitoring