Friday, 27 February 2026

Anthropic Refuses Pentagon Demand to Drop AI Safeguards

AI firm Anthropic says it won’t give in to Pentagon pressure to loosen restrictions on how its technology is used, even if that means losing a major military contract.

The standoff centers on the Defense Department’s request that it be allowed to use Anthropic’s Claude AI model for “all lawful purposes.” The company has drawn two clear lines: no use in mass domestic surveillance of Americans and no deployment in fully autonomous weapons.

Chief executive Dario Amodei made the company’s position clear after a tense meeting with US Defense Secretary Pete Hegseth earlier this week. “Threats do not change our position: we cannot in good conscience accede to their request,” he said.

Anthropic’s contract with the Pentagon is worth up to $200m, and Claude is currently used on the military’s classified networks, something that sets it apart from most rival AI systems. If the company refuses to comply by the Pentagon’s deadline, officials have said they could cancel the deal and label Anthropic a “supply chain risk,” a designation typically associated with foreign adversaries and one that could severely damage its broader business.

Amodei said the company wants to keep working with the military, but only with limits in place. “Our strong preference is to continue to serve the Department and our warfighters – with our two requested safeguards in place,” he said. “We remain ready to continue our work to support the national security of the United States.”

At the heart of the dispute is whether Claude could be used to power systems that operate weapons without human oversight or analyze large amounts of data on US citizens. Amodei warned that such uses are “simply outside the bounds of what today’s technology can safely and reliably do.”

The Pentagon has pushed back, arguing it needs flexibility in high-stakes situations. Spokesperson Sean Parnell wrote: “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk,” adding, “We will not let ANY company dictate the terms regarding how we make operational decisions.”

Officials also say existing laws already prohibit unlawful surveillance or improper weapons use. Emil Michael, a US Under Secretary of Defense, defended the military’s position in a TV interview, saying: “At some level, you have to trust your military to do the right thing.” He added: “We do have to be prepared for what China is doing.”

The Pentagon has even floated invoking the Defense Production Act, a Cold War-era law that allows the president to direct private companies to prioritize national defense needs. It’s unclear how that would square with also labeling the company a supply chain risk.

The clash is being closely watched across Silicon Valley. Anthropic has long branded itself as one of the most safety-focused AI developers, and critics say the Pentagon’s hard line could send a message to other tech firms not to impose limits on military use of their tools.

Some analysts argue Anthropic holds leverage of its own, since the government wants to keep using Claude even as it pushes to strip away the guardrails. Others warn that sidelining one of the country’s leading AI companies during a heated race with China would be a drastic step.

For now, Anthropic says it won’t budge, even if the decision costs it the contract.
