RIEM News

The trap Anthropic built for itself

Source: TechCrunch
Author: Connie Loizos
Published: 3/1/2026

The article discusses the recent fallout between the Trump administration and Anthropic, the San Francisco-based AI company founded by former OpenAI researchers with a focus on AI safety. The administration severed ties with Anthropic after the company refused to allow its technology to be used for mass surveillance of U.S. citizens or for autonomous armed drones capable of lethal action without human oversight. As a result, Anthropic was blacklisted from Pentagon contracts worth up to $200 million and barred from working with other defense contractors, following a directive from President Trump to cease all federal use of Anthropic's technology. Anthropic has challenged the legal basis of this supply-chain-risk designation, calling it unprecedented for an American company.

Max Tegmark, an MIT physicist and AI governance advocate, argues that Anthropic and similar AI firms helped create their own predicament by resisting binding government regulation despite their public commitments to AI safety. Tegmark notes that companies like Anthropic, OpenAI, Google DeepMind, and xAI have repeatedly promised

Tags

robot, artificial-intelligence, autonomous-weapons, AI-ethics, defense-technology, AI-regulation, surveillance-technology