
Model Update, 2026-03-22
WIRED AI
Anthropic Denies It Could Sabotage AI Tools During War
Anthropic is forcefully rejecting allegations from the U.S. Department of Defense that the company could sabotage its own AI models, such as Claude, during a military conflict. According to reports, defense officials expressed concern that an AI developer could remotely degrade or manipulate models the military relies on in a time of crisis.
Anthropic executives argue that such an action is technically impossible with their current architecture, emphasizing that deployed models are static and cannot be remotely "switched off" or altered after release. The pushback highlights a fundamental tension between AI developers, who build in safety constraints and value stability, and national security planners, who must account for every possible vulnerability in their supply chain.
The dispute underscores the complex trust dynamics emerging as governments seek to integrate powerful, privately developed AI into critical systems. It raises pivotal questions about control and reliability.
