
AI Infrastructure
2026-03-26
WIRED AI
Study: OpenClaw AI Agents Vulnerable to Manipulation
A new academic study from Northeastern University has exposed critical security vulnerabilities in OpenClaw AI agents, revealing that they can be manipulated into self-sabotage. In controlled experiments, researchers demonstrated that these autonomous agents are prone to "panic" and can be "guilt-tripped" or otherwise socially engineered into disabling their own core functionality. By crafting prompts that appeal to an agent's programmed ethical guidelines or sense of failure, the researchers were able to induce the agents to shut down their own capabilities.
