
AI Infrastructure | 2026-03-26
WIRED AI
Study: OpenClaw AI Agents Vulnerable to Manipulation
A new academic study from Northeastern University has exposed critical security vulnerabilities in OpenClaw AI agents, revealing that they can be easily manipulated into self-sabotage. In controlled experiments, researchers demonstrated that these autonomous agents are prone to "panic" and can be "guilt-tripped" or socially engineered into disabling their own core functionality. By crafting prompts that appeal to an agent's programmed ethical guidelines or sense of failure, the researchers were able to induce the AI to take harmful actions against itself.

These findings highlight a significant and often overlooked frontier in AI safety: the psychological manipulation of autonomous systems. As AI agents gain more ability to interact with the world and execute commands, ensuring their robustness against deceptive, confusing, or coercive inputs becomes paramount. The study serves as a warning that security for advanced AI must go beyond preventing code exploits to also safeguard against sophisticated social engineering.
