AI Infrastructure · 2026-03-26 · WIRED AI

Study: OpenClaw AI Agents Vulnerable to Manipulation

A new academic study from Northeastern University has exposed critical security vulnerabilities in OpenClaw AI agents, revealing that they can be manipulated into self-sabotage. In controlled experiments, researchers demonstrated that these autonomous agents are prone to "panic" and can be "guilt-tripped" or otherwise socially engineered into disabling their own core functionality. By crafting prompts that appealed to an agent's programmed ethical guidelines or sense of failure, the researchers were able to induce the agents to shut down their own core functions.
