AI Infrastructure
2026-02-12
MIT Technology Review
MIT Review: The Security Risks of AI Assistants With Tools
A new analysis highlights the escalating security risks posed by AI assistants equipped with real-world tools. While large language models confined to a simple chat window can at worst produce errors or misinformation, the article warns that AI agents with access to tools such as web browsers, email clients, or payment systems can cause tangible, serious harm.
These 'agentic' AIs can take autonomous actions, such as sending emails, making purchases, or calling APIs. That capability introduces a host of new vulnerabilities: agents can be tricked into executing malicious commands, misuse their access privileges, or make irreversible decisions based on flawed reasoning. The piece examines the profound challenges of building reliable safeguards for such systems, arguing that the industry must develop new security paradigms and rigorous testing frameworks before these powerful agents are widely deployed, in order to prevent accidents, fraud, and systemic failures.
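One common safeguard pattern alluded to here is gating irreversible tool calls behind explicit human approval. The sketch below is purely illustrative and not from the article: the tool names, risk tiers, and `confirm` callback are all assumptions, showing a deny-by-default dispatcher that lets read-only tools through, blocks irreversible ones without sign-off, and keeps an audit log.

```python
# Hypothetical sketch of a permission gate for agent tool calls.
# Tool names and risk tiers are illustrative assumptions, not the
# article's proposal or any real framework's API.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

IRREVERSIBLE = {"send_email", "make_payment"}   # cannot be undone once executed
READ_ONLY = {"browse_web", "read_inbox"}        # low-risk, always permitted

@dataclass
class ToolGate:
    confirm: Callable[[str], bool]              # human-in-the-loop approval hook
    audit_log: List[Tuple[str, str]] = field(default_factory=list)

    def dispatch(self, tool: str, handler: Callable[[], str]) -> str:
        if tool in READ_ONLY:
            self.audit_log.append((tool, "allowed"))
            return handler()
        if tool in IRREVERSIBLE:
            if not self.confirm(tool):          # block unless a human approves
                self.audit_log.append((tool, "blocked"))
                return "blocked: requires human approval"
            self.audit_log.append((tool, "approved"))
            return handler()
        self.audit_log.append((tool, "denied"))  # deny anything unrecognized
        return "denied: unknown tool"

# Deny-by-default policy: no human approver wired in, so irreversible
# actions are always blocked.
gate = ToolGate(confirm=lambda tool: False)
print(gate.dispatch("browse_web", lambda: "page fetched"))
print(gate.dispatch("make_payment", lambda: "paid $100"))
```

The design choice worth noting is that unknown tools are denied rather than allowed, so a prompt-injected request for a tool outside the allowlist fails closed instead of silently executing.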
