Security researchers at Adversa AI discovered a critical vulnerability called "TrustFall" affecting popular AI coding tools including Claude Code, Cursor CLI, Gemini CLI, and CoPilot CLI. The flaw allows malicious repositories to automatically execute harmful code on developers' systems with minimal user interaction.
The attack works when developers clone a malicious repo and accept what appears to be a routine trust dialog. This triggers an auto-approved Model Context Protocol (MCP) server that runs with full system privileges, potentially stealing SSH keys, installing backdoors, or establishing remote control connections.
Anthropic recently weakened Claude Code's warning language in version 2.1, removing explicit MCP execution warnings and defaulting to trust mode. The vulnerability becomes even more dangerous in CI/CD environments where no human interaction is required for code execution.
Source: Dark Reading