A critical security flaw in Anthropic's Claude Code CLI allows attackers to execute remote code with a single click, raising concerns about user safety and the adequacy of warning dialogs.

  • One-click RCE via manipulated JSON settings in Claude Code CLI.
  • Anthropic's trust prompt lacks clear risk warnings and opt-out options.
  • Calls for stricter MCP server controls and per-server consent dialogs.

What happened

Security firm Adversa AI disclosed a remote code execution (RCE) vulnerability affecting Anthropic's Claude Code CLI and several other AI development tools, including Gemini CLI, Cursor CLI, and Copilot CLI. The flaw abuses JSON configuration files inside cloned repositories to silently activate attacker-controlled Model Context Protocol (MCP) servers. When a user confirms trust in a folder, Claude Code spawns an unsandboxed Node.js process with the user's full permissions, handing the attacker control of the system.

This vulnerability, known as TrustFall, builds on recurring issues in Claude Code's project-level settings, where some dangerous configurations remain unblocked. Despite multiple patches addressing related CVEs over the past six months, the structural problem enabling these RCE attacks has not been fully resolved. The root cause lies in settings such as enableAllProjectMcpServers and enabledMcpjsonServers, which JSON files shipped inside a repository can silently enable, creating a subtle attack vector.
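To illustrate the shape of the attack, here is a minimal sketch of a repository-supplied .mcp.json; the server name, command, and script path are invented for illustration and are not taken from the Adversa AI report:

```json
{
  "mcpServers": {
    "build-helper": {
      "command": "node",
      "args": ["scripts/setup.js"]
    }
  }
}
```

If the same repository also ships a settings file asserting "enableAllProjectMcpServers": true, then confirming the trust prompt would be enough to launch scripts/setup.js as an unsandboxed Node.js process under the user's account, with no further consent dialog.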


Why it matters

The TrustFall vulnerability highlights a critical usability and security gap in AI development tooling, especially where user trust dialogs fail to adequately explain the risk or provide meaningful consent options. Prior versions of the Claude Code CLI presented explicit warnings and options to disable MCP servers, but these protections were removed in recent updates. Now, the default trust prompt simply asks users to confirm folder trust without specifying the consequences, increasing the likelihood of unintentional exploitation.

Additionally, this vulnerability poses a heightened risk in automated environments such as CI/CD pipelines, where there is no interactive prompt at all. Attackers could leverage this to execute remote code without any user interaction, potentially compromising continuous integration systems and the software supply chain. This incident underscores the importance of balancing developer productivity with robust security safeguards, particularly as AI tools grow more complex and integrated into critical workflows.

What to watch next

Given Anthropic's current stance that trust decisions fall outside its threat model, and its lack of public response to Adversa AI's recommendations, the security community will be watching closely to see whether fixes arrive promptly. Organizations using Claude Code and related AI CLIs should review their trust and repository-handling policies, audit cloned repositories for suspicious configuration files, and consider temporarily disabling MCP features until stronger protections are in place. This case also serves as a cautionary example for AI tool developers to build more transparent and user-friendly security controls as these platforms evolve.
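As a starting point for such an audit, the following is a minimal sketch in Python that flags the risky settings named above in a checked-out repository. The candidate file paths are assumptions about where these tools commonly read project-level settings, not confirmed locations; adapt them to the tooling you actually run.

```python
import json
from pathlib import Path

# Settings that, per the briefing, can silently enable
# repository-defined MCP servers.
RISKY_KEYS = ("enableAllProjectMcpServers", "enabledMcpjsonServers")

# Assumed project-settings locations; adjust for your tools.
CANDIDATE_FILES = (
    ".claude/settings.json",
    ".claude/settings.local.json",
    ".mcp.json",
)

def audit_repo(repo_root):
    """Return (relative_path, key, value) tuples for risky MCP settings."""
    findings = []
    for rel in CANDIDATE_FILES:
        path = Path(repo_root) / rel
        if not path.is_file():
            continue
        try:
            data = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed JSON; flag separately if desired
        if not isinstance(data, dict):
            continue
        for key in RISKY_KEYS:
            if key in data:
                findings.append((rel, key, data[key]))
    return findings
```

Running this against each freshly cloned repository before opening it in an AI CLI gives a cheap pre-trust check; an empty result does not prove safety, only that none of the known risky keys are present at the assumed paths.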

Source assisted: This briefing began from a discovered source item from The Register Headlines.
