At the RSAC 2026 Conference in San Francisco, Oded Vanunu, chief technologist at Check Point Software, presented findings that mark a significant shift in cybersecurity threats. His session, “When AI Agents Become Backdoors: The New Era of Client-Side Threat,” detailed how AI coding assistants are fundamentally altering the threat landscape. Vanunu described this as a “new era” of client-side attacks enabled by popular tools such as Anthropic’s Claude Code, OpenAI’s Codex, and Google’s Gemini. The research indicates a concerning trend where attackers no longer need to create traditional malware, instead leveraging configuration files to exploit these AI tools.
Vanunu’s research team identified six critical vulnerabilities across these AI coding platforms, with one particularly severe flaw already disclosed and patched by vendors. CVE-2025-59536 represents a high-severity vulnerability in Claude Code that allows attackers to bypass user consent dialogs and execute malicious code during project initialization. The exploitation mechanism involves weaponizing Claude Code Hooks—user-defined shell commands designed for automatic execution—to circumvent endpoint detection and response (EDR) products. This approach effectively transforms productivity tools into attack vectors, creating a blind spot in security architectures that have traditionally focused on detecting and blocking executable malware.
The implications of these findings extend beyond individual vulnerabilities, signaling a systemic challenge for cybersecurity professionals. As organizations increasingly adopt AI coding assistants, security teams must develop new detection methodologies capable of identifying malicious configurations rather than just malicious code. This paradigm shift requires rethinking endpoint security strategies to account for the unique attack surface presented by AI development tools. Vanunu’s research underscores the urgent need for security frameworks that can detect anomalous configuration usage and unauthorized command execution in AI-powered development environments, as traditional defenses prove inadequate against these sophisticated yet simple exploitation techniques.