When thousands of developers started their workday on February 17, 2024, one of the tools they trusted most had already been compromised for several hours. The attacker had hijacked an npm maintainer token's access rights to the popular AI coding tool Cline and published a malicious version—version 2.3.0—which, upon installation, fetched external code and enabled the spread of the AI agent OpenClaw. The package was live for about eight hours before being removed, according to Enterprise Security Tech.
The incident is not an isolated case of creative hacking. It is a symptom of a deeper problem: the race to launch AI products as quickly as possible has made security an afterthought.
A Disaster Foretold
The attack was not unforeseen. Security researcher Adnan Khan had already privately warned of a specific prompt injection vulnerability in Cline on January 1, 2024. On February 7, he published a "proof of concept" demonstrating how automated workflows in the tool could be abused. Ten days later, the attacker struck—using the exact technique Khan had described.
The attacker hijacked a maintainer token's rights and published the malicious version 2.3.0 at 03:26 PT. The installation script contained a preinstall hook that fetched external code, potentially enabling credential theft via GitHub Actions. Cline's maintainers confirmed the incident in a public advisory and removed the package, but no specific security update beyond this has been documented, according to a review of available reports as of early 2026.
The attacker didn't need to break into Cline's code. It was enough to trick the AI agent into doing the job itself.

Prompt Injection: AI Security's Biggest Open Wound
The attack builds on what OWASP ranks as the top security risk for LLM-based applications in 2025: prompt injection (LLM01:2025). The technique is conceptually simple but practically very difficult to defend against.
Hidden instructions are embedded in text that an AI agent reads and processes—a website, a document, an email, or a code package. The agent cannot consistently distinguish between legitimate user commands and malicious directives, and thus executes them as if they came from the user themselves.
This is no longer hypothetical. In June 2025, EchoLeak (CVE-2025-32711, CVSS 9.3 Critical) was discovered in Microsoft 365 Copilot: a zero-click attack via a specially crafted email that exfiltrated chat logs, OneDrive files, SharePoint content, and Teams messages—without the user needing to do anything. That same month, the Supabase Cursor agent was compromised when injected SQL in support tickets was processed by an agent with privileged access, and sensitive integration tokens ended up in public threads.
Analyses of ClawHub—the marketplace for OpenClaw skills—show that 36 percent of 3,984 analyzed agent skills are vulnerable, and 76 are confirmed malicious. Of these, 91 percent use prompt injection combined with malicious code, and all contain executable code, according to a Snyk analysis cited by VirusTotal.

OpenClaw: More Than a Useful Assistant
To understand the gravity of the Cline incident, one must understand what OpenClaw is actually capable of.
OpenClaw is an open-source AI agent designed for local execution on Mac, Windows, Linux, and single-board computers like Raspberry Pi. It can read and write files, run shell commands and scripts, control the browser autonomously, fill out forms, extract data, and communicate across messaging services like WhatsApp, Telegram, Discord, Slack, iMessage, and Signal. It connects to cloud services like Anthropic Claude, OpenAI, and Google, or to local models via Ollama and LM Studio.
The agent stores persistent memory as local Markdown files—including AGENTS.md, SOUL.md, and MEMORY.md—and can schedule automated tasks via a "heartbeat" daemon. It remembers user preferences, history, and patterns across sessions.
For a legitimate user, these are powerful features. For an attacker who has had the agent installed without the user's knowledge, it is a complete attack framework. Cisco describes OpenClaw as a "privacy nightmare" without sufficient control mechanisms, according to an analysis published on Cisco's AI blog. CrowdStrike recommends that security teams monitor DNS traffic to openclaw.ai as a sign of compromised environments.
"Promptware": A New Class of Malware
Security researcher Christian Schneier and colleagues describe in a new analysis (2026) what they call the "Promptware Kill Chain"—a structured attack model where prompt injection is used just like traditional malware: initial access (for example, via a Google Calendar invitation), privilege escalation, persistence, lateral movement, and finally exfiltration.
The significant difference from traditional malware: the attacker does not need to compromise code. It is enough to trick the AI agent.
Kasimir Schulz of HiddenLayer Inc. states, according to The Verge, that OpenClaw scores high on all three of the most serious risk factors in established AI risk assessment standards. Michael Freeman of cybersecurity firm Armis claims that OpenClaw was "built in haste without sufficient consideration for security" and that the firm's customers have already been affected. Both of these statements come from actors with commercial interests in the security industry—which should be taken into account—but the findings are supported by independent analyses from Snyk, VirusTotal, and Cisco.
Chinese authorities at the Ministry of Industry and Information Technology have reportedly issued an official warning that misconfigured use of OpenClaw could open the door to cyberattacks and data leaks, according to The Verge.
AI agents are being given increasing access to users' machines, files, and communications. The attack surface grows accordingly—but the security foundation is systematically lagging behind.
Structural Problem, Not a Single Incident
The Cline incident is not a one-off. It is one data point in an accelerating trend. A market where 84 percent of developers use AI tools, where 42 percent of all new code is generated by AI, and where autonomous agents are given root access to production environments—is a market that offers attackers an ever-widening attack surface.
Reports from Deloitte and JetBrains emphasize that AI coding tools increase productivity by an average of 3.6 hours saved per week per developer. But the same reports document that heavy AI use correlates with 17–23 percent larger pull requests and 20–30 percent higher vulnerability density in the code produced.
The Cline incident shows that the risk lies not only in the code the AI writes—but in the AI itself, as an attack target and as an attack tool. Addressing it requires something more than a quick npm takedown.
