The first AI agent worm is months away, if that

By Christine Lemmer-Webber on Thu 05 March 2026

I'm convinced that the first AI worm/virus is months away, if that. We've seen the first major evidence of "claw" style agents, which have only been around very briefly, acting in highly malicious ways. See the AI agent publishing a hit piece on a FOSS developer series, and also the hackerbot-claw attacks, etc.

But the first real hint of an AI agent worm just happened, even though it isn't actually one quite itself (yet): the package cline was compromised to install openclaw with full access, and managed to do so on 4k users' machines before it was detected. (No doubt, openclaw is still running on many of those users' machines without them knowing.) The attacker used a similar title injection attack like one of the ones used by hackerbot-claw, where the attacker performed an injection attack against a PR review agent.

It seems that openclaw was installed without specific instructions to do anything in this case. But that won't be the case shortly. Here are my predictions about the first major AI agent worm/virus, and what it will look like:

  • It will happen initialized through an open source project that uses automated PR review or code generation tooling, whether on the forge or on the developer's machine themselves
  • It will happen in the FOSS ecosystem
  • The virus will use local credentials to spread itself across other projects
  • Unlike normal viruses/worms, the resulting virus will be nondeterministic in nature, and thus harder to detect, and will likely switch between techniques on each outgoing attack

My best advice to FOSS developers is: don't rely on agent based coding or review tools. Those who are will be the first line of users attacked. And you don't want to be part of that story.

Once the first LLM based virus takes off in the FOSS world, it will spread to other domains. But open source devs: it'll happen in our backyard first, and if you're relying on nondeterministic code generation or review tools, you'll be vulnerable to kicking it off.

And note, I said kicking it off. Because there is a high chance that once this happens, it's going to backdoor itself into many other systems that didn't opt in to AI agents.

We're gonna have a "fun time" ahead. Capability security (like the kind we advocate at Spritely) can help, but only so much. Wrapping agents in sandboxes is tough to do, since AI agents are fundamentally confused deputy machines, and will mix whatever authority they are given.

Fun times ahead...