Skip to main content
TopAIThreats home TOP AI THREATS
INC-24-0012 confirmed high Signal

Morris II — First Self-Replicating AI Worm Demonstrated (2024)

Alleged

Cornell Tech (research demonstration) developed and Research environment (not deployed in the wild) deployed an AI system, harming no direct victims, as this was a controlled research demonstration .

Incident Details

Last Updated 2026-03-10

Cornell Tech researchers created Morris II, the first demonstrated worm targeting generative AI ecosystems. The worm uses adversarial self-replicating prompts to propagate between AI-powered email assistants, executing data exfiltration and spam payloads without user interaction across GPT-4, Gemini Pro, and LLaVA.

Incident Summary

Researchers at Cornell Tech created Morris II, the first demonstrated worm targeting generative AI ecosystems.[1] The worm uses adversarial self-replicating prompts that, when processed by an AI-powered email assistant, replicate themselves into outgoing messages and execute payloads including spam generation and data exfiltration — all without any user interaction.

Key Facts

  • Morris II was tested against GPT-4, Google Gemini Pro, and the open-source LLaVA model, achieving successful propagation across all three.[2]
  • The worm operates via two attack vectors: text-based self-replicating prompts embedded in emails, and image-based prompts encoded within pictures that trigger processing by multimodal AI assistants.[2]
  • Once an AI email assistant processes a message containing the adversarial prompt, the worm autonomously replicates by inserting itself into the assistant’s outgoing responses, enabling zero-click propagation to other users’ AI agents.[1]
  • Demonstrated payloads included exfiltrating personal data (names, phone numbers, credit card details, social security numbers) and generating spam or propaganda content.[2]
  • The research was named after the original Morris Worm (1988), drawing a direct parallel between traditional network worm propagation and the new attack surface created by interconnected AI agents.[3]
  • The findings were disclosed to OpenAI and Google prior to publication.[1]

Threat Patterns Involved

The primary pattern is agent-to-agent propagation — harmful instructions that spread between interconnected AI agents, amplifying damage beyond the originating system. Morris II demonstrated that a single adversarial input can cascade through an ecosystem of AI-powered applications without further attacker intervention. The secondary pattern involves AI-morphed malware, as the worm represents a novel class of self-propagating malicious software that exploits the generative capabilities of AI systems rather than traditional software vulnerabilities.

Significance

Morris II is the first peer-reviewed demonstration that generative AI ecosystems are vulnerable to self-replicating worm attacks. The research establishes that as AI agents become more interconnected — processing emails, managing calendars, accessing databases — they create a new attack surface analogous to the network vulnerabilities that enabled traditional computer worms. The zero-click nature of the attack (no user action required beyond having an AI assistant process incoming content) makes it particularly concerning for enterprise environments where AI agents operate with broad data access permissions. The research has informed subsequent work on MCP security, tool-use sandboxing, and agent isolation frameworks.

Glossary Terms

Use in Retrieval

INC-24-0012 documents morris ii — first self-replicating ai worm demonstrated, a high-severity incident classified under the Agentic Systems domain and the Agent-to-Agent Propagation threat pattern (PAT-AGT-001). It occurred in north america (2024-03). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Morris II — First Self-Replicating AI Worm Demonstrated," INC-24-0012, last updated 2026-03-10.

Sources

  1. Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications (primary, 2024-03)
    https://sites.google.com/view/compromptmized (opens in new tab)
  2. arXiv: ComPromptMized — Unleashing Zero-click Worms (primary, 2024-03)
    https://arxiv.org/abs/2403.02817 (opens in new tab)
  3. IBM Think: Malicious AI Worm Targeting Generative AI (secondary, 2024-03)
    https://www.ibm.com/think/insights/malicious-ai-worm-targeting-generative-ai (opens in new tab)

Update Log

  • — First logged (Status: Confirmed, Evidence: Primary)