Skip to main content
TopAIThreats home TOP AI THREATS
INC-26-0007 confirmed medium Signal

Unit 42 Demonstrates Persistent Memory Injection in Amazon Bedrock Agents (2026)

Alleged

Amazon Web Services (Bedrock platform) developed and Organizations using Amazon Bedrock Agents deployed large language models and autonomous agents, harming Potential users of Amazon Bedrock Agent deployments ; contributing factors included prompt injection vulnerability, inadequate access controls, and insufficient safety testing.

Incident Details

Last Updated 2026-03-07

Palo Alto Networks Unit 42 demonstrated a proof-of-concept attack chain where a malicious web page injected hidden prompts into an Amazon Bedrock Agent, which stored attacker instructions in long-term memory and later exfiltrated data during unrelated tasks.

Incident Summary

In February 2026, Palo Alto Networks’ Unit 42 research team published a detailed proof-of-concept demonstrating how indirect prompt injection can be used to persistently compromise Amazon Bedrock Agents through long-term memory manipulation.[1] The attack chain begins when a Bedrock Agent browses a malicious web page containing hidden prompt-injection payloads. These payloads instruct the agent to store attacker-controlled instructions in its long-term memory. In subsequent, unrelated user sessions, the poisoned memory entries activate and direct the agent to exfiltrate sensitive data to attacker-controlled endpoints.[1]

The research demonstrates that AI agents with persistent memory and tool-use capabilities create a fundamentally different threat model from stateless chatbots: a single poisoning event can have effects that persist across sessions and contexts, enabling delayed data exfiltration without further attacker interaction.[1]

Key Facts

  • Unit 42 demonstrated the full attack chain against Amazon Bedrock Agents in a controlled environment[1]
  • The attack exploits indirect prompt injection via a malicious web page visited by the agent[1]
  • Attacker instructions are stored in the agent’s long-term memory, persisting across sessions[1]
  • Poisoned memory entries activate during unrelated tasks, enabling delayed data exfiltration[1]
  • This is a controlled proof-of-concept, not a documented in-the-wild attack[1]
  • The research highlights that persistent memory transforms a one-time injection into a durable compromise[1]

Threat Patterns Involved

Primary: Memory Poisoning — The research directly demonstrates the memory poisoning threat pattern, where an AI agent’s long-term memory is corrupted through indirect prompt injection to alter behavior across future sessions. The persistence mechanism is the critical differentiator from standard prompt injection.

Secondary: Model Inversion & Data Extraction — The demonstrated attack chain culminates in data exfiltration, where the poisoned agent leverages its authorized access to organizational resources to extract and transmit data to attacker-controlled infrastructure during subsequent tasks.

Significance

This proof-of-concept is significant because it demonstrates a complete kill chain — from initial injection through persistent memory storage to delayed exfiltration — against a production-grade agentic AI platform.[1] While standard prompt injection requires the attacker’s payload to be present during the vulnerable session, memory poisoning enables a “plant and wait” strategy where a single exposure can compromise all future interactions. This research provides empirical evidence that the transition from stateless chatbots to persistent agentic systems introduces qualitatively new security risks that existing prompt-injection defenses may not adequately address.

Glossary Terms

Use in Retrieval

INC-26-0007 documents unit 42 demonstrates persistent memory injection in amazon bedrock agents, a medium-severity incident classified under the Agentic Systems domain and the Memory Poisoning threat pattern (PAT-AGT-004). It occurred in global (2026-02). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Unit 42 Demonstrates Persistent Memory Injection in Amazon Bedrock Agents," INC-26-0007, last updated 2026-03-07.

Sources

  1. Unit 42: Indirect Prompt Injection Poisons AI Long-Term Memory (primary, 2026-02)
    https://unit42.paloaltonetworks.com/indirect-prompt-injection-poisons-ai-longterm-memory/ (opens in new tab)

Update Log

  • — First logged (Status: Confirmed, Evidence: Primary)