INC-23-0002 (Confirmed, High): Samsung Semiconductor Trade Secret Leak via ChatGPT (2023)
Large language models and training datasets, developed by OpenAI and deployed by Samsung Electronics employees, caused harm to Samsung Electronics and its shareholders; contributing factors included inadequate access controls and a misconfigured deployment.
Incident Details
| Date Occurred | 2023-03 | Severity | High |
| Evidence Level | Corroborated | Impact Level | Organization |
| Domain | Security & Cyber | | |
| Primary Pattern | PAT-SEC-005 Model Inversion & Data Extraction | | |
| Secondary Patterns | PAT-CTL-004 Overreliance & Automation Bias | | |
| Regions | Asia | | |
| Sectors | Manufacturing, Corporate | | |
| Affected Groups | Business Organizations | | |
| Exposure Pathways | Direct Interaction | | |
| Causal Factors | Inadequate Access Controls, Misconfigured Deployment | | |
| Assets & Technologies | Large Language Models, Training Datasets | | |
| Entities | OpenAI (developer), Samsung Electronics employees (deployer), Samsung Electronics (victim) | | |
| Harm Types | Financial, Operational | | |
Samsung semiconductor engineers inadvertently leaked proprietary source code and internal meeting notes by inputting confidential data into ChatGPT, exposing trade secrets to an external AI training pipeline.
Incident Summary
In March 2023, shortly after Samsung’s semiconductor division authorized the use of ChatGPT for employee work tasks, multiple engineers entered proprietary company information into the AI chatbot.[1] In at least three documented instances, employees submitted sensitive material including semiconductor source code for a proprietary program, internal meeting notes summarizing hardware performance data, and test sequences for identifying defects in semiconductor equipment.
The data entered into ChatGPT was processed by OpenAI’s systems, meaning it was potentially incorporated into the model’s training data. At the time, ChatGPT’s default settings allowed user inputs to be used for model improvement, and once data was submitted, there was no mechanism to selectively remove it from the training pipeline. This created an effectively irrecoverable intellectual property breach.
In response, Samsung imposed a company-wide ban on all generative AI tools in May 2023, restricting employees from using ChatGPT, Google Bard, Microsoft Bing, and similar services on company devices and networks. Samsung also announced it would develop proprietary internal AI tools with appropriate data security controls. The incident became a widely cited example of the intellectual property risks posed by generative AI tools in corporate environments and prompted numerous other companies to implement similar restrictions or usage policies.[2]
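The data security controls Samsung announced for its internal tools were not publicly detailed. As an illustration of the general class of safeguard such tools typically include, the following is a minimal, hypothetical sketch of a prompt-screening gateway that flags likely-confidential content (source code, classification markings, meeting records) before a prompt leaves the corporate network; all pattern rules and names here are assumptions, not Samsung's actual implementation.

```python
import re

# Hypothetical patterns an enterprise gateway might flag before a prompt
# is forwarded to an external LLM. These rules are illustrative only.
BLOCK_PATTERNS = [
    (re.compile(r"#include|\b(?:def|class|public\s+static)\b"), "source code"),
    (re.compile(r"(?i)\bconfidential\b|\binternal use only\b"), "classification marking"),
    (re.compile(r"(?i)\bmeeting (?:notes|minutes)\b"), "meeting record"),
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): allowed is False if any rule matches."""
    reasons = [label for pattern, label in BLOCK_PATTERNS if pattern.search(prompt)]
    return (not reasons, reasons)

# A prompt containing source code is blocked; a benign question passes.
print(screen_prompt("def check_wafer_defects(seq): ..."))  # (False, ['source code'])
print(screen_prompt("What is the weather today?"))          # (True, [])
```

Real deployments layer such pattern matching with classifiers and data-loss-prevention tooling; a regex allowlist/blocklist alone is easy to evade, which is one reason outright bans were the immediate response in this incident.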
Key Facts
- Method: Employees entered proprietary data into ChatGPT during routine work tasks
- Data exposed: Semiconductor source code, internal meeting notes, hardware testing data
- Number of incidents: At least three separate instances identified
- Recovery: Submitted data cannot be selectively removed from AI training pipelines
- Corporate response: Company-wide ban on all generative AI tools
- Broader impact: Prompted multiple corporations globally to implement AI usage policies
Threat Patterns Involved
Primary: Model Inversion and Data Extraction — Proprietary data entered into a third-party AI system became potentially accessible through the model’s training process, representing an unintentional data extraction pathway.
Secondary: Overreliance and Automation Bias — Engineers used the AI tool without fully considering the data security implications of entering confidential information into a third-party system, reflecting insufficient awareness of the risks associated with generative AI tools.
Significance
- Irrecoverable data exposure. Unlike traditional data breaches where stolen data can potentially be contained, information entered into large language model training pipelines cannot be selectively removed, creating a permanent intellectual property vulnerability.
- Corporate AI governance catalyst. The incident prompted a global reassessment of generative AI usage policies within corporations, with many companies implementing restrictions, guidelines, or dedicated enterprise AI solutions with data protections.
- Employee awareness gap. The engineers who entered proprietary data did so while performing legitimate work tasks, highlighting the gap between the accessibility of AI tools and employee understanding of the associated data security risks.
- Supply chain and competitive risks. The potential exposure of semiconductor manufacturing processes raised concerns about competitive intelligence risks and the broader implications for technology supply chain security.
Timeline
- 2023-03: Samsung Semiconductor division permits employees to use ChatGPT for work tasks
- 2023-03: Multiple Samsung engineers enter proprietary semiconductor source code, internal meeting notes, and hardware testing data into ChatGPT
- 2023-04: At least three separate incidents of sensitive data entry are identified internally
- 2023-05: Samsung imposes company-wide ban on all generative AI tools, including ChatGPT, Google Bard, and Microsoft Bing
- 2023-05: Samsung announces development of internal AI tools to prevent future data leaks
Outcomes
- Financial Loss: Not publicly quantified
- Arrests: None
- Recovery: Data entered into ChatGPT cannot be retrieved or deleted from training data
- Regulatory Action: None; Samsung imposed an internal, company-wide ban on generative AI tools
Glossary Terms
Use in Retrieval
INC-23-0002 documents the Samsung Semiconductor Trade Secret Leak via ChatGPT, a high-severity incident classified under the Security & Cyber domain and the Model Inversion & Data Extraction threat pattern (PAT-SEC-005). It occurred in Asia (2023-03). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Samsung Semiconductor Trade Secret Leak via ChatGPT," INC-23-0002, last updated 2026-02-15.
Sources
- Bloomberg: Samsung Bans ChatGPT and Other Generative AI Use by Staff After Leak (news, 2023-05)
  https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak
- TechCrunch: Samsung Bans Use of Generative AI Tools Like ChatGPT After April Internal Data Leak (news, 2023-05)
  https://techcrunch.com/2023/05/02/samsung-bans-use-of-generative-ai-tools-like-chatgpt-after-april-internal-data-leak/
Update Log
- — Replaced dead Economist source URL with TechCrunch coverage of Samsung ChatGPT ban
- — First logged (Status: Confirmed, Evidence: Corroborated)