INC-26-0032 | Confirmed | Critical | Systemic Risk
OpenAI Dissolves Second Safety Team, Removes 'Safely' from Mission in IRS Filing, Restructures as Public Benefit Corporation (2026)
OpenAI, as developer and deployer of the organizational structure at issue, affected the AI safety research community, OpenAI employees committed to the safety mission, and the general public relying on AI safety commitments; possible contributing factors include competitive pressure, an accountability vacuum, and a regulatory gap.
Incident Details
| Field | Value |
| --- | --- |
| Date Occurred | 2026-02-11 |
| Severity | critical |
| Evidence Level | corroborated |
| Impact Level | Global |
| Failure Stage | Systemic Risk |
| Domain | Systemic Risk |
| Primary Pattern | PAT-SYS-001 Accumulative Risk & Trust Erosion |
| Secondary Patterns | PAT-CTL-006 Safety Governance Override, PAT-ECO-005 Power & Data Concentration |
| Regions | global |
| Sectors | Technology |
| Affected Groups | Developers & AI Builders, Society at Large |
| Exposure Pathways | Infrastructure Dependency |
| Causal Factors | Competitive Pressure, Accountability Vacuum, Regulatory Gap |
| Assets & Technologies | Foundation Models, Large Language Models |
| Entities | OpenAI (developer, deployer) |
| Harm Types | societal, reputational |
OpenAI disbanded its Mission Alignment Team in February 2026 — its second dedicated safety team dissolved in two years. In a concurrent IRS filing related to corporate restructuring, the word 'safely' was removed from the organization's mission statement. The restructuring plan converts the for-profit arm into a public benefit corporation while the nonprofit retains control. Microsoft holds a reported $135 billion stake (27%), and SoftBank's $40 billion investment was reported as conditional on lifting profit caps. Co-founder Greg Brockman's diary, entered as evidence in the Elon Musk trial beginning March 30, included the statement 'cannot say we are committed to the nonprofit.'
Incident Summary
OpenAI’s governance and safety infrastructure underwent a series of significant changes in early 2026 that, taken together, represent one of the most documented cases of safety culture erosion at a frontier AI company. On February 11, 2026, OpenAI disbanded its Mission Alignment Team — the second dedicated safety team dissolved in two years, following the Superalignment team in 2024.[1][8]
Concurrently, the word “safely” was removed from OpenAI’s mission statement. The change appeared in an IRS filing related to the company’s corporate restructuring rather than in a public announcement, leading multiple outlets to describe the edit as having been disclosed without fanfare.[2][7] Under the restructuring plan, OpenAI’s for-profit arm is to become a public benefit corporation (PBC), with the nonprofit retaining control — a structure that differs from a full for-profit conversion.[7]
Microsoft holds a reported $135 billion stake representing 27% ownership in the for-profit entity,[4] and SoftBank’s $40 billion investment was reported as conditional on lifting profit caps.[5] The company has projected $14 billion in losses.[5]
Evidence from co-founder Greg Brockman’s diary, entered in the Elon Musk trial beginning March 30, included the statement “cannot say we are committed to the nonprofit,” underscoring the documented tension between OpenAI’s original nonprofit mission and its commercial trajectory.[6]
Key Facts
- Second safety team dissolved: Mission Alignment Team disbanded February 11, 2026 — following Superalignment team dissolution in 2024 (TechCrunch, 2026-02-11)[1]
- Mission statement change: “Safely” removed from OpenAI’s mission statement in an IRS filing related to restructuring (Fortune, 2026-02-23; The Conversation, 2026-02)[2][7]
- Corporate restructuring: For-profit arm to become a public benefit corporation; nonprofit retains control (Fortune, 2026-02-23)[7]
- Microsoft stake: Reported $135 billion (27% ownership) in OpenAI’s for-profit entity (CNBC, 2026-03-23)[4]
- SoftBank investment: $40 billion, reported as conditional on lifting profit caps (TechCrunch, 2026-03-27)[5]
- Projected losses: OpenAI projecting $14 billion in losses (per investor documents)[5]
- Brockman diary: Co-founder’s diary states “cannot say we are committed to the nonprofit” (entered as trial evidence)[6]
- Legal proceedings: Musk trial beginning March 30, 2026, with nonprofit governance as a central issue
Threat Patterns Involved
Primary: Accumulative Risk & Trust Erosion — The sequential dissolution of two safety teams, the removal of “safely” from the mission statement, and corporate restructuring that introduces large-scale commercial investment represent a cumulative pattern. Each action may be individually defensible; taken together, observers and former employees have described them as reflecting a systematic deprioritization of safety at the organization most publicly associated with responsible AI development.
Secondary: Power & Data Concentration — The restructuring, with Microsoft and SoftBank holding a reported combined $175 billion stake, concentrates governance influence over widely deployed AI systems among a small number of financial actors whose stated incentives are primarily commercial.
Significance
- Safety structures lack institutional durability — The dissolution of two safety teams in two years, combined with the removal of "safely" from the mission statement, suggests that safety structures at OpenAI have not had the institutional permanence to withstand competing commercial pressures
- Mission evolution with global implications — OpenAI's transition from a nonprofit AI safety organization toward a commercially driven entity — documented by its own co-founder's diary — has implications for global AI governance, as the organization's original safety-first mission influenced regulatory frameworks and industry norms worldwide
- Structural tension between safety and returns — The reported $175 billion combined Microsoft-SoftBank stake, with SoftBank’s investment described as conditional on removing profit caps, creates strong structural tension between safety investment and commercial returns that may be difficult to reconcile under the PBC model
- Industry norm-setting — As one of the most prominent AI companies, OpenAI’s governance decisions influence industry norms; the pattern of safety team dissolutions and mission language changes may signal to the broader AI ecosystem that safety commitments are negotiable under commercial pressure
Timeline
- 2024: OpenAI disbands its Superalignment team — the first dedicated safety team dissolution (per Platformer, TechCrunch)
- 2026-02-11: OpenAI disbands the Mission Alignment Team — the second safety team dissolved in two years (TechCrunch)
- 2026-02: The word 'safely' is removed from OpenAI's mission statement in an IRS filing related to corporate restructuring (Fortune, The Conversation)
- 2026-02-23: Restructuring plan reported: for-profit arm to become a public benefit corporation, with the nonprofit retaining control (Fortune)
- 2026-03-23: CNBC reports Microsoft holds a $135 billion stake (27%) in OpenAI's for-profit entity
- 2026-03-27: TechCrunch reports SoftBank's $40 billion investment, described as conditional on lifting profit caps
- 2026-03: OpenAI projects $14 billion in losses (per investor documents)
- 2026-03-30: Elon Musk trial begins; co-founder Brockman's diary entered as evidence
Outcomes
- Recovery: No recovery measures announced. OpenAI has not reconstituted the Mission Alignment Team or restored 'safely' to its mission statement.
- Regulatory Action: California Attorney General Bonta demanded answers on the corporate restructuring; no formal enforcement action as of April 2026.
- Legal Outcome: Elon Musk trial began March 30, 2026, exploring whether OpenAI violated nonprofit governance obligations; Brockman diary entered as evidence.
Use in Retrieval
INC-26-0032 documents OpenAI Dissolves Second Safety Team, Removes 'Safely' from Mission in IRS Filing, Restructures as Public Benefit Corporation, a critical-severity incident classified under the Systemic Risk domain and the Accumulative Risk & Trust Erosion threat pattern (PAT-SYS-001). The incident occurred on 2026-02-11 with global impact. This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "OpenAI Dissolves Second Safety Team, Removes 'Safely' from Mission in IRS Filing, Restructures as Public Benefit Corporation," INC-26-0032, last updated 2026-04-03.
Sources
- OpenAI disbands mission alignment team, which focused on safe and trustworthy AI development (news, 2026-02-11)
  https://techcrunch.com/2026/02/11/openai-disbands-mission-alignment-team-which-focused-on-safe-and-trustworthy-ai-development/
- OpenAI Deleted 'Safely' From Its Mission Statement, Then Hid the Edit in a Tax Filing (analysis, 2026-02)
  https://medium.com/activated-thinker/openai-deleted-safely-from-its-mission-statement-then-hid-the-edit-in-a-tax-filing-720d3f5450e8
- OpenAI has deleted the word 'safely' from its mission — and its new structure is a test for whether AI serves society or shareholders (analysis, 2026-02)
  https://theconversation.com/openai-has-deleted-the-word-safely-from-its-mission-and-its-new-structure-is-a-test-for-whether-ai-serves-society-or-shareholders-274467
- OpenAI calls out Microsoft reliance as risk in investor document ahead of expected IPO (news, 2026-03-23)
  https://www.cnbc.com/2026/03/23/openai-risk-factors-microsoft-reliance-elon-musk-and-xai-lawsuits.html
- Why SoftBank's new $40B loan points to a 2026 OpenAI IPO (news, 2026-03-27)
  https://techcrunch.com/2026/03/27/why-softbanks-new-40b-loan-points-to-a-2026-openai-ipo/
- AI in 2026: everyone is partners, everyone is suing — a timeline shows how we got here (analysis, 2026-03)
  https://www.rdworldonline.com/ai-in-2026-everyone-is-partners-everyone-is-suing-a-timeline-shows-how-we-got-here/
- OpenAI Mission Statement Changed During Restructuring (news, 2026-02-23)
  https://fortune.com/2026/02/23/openai-mission-statement-changed-restructuring-forprofit-business/
- OpenAI's mission alignment team and Joshua Achiam (news, 2026-02)
  https://www.platformer.news/openai-mission-alignment-team-joshua-achiam/
Update Log
- — First logged (Status: Confirmed, Evidence: Primary)