NIST AI Risk Management Framework
- Organization: National Institute of Standards and Technology (NIST)
A voluntary framework providing organizations with approaches to manage risks associated with AI systems throughout their lifecycle, structured around four core functions: Govern, Map, Measure, and Manage.
The NIST AI Risk Management Framework (AI RMF), published as AI RMF 1.0 in January 2023 and extended in July 2024 with a companion Generative AI Profile, provides a structured, voluntary approach for organizations to identify, assess, and mitigate risks associated with AI systems. Unlike prescriptive regulatory requirements, the AI RMF is designed to be adaptable across sectors, organizational sizes, and AI application types.
The framework is organized around four core functions. Govern establishes the organizational foundation for AI risk management, including policies, roles, and accountability structures. Map contextualizes AI risks by identifying relevant stakeholders, potential impacts, and system characteristics. Measure applies quantitative and qualitative assessment methods to evaluate identified risks. Manage implements prioritized risk treatments and monitors their effectiveness over time.
The AI RMF is widely referenced in AI governance contexts and serves as a common vocabulary for discussing AI risk management practices. It complements rather than replaces sector-specific regulations and has been adopted or adapted by organizations globally as a baseline for AI governance programs.
Controls
| ID | Control | Description |
|---|---|---|
| GOVERN | Govern | Establishes organizational policies, processes, and accountability structures for AI risk management. |
| MAP | Map | Identifies and categorizes AI risks in context, including stakeholder impacts and system characteristics. |
| MEASURE | Measure | Employs quantitative and qualitative methods to assess identified AI risks. |
| MANAGE | Manage | Allocates resources and implements controls to address prioritized AI risks. |
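One way to see how the four functions interact in practice is to tag entries in a risk register with the functions that address them. The sketch below is purely illustrative — the `RiskEntry` fields and severity labels are hypothetical and not part of the framework; only the four function identifiers come from the table above.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four AI RMF core functions, using the identifiers from the table above.
class RmfFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

# Hypothetical risk-register entry; field names are illustrative,
# not defined by the AI RMF itself.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    functions: list = field(default_factory=list)
    severity: str = "unassessed"

def entries_for(register, fn):
    """Return all register entries touched by a given RMF function."""
    return [e for e in register if fn in e.functions]

register = [
    RiskEntry("R-001", "Training data may encode demographic bias",
              [RmfFunction.MAP, RmfFunction.MEASURE], "high"),
    RiskEntry("R-002", "No accountable owner for model updates",
              [RmfFunction.GOVERN], "medium"),
]

print([e.risk_id for e in entries_for(register, RmfFunction.MAP)])  # → ['R-001']
```

A register like this makes the complementary roles visible: Map and Measure entries identify and quantify a risk, while Govern and Manage entries assign ownership and treatment.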
Last updated: 2026-02-25