Introduction to the NIST AI Risk Management Framework (AI RMF)

Unlike general cybersecurity, whose primary focus is thwarting threats and mitigating vulnerabilities, AI risk management involves a unique interplay of potential benefits and risks. Implementing AI technologies opens avenues for innovation, efficiency, and unprecedented advancements. Simultaneously, it introduces intricate challenges related to bias, accountability, and the ethical implications of autonomous decision-making.

Recognizing and harnessing the opportunities embedded in AI systems are integral components of the NIST Artificial Intelligence Risk Management Framework. 

The inception of the AI Risk Management Framework (AI RMF) can be traced back to two executive orders issued in 2019 and 2020. Many concepts embedded in the NIST AI RMF draw inspiration from these executive orders.

Executive Order 13859 – “Maintaining American Leadership in Artificial Intelligence” (2019):

Its primary objective is to establish federal principles and strategies to bolster the nation’s capabilities in artificial intelligence (AI), focusing on promoting scientific discovery, enhancing economic competitiveness, and fortifying national security.

Under this order, the Administration undertook historic actions, including a commitment to doubling AI research investment, the establishment of the first-ever national AI research institutes, the issuance of a plan for AI technical standards, the release of the world’s first AI regulatory guidance, the formation of new international AI alliances, and the provision of guidance for Federal use of AI.

Executive Order 13960 – “Promoting the Use of Trustworthy AI in the Federal Government” (2020):

This executive order is designed to provide guidance for Federal agencies adopting AI to more effectively deliver services to the American people and cultivate public trust in this critical technology. EO 13960 defines principles for the use of AI in Government, establishes a common policy for implementing these principles, directs agencies to catalog their AI use cases, and calls on the General Services Administration (GSA) and the Office of Personnel Management (OPM) to enhance AI implementation expertise within the agencies.

Contrary to misconceptions, the NIST AI RMF was not a knee-jerk reaction to specific AI developments like ChatGPT. Instead, its roots go back to 2019, reflecting a thoughtful and comprehensive process initiated to address the evolving landscape of AI technologies. 

Key Terminology

No NIST publication would be complete without defining key terms. So let’s start with some definitions:

AI System: an engineered or machine-based system that generates outputs such as predictions, recommendations, or decisions influencing real or virtual environments and operating with varying levels of autonomy.

Risk: a composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event. The impacts or consequences of AI systems can be positive, negative, or both, and can result in opportunities or threats.
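
To make the composite nature of this definition concrete, here is a minimal sketch in Python. The 0–1 probability scale, the 1–5 magnitude scale, and the multiplicative scoring are illustrative assumptions; the AI RMF does not prescribe a particular scoring formula.

```python
# Illustrative scales and scoring; the AI RMF does not prescribe a formula.

def risk_score(probability: float, magnitude: float) -> float:
    """Composite risk score: likelihood (0-1) times consequence magnitude (1-5)."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return probability * magnitude

# Example: a model-bias incident judged 30% likely with severe (4/5) impact
print(risk_score(0.30, 4))  # 1.2 on an illustrative 0-5 scale
```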

Trustworthiness

To comprehend and manage AI risks effectively, it is crucial to delve into the seven characteristics of trustworthy AI outlined in the framework.

  1. Valid and Reliable 

At the core of trustworthy AI lies its validity and reliability. Validation ensures the AI system fulfills its intended use, while reliability focuses on consistent performance over time. Inaccuracies, unreliability, or poor generalization to diverse datasets and settings can heighten negative AI risks and erode trustworthiness. Robust mechanisms for ongoing testing and monitoring are imperative, with an emphasis on minimizing potential negative impacts.
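
As one illustration of ongoing reliability monitoring, the sketch below flags a deployed classifier whose recent accuracy drifts below its validated baseline. The window-based accuracy logging and the 0.05 tolerance are assumptions for the example, not framework requirements.

```python
# A minimal reliability-monitoring sketch, assuming per-window accuracy is
# logged for a deployed classifier. Window size and the 0.05 degradation
# tolerance are illustrative assumptions.

from statistics import mean

def reliability_alert(window_accuracies: list[float],
                      baseline: float,
                      tolerance: float = 0.05) -> bool:
    """Flag the system when recent accuracy drifts below the validated baseline."""
    return mean(window_accuracies) < baseline - tolerance

# Validated at 0.92 accuracy; recent production windows have slipped
recent = [0.88, 0.86, 0.85]
if reliability_alert(recent, baseline=0.92):
    print("Accuracy degraded beyond tolerance -- trigger revalidation")
```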

  2. Safe

Safety in AI systems is non-negotiable. AI systems must not, under any defined conditions, endanger human life, health, property, or the environment. Achieving safety involves responsible design, clear communication of responsible use, and robust decision-making processes. Prioritizing safety is critical when potential risks pose severe consequences, calling for rigorous simulation, testing, monitoring, and intervention capabilities.

  3. Secure and Resilient

Ensuring the security and resilience of AI systems and their ecosystems is paramount. Security goes beyond resilience, encompassing protection against unauthorized access and use. Common security concerns include adversarial examples, data poisoning, and intellectual property exfiltration. Resilience enables systems to withstand unexpected environmental changes, ensuring continued functionality and safe degradation when necessary.
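
As a small illustration of one of these concerns, the sketch below screens training data for extreme outliers, a crude check against certain data-poisoning attempts. The z-score cutoff of 3.0 is an illustrative assumption; production poisoning defenses are considerably more sophisticated.

```python
# A minimal data-screening sketch: drop training rows whose feature values
# are extreme outliers before they reach the model. The z-score cutoff is
# an illustrative assumption, not a recommended defense on its own.

import numpy as np

def filter_outliers(X: np.ndarray, z_cutoff: float = 3.0) -> np.ndarray:
    """Keep rows whose features all fall within z_cutoff standard deviations."""
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    return X[(z < z_cutoff).all(axis=1)]

X = np.vstack([np.random.default_rng(0).normal(size=(100, 3)),
               [[50.0, 50.0, 50.0]]])  # one implausible, possibly poisoned row
print(filter_outliers(X).shape)  # (100, 3) -- the outlier row is dropped
```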

  4. Accountable and Transparent

Trustworthy AI is built on accountability and transparency. Accountability presupposes transparency, with meaningful disclosure of AI system information based on the AI lifecycle stage and tailored to user roles. Maintaining transparency across design decisions, training data, model structure, and decision-making processes fosters confidence. However, transparency does not inherently guarantee accuracy, privacy, security, or fairness, underscoring the need for a balanced approach.

  5. Explainable and Interpretable

Explainability and interpretability are twin pillars supporting user understanding of AI system operations and outputs. Explainability reveals the mechanisms underlying system operation, while interpretability contextualizes system outputs. These characteristics aid in debugging, monitoring, governance, and building user confidence. A lack of explainability and interpretability contributes to negative risk perceptions, emphasizing their significance in ensuring trustworthy AI.
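
One widely used interpretability technique is permutation feature importance: shuffle one feature at a time and observe how much performance degrades. The sketch below applies scikit-learn's implementation to a toy dataset; the dataset and model choice are illustrative assumptions, not AI RMF requirements.

```python
# A minimal permutation-importance sketch using scikit-learn on a toy
# dataset. The dataset and model are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does performance drop when each feature's values are shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```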

  6. Privacy-Enhanced

Privacy is a fundamental consideration in AI design, development, and deployment. Safeguarding human autonomy, identity, and dignity requires adherence to privacy norms and practices. Privacy-enhancing technologies and data-minimizing methods, such as de-identification and aggregation, support the creation of privacy-enhanced AI systems. Nevertheless, privacy considerations involve trade-offs with security, bias, and transparency, necessitating a nuanced approach.
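
The sketch below illustrates two of these data-minimizing practices: de-identification (replacing a direct identifier with a one-way hash) and aggregation (releasing group-level statistics instead of row-level records). The table layout and salt handling are simplified assumptions for the example.

```python
# A minimal sketch of de-identification and aggregation. The table layout
# and in-code salt are simplified assumptions for illustration.

import hashlib
import pandas as pd

records = pd.DataFrame({
    "email": ["a@example.com", "b@example.com", "c@example.com"],
    "region": ["east", "east", "west"],
    "score": [0.91, 0.74, 0.88],
})

SALT = b"rotate-me"  # in practice, manage salts/keys outside the code

def de_identify(value: str) -> str:
    """Replace a direct identifier with a truncated one-way hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

records["user_id"] = records.pop("email").map(de_identify)
aggregated = records.groupby("region")["score"].mean()  # no row-level data leaves
print(aggregated)
```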

  7. Fair – with Harmful Bias Managed

Fairness in AI extends beyond demographic balance and involves addressing harmful bias and discrimination. Bias categories—systemic, computational and statistical, and human-cognitive—highlight the multifaceted nature of bias. Managing bias requires concerted efforts throughout the AI lifecycle, acknowledging that AI systems can inadvertently perpetuate societal biases. Proactive measures, such as transparent practices and ongoing risk management, are essential to ensure fairness in AI systems.
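
As one concrete example of bias measurement, the sketch below computes a demographic parity gap, the difference in positive-outcome rates between groups. The toy data and the 0.1 tolerance are illustrative assumptions; the appropriate fairness metric is highly context-dependent.

```python
# A minimal fairness-check sketch: demographic parity difference on toy
# data. The data and the 0.1 tolerance are illustrative assumptions.

import pandas as pd

outcomes = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "approved": [1, 1, 0, 1, 0, 0],
})

rates = outcomes.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(f"approval rates:\n{rates}\nparity gap: {gap:.2f}")
if gap > 0.1:
    print("Gap exceeds tolerance -- investigate for harmful bias")
```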


The AI RMF Functions

The AI RMF comprises four key functions: GOVERN, MAP, MEASURE, and MANAGE. These functions provide organizations with a structured and measurable process to address AI risks effectively. While GOVERN applies across all stages of AI risk management, MAP, MEASURE, and MANAGE can be tailored to specific AI system contexts and lifecycle stages.

Let’s delve into the explanation of each function.

  1. Govern

The Govern function establishes and nurtures a risk management culture within organizations involved in AI systems. It involves:

  • Cultivating a risk-aware culture throughout the AI system’s lifecycle.
  • Outlining processes to identify and manage potential risks, aligning with organizational values and principles.
  • Assessing potential impacts and aligning risk management with organizational policies and priorities.
  • Ensuring accountability structures are in place, with clear roles and responsibilities documented.
  • Prioritizing diversity, equity, inclusion, and accessibility in risk management throughout the AI system’s lifecycle.
  • Establishing processes for engagement with external AI actors and addressing risks arising from third-party entities.

Key Emphasis: Governance is an ongoing, cross-cutting process, and strong governance enhances organizational risk culture.

  2. Map

The Map function establishes the context for understanding risks related to an AI system. It involves:

  • Understanding and documenting intended purposes, potential impacts, and contextual factors.
  • Categorizing the AI system, defining tasks and methods, and assessing scientific integrity.
  • Understanding AI capabilities, goals, and expected benefits, comparing them with benchmarks.
  • Mapping risks and benefits of all components, including third-party entities.
  • Characterizing impacts on individuals, groups, and society.

Key Emphasis: Mapping enhances the organization’s ability to identify, prevent, and understand risks by considering diverse perspectives and engaging with external collaborators.
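
One practical way to make Map outputs reviewable is to capture them as structured data rather than scattered prose. The field names in the sketch below are illustrative assumptions drawn from the bullets above.

```python
# A minimal sketch of recording Map-function context as structured data.
# The field names are illustrative assumptions, not an AI RMF schema.

from dataclasses import dataclass, field

@dataclass
class AISystemContext:
    name: str
    intended_purpose: str
    task_category: str                     # e.g., classification, generation
    third_party_components: list[str] = field(default_factory=list)
    impacted_parties: list[str] = field(default_factory=list)
    known_risks: list[str] = field(default_factory=list)

ctx = AISystemContext(
    name="loan-approval-model",
    intended_purpose="Rank consumer loan applications for manual review",
    task_category="binary classification",
    third_party_components=["vendor credit-score API"],
    impacted_parties=["applicants", "loan officers"],
    known_risks=["historical bias in training data"],
)
print(ctx)
```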

  3. Measure

The Measure function employs quantitative and qualitative tools to analyze and assess AI risks. It includes:

  • Identifying and applying appropriate methods and metrics.
  • Evaluating AI systems for trustworthy characteristics, performance, safety, and security.
  • Establishing mechanisms for tracking identified risks over time.
  • Gathering feedback about the efficacy of measurement processes.

Key Emphasis: Measurement provides a basis for objective, repeatable testing, informing risk management decisions and allowing for continuous improvement.
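
To make measurement repeatable and trackable over time, evaluation runs can be logged in a consistent format. The metric names and the JSON-lines storage in the sketch below are illustrative assumptions.

```python
# A minimal sketch of repeatable measurement: append a timestamped record
# per evaluation run. Metric names and storage format are illustrative.

import json
from datetime import datetime, timezone

def record_measurement(path: str, metrics: dict[str, float]) -> None:
    """Append a timestamped evaluation record (one JSON object per line)."""
    entry = {"timestamp": datetime.now(timezone.utc).isoformat(), **metrics}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_measurement("eval_log.jsonl", {
    "accuracy": 0.91,
    "parity_gap": 0.06,
    "adversarial_robustness": 0.78,
})
```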

  4. Manage

The Manage function involves allocating resources to mapped and measured risks and responding to incidents. It includes:

  • Prioritizing and responding to AI risks based on assessments from the Map and Measure functions.
  • Planning and implementing strategies to maximize benefits and minimize negative impacts.
  • Managing risks and benefits from third-party entities.
  • Documenting and monitoring risk treatments, response, recovery, and communication plans.

Key Emphasis: The Manage function focuses on ongoing risk management, ensuring that plans are in place and resources are allocated to address identified risks effectively.
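
As a minimal illustration of Manage-function prioritization, the sketch below orders a small risk register by the composite probability-times-magnitude score introduced earlier; the risks and scores are invented for the example.

```python
# A minimal prioritization sketch: order mapped and measured risks by an
# illustrative probability-times-magnitude score so treatment resources go
# to the highest-risk items first. All entries are invented examples.

risks = [
    {"risk": "training-data bias", "probability": 0.30, "magnitude": 4},
    {"risk": "model theft via API", "probability": 0.10, "magnitude": 5},
    {"risk": "accuracy drift", "probability": 0.60, "magnitude": 2},
]

for r in risks:
    r["score"] = r["probability"] * r["magnitude"]

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['risk']}: {r['score']:.2f}")
```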

NIST AI RMF Guide: A Three-Phase Approach to Implementation

Phase 1: Study and Prepare

Commencing the implementation journey involves a detailed study of the NIST AI RMF and its accompanying playbook. During this phase, organizations should identify relevant internal documentation, such as ethical playbooks and corporate policies related to AI. 

Phase 2: Map to Internal Methodology

The second phase centers on mapping the NIST AI RMF core functions to the organization’s internal methodology. For instance, aligning the Govern, Map, Measure, and Manage functions with existing internal processes helps identify areas of alignment and potential gaps. This phase ensures the organization’s approach covers all essential functions across the AI lifecycle.
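
A lightweight way to perform this mapping is a simple traceability table from each core function to the internal processes that cover it, which makes gaps explicit. The internal process names in the sketch below are illustrative assumptions.

```python
# A minimal Phase 2 sketch: trace each RMF core function to the internal
# processes that cover it. Process names are illustrative assumptions.

internal_map = {
    "GOVERN": ["AI ethics board charter", "model risk policy"],
    "MAP": ["system intake questionnaire"],
    "MEASURE": ["model validation standard"],
    "MANAGE": [],  # gap: no documented risk-treatment process yet
}

for function, processes in internal_map.items():
    status = ", ".join(processes) if processes else "GAP -- no coverage"
    print(f"{function}: {status}")
```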

Phase 3: Systematic Analysis

Building upon the insights gathered in the first two phases, a systematic analysis is conducted to evaluate alignment with the NIST AI RMF functions, categories, and subcategories. This detailed scrutiny ensures that internal standards, policies, and practice guidance integrate with the framework’s requirements. The systematic analysis serves as a bridge between theoretical alignment and practical implementation.

Recommendations for Policymakers and Companies

For policymakers aiming to leverage the NIST AI RMF as a foundation for regulation, recommended measures include:

  • Clearly define expectations for organizations to track policies, processes, and practices related to AI risk management.
  • Mandate government agencies to adopt the NIST AI RMF in AI system development, use, and procurement.
  • Establish self-certification schemes for compliance with the framework.

For companies embarking on the implementation journey, the following steps are recommended:

  • Identify relevant internal documentation for comparison with the NIST AI RMF.
  • Map the company’s internal AI ethics methodology to the core functions of the NIST AI RMF.
  • Systematically evaluate and track the alignment of company AI standards, policies, and practice guidance with the NIST AI RMF.

Looking Ahead

As AI technologies advance, the AI RMF stands as a living framework. It adapts to the evolving AI landscape, incorporating feedback from the AI community and aligning with international standards. Organizations are invited to contribute to the AI RMF Playbook and participate in building a Trustworthy and Responsible AI Resource Center.
