What is the NIST AI RMF?
As artificial intelligence gains traction, it is critical to understand the risks facing companies that build AI tools. As NIST explains, the risks associated with developing an AI system differ from those that affect traditional software and are not fully addressed by existing risk management frameworks. The development of machine learning algorithms and tools inherently carries risks relating to data selection, trustworthiness, and bias, which must be identified and managed. On January 26, 2023, NIST released the Artificial Intelligence Risk Management Framework (AI RMF) to give businesses a risk management approach to developing trustworthy AI systems. The framework is built around four core functions: Govern, Map, Measure, and Manage.
The AI RMF also defines the key characteristics of trustworthy AI systems: Validity and Reliability; Safety; Security and Resiliency; Accountability and Transparency; Explainability; Privacy-Enhanced; and Fairness, with harmful bias managed.
What are the requirements of the NIST AI RMF?
The AI RMF identifies the following characteristics of trustworthy AI and offers guidance for addressing them:
- Validity and reliability are demonstrated through ongoing testing and monitoring that confirm the system continues to perform as intended.
- Safety, in the context of an AI system, means first and foremost that the system should not lead to a state in which people are endangered. Safety is managed through responsible design and decision-making, aligned with relevant guidelines and standards.
- Security and resiliency are related but distinct. Resilience is the ability to return to normal operation after an unexpected adverse event, while security includes the controls needed to avoid, protect against, respond to, and recover from attacks.
- Accountability and transparency reinforce each other: transparency into how the system operates and how its outputs are produced promotes higher levels of understanding and enables accountability, which in turn increases confidence in the AI system.
- Explainability refers to providing information that helps the end user understand the purpose and potential impact of the AI system.
- Privacy-Enhanced AI generally reflects the norms and practices that help protect human autonomy and identity. This should be managed through values such as anonymity, confidentiality, and control.
- Fairness requires that harmful bias be managed in datasets, organizational norms, practices, and processes across the AI lifecycle, to protect against prejudice, partiality, or discriminatory intent; a minimal disparity check is sketched just below.
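To make the fairness point concrete, here is a minimal sketch of one common disparity metric, the demographic parity gap. The column names and the 0.2 tolerance are illustrative assumptions, not values prescribed by the AI RMF.

```python
# Illustrative only: a minimal demographic-parity check, one way to
# quantify harmful bias in a model's outcomes. The column names and
# tolerance below are hypothetical, not part of the AI RMF.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str) -> float:
    """Return the gap between the highest and lowest positive-outcome
    rates across groups; 0.0 means perfectly equal rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

predictions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1,    0,   1,   1,   1,   1],
})

gap = demographic_parity_gap(predictions, "group", "approved")
if gap > 0.2:  # hypothetical tolerance an organization might set
    print(f"Parity gap {gap:.2f} exceeds tolerance; flag for review")
else:
    print(f"Parity gap {gap:.2f} is within tolerance")
```

A check like this would typically be logged in a risk register and re-run whenever the underlying dataset or model changes.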
When managing these risks, organizations may have to balance tradeoffs; for example, optimizing system performance may conflict with preserving privacy.
The AI RMF also emphasizes the importance of TEVV (test, evaluation, verification, and validation) throughout the AI lifecycle.
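As an illustration of what a TEVV step can look like in practice, here is a minimal sketch of a validation gate that is run at release time and then re-run periodically on fresh labeled data. The model, dataset, and 0.90 accuracy floor are assumptions made for the example; the AI RMF does not prescribe specific metrics or thresholds.

```python
# A minimal sketch of a TEVV-style gate: verify a trained model against
# a held-out validation set before release, and re-run the same check
# in production to confirm it still performs as tested.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_FLOOR = 0.90  # hypothetical acceptance criterion

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

def tevv_gate(model, X, y) -> bool:
    """Validation step: does measured accuracy meet the documented floor?"""
    score = accuracy_score(y, model.predict(X))
    print(f"accuracy={score:.3f} (floor={ACCURACY_FLOOR})")
    return score >= ACCURACY_FLOOR

# Run once at release time, then periodically on fresh labeled data
# to detect degradation (the ongoing-monitoring part of TEVV).
assert tevv_gate(model, X_val, y_val), "Model failed validation gate"
```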
Why should you be NIST AI compliant?
The AI RMF is voluntary and is designed to give organizations a practical approach to increasing the trustworthiness of their AI systems. While there is currently no legal requirement to implement the framework, it is robust and can help businesses establish repeatable processes and minimize risk. Implementing it can also demonstrate a commitment to security and to upholding the trustworthiness characteristics described above.
How do we achieve compliance?
To align with the framework and manage the risks it addresses, you will need to review all of its requirements and determine whether the relevant guidelines are satisfied. The CentralEyes automated GRC platform includes an AI RMF questionnaire connected to a risk register that helps you manage AI-related risks. The platform provides everything needed to manage AI risk, from determining mitigating controls and assigning responsibilities to tracking tasks through to completion.