Glossary

AI Auditing

What is an AI Audit?

AI audits determine whether an AI system and its supporting algorithms meet secure, legal, and ethical standards. They assess an AI system to determine whether it engages in prohibited activities, exhibits unlawful bias, and/or introduces unacceptable risks.

AI audits usually focus on the following (see the sketch after this list):

  • Data output
  • Model and algorithmic workings
  • Overall usage of the AI system
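
Auditing outputs and overall usage presupposes that the system's decisions are recorded at all. Below is a minimal sketch of one way such an audit trail might be captured; the model object, its `predict` method, and the log format are assumptions for illustration, not a prescribed standard.

import datetime
import hashlib
import json

def audited_predict(model, features: dict, log_path: str = "audit_log.jsonl"):
    """Run one prediction and append an audit record for later review."""
    prediction = model.predict(features)  # assumes a model exposing .predict()
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # Hash of the canonicalized input, so logged records can be
        # de-duplicated and checked for tampering later.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "features": features,
        "prediction": prediction,  # assumed JSON-serializable
    }
    with open(log_path, "a") as f:  # append-only trail, one JSON record per line
        f.write(json.dumps(record) + "\n")
    return prediction

An auditor can then replay or sample this log to examine what data the system produced and how it was actually used in practice.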

Another use case for an AI audit is a company auditing itself to assess whether it has implemented policies and procedures that ensure it acts ethically and transparently in its use of AI systems.

This kind of company-focused AI audit includes:

  • Ensuring that appropriate policies and procedures are in place
  • Verifying that applicable AI regulatory standards are being followed
  • Testing control effectiveness
  • Detecting compliance gaps
  • Recommending further improvements to policies and procedures

AI model audits apply to both open-source models and deployed systems:

  • Open-source models: GPT-NeoX-20B, BERT, GPT-J, YOLO, and PanGu-α.
  • Deployed systems: COMPAS, GPT-3, and POL-INTEL.

The Need for AI Audits

As AI adoption continues to shatter records, regulatory and auditing frameworks have yet to catch up.

The Challenges That Get in the Way

In addition to the need for AI audit standards, there are further challenges to auditing AI. For starters, even the definition of AI is frequently debated, and no standard definition has been agreed upon to date. It is no wonder there is no standard procedure for auditing and regulating AI when its very definition is still ambiguous.

Further complicating the AI audit process is a severe lack of workers with the required skill sets. The emerging technology of AI is still relatively new, and qualified professionals are scarce. The skills needed for AI audits include:

  • Understanding the technology behind these systems
  • Training in algorithmic auditing
  • Experience with traditional audit processes
  • RegTech (regulatory technology) experience

This is not an easy combination to find, and some of these skill sets lie decidedly outside the core competencies of conventional audit teams.

Challenges in Auditing AI

  1. Immature or nonexistent frameworks specific to AI audits
  2. Limited precedents and historical context
  3. The ambiguity surrounding the definition of AI
  4. The highly dynamic nature of AI
  5. The steep learning curve for AI auditors


AI Audits Aim to Reduce AI Bias

One of the main objectives of AI audits is to ensure that an AI system’s algorithms are unbiased and do not discriminate against any entity or group. 

What is Algorithm Bias?

AI may occasionally exhibit biases similar to those of humans; in certain situations, it may even be worse. Biases in the training data, or biased assumptions made during algorithm development, can skew the output of machine learning algorithms. Our minds have blind spots and preconceived notions shaped by the standards and values of our culture, so bias in society has a significant impact on algorithmic AI bias. The sketch after the list below shows how an auditor might quantify such disparities.

Potential Harm Caused By AI Bias

  • Allocation: when automated decisions unfairly extend or withhold resources from those who are eligible
  • Quality of service: when AI systems do not work as well for some groups as they do for others
  • Stereotyping: when AI systems stereotype people based on the historical data they were fed
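
Here is a minimal sketch of how an auditor might quantify the first two harms, assuming binary decisions and a single protected attribute; the groups, decisions, and labels are toy data invented for illustration.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of positive decisions (e.g., approvals) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def accuracy_by_group(predictions, labels, groups):
    """Per-group accuracy, to surface quality-of-service gaps."""
    totals, correct = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / totals[g] for g in totals}

# Toy data: 1 = approved, groups "A" and "B" (illustrative only).
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
labels    = [1, 1, 0, 1, 1, 1, 1, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
# Disparate impact ratio: lowest group rate / highest group rate. Values below
# ~0.8 (the "four-fifths rule") are a common red flag for allocation bias.
print(rates, min(rates.values()) / max(rates.values()))
print(accuracy_by_group(decisions, labels, groups))  # quality-of-service gap

On this toy data, group B is approved a third as often as group A and the model is far less accurate for group B, flagging both an allocation and a quality-of-service concern worth deeper investigation.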

Where to Start When Auditing AI?

  1. Define the Scope: Clearly define the scope of the audit and identify the AI system(s) under examination.
  2. Communication: Build a strategy that fosters communication with various subject matter experts.
  3. Understand the AI System’s Design and Architecture: This includes:
    • Data output pipelines
    • Model infrastructure
    • Decision-making algorithms
    • Deployment process
  4. Adopt Existing Audit Frameworks: While no dedicated AI audit framework has been released to date, you can draw on existing frameworks and regulations, such as the NIST AI Risk Management Framework or the EU AI Act, to guide the audit process. One way such an audit plan might be structured is sketched below.
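
As a minimal sketch, the steps above can be captured in a simple audit-plan structure organized around the NIST AI RMF's four core functions (Govern, Map, Measure, Manage). The system name, stakeholders, and checklist questions are hypothetical examples, not a prescribed standard.

from dataclasses import dataclass, field

@dataclass
class AuditScope:
    system_name: str        # step 1: the AI system under examination
    components: list        # step 3: design and architecture to review
    stakeholders: list      # step 2: subject matter experts to involve
    checklist: dict = field(default_factory=dict)  # step 4: RMF function -> checks

scope = AuditScope(
    system_name="loan-approval-model",  # hypothetical system under audit
    components=["data pipelines", "model infrastructure",
                "decision-making algorithms", "deployment process"],
    stakeholders=["ML engineers", "compliance officer", "domain experts"],
    checklist={
        "Govern":  ["AI policies documented?", "Accountability assigned?"],
        "Map":     ["Intended use recorded?", "Affected groups identified?"],
        "Measure": ["Bias metrics computed?", "Performance tracked per group?"],
        "Manage":  ["Risks prioritized?", "Remediation plan in place?"],
    },
)

for function, checks in scope.checklist.items():
    print(function, "->", checks)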

What is EvalAI?

In their own words, EvalAI is “an open-source platform for evaluating and comparing machine learning (ML) and artificial intelligence (AI) algorithms at scale.”

EvalAI aims to standardize the evaluation of different methods on a dataset and to simplify hosting a competition. Comparing a new method with existing approaches is an essential component of research. EvalAI makes this easier by standardizing dataset splits and evaluation metrics while maintaining a public leaderboard of hosted challenges.
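
EvalAI itself runs as a hosted web platform, so the sketch below does not use EvalAI's API; it only illustrates the core idea the paragraph describes: a deterministic dataset split and one shared metric, so competing methods are scored on identical terms. The toy task and both "methods" are invented for illustration.

import random

def fixed_split(data, test_fraction=0.2, seed=42):
    """Deterministic split: every submitted method sees the same test set."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def accuracy(method, test_set):
    """Shared evaluation metric, applied identically to every method."""
    return sum(method(x) == y for x, y in test_set) / len(test_set)

# Toy task: predict whether an integer is even; (input, label) pairs.
data = [(n, n % 2 == 0) for n in range(100)]
train, test = fixed_split(data)

# Two competing "methods" scored on the same split with the same metric.
leaderboard = {
    "always_true": accuracy(lambda n: True, test),
    "parity_check": accuracy(lambda n: n % 2 == 0, test),
}
for name, score in sorted(leaderboard.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")

Because the split and metric are fixed, any difference between the leaderboard scores reflects the methods themselves rather than evaluation choices, which is the point of standardized benchmarking platforms like EvalAI.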

