Artificial intelligence (AI) has rapidly embedded itself into every corner of our lives, promising unparalleled advances across industries while raising concerns about its ethical implications and potential risks. Here we are, standing at the intersection of innovation and regulation. The European Union (EU) has taken a bold step forward by introducing the EU AI Act to govern the development and use of AI technologies within its borders.
Shaped by extensive negotiations and a pivotal agreement among the members of the European Parliament in February 2024, the EU AI Act signifies a groundbreaking achievement in legislative efforts to address the challenges posed by AI.
Co-rapporteur Brando Benifei (S&D, Italy) emphasized the significance of this milestone, stating, “It was long and intense, but the effort was worth it. Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise – ensuring that rights and freedoms are at the center of the development of this ground-breaking technology.”
At its core, the EU AI regulations introduce a sophisticated regulatory framework to foster responsible AI development and deployment while safeguarding fundamental rights.
Let’s explore the multifaceted impact of the EU AI Act on compliance.
The Role of Industry Associations and Standards Bodies
Industry associations, standards bodies, and other stakeholders are crucial in shaping European AI regulations. These entities contribute to developing voluntary codes of conduct, advocate for responsible AI practices, and promote industry-wide standards and guidelines.
For example, organizations such as the IEEE Standards Association and the International Organization for Standardization (ISO) are actively involved in developing standards and guidelines for AI technologies. These standards provide a common framework for organizations to assess and mitigate risks associated with AI deployment, thereby enhancing transparency and accountability. We’ll discuss some of them below.
Practical Implications of AI Governance on Compliance
Organizations of all sizes and sectors, from multinational corporations to small and medium-sized enterprises, grapple with the challenges and opportunities presented by AI technologies.
The EU AI Act intersects with existing standards and regulations, creating a multifaceted environment for compliance professionals to navigate. Organizations operating within the EU must ensure alignment with these evolving regulatory frameworks and other relevant standards to mitigate risk and foster responsible AI innovation.
Practical implications abound. Organizations may need to implement robust risk assessment processes tailored to AI applications. Developing AI-specific compliance programs becomes imperative, addressing critical aspects such as data privacy, transparency, and accountability. Moreover, ensuring ongoing monitoring and oversight of AI deployments is essential to maintain compliance and mitigate emerging risks effectively.
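To make this concrete, here is a minimal sketch of what one entry in an AI-specific risk register might look like. The schema, field names, risk areas, and scoring convention are hypothetical choices for illustration; they are not drawn from the Act's text.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskArea(Enum):
    """Illustrative risk areas an AI compliance program might track."""
    DATA_PRIVACY = "data_privacy"
    TRANSPARENCY = "transparency"
    ACCOUNTABILITY = "accountability"
    BIAS_AND_FAIRNESS = "bias_and_fairness"


@dataclass
class AIRiskRecord:
    """One entry in an AI risk register (hypothetical schema)."""
    system_name: str
    risk_area: RiskArea
    description: str
    likelihood: int   # e.g. 1 (rare) to 5 (almost certain)
    impact: int       # e.g. 1 (negligible) to 5 (severe)
    mitigation: str
    last_reviewed: date

    @property
    def severity(self) -> int:
        # Simple likelihood x impact scoring, a common risk-matrix convention
        return self.likelihood * self.impact


record = AIRiskRecord(
    system_name="resume-screening-model",
    risk_area=RiskArea.BIAS_AND_FAIRNESS,
    description="Model may rank candidates differently across protected groups.",
    likelihood=3,
    impact=4,
    mitigation="Quarterly disparate-impact testing; human review of rejections.",
    last_reviewed=date(2024, 6, 1),
)
print(record.severity)  # 12
```

A register like this gives ongoing monitoring a concrete anchor: each record can be re-reviewed on a schedule, and severity scores can drive escalation.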
Two Key Targets in AI Governance
When considering AI governance, it’s essential to recognize that two primary concepts need to be addressed comprehensively:
- Obligations on Developers and Deployers: This aspect involves setting clear guidelines and responsibilities for those involved in creating and deploying AI systems. Developers and deployers must adhere to ethical principles, transparency standards, and risk mitigation protocols throughout the AI lifecycle to ensure the responsible development and use of AI technologies.
The EU AI Act imposes obligations on developers and deployers of AI systems by establishing precise requirements and standards for AI development, deployment, and use within the European Union. It sets out specific obligations for providers introducing AI systems to the EU market or deploying them within the EU, including requirements for risk assessment, documentation, transparency, and compliance for high-risk AI systems.
- Responsibilities and Requirements for Companies Using AI Products and Services: Beyond the developers and deployers, companies that integrate AI products into their operations also bear significant responsibilities. They must prioritize ethical considerations, ensure compliance with regulatory frameworks, and establish robust mechanisms for monitoring AI systems’ performance and impact on stakeholders.
The EU AI Act extends its scope to companies using AI products, obliging organizations operating within the EU to align the AI systems they use with regulatory requirements and standards. This includes obligations related to transparency, accountability, and compliance for AI products and services used within their operations.
What’s in the EU AI Act?
Scope of Application
The EU AI Act establishes obligations for a wide array of actors involved in the AI ecosystem, ranging from providers and deployers to importers and distributors of AI systems. It applies to providers introducing AI systems to the EU market or deploying them within the EU. Exceptions are outlined, such as for military AI systems and those solely dedicated to scientific research and development. Additionally, the Act gives Member States the flexibility to introduce rules more favorable to workers' rights concerning employers' use of AI systems.
Prohibited AI Systems
The Act delineates several categories of prohibited AI systems to mitigate unacceptable risks. These include systems employing subliminal techniques to alter behavior, systems exploiting vulnerabilities of specific groups, and biometric categorization systems that infer sensitive characteristics, among others. The Act also imposes strict limitations on real-time remote biometric identification systems in publicly accessible spaces, with narrow exceptions for law enforcement purposes.
High-Risk AI Systems
A significant portion of the Act's obligations pertains to high-risk AI systems, encompassing diverse applications such as education, employment, access to essential services, and the management of critical infrastructure. Notably, standalone AI systems are subject to stringent requirements if designated as high-risk under the Act. The Act outlines criteria for determining high-risk status and mandates documentation and registration obligations for such systems before market placement. Compliance obligations for high-risk AI systems are detailed, including provisions for written third-party agreements and data governance.
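As a rough illustration of the Act's risk-based structure, the sketch below encodes the commonly cited four-tier reading of its risk pyramid. The example classifications are illustrative only; determining where a real system falls requires legal analysis of the Act's text and annexes.

```python
from enum import Enum


class RiskTier(Enum):
    """Commonly cited four-tier reading of the EU AI Act's risk pyramid."""
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH_RISK = "strict obligations before market placement"
    LIMITED_RISK = "transparency obligations, e.g. disclosure to users"
    MINIMAL_RISK = "largely unregulated"


# Illustrative mapping only; real classification requires legal analysis
# of the Act's text and annexes for the specific use case.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring by public authorities": RiskTier.PROHIBITED,
    "CV screening for hiring decisions": RiskTier.HIGH_RISK,
    "customer-service chatbot": RiskTier.LIMITED_RISK,
    "spam filter": RiskTier.MINIMAL_RISK,
}

for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case} -> {tier.name}: {tier.value}")
```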
Nine Rules for High-Risk AI Systems
To ensure safety and compliance, high-risk AI systems must adhere to a comprehensive set of requirements (a sketch for tracking these in code follows the list):
- Develop and implement a risk identification and mitigation system that spans the entire lifecycle of the AI system.
- Ensure the quality, security, and privacy of the data used within the AI system.
- Keep detailed records of the system’s design, development, and operational processes.
- Document the performance, maintenance, and updates of the AI system.
- Provide transparent information regarding the system’s capabilities, limitations, and intended usage.
- Establish mechanisms for appropriate human oversight and control over the AI system.
- Ensure operational accuracy, resilience against disruptions, and adherence to high cybersecurity standards.
- Implement processes to ensure compliance with the EU AI legislation and relevant regulations.
- Register the AI system for enhanced monitoring and compliance purposes.
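The sketch below shows one way a compliance team might track evidence against these nine requirement areas. The structure and field names are hypothetical, and the requirement labels are paraphrases of the list above rather than the Act's own wording.

```python
from dataclasses import dataclass, field

# The nine requirement areas summarized above, paraphrased as checklist keys
HIGH_RISK_REQUIREMENTS = [
    "lifecycle risk management system",
    "data quality, security, and privacy",
    "design and development records",
    "performance, maintenance, and update logs",
    "transparent capability and limitation information",
    "human oversight mechanisms",
    "accuracy, resilience, and cybersecurity",
    "regulatory compliance processes",
    "system registration",
]


@dataclass
class ComplianceChecklist:
    """Tracks evidence against each requirement (hypothetical structure)."""
    system_name: str
    evidence: dict[str, str] = field(default_factory=dict)

    def record(self, requirement: str, evidence_ref: str) -> None:
        if requirement not in HIGH_RISK_REQUIREMENTS:
            raise ValueError(f"Unknown requirement: {requirement}")
        self.evidence[requirement] = evidence_ref

    def gaps(self) -> list[str]:
        return [r for r in HIGH_RISK_REQUIREMENTS if r not in self.evidence]


checklist = ComplianceChecklist(system_name="credit-scoring-model")
checklist.record("human oversight mechanisms", "SOP-114: manual review workflow")
print(f"{len(checklist.gaps())} of {len(HIGH_RISK_REQUIREMENTS)} items still open")
```

Keeping the checklist as data rather than prose makes gaps auditable: the `gaps()` call surfaces unmet requirements before market placement.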
General-Purpose AI Models
The Act incorporates provisions for general-purpose AI (GPAI) models to address the evolving landscape of AI technologies. These models, capable of performing a wide range of tasks, are subject to specific requirements, particularly if classified as posing systemic risk. Providers of GPAI models must maintain technical documentation, cooperate with authorities, and ensure copyright compliance. Providers of GPAI models with systemic risk face further obligations, such as standardized model evaluations and cybersecurity measures.
Deep Fakes
The Act introduces stringent transparency obligations for providers and deployers of AI systems and GPAI models, particularly concerning deep fakes: AI-generated or manipulated content must be clearly disclosed as such. Exceptions are outlined for lawful uses such as criminal offense detection and for certain artistic works.
Penalties
The Act imposes penalties for violations to enforce compliance, with proportionality considerations for SMEs and start-ups. Maximum fines are tiered by the severity of the infringement, with the highest tier, for violations of the prohibitions, reaching €35 million or 7% of global annual turnover, whichever is higher.
Implementation Timeline of the EU AI Act
The EU AI Act introduces regulations and outlines a structured timeline for its implementation. According to Article 85 of the Act, the EU AI Act enters into force 20 days after publication in the EU Official Journal and becomes fully applicable 24 months later. However, specific provisions take effect at different intervals after entry into force. For example, certain prohibitions will apply six months after the Act enters into force, while codes of practice are expected to be prepared within nine months. Penalties for non-compliance will become effective after 12 months, and there will be a grace period for GPAI models, depending on their market status.
Obligations for high-risk AI systems will apply after 36 months. Member States must also take specific actions within certain time frames, such as designating authorities and establishing regulatory sandboxes. These timelines provide clarity for compliance professionals and organizations, helping them prepare for and comply with the EU AI Act’s requirements.
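To make the staggered timeline concrete, the sketch below computes milestone dates from a given entry-into-force date. The date used is a placeholder chosen for illustration, not the Act's actual entry into force.

```python
import calendar
from datetime import date


def add_months(start: date, months: int) -> date:
    """Add whole months to a date, clamping the day where needed."""
    total = start.month - 1 + months
    year, month = start.year + total // 12, total % 12 + 1
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(start.day, last_day))


# Placeholder entry-into-force date, purely for illustration
entry_into_force = date(2024, 8, 1)

# Intervals (in months) as described in the timeline above
milestones = {
    "Prohibitions apply": 6,
    "Codes of practice expected": 9,
    "Penalties effective / GPAI grace period ends": 12,
    "Act fully applicable": 24,
    "High-risk system obligations apply": 36,
}

for label, months in milestones.items():
    print(f"{label}: {add_months(entry_into_force, months)}")
```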
The EU AI Act and ISO/IEC 42001: A Comparison
The EU AI Act and ISO/IEC 42001 (Artificial Intelligence Management System, or AIMS) both address the pressing need for governance and regulation in the rapidly evolving landscape of artificial intelligence. However, they approach this objective differently, emphasizing distinct priorities and methodologies.
Commonalities
- Focus on Trust, Ethics, and Social Concerns: The EU AI Act and ISO/IEC 42001 recognize the importance of ensuring that AI systems are reliable, fair, transparent, and trustworthy. They aim to address societal concerns about AI’s impact on individuals, groups, and communities.
- Multidisciplinary Approach: Given the complexity of AI systems, both frameworks acknowledge the necessity of an interdisciplinary approach to AI governance. They emphasize the involvement of various stakeholders in implementing and managing AI systems.
- Transparency: Both frameworks emphasize the importance of transparency in AI development and deployment. They require organizations to maintain detailed records of the AI system’s design, development, operational processes, performance, and updates to ensure traceability and accountability.
Differences
- Scope and Applicability: The EU AI Act primarily focuses on regulating the development and use of AI technologies within the European Union. In contrast, ISO/IEC 42001 provides a certifiable framework for AI management systems that can be implemented by any organization worldwide, irrespective of geographical location.
- Regulatory vs. Management System: While the EU AI Act represents a regulatory framework imposed by governmental authorities, ISO/IEC 42001 offers a voluntary management system standard developed by international standards organizations.
- Granularity: The EU AI Act delineates specific requirements, prohibitions, and obligations for different categories of AI systems, such as high-risk AI systems and general-purpose AI models. In contrast, ISO/IEC 42001 provides a more generic framework for AI management systems, offering principles, guidelines, and controls that organizations can adapt to their specific contexts and needs.
- Alignment with Regulatory Requirements: The EU AI Act aligns with specific regulatory requirements and standards within the European Union, such as GDPR (General Data Protection Regulation) and other relevant directives. ISO/IEC 42001 does not inherently align with specific legal frameworks and regulations.
Next Steps
Organizations worldwide are grappling with the complexities presented by AI, whether on the development side or the implementation side. The EU AI Act’s multifaceted approach to governance and compliance resonates with the challenges businesses of all sizes and sectors face. From multinational corporations to small startups, aligning with evolving regulatory frameworks and ethical standards is paramount.
Central to navigating the ever-more-complex compliance landscape is an automated risk and compliance tool. At Centraleyes, our automated tools are like the GPS of compliance, seamlessly steering you through the maze and crosswalking controls across different regulations.
Think of it as the Single Sign-On (SSO) for regulatory compliance!
As we stand at the intersection of innovation and regulation, the EU AI Act serves as a reminder that the responsible development and use of AI technologies require collaborative efforts from governments, industry associations, standards bodies, and businesses. By prioritizing transparency, accountability, and ethical considerations, we can harness the transformative potential of AI while safeguarding fundamental rights and freedoms for all.