Key Takeaways
- February 2025 provisions banning “unacceptable-risk” AI systems are now in effect.
- On August 2, 2025, obligations for General-Purpose AI (GPAI) providers will take effect.
- Over 45 companies and EU leaders are advocating for a “stop-the-clock” delay, citing legal uncertainty and high compliance costs.
- The EU AI Act is now live on the Centraleyes platform.
Artificial intelligence (AI) has rapidly embedded itself into every corner of our lives, promising unparalleled advances across industries while raising concerns about its ethical implications and potential risks. Here we are, standing at the intersection of innovation and regulation. The European Union (EU) has taken a bold step forward by introducing the EU AI Act to govern the development and use of AI technologies within its borders.
Following extensive negotiations and a pivotal agreement among members of the European Parliament in February 2024, the EU AI Act signifies a groundbreaking achievement in legislative efforts to address the challenges posed by AI.
Co-rapporteur Brando Benifei (S&D, Italy) emphasized the significance of this milestone, stating, “It was long and intense, but the effort was worth it. Thanks to the European Parliament’s resilience, the world’s first horizontal legislation on artificial intelligence will keep the European promise – ensuring that rights and freedoms are at the center of the development of this ground-breaking technology.”
At its core, the EU AI regulations introduce a sophisticated regulatory framework to foster responsible AI development and deployment while safeguarding fundamental rights.
Let’s explore the multifaceted impact of the EU AI Act on compliance.
EU AI Act: July 2025 Update
The EU AI Act officially came into force in August 2024, marking it as the first comprehensive legislation regulating artificial intelligence worldwide. But rather than taking full effect all at once, the Act is being rolled out in stages over a multi-year timeline. Each phase activates different obligations for developers, deployers, and users of AI.
Already in Effect
On February 2, 2025, the first set of rules became enforceable. These target “unacceptable-risk” AI systems, such as:
- AI that manipulates behavior through subliminal techniques
- Tools that exploit vulnerabilities of specific groups (e.g., children, people with disabilities)
- Real-time biometric identification in public spaces (with narrow exceptions)
Violations now carry penalties of up to €35 million or 7% of global turnover.
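To make the penalty ceiling concrete, here is a minimal sketch of the "greater of" rule described above. The function name and figures are illustrative, not legal advice; the Act states the cap as the higher of €35 million or 7% of worldwide annual turnover for prohibited-practice violations.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited-practice violations:
    whichever is greater, EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 2 billion in turnover, 7% (EUR 140 million)
# exceeds the EUR 35 million floor, so the percentage cap applies.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```

For smaller companies the flat €35 million figure dominates, which is why the turnover-based cap mainly bites for large providers.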

What’s Next: August 2, 2025
The next phase takes effect on August 2, 2025, bringing new requirements for developers of General-Purpose AI (GPAI) models, particularly those with systemic impact.
These include:
- Training data summaries and copyright compliance disclosures
- Bias, toxicity, and robustness testing
- Systemic risk assessments
- Incident reporting and energy efficiency metrics
- Transparent documentation for model capabilities and limitations
This affects providers like Google, OpenAI, Meta, and Mistral, as well as downstream businesses that use or fine-tune their models.

Industry Concern and Political Pressure
Although this deadline has been on the books since 2024, it’s now facing significant resistance:
- The AI Code of Practice, a guidance document explaining how to comply, was due in May 2025 but remains unpublished.
- 45 companies, including ASML, Mistral, Google, and Meta, have signed an open letter asking the EU to pause enforcement for two years, citing unclear standards and resource burdens.
- Swedish Prime Minister Ulf Kristersson called the Act “confusing” and asked the EU to temporarily halt the rollout.
- EU industry groups such as CCIA Europe have echoed these concerns, calling for a “stop-the-clock” mechanism.
The European Commission, however, has not confirmed any delay. EU tech chief Henna Virkkunen stated, “We are not planning to backslide. Digital simplification doesn’t mean downgrading our objectives.” She did confirm the Code of Practice will be published before August, though no exact date has been announced.
The EU AI Act is Now on Centraleyes
The new EU AI Act framework is now live on the Centraleyes platform, designed to help organizations identify their risk category, meet requirements, and track their readiness.
We’ve broken the Act into three tailored questionnaires, supporting different levels of regulatory exposure:
- EU AI Act – High Risk: For users developing or deploying high-risk systems. Includes documentation support and a Declaration of Conformity builder to help meet legal obligations.
- EU AI Act – Limited Risk: For companies using AI in a limited-risk capacity, with transparency and disclosure obligations.
- EU AI Act – Minimal Risk: For users outside the law’s scope who still want to align with best practices voluntarily.
Each questionnaire is smart-mapped to the platform’s AI Governance and ISO/IEC 42001 frameworks, making it easy to build an integrated compliance program that scales with regulation.
The Role of Industry Associations and Standards Bodies
Industry associations, standards bodies, and other stakeholders are crucial in shaping European AI regulations. These entities contribute to developing voluntary codes of conduct, advocate for responsible AI practices, and promote industry-wide standards and guidelines.
For example, organizations such as the IEEE Standards Association and the International Organization for Standardization (ISO) are actively involved in developing standards and guidelines for AI technologies. These standards provide a common framework for organizations to assess and mitigate risks associated with AI deployment, thereby enhancing transparency and accountability. We’ll discuss some of them below.
Practical Implications of AI Governance on Compliance
From multinational corporations to small and medium-sized enterprises, organizations of all sizes and sectors grapple with the challenges and opportunities presented by AI technologies.
The EU AI Act intersects with existing standards and regulations, creating a multifaceted environment for compliance professionals to navigate. Organizations operating within the EU must ensure alignment with these evolving regulatory frameworks and other relevant standards to mitigate risk and foster responsible AI innovation.
Practical implications abound. Organizations may need to implement robust risk assessment processes tailored to AI applications. Developing AI-specific compliance programs becomes imperative, addressing critical aspects such as data privacy, transparency, and accountability. Moreover, ensuring ongoing monitoring and oversight of AI deployments is essential to maintain compliance and mitigate emerging risks effectively.
Two Key Targets in AI Governance
When considering AI governance, it’s essential to recognize that two primary concepts need to be addressed comprehensively:
- Obligations on Developers and Deployers: This aspect involves setting clear guidelines and responsibilities for those involved in creating and deploying AI systems. Developers and deployers must adhere to ethical principles, transparency standards, and risk mitigation protocols throughout the AI lifecycle to ensure the responsible development and use of AI technologies.
- The EU AI Act imposes obligations on developers and deployers of AI systems by establishing precise requirements and standards for AI development, deployment, and use within the European Union. It sets out specific obligations for providers introducing AI systems to the EU market or deploying them within the EU, including requirements for risk assessment, documentation, and transparency, as well as compliance obligations for high-risk AI systems.
- Responsibilities and Requirements for Companies Using AI Products and Services: Beyond the developers and deployers, companies that integrate AI products into their operations also bear significant responsibilities. They must prioritize ethical considerations, ensure compliance with regulatory frameworks, and establish robust mechanisms for monitoring AI systems’ performance and impact on stakeholders.
The EU AI Act extends its scope to companies utilizing AI products by imposing obligations on organizations operating within the EU to ensure alignment with regulatory requirements and standards for the AI systems they utilize. This includes obligations related to transparency, accountability, and compliance with regulations for AI products and services used within their operations.

Start Getting Value With
Centraleyes for Free
See for yourself how the Centraleyes platform exceeds anything an old GRC
system does and eliminates the need for manual processes and spreadsheets
to give you immediate value and run a full risk assessment in less than 30 days
What’s in the EU AI Act?
Scope of Application
The EU AI Act establishes obligations for a wide array of actors involved in the AI ecosystem, ranging from providers and deployers to importers and distributors of AI systems. It applies to providers introducing AI systems to the EU market or deploying them within the EU. Exceptions are outlined, such as for military AI systems and those solely dedicated to scientific research and development. Additionally, the Act grants Member States the flexibility to introduce regulations favoring workers’ rights concerning AI system usage by employers.
Prohibited AI Systems
The Act delineates several categories of prohibited AI systems to mitigate unacceptable risks. These include systems employing subliminal techniques to alter behavior, exploiting vulnerabilities of specific groups, and utilizing biometric categorization systems, among others. The Act also imposes strict limitations on real-time remote biometric identification systems in publicly accessible spaces, with exceptions for law enforcement purposes.
High-Risk AI Systems
A significant portion of the Act’s obligations pertains to high-risk AI systems, encompassing diverse applications such as education, employment, access to essential services, and managing critical infrastructure. Notably, standalone AI systems are subject to stringent requirements if designated high-risk under the Act. The Act outlines criteria for determining high-risk status and mandates documentation and registration obligations for such systems before market placement. Compliance obligations for high-risk AI systems are detailed, including provisions for written third-party agreements and data governance.
Nine Rules for High-Risk AI Systems
To ensure safety and compliance, high-risk AI systems must adhere to a comprehensive set of requirements:
- Develop and implement a risk identification and mitigation system that spans the entire lifecycle of the AI system.
- Ensure the quality, security, and privacy of the data used within the AI system.
- Keep detailed records of the system’s design, development, and operational processes.
- Document the performance, maintenance, and updates of the AI system.
- Provide transparent information regarding the system’s capabilities, limitations, and intended usage.
- Establish mechanisms for appropriate human oversight and control over the AI system.
- Ensure operational accuracy, resilience against disruptions, and adherence to high cybersecurity standards.
- Implement processes to ensure compliance with the EU AI legislation and relevant regulations.
- Register the AI system for enhanced monitoring and compliance purposes.
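The nine requirement areas above lend themselves to evidence tracking. Below is a hypothetical sketch of such a checklist; the requirement keys paraphrase the list above and the class and field names are our own, not terms from the Act or any particular platform.

```python
from dataclasses import dataclass, field

# Paraphrased requirement areas for high-risk AI systems (our shorthand).
REQUIREMENTS = [
    "risk_management_system",
    "data_quality_and_governance",
    "design_and_development_records",
    "performance_and_maintenance_docs",
    "transparency_to_users",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
    "regulatory_compliance_processes",
    "registration",
]

@dataclass
class HighRiskSystemChecklist:
    system_name: str
    evidence: dict = field(default_factory=dict)  # requirement -> evidence note

    def outstanding(self) -> list:
        """Requirement areas with no evidence recorded yet."""
        return [r for r in REQUIREMENTS if r not in self.evidence]

cl = HighRiskSystemChecklist("resume-screening-model")
cl.evidence["registration"] = "registered 2025-01-15"
print(len(cl.outstanding()))  # 8
```

A structure like this makes gaps visible at a glance, which is the practical point of the Act’s documentation and record-keeping demands.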
General-Purpose AI Models
The Act incorporates provisions for General-Purpose AI (GPAI) models to address the evolving landscape of AI technologies. These models, capable of performing a wide range of tasks, are subject to specific requirements, particularly if classified as posing systemic risks. Providers of GPAI models must maintain technical documentation, cooperate with authorities, and ensure copyright compliance. Additionally, providers of GPAI models with systemic risk face further obligations such as standardized model evaluations and cybersecurity measures.
Deep Fakes
The Act introduces stringent transparency obligations for providers and deployers of AI systems and GPAI models, particularly concerning deep fakes. Transparency requirements are outlined, with exceptions for lawful uses such as criminal offense detection and artistic works.
Penalties
The Act imposes penalties for violations to enforce compliance, with considerations for SMEs and start-ups. Maximum fines are delineated for non-compliance with prohibitions and other provisions, emphasizing the importance of adherence to regulatory standards.
Implementation Timeline of the EU AI Act
The EU AI Act introduces regulations and outlines a structured timeline for its implementation. According to Article 85 of the Act, it enters into force 20 days after publication in the EU Official Journal, with most provisions applying 24 months later. However, specific provisions take effect at different intervals. For example, certain prohibitions apply six months after the Act enters into force, while codes of practice are expected to be prepared within nine months. Penalties for non-compliance become effective after 12 months, and there is a grace period for General-Purpose AI (GPAI) models, depending on their market status.
Obligations for high-risk AI systems will apply after 36 months. Member States must also take specific actions within certain time frames, such as designating authorities and establishing regulatory sandboxes. These timelines provide clarity for compliance professionals and organizations, helping them prepare for and comply with the EU AI Act’s requirements.
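The staged schedule can be summarized as a simple lookup. The February and August 2025 dates come from earlier in this article; the 2026 and 2027 dates are our reading of the 24- and 36-month offsets, and the labels are paraphrases, so treat this as an illustrative sketch rather than the authoritative schedule.

```python
from datetime import date

# Illustrative milestone dates; labels paraphrase the staged rollout above.
MILESTONES = {
    date(2025, 2, 2): "prohibitions on unacceptable-risk AI systems",
    date(2025, 8, 2): "obligations for GPAI model providers",
    date(2026, 8, 2): "most remaining provisions (24-month mark)",
    date(2027, 8, 2): "obligations for certain high-risk AI systems (36-month mark)",
}

def obligations_in_effect(on: date) -> list:
    """Return the milestones already enforceable on a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= on]

print(obligations_in_effect(date(2025, 7, 15)))
# ['prohibitions on unacceptable-risk AI systems']
```

A compliance team can run a check like this against its program calendar to see which obligations are already live and which are approaching.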
The EU AI Act and ISO/IEC 42001: A Comparison
The EU AI Act and ISO/IEC 42001 (Artificial Intelligence Management System, or AIMS) both address the pressing need for governance and regulation in the rapidly evolving landscape of artificial intelligence. However, they approach this objective differently, emphasizing distinct priorities and methodologies.
Commonalities
- Focus on Trust, Ethics, and Social Concerns: The EU AI Act and ISO/IEC 42001 recognize the importance of ensuring that AI systems are reliable, fair, transparent, and trustworthy. They aim to address societal concerns about AI’s impact on individuals, groups, and communities.
- Multidisciplinary Approach: Given the complexity of AI systems, both frameworks acknowledge the necessity of an interdisciplinary approach to AI governance. They emphasize the involvement of various stakeholders in implementing and managing AI systems.
- Transparency: Both frameworks emphasize the importance of transparency in AI development and deployment. They require organizations to maintain detailed records of the AI system’s design, development, operational processes, performance, and updates to ensure traceability and accountability.
Differences
- Scope and Applicability: The EU AI Act primarily focuses on regulating the development and use of AI technologies within the European Union. In contrast, ISO/IEC 42001 provides a certifiable framework for AI management systems that can be implemented by any organization worldwide, irrespective of geographical location.
- Regulatory vs. Management System: While the EU AI Act represents a regulatory framework imposed by governmental authorities, ISO/IEC 42001 offers a voluntary management system standard developed by international standards organizations.
- Granularity: The EU AI Act delineates specific requirements, prohibitions, and obligations for different categories of AI systems, such as high-risk AI systems and general-purpose AI models. In contrast, ISO/IEC 42001 provides a more generic framework for AI management systems, offering principles, guidelines, and controls that organizations can adapt to their specific contexts and needs.
- Alignment with Regulatory Requirements: The EU AI Act aligns with specific regulatory requirements and standards within the European Union, such as GDPR (General Data Protection Regulation) and other relevant directives. ISO/IEC 42001 does not inherently align with specific legal frameworks and regulations.
Next Steps
Organizations worldwide are grappling with the complexities presented by AI, whether on the development side or the implementation side. The EU AI Act’s multifaceted approach to governance and compliance resonates with the challenges businesses of all sizes and sectors face. From multinational corporations to small startups, aligning with evolving regulatory frameworks and ethical standards is paramount.
Central to navigating the ever-more-complex compliance landscape is an automated risk and compliance tool. At Centraleyes, our automated tools are like the GPS of compliance, seamlessly steering you through the maze and crosswalking controls across different regulations.
Think of it as the Single Sign-On (SSO) for regulatory compliance!
As we stand at the intersection of innovation and regulation, the EU AI Act serves as a reminder that the responsible development and use of AI technologies require collaborative efforts from governments, industry associations, standards bodies, and businesses. By prioritizing transparency, accountability, and ethical considerations, we can harness the transformative potential of AI while safeguarding fundamental rights and freedoms for all.
FAQs
1. Do I need to comply with the EU AI Act if my company is not based in the EU?
Yes. The EU AI Act applies extraterritorially, meaning that any company placing AI systems on the EU market or using AI within the EU, regardless of its headquarters, must comply with the relevant provisions.
2. What qualifies as a General-Purpose AI (GPAI) model under the Act?
GPAI models are AI systems that are not limited to a single intended purpose. They’re often foundational models, such as GPT-4, Gemini, or LLaMA, that can be adapted to a wide variety of downstream tasks. If your organization builds, fine-tunes, or integrates such models, you may fall under GPAI-related obligations.
3. What’s the difference between high-risk AI and GPAI obligations?
- High-risk AI systems are tied to specific applications (e.g., hiring tools, biometric access, credit scoring).
- GPAI obligations apply to the underlying models themselves, especially if they pose systemic risks.
It’s possible for a single organization to be subject to both sets of obligations, depending on its role and use of AI.
4. How will I know if my system is considered “high-risk”?
The Act defines high-risk categories in Annex III, but in general, if your AI is used in areas like education, employment, financial services, healthcare, or critical infrastructure, it may be high-risk. Centraleyes includes a structured assessment to help determine if your systems fall under these criteria.
5. What happens if the August 2, 2025, deadline is delayed?
As of now, no formal delay has been announced, but industry groups and political leaders are pushing for a postponement. Regardless of the outcome, preparing now is key, especially since fines and enforcement mechanisms are already live for some provisions.
6. Is there any overlap between the EU AI Act and ISO/IEC 42001?
Yes. While the EU AI Act is a regulatory law and ISO/IEC 42001 is a voluntary management system standard, both emphasize the importance of risk governance, transparency, documentation, and human oversight. Centraleyes smart-maps controls across both frameworks to streamline dual alignment.
Featured image designed by Freepik.


