Overview of AI Regulations and Regulatory Proposals of 2023

“AI is too important not to regulate—and too important not to regulate well,” asserts Google, capturing the sentiment resonating across the global tech landscape. Indeed, the regulation of Artificial Intelligence looms large on the horizon, and in many ways, it’s already underway. 

Take the European Union’s ambitious AI Act, for instance, with its far-reaching rules designed to rein in AI applications that pose unacceptable risks. Meanwhile, China has implemented stringent regulations, mandating state review of algorithms to ensure alignment with core socialist values.

In contrast, the United States is taking a decentralized path. Rather than a sweeping national AI law, the U.S. will likely see a mosaic of bottom-up initiatives and executive-branch actions unfold in the coming years. While tailored to America’s regulatory landscape, this approach may not satisfy advocates pushing for comprehensive national AI regulation. Instead, the focus is expected to fall on targeted measures such as funding AI research and protecting children from AI-related harms.

As AI continues to reshape industries and societies, the regulatory landscape evolves in tandem. From the corridors of Brussels to the tech hubs of Silicon Valley, the push for responsible AI governance underscores the pivotal role of regulations in shaping the future of artificial intelligence.

Here’s a summary of notable AI regulations around the world.

United States

Federal Regulation:

In the United States, AI regulation is decentralized, with various federal initiatives addressing specific aspects of AI governance. The government has prioritized AI risk assessment and management, recognizing the importance of understanding algorithms’ decision-making processes. Legislative proposals such as the Algorithmic Accountability Act, DEEP FAKES Accountability Act, and Digital Services Oversight and Safety Act highlight efforts to enhance transparency and accountability in AI systems’ operations.

Executive Order 14110:

The Biden Administration has taken significant steps to promote responsible AI development through Executive Order 14110. This order outlines goals for ensuring AI’s safe, secure, and trustworthy development and use. It emphasizes risk mitigation, talent acquisition, worker protection, civil rights preservation, consumer protection, and international collaboration, signaling a commitment to comprehensive AI governance at the federal level.

NIST AI Risk Management Framework (AI RMF):

Released by the National Institute of Standards and Technology (NIST), the NIST AI RMF provides voluntary guidelines and recommendations for assessing and managing risks associated with AI technologies. This framework offers a structured approach to identifying, evaluating, and mitigating risks throughout the AI lifecycle, encompassing data quality, model transparency, fairness, accountability, and security. While not legally binding, the AI RMF is a valuable resource for organizations seeking to navigate the complexities of AI risk management.
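
To make this concrete, here is a minimal sketch of how an organization might structure an internal risk register around the AI RMF’s four core functions (Govern, Map, Measure, and Manage). The schema and names below are hypothetical illustrations, not an official NIST artifact:

```python
from dataclasses import dataclass, field
from enum import Enum

# The AI RMF organizes risk activities into four core functions.
class RmfFunction(Enum):
    GOVERN = "govern"    # policies, roles, and accountability
    MAP = "map"          # establish context, identify risks
    MEASURE = "measure"  # analyze and track identified risks
    MANAGE = "manage"    # prioritize and respond to risks

@dataclass
class RiskEntry:
    """One identified risk for an AI system (hypothetical schema)."""
    system: str            # e.g., "resume-screening-model-v2"
    function: RmfFunction  # where in the lifecycle the risk was raised
    description: str
    severity: int          # 1 (low) to 5 (critical)
    mitigation: str = "unassigned"

@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def open_high_risks(self, threshold: int = 4) -> list[RiskEntry]:
        """High-severity risks that still lack a mitigation owner."""
        return [e for e in self.entries
                if e.severity >= threshold and e.mitigation == "unassigned"]

register = RiskRegister()
register.add(RiskEntry(
    system="resume-screening-model-v2",
    function=RmfFunction.MAP,
    description="Training data may underrepresent some applicant groups",
    severity=4,
))
print(len(register.open_high_risks()))  # -> 1
```

Even a lightweight register like this gives a team one place to track risks from identification through mitigation, which is the lifecycle view the AI RMF encourages.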

State and Municipal Regulation:

In addition to federal efforts, several U.S. states and municipalities have implemented AI rules and regulations to address local concerns and priorities. States like California, Connecticut, Texas, and Illinois have enacted rules to balance innovation with oversight, while municipalities such as New York City have passed ordinances targeting specific AI applications, such as the use of automated tools in employment decisions (Local Law 144). This combination of federal, state, and local initiatives reflects ongoing efforts to promote responsible AI development and deployment while safeguarding societal interests.

EU

The EU AI Act has progressed to the trilogue stage, marking a significant step toward its finalization and implementation. Trilogue discussions bring together the European Commission, the Council of the EU, and the European Parliament to reach a consensus on the legislation.

Expected to be passed in early 2024, the EU AI Act encompasses a wide range of measures designed to regulate AI systems comprehensively. Some key proposals being debated in the trilogue include the prohibition of certain types of AI systems, such as those deemed manipulative, exploitative, or engaging in social scoring or real-time biometric identification.

Additionally, the act seeks to classify high-risk AI systems and establish stringent requirements for their compliance. It delegates regulatory and enforcement authorities, prescribes conformity standards, and mandates transparency obligations for AI systems interacting with individuals.
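
As an illustration of this tiered approach, the sketch below triages example use cases into the risk categories commonly described in draft texts of the Act. The classifications are illustrative only; the actual tier of any given system depends on the final legal text and its annexes:

```python
from enum import Enum

# Illustrative triage of example use cases into the Act's risk tiers,
# as commonly described in draft texts. Real classification depends on
# the final legal text and its annexes.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict conformity requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

EXAMPLE_CLASSIFICATION = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "CV screening for hiring decisions": RiskTier.HIGH,
    "creditworthiness scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```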

The act proposes innovation measures, establishes a governance framework involving the EU AI Board and national authorities, and mandates the creation of a database for high-risk systems. Other provisions include requirements for ongoing review after systems reach the market (post-market monitoring), the formation of codes of conduct for non-high-risk AI systems, and stipulations on confidentiality for authorities handling proprietary data.

The EU has positioned itself as a trailblazer by bringing AI regulation to this stage. The EU AI Act represents a comprehensive effort to regulate AI technologies, balancing innovation with ethical considerations and the protection of individuals’ rights and interests.

UK

In 2023, the UK Government reaffirmed its commitment to a “pro-innovation” and “context-specific” AI regulatory approach. This was exemplified by the publication of the AI White Paper on March 29, 2023, which proposed sector-specific oversight of AI development and usage, leveraging existing regulatory bodies within their respective remits: the Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA), and Ofcom.

Throughout the year, UK regulators issued guidance on AI use and regulation. The ICO published eight questions that developers and users of generative AI should be asking, while the FCA offered insights into the future of financial regulation in the context of AI. The CMA launched an initial review of competition and consumer protection considerations around AI foundation models, and Ofcom addressed the implications of generative AI for the communications sector.

In response to recommendations from the Pro-innovation Regulation of Technologies Review by Sir Patrick Vallance, the UK Government accepted suggestions concerning intellectual property (IP) and AI. However, plans to develop a code of practice on copyright and AI were abandoned in February 2024.

The House of Lords Communications and Digital Committee initiated an inquiry into large language models (LLMs) in July 2023, seeking public input on the effectiveness of government regulation of AI and on future technological capabilities.

The UK Government hosted the AI Safety Summit on November 1 and 2, 2023, attracting representatives from various countries, companies, and civil society groups. Attendees, including the US, China, Japan, EU member states, Korea, Singapore, and Brazil, endorsed the Bletchley Declaration, acknowledging both the potential benefits and risks of AI. UK Prime Minister Rishi Sunak announced the establishment of the UK AI Safety Institute, while US Vice President Kamala Harris announced the creation of a US AI Safety Institute housed within NIST, underscoring the global commitment to addressing AI safety concerns.

China

China has taken bold strides to pioneer regulatory frameworks in this dynamic field. Positioned as one of the leading nations in AI governance, Chinese lawmakers are actively shaping a comprehensive regulatory landscape to navigate the complexities of artificial intelligence.

Several regulations and policies have been swiftly enacted to govern specific facets of AI usage, reflecting China’s proactive stance toward fostering responsible AI practices. Notably, the Algorithmic Recommendation Management Provisions, currently in effect, govern the use of algorithmic recommendation systems. The Interim Measures for the Management of Generative AI Services have been rolled out to regulate generative AI technologies, and the Deep Synthesis Management Provisions, in effect since January 2023, oversee deep synthesis (deepfake) technologies. These initiatives underscore China’s dedication to nurturing a regulatory ecosystem that encourages ethical AI development while effectively addressing the emerging challenges and risks associated with its proliferation.

Canada

Canada stands at the forefront of AI regulation, poised to enact the Artificial Intelligence and Data Act (AIDA), a pivotal component of Bill C-27, signaling its commitment to fostering responsible innovation in the AI landscape. The proposed legislation is tailored to safeguard Canadians from the potential perils of AI while propelling the nation to the forefront of global AI advancement.

The AIDA boasts a multifaceted approach, aiming to ensure that high-impact AI systems adhere to established safety and human rights standards, thereby elevating Canadian values and firms in the global AI arena. With provisions aimed at curtailing reckless and malicious AI usage, the act delineates clear boundaries to mitigate harm. Empowering the Minister of Innovation, Science, and Industry to enforce its provisions underscores the act’s commitment to fostering compliance and accountability across sectors. Moreover, Canada has issued a Directive on Automated Decision-Making, imposing stringent mandates on the federal government’s use of automated decision-making systems. These initiatives underscore Canada’s dedication to promoting responsible AI development and deployment while safeguarding the rights and interests of its populace.

Brazil

Brazil is steadfastly advancing toward a robust regulatory framework for artificial intelligence with its proposed comprehensive AI Bill, a pivotal step toward responsible AI governance. Centered on human rights and accountability, the legislation contains ambitious provisions to regulate AI development and deployment.

The proposed AI Bill aims to ban certain “excessive risk” AI systems, a strategic move to mitigate potential societal harms. To bolster accountability, the bill envisages a dedicated regulatory body tasked with enforcement and oversight. It also introduces a civil liability regime for AI developers, providing recourse for harm caused by AI systems, and mandates the reporting of significant security incidents, strengthening transparency and risk management. Prioritizing individual rights, the bill guarantees access to explanations of AI-based decisions, prohibits discrimination, requires the correction of biases, and enshrines due process mechanisms. Brazil’s proactive approach underscores its commitment to ethical AI development while championing human rights and societal welfare.

Other Countries

At least eight other countries across the Americas and Asia are in various stages of developing their own AI regulatory approaches.

Practical Steps for Companies

  1. Stay Informed and Proactive

Companies should stay abreast of the evolving regulatory landscape by closely monitoring updates from relevant government agencies and industry associations. Proactively understanding new regulations and guidelines will enable companies to anticipate compliance requirements and mitigate potential risks.

  2. Conduct Compliance Assessments

Conduct thorough assessments of existing AI systems and processes to ensure alignment with emerging regulations and standards. Evaluate the impact of new rules on current practices, identify areas for improvement, and develop action plans to address any gaps in compliance.
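
As a starting point, a gap assessment can be as simple as comparing each system’s documented controls against a requirements checklist. The sketch below uses hypothetical requirement names for illustration; real checklists should be derived from the specific regulations that apply to your systems:

```python
# A minimal gap-assessment sketch: compare each AI system's documented
# controls against a checklist of illustrative regulatory requirements.
REQUIREMENTS = {
    "human_oversight": "A human can review and override system outputs",
    "transparency_notice": "Users are told they are interacting with AI",
    "bias_testing": "Outputs are tested for disparate impact",
    "incident_reporting": "Serious incidents are logged and escalated",
}

# Controls each system currently has in place (toy inventory).
SYSTEMS = {
    "chat-support-bot": {"transparency_notice", "incident_reporting"},
    "credit-scoring-model": {"human_oversight", "bias_testing"},
}

def gap_report(systems: dict[str, set[str]],
               requirements: dict[str, str]) -> dict[str, list[str]]:
    """Return, per system, the requirement keys it does not yet satisfy."""
    return {name: sorted(set(requirements) - controls)
            for name, controls in systems.items()}

for system, gaps in gap_report(SYSTEMS, REQUIREMENTS).items():
    print(f"{system}: missing {gaps}")
```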

  3. Invest in Compliance Resources

Allocate resources toward building internal expertise or partnering with external consultants to navigate complex regulatory requirements effectively. Investing in compliance training for employees involved in AI development, deployment, and management will enhance awareness of and adherence to regulatory obligations.

  4. Enhance Transparency and Accountability

Emphasize transparency and accountability in AI systems by documenting processes, decision-making algorithms, and data sources. Implement robust governance frameworks to ensure responsible AI development and deployment, including mechanisms for monitoring and addressing biases, discrimination, and ethical concerns.
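
One concrete building block for this kind of documentation is structured decision logging: recording, for every automated decision, which model version produced it and from which data sources. The sketch below is a minimal, hypothetical example of such an audit record; the field names are illustrative:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_decision_audit")

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: str, data_sources: list[str]) -> None:
    """Append one structured, replayable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # ties the decision to a release
        "data_sources": data_sources,    # provenance for later audits
        "inputs": inputs,                # redact sensitive fields upstream
        "output": output,
    }
    audit_log.info(json.dumps(record))

# Example: record a (hypothetical) screening decision.
log_decision(
    model_id="resume-screener",
    model_version="2.3.1",
    inputs={"years_experience": 7, "role": "data engineer"},
    output="advance_to_interview",
    data_sources=["hr_ats_export_2023q4"],
)
```

Structured records like these make it possible to answer, after the fact, which model version made a given decision and on what basis, which is exactly what transparency-oriented rules tend to require.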

  5. Collaborate with Regulators and Industry Partners

Foster collaborative relationships with regulatory authorities, industry peers, and stakeholders to contribute to the development of responsible AI laws, regulations, and standards. Engage in constructive dialogue with regulators to provide feedback on proposed rules, share best practices, and advocate for policies that balance innovation with consumer protection and societal well-being.

  6. Stay Agile and Adaptive

Remain agile and adaptive in response to evolving regulatory requirements and market dynamics. Continuously assess and adjust compliance strategies in light of new regulations, technological advancements, and emerging industry trends. Embrace a continuous improvement and innovation culture to stay ahead of regulatory developments and maintain a competitive edge in AI.
