On May 8, 2024, the Colorado House of Representatives passed SB 205, a landmark bill regulating artificial intelligence (AI). The bill, which had already cleared the state Senate on May 3, positions Colorado to become the first state in the nation to enact comprehensive AI legislation. With Governor Jared Polis’s decision still pending, the bill’s potential enactment has significant implications for AI development and deployment within the state.
Introduction to the Colorado AI Act
With no federal AI regulation on the horizon, state legislatures like Colorado’s and Utah’s are filling the void. SB 205, the Colorado AI Act, is a pioneering effort to establish a regulatory framework for AI systems, particularly those classified as “high-risk.” This legislation aims to protect consumers from the potential harms of AI by imposing strict requirements on developers and deployers of high-risk AI systems.
The bill, titled “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the “Colorado AI Act”), is set to take effect on February 1, 2026.
Unlike the Colorado Privacy Act, which focuses on the rights of consumers regarding their personal data, the Colorado AI Act specifically addresses the use of AI in decision-making processes that impact consumers.
The Colorado AI consumer protection act targets conduct that is already illegal, such as unlawful discrimination in critical consumer activities like lending and employment. What is new is the detailed set of obligations it imposes on companies that use AI in making these decisions.
Key Provisions of the Colorado AI Act
The Colorado law regulating AI introduces several critical provisions designed to ensure ethical AI deployment:
- Algorithmic Discrimination Duty of Care: Developers and deployers of high-risk AI systems must exercise reasonable care to prevent algorithmic discrimination, meaning biased outcomes that affect individuals or groups based on protected classifications (a simple screening check of this kind is sketched after this list).
- AI Interaction Notices & Public Disclosures: Entities deploying AI systems intended to interact with consumers must disclose this interaction unless it’s obvious. Additionally, AI developers and deployers must publicly disclose the types of high-risk AI systems they work with and how they manage associated risks.
- High-Risk AI Developer Requirements: Developers must provide detailed information about their AI systems, including training data, performance evaluations, and safeguards against algorithmic discrimination, to both deployers and the Colorado Attorney General.
- High-Risk AI Deployer Requirements: Deployers must implement a comprehensive risk management policy, conduct regular impact assessments, and notify consumers of AI use in significant decision-making processes. They must also allow consumers to correct data and appeal adverse decisions influenced by AI.
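The duty of care is outcome-focused, and the statute does not dictate how to test for biased outcomes. As a purely illustrative sketch, not a method prescribed by SB 205, a deployer might screen decision data with a disparate-impact ratio along the lines of the informal “four-fifths rule”; the function names, sample data, and 0.8 threshold below are our assumptions:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Favorable-outcome rate per group.

    `decisions` is an iterable of (group, favorable) pairs,
    e.g. ("group_a", True). Returns {group: rate}.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below ~0.8 (the informal "four-fifths rule") is a common
    screening signal that a system may be producing the biased outcomes
    the Act describes as algorithmic discrimination.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Screen a batch of simulated lending decisions.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)
ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.69
if ratio < 0.8:
    print("flag: potential algorithmic discrimination; escalate for review")
```

A single ratio is only a screening signal; an actual risk management program would pair it with statistically rigorous testing, documentation, and human review.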
Scope and Applicability
SB 205 applies to any business operating within Colorado that uses high-risk AI systems, meaning AI systems that are a substantial factor in consequential decisions affecting consumers. Notably, deployers with fewer than 50 full-time employees are exempt from certain obligations under specified conditions, focusing the bill’s heaviest requirements on larger entities with more significant AI applications.
Compliance Requirements
Organizations under the Colorado AI regulation must adhere to stringent compliance measures, including:
- Regular audits of AI systems.
- Implementation of robust data protection and risk management strategies.
- Completion of annual impact assessments detailing AI usage and its effects.
These requirements aim to ensure that high-risk AI systems operate transparently and ethically, minimizing potential harm to consumers.
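The act does not prescribe a format for these artifacts, but many teams capture them as structured records so they can be produced on request. Below is a minimal, hypothetical sketch of what a deployer’s impact-assessment record might look like; the schema and field names are assumptions on our part, not statutory language:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """One impact-assessment record for a high-risk AI system.

    Illustrative only: the Act requires assessments covering purpose,
    known risks, data handled, and mitigations, but mandates no schema.
    """
    system_name: str
    assessed_on: date
    intended_purpose: str
    consequential_decision: str          # e.g. "consumer lending approval"
    data_categories: list = field(default_factory=list)
    discrimination_risks: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)

    def is_current(self, as_of: date, max_age_days: int = 365) -> bool:
        """True while the record is within an annual review window."""
        return (as_of - self.assessed_on).days <= max_age_days

record = ImpactAssessment(
    system_name="credit-underwriting-model-v3",
    assessed_on=date(2026, 2, 1),
    intended_purpose="Score consumer loan applications",
    consequential_decision="consumer lending approval",
    data_categories=["income", "credit history"],
    discrimination_risks=["proxy variables correlated with protected class"],
    mitigations=["quarterly disparate-impact testing", "human review of denials"],
)
print(record.is_current(as_of=date(2026, 12, 1)))  # True
```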
Enforcement and Penalties
The Colorado Attorney General has exclusive authority to enforce the act. Violations are treated as unfair trade practices under the Colorado Consumer Protection Act, exposing non-compliant companies to significant civil penalties. This enforcement mechanism underscores the state’s commitment to upholding the act’s standards.
Stakeholder Perspectives
Different stakeholders have varied views on the Colorado AI Act:
- Tech Companies
Tech companies are divided on the Colorado AI Act. Some view the legislation as necessary to ensure ethical AI development and maintain public trust in AI technologies. These companies argue that transparency and accountability can foster innovation by building consumer confidence. However, other tech companies express concerns about the increased regulatory burdens that could stifle innovation. They fear that the compliance costs and administrative overhead associated with the Act might slow the development and deployment of AI technologies, potentially putting them at a competitive disadvantage globally.
- Privacy Advocates
Privacy advocates generally support the Colorado AI Act, emphasizing its focus on transparency and consumer protection. They argue that the Act’s requirements for disclosure, risk assessments, and the prohibition of algorithmic discrimination are crucial steps towards safeguarding individual rights in the face of rapidly advancing AI technologies. Privacy advocates believe such regulations are essential to prevent misuse of AI, protect sensitive personal information, and ensure that AI systems are used responsibly.
- Legal Experts
Legal experts have highlighted several potential ambiguities and enforcement challenges within the Colorado AI Act. They point out that the definitions of high-risk AI systems and the specific requirements for compliance could be interpreted in various ways, leading to inconsistent application of the law. Legal experts call for clear guidelines and standardized procedures to enforce the Act uniformly across different sectors. They also stress the importance of establishing robust mechanisms for monitoring compliance and addressing potential legal disputes that may arise.
Challenges and Criticisms of the AI Law
Critics of Colorado’s AI law say that it presents several challenges:
- Increased Operational Costs for Compliance: The requirements for risk assessments, transparency measures, and ongoing monitoring of AI systems could significantly increase operational costs for companies. Smaller businesses, in particular, may find it challenging to allocate resources to meet these regulatory demands.
- Slowing Down AI Innovation and Deployment: The compliance burden associated with SB 205 might slow the pace of AI innovation and deployment. Companies may become more cautious in developing and implementing new AI technologies due to the fear of non-compliance and potential penalties.
- Creating Legal Uncertainties: The Act’s broad and sometimes vague language could lead to legal uncertainties. Companies may struggle to interpret and apply the requirements, raising the risk of litigation and inconsistent enforcement. These uncertainties could be exploited, potentially undermining the Act’s effectiveness.
Future Outlook
The Colorado AI Act is likely the first step in an evolving landscape of AI regulation. As AI technology advances and its applications become more widespread, the Act may undergo amendments to address new challenges and incorporate best practices. Future iterations of the law might refine definitions, clarify compliance requirements, and enhance enforcement mechanisms to keep pace with technological developments.
Colorado’s initiative could inspire other states to adopt similar regulations, contributing to a more cohesive national framework for AI governance. As more states implement their own AI regulations, there may be a push toward harmonizing these laws to reduce compliance complexity for companies operating across state lines. Ultimately, the goal will be to create a balanced regulatory environment that fosters innovation while ensuring ethical and responsible AI development.
State AI Legislation: An Emerging Trend
As artificial intelligence technology continues to evolve and permeate various sectors, the regulatory landscape around it is becoming increasingly complex. This complexity is particularly pronounced at the state level in the United States, where recent legislative activity suggests a trend toward comprehensive AI regulation.
Utah’s Artificial Intelligence Policy Act
On March 13, 2024, Utah made history by becoming the first US state to enact a broad consumer protection statute specifically governing AI with the passage of the Utah Artificial Intelligence Policy Act (AIPA). This groundbreaking legislation focuses on ensuring the transparent use of AI. Effective May 1, 2024, the AIPA imposes significant disclosure obligations on covered entities using generative AI (gen AI) technologies and establishes penalties for violations, including civil penalties calculated on a per-violation basis.
The AIPA has created a new regulatory body, the Office of Artificial Intelligence Policy, to promote innovation. This office is tasked with establishing an AI “Learning Laboratory Program” to analyze risks and benefits related to AI development and use. The program is designed to inform the state’s broader approach to regulating AI by inviting entities to participate and potentially entering into regulatory mitigation agreements. These agreements allow participants to mitigate certain regulatory consequences by implementing specific safeguards and limiting their technology’s use.
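In practice, the AIPA’s disclosure duty means a gen-AI interface must tell users they are dealing with a machine: proactively in certain regulated contexts, and on request otherwise. The wrapper below is a hypothetical sketch of one way that might be implemented; the trigger phrases, the `generate` callable, and the disclosure wording are all assumptions, as the statute does not specify an implementation:

```python
AI_DISCLOSURE = "You are interacting with an AI assistant, not a human."

# Assumed phrases that should trigger an on-request disclosure.
DISCLOSURE_TRIGGERS = ("are you human", "are you a bot", "are you ai")

def respond(user_message: str, generate, regulated_context: bool = False) -> str:
    """Wrap a gen-AI `generate(prompt) -> str` callable with disclosures.

    - In a regulated context, disclose proactively with every reply.
    - Otherwise, disclose whenever the user asks whether they are
      talking to an AI.
    """
    asked = any(t in user_message.lower() for t in DISCLOSURE_TRIGGERS)
    reply = generate(user_message)
    if regulated_context or asked:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

# Usage with a stand-in model:
def fake_model(prompt: str) -> str:
    return "Here is some general information."

print(respond("Are you a bot?", fake_model))
```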
Additional State Initiatives
Several other states regulate AI through existing consumer privacy laws and specific AI legislation, such as deepfake regulations in political advertising. Some examples include:
- California: Proposed regulations on automated decision-making under the CCPA.
- Tennessee: The ELVIS Act, prohibiting unauthorized AI simulations of a person’s voice or likeness.
- Texas and Minnesota: Laws restricting AI-generated content in political advertising to prevent misinformation and undue influence on elections.
Preparing for AI Regulation
As the wave of AI regulation continues to grow, companies must take proactive steps to ensure compliance. Here are key actions to consider:
- Inventory Existing AI Tools: Review and catalog all AI tools used within the organization, focusing on their use of personal data and potential consumer interactions (a minimal inventory sketch follows this list).
- Assess Applicability of State Laws: Determine if your organization qualifies as a developer or deployer of high-risk AI systems under the new regulations.
- Monitor Regulatory Developments: Stay informed about regulatory changes and enforcement activities to anticipate and adapt to evolving requirements.
- Implement Transparency Measures: As mandated by state laws, ensure clear and conspicuous disclosure of AI usage to consumers.
- Evaluate AI Outputs: Regularly monitor AI systems to identify and mitigate false, misleading, or discriminatory outputs.
- Conduct Employee Training: Educate employees on the legal requirements and best practices for using AI within the organization.
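For the inventory step above, even a lightweight structured catalog is an improvement over spreadsheets scattered across teams. The sketch below is a hypothetical starting point; the `likely_high_risk` heuristic is a rough screening aid we are assuming for illustration, not a legal test:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AITool:
    """One entry in an organization-wide AI tool inventory."""
    name: str
    vendor: str
    uses_personal_data: bool
    consumer_facing: bool
    consequential_decision: Optional[str]  # e.g. "employment screening"

    @property
    def likely_high_risk(self) -> bool:
        # Rough screening heuristic, not a legal determination: systems
        # that feed consequential consumer decisions are the kind SB 205
        # treats as high-risk.
        return self.consequential_decision is not None

inventory = [
    AITool("resume-ranker", "Acme HR", True, False, "employment screening"),
    AITool("support-chatbot", "in-house", True, True, None),
]
for tool in inventory:
    status = "review as high-risk" if tool.likely_high_risk else "standard review"
    print(f"{tool.name}: {status}")
```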
What’s Next?
The increasing regulatory focus on AI at the state level signifies a shift towards ensuring consumer protection and transparency in AI use. By staying informed and proactive, companies can navigate this complex landscape and leverage AI responsibly, fostering innovation while adhering to emerging legal standards.
The Colorado AI Act represents a significant milestone in AI regulation. By setting stringent standards for high-risk AI systems, it aims to protect consumers while fostering ethical AI development. As the first comprehensive state-level AI legislation, it sets a precedent for other states.