Colorado is poised to become the first U.S. state to enact sweeping legislation governing the use of artificial intelligence (AI) in employment and other pivotal sectors. Passed by the state legislature on May 8, Senate Bill 24-205 (SB205) now awaits Governor Jared Polis's signature. Slated to take effect in 2026, the bill aims to prevent algorithmic discrimination by imposing compliance obligations on both developers and deployers of high-risk AI systems.
### Definition and Coverage Under SB205
The bill defines "high-risk artificial intelligence systems" as machine-based systems that make, or are a substantial factor in making, consequential decisions in fields including:
* Employment and job opportunities
* Educational admissions and opportunities
* Financial or lending services
* Essential government services
* Healthcare services
* Housing
* Insurance
* Legal services
An AI system warrants the high-risk label when it makes, or substantially influences, consequential decisions affecting individual rights or opportunities, with the potential to produce unlawful differential treatment based on protected characteristics such as age, disability, race, religion, or sex.
### Applicability
SB205 pertains to:
**Developers**: Entities doing business in Colorado that develop, or intentionally and substantially modify, a high-risk AI system.
**Deployers**: Entities doing business in Colorado that deploy high-risk AI systems.
Exemptions from certain deployer obligations may apply to small businesses with fewer than 50 full-time employees.
### Compliance Mandates
Effective February 1, 2026, involved entities must:
For **Developers**:
- Furnish detailed information to deployers, including known or reasonably foreseeable misuses and high-level summaries of the data used to train the system.
- Post a public declaration on their websites outlining their AI systems and risk management tactics.
- Report any risks of algorithmic discrimination to the attorney general.
For **Deployers**:
- Set up and periodically evaluate a detailed risk management policy.
- Perform annual impact assessments of AI systems, plus additional reviews following significant changes.
- Inform consumers when high-risk AI systems play a part in significant decision-making processes, providing detailed disclosures on their websites.
- Ensure consumers know when they are interacting with an AI system, unless that would be obvious to a reasonable person.
### General Protocols
Both developers and deployers must exercise due diligence to prevent algorithmic discrimination. Deployers need to report any discriminatory outcomes identified by their AI systems to the attorney general. Developers are required to notify the attorney general and all known deployers about any newly discovered risks of discrimination.
### Consumer Information and Rights
Businesses utilizing high-risk AI systems need to provide detailed notifications to individuals affected by these systems, clarifying:
- The purpose and functionality of the AI system.
- The type of decisions influenced by the AI.
- The right to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects.
- Relevant contact information and access to the public statement on AI usage.
### Enforcement
The Colorado attorney general has exclusive authority to enforce SB205, treating violations as unfair or deceptive trade practices. While the bill creates no private right of action, businesses may claim an affirmative defense if they discover and cure violations through user feedback, adversarial testing, or internal review.
Colorado employers must brace for substantial compliance demands under SB205, which could set a precedent for nationwide AI regulation. Businesses should build solid AI risk management programs, conduct regular impact assessments, and maintain clear consumer disclosures to meet the requirements of this groundbreaking law. Employers elsewhere should also stay vigilant, as similar measures under consideration in other states signal a trend toward stricter AI governance.