Navigating New Frontiers: Colorado’s Groundbreaking AI Consumer Protection Law
The Colorado Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (the “Colorado AI Act” or the “Act”) is the first of its kind in the United States. It introduces comprehensive consumer protection measures targeting interactions with AI systems. This pioneering legislation, set to take effect on February 1, 2026, places new obligations on developers and deployers of high-risk AI systems, including enhanced transparency requirements and various consumer rights. The Colorado AI Act resembles the EU AI Act in, for example, applying a risk-based approach to regulating AI. However, there are also several differences, such as the Colorado AI Act’s more limited territorial scope and its more extensive requirements for deployers of high-risk AI systems. For more detail on the similarities and differences, see Colorado AI Act vs EU AI Act.
If your company does business in Colorado and either develops or deploys AI systems, the Act’s requirements may apply to you.
The Colorado AI Act will apply to developers and deployers. Developers are persons doing business in the state that develop or intentionally and substantially modify an AI system, while deployers are persons doing business in the state that deploy a high-risk AI system. Unlike many state consumer privacy laws, the Colorado AI Act does not set a threshold number of consumers to trigger applicability. And while both the Colorado AI Act and the Colorado Privacy Act (CPA) use the term “consumers,” the definitions differ: under the AI Act, the term refers simply to Colorado residents, whereas the CPA defines consumers as Colorado residents “acting only in an individual or household context,” excluding anyone acting in a commercial or employment context. Therefore, companies that may not be subject to the CPA may still have obligations under the Colorado AI Act.
Similar to the EU AI Act (see our alert—EU AI Act – Landmark Law on Artificial Intelligence Approved by the European Parliament), the bulk of the Colorado AI Act’s requirements apply to “high-risk AI systems.” These are defined as any artificial intelligence system that, when deployed, makes or is a substantial factor in making consequential decisions. Consequential decisions are those with a material legal or similarly significant effect on the provision or denial to any Colorado resident of, or the cost or terms of:
While this definition does not completely align with the EU AI Act’s definition of high-risk AI systems, there are overlapping areas of concern that are relevant to many companies. For example, the EU AI Act also treats the use of AI systems for decisions related to education, employment, and healthcare as high-risk.
The Colorado AI Act requires developers and deployers of these high-risk AI systems to use “reasonable care” to avoid algorithmic discrimination and establishes specific requirements for what constitutes reasonable care. Algorithmic discrimination is defined as “the use of an artificial intelligence system [that] results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of an actual or perceived” classification status protected by Colorado or federal law. Compliance with the requirements of the Colorado AI Act creates a rebuttable presumption that a developer or deployer used reasonable care to avoid algorithmic discrimination. An overview of the main requirements for deployers and developers is included below.
Obligations imposed on developers of high-risk AI systems include the following:
Obligations imposed on deployers of high-risk AI systems include the following:
In addition to the obligations above, deployers or developers that deploy, offer, sell, lease, license, give, or otherwise make available an AI system that interacts directly with consumers must inform consumers that they are interacting with an AI system, unless it would be obvious to a reasonable person.
The Colorado AI Act provides for some limited exemptions, including for:
The Attorney General has exclusive authority to enforce the Colorado AI Act as well as rule-making authority. Violations of the Colorado AI Act’s provisions constitute a deceptive trade practice. There is no private right of action.
Developers, deployers, and other persons have an affirmative defense to any action brought by the Attorney General if they:
Use of high-risk AI systems will likely also constitute profiling under the CPA where the personal data of consumers (as defined by the CPA) is processed. Entities subject to the CPA must provide consumers with notice of profiling and the right to opt out at or before any profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer. In addition, companies subject to the CPA must conduct and document a data protection assessment if the profiling presents a reasonably foreseeable risk to consumers of: (i) unfair or deceptive treatment, or unlawful disparate impact; (ii) financial or physical injury; (iii) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of a consumer, if the intrusion would be offensive to a reasonable person; or (iv) other substantial injury.
The Colorado AI Act is similar to the EU AI Act in a few ways. For example, both take a risk-based approach to regulating AI and require assessment and management of AI risks. There are, however, also several differences, such as the more limited territorial scope of the Colorado AI Act and the more significant requirements it imposes on deployers of AI systems. The table below summarizes some of these differences: