A Bill for an Act
Concerning consumer protections in interactions with artificial intelligence systems.
Bill Summary
(Note: This summary applies to this bill as introduced and does not reflect any amendments that may be subsequently adopted. If this bill passes third reading in the house of introduction, a bill summary that applies to the reengrossed version of this bill will be available at http://leg.colorado.gov.)
In 2024, the general assembly enacted Senate Bill 24-205, which created consumer protections in interactions with artificial intelligence systems (provisions). The bill amends these provisions by:
- Redefining "algorithmic discrimination" to mean the use of an artificial intelligence system that results in a violation of any applicable local, state, or federal anti-discrimination law;
- Creating an exception to the definition of "developer" of an artificial intelligence system (developer) if a person offers the artificial intelligence system with open model weights or if the person meets specified conditions regarding the artificial intelligence system;
- Exempting specified technologies that do not make, or are not a substantial factor in making, a consequential decision from the definition of "high-risk artificial intelligence system";
- Eliminating the duty of a developer or deployer of a high-risk artificial intelligence system (deployer) to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination;
- Eliminating the requirement that a developer or deployer notify the attorney general of any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system;
- Exempting a developer from specified disclosure requirements if the developer has received less than $10,000,000 from third-party investors, has annual revenues of less than $5,000,000, and has been actively operating and generating revenue for less than 5 years and sells, distributes, or otherwise makes available to deployers high-risk artificial intelligence systems that do not exceed specified limits on the number of consequential decisions made by the systems;
- Requiring a deployer to include in an impact assessment whether the system poses any known or reasonably foreseeable risks of limiting accessibility for certain individuals, an unfair or deceptive trade practice, a violation of state or federal labor laws, or a violation of the "Colorado Privacy Act";
- Requiring a deployer to provide additional information to a consumer if the high-risk artificial intelligence system makes, or is a substantial factor in making, a consequential decision concerning the consumer;
- Amending provisions regarding a consumer's right to appeal an adverse consequential decision concerning the consumer so that the provisions apply only to an adverse consequential decision that is not a time-limited decision or a competitive decision;
- Clarifying the meaning of "adverse" when referring to a consequential decision;
- Broadening an exemption for a deployer from specified disclosure requirements based on the deployer's number of full-time equivalent employees;
- Exempting a deployer from specified requirements if the deployer uses the high-risk artificial intelligence system solely relating to the recruitment, sourcing, or hiring of external candidates for employment and meets specified disclosure requirements;
- Applying specified requirements only to high-risk artificial intelligence systems that make, or are the principal basis in making, consequential decisions;
- Requiring a developer or deployer that withholds information otherwise subject to disclosure to provide specified information regarding the disclosure; and
- Requiring that the attorney general's authority to investigate and enforce violations of the provisions begins on January 1, 2027.
Be it enacted by the General Assembly of the State of Colorado:
SECTION 1. In Colorado Revised Statutes, 6-1-1701, amend (1)(a), (3), (5), (6), (7), (10), and (11)(a) introductory portion; repeal and reenact, with amendments, (9)(b); and add (2.7), (10.3), (11.7), (13), and (14) as follows:
6-1-1701. Definitions. As used in this part 17, unless the context otherwise requires:
(1) (a) "Algorithmic discrimination" means any condition in which the use of an artificial intelligence system that results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law a violation of any applicable local, state, or federal anti-discrimination law, including:
(I) The Colorado anti-discrimination act, parts 3 to 8 of article 34 of title 24;
(II) The "Civil Rights Act of 1964", 42 U.S.C. sec. 2000a et seq.;
(III) The "Americans with Disabilities Act of 1990", 42 U.S.C. sec. 12101 et seq.;
(IV) The "Age Discrimination in Employment Act of 1967", 29 U.S.C. sec. 621 et seq.;
(V) The "Genetic Information Nondiscrimination Act of 2008", 42 U.S.C. sec. 2000ff et seq.; and
(VI) The "Pregnant Workers Fairness Act", 42 U.S.C. sec. 2000gg et seq.
(2.7) "Competitive decision" means a decision regarding a consumer where a favorable decision has been made regarding another consumer and that favorable decision necessarily entails an adverse decision for the consumer, such as a decision regarding a job opportunity for which there are no remaining openings.
(3) (a) "Consequential decision" means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of:
(a) (I) Education enrollment or an education opportunity;
(b) (II) Employment or an employment opportunity;
(c) (III) (A) A financial or lending service loan, financing, or credit for an individual for personal, family, or household purposes from a financial lending or credit service;
(B) Consumer credit transactions, as defined in section 5-1-301 (12); or
(C) Banking or credit union services for an individual, including banking transactions, as defined in section 11-101-401 (9), but excluding banking or credit union services primarily relating to securities, as defined in section 11-51-201 (17); derivatives transactions, as defined in 17 CFR 270.18f-4, as that section existed on July 1, 2025; or services provided to an individual who is an accredited investor, as defined in 17 CFR 230.501, as that section existed on July 1, 2025;
(d) (IV) An essential government service, which is a service that is provided by the state; a municipality, township, county, or home rule county; or a subdivision or agency of government and which is needed to support the continuing operation of the government agency or to provide for or support the health, safety, and welfare of the public, including medicare, medicaid, compliance monitoring, enforcement of laws, permitting, and licensing;
(e) (V) Health-care services;
(f) (VI) Housing, with respect to the purchase or renting of a primary residence, including short-term tenancy and transitional housing if it serves as a consumer's primary residence;
(g) (VII) Insurance; or
(h) (VIII) A legal service.
(b) A consequential decision is "adverse" if the consequential decision results in:
(I) The denial, cancellation, termination, or revocation of employment or of a good, a service, or other thing of value to the consumer;
(II) An unfavorable change to the terms of existing employment or the terms of access to a good, a service, or other thing of value to the consumer;
(III) The denial or refusal to grant employment or a good, a service, or other thing of value on substantially the same terms as those originally represented to and expected by the consumer; or
(IV) An offer of employment or a good, a service, or other thing of value to the consumer on material terms that are materially less favorable than the most favorable terms available to a substantial proportion of consumers from or through that deployer.
(5) "Deploy" means to use a high-risk artificial intelligence system or an artificial intelligence system described in section 6-1-1704.
(6) "Deployer" means a person doing business in this state, or an agent of that person, that deploys a high-risk artificial intelligence system or an artificial intelligence system described in section 6-1-1704.
(7) (a) "Developer" means a person doing business in this state, that develops or intentionally and substantially modifies an artificial intelligence system. or an agent of that person, that:
(I) Develops an artificial intelligence system; or
(II) Modifies an artificial intelligence system that makes, or is a substantial factor in making, a consequential decision.
(b) Except as provided in section 6-1-1704, a person is not subject to the obligations or liability of a developer under this part 17 if the person offers the artificial intelligence system with open model weights or, on and after January 1, 2027, so long as the developer:
(I) Does not engage in any material conduct or make any material statement or representation to vendors, deployers, other developers, or the general public, including marketing or advertising, that promotes the use of the artificial intelligence system in making consequential decisions or that is materially inconsistent with the statements described in subsection (7)(b)(II) of this section; or
(II) States in all contracts with deployers, vendors, and other developers applicable to the artificial intelligence system, and in the terms of service, end user license agreement, or other similar legal documentation applicable to the artificial intelligence system, that:
(A) The artificial intelligence system is not designed to enable deployers, other developers, or vendors to use the system in making, or being a substantial factor in making, consequential decisions;
(B) Deployers, other developers, or vendors shall not use or engage in conduct that enables or encourages the use of the artificial intelligence system in making, or being a substantial factor in making, consequential decisions;
(C) If a deployer uses the artificial intelligence system to make, or be a substantial factor in making, a consequential decision, the deployer is responsible for ensuring that their use of the artificial intelligence system complies with all applicable state and federal laws, including this part 17; and
(D) If a deployer, a vendor, or other developer modifies the artificial intelligence system so that it can be used to make, or be a substantial factor in making, consequential decisions, the party making the modification may be considered a developer for purposes of this part 17.
(9) (b) "High-risk artificial intelligence system" does not include the following technologies unless the technologies, when deployed, make, or are a substantial factor in making, a consequential decision:
(I) A technology that:
(A) Performs a narrow procedural task of a limited nature, including a technology that classifies incoming documents into categories, is used to detect duplicates among a large number of applications, categorizes documents based on when they were received, renames files according to standardized naming conventions, or automates the extraction of metadata for indexing;
(B) Improves a previously completed human activity without being a substantial factor in any decisions resulting from the prior human activity, including improving the language used in previously drafted documents; or
(C) Detects decision-making patterns or deviations from preexisting decision-making patterns following a previously completed human assessment, which assessment the technology is not meant to replace or influence without sufficient human review, including a technology that analyzes a particular decision-maker's preexisting pattern of decisions and flags potential inconsistencies or anomalies;
(II) Tools for filtering robocalls, junk or spam email, or messages;
(III) Spell-checking tools;
(IV) Calculators;
(V) Internet or computer network infrastructure optimization, diagnostic, or maintenance tools, such as domain name registration, website hosting, content delivery, web caching, network traffic management, or system diagnostic tools;
(VI) Databases, spreadsheets, or other similar tools that do no more than organize data already in the possession of the user of the technology;
(VII) Cybersecurity and data security measures, including firewalls, antivirus software, intrusion detection and prevention tools, and malware detection tools;
(VIII) Technologies used to perform, assist, or administer office support functions and other ancillary business operations, such as ordering office supplies, managing meeting schedules, or automating inventory tracking;
(IX) Anti-fraud systems or tools used to prevent, detect, or respond to unlawful and malicious conduct or to comply with federal or state law; or
(X) Technology that communicates with consumers in natural language for the purpose of providing those consumers with information, referrals or recommendations, or answers to questions and that is subject to an acceptable use policy.
(10) (a) "Intentional and substantial modification" or "intentionally and substantially modifies" means a deliberate change made to an artificial intelligence system that results in any new reasonably foreseeable risk of algorithmic discrimination "Model weights" means the numerical parameters within a model that are generated by or are a component of an artificial intelligence system and that help determine the model's output in response to inputs.
(b) "Intentional and substantial modification" or "intentionally and substantially modifies" does not include a change made to a high-risk artificial intelligence system, or the performance of a high-risk artificial intelligence system, if:
(I) The high-risk artificial intelligence system continues to learn after the high-risk artificial intelligence system is:
(A) Offered, sold, leased, licensed, given, or otherwise made available to a deployer; or
(B) Deployed;
(II) The change is made to the high-risk artificial intelligence system as a result of any learning described in subsection (10)(b)(I) of this section;
(III) The change was predetermined by the deployer, or a third party contracted by the deployer, when the deployer or third party completed an initial impact assessment of such high-risk artificial intelligence system pursuant to section 6-1-1703 (3); and
(IV) The change is included in technical documentation for the high-risk artificial intelligence system.
(10.3) "Open model weights" means, with respect to an artificial intelligence system, that the developer:
(a) Places the artificial intelligence system in the public domain without any license or reservation of rights or makes the artificial intelligence system available under a license that allows any member of the public to copy, distribute, modify, and use the artificial intelligence system's model weights without permission, payment, royalties, or fees; and
(b) Provides sufficiently detailed information about other components of the model, artificial intelligence system, or training data for a person skilled in artificial intelligence to correctly interpret the model weights and utilize them effectively in other artificial intelligence systems.
(11) (a) "Substantial factor" means, except as provided in section 6-1-1703 (6.7), a factor that:
(11.7) "Time-limited decision" means a decision relating to a good, a service, or an opportunity that has an end or expiration date that is established prior to the commencement of the decision-making process.
(13) "Unitary business" means a single economic enterprise made up either of separate parts of a single entity or of an affiliated group of entities that are sufficiently interdependent, integrated, and interrelated through their activities so as to provide a synergy and mutual benefit that produces a sharing or exchange of value among them and a significant flow of value to the separate parts.
(14) "Vendor" means a person that knowingly sells, offers for sale, or distributes an artificial intelligence system to a deployer or to another vendor.
SECTION 2. In Colorado Revised Statutes, 6-1-1702, amend (2) introductory portion, (2)(a), (2)(c)(III), (3)(a), (4), (6), and (7); repeal (1) and (5); and add (8) and (9) as follows:
6-1-1702. Developer duty to avoid algorithmic discrimination - required documentation - applicability - exempt developers. (1) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system. In any enforcement action brought on or after February 1, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a developer used reasonable care as required under this section if the developer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 6-1-1707.
(2) On and after February 1, 2026, and January 1, 2027, except as provided in subsection (6) of this section, a developer of a high-risk artificial intelligence system shall make available to the each deployer or other developer of the high-risk artificial intelligence system:
(a) A general statement describing the reasonably foreseeable intended uses and known harmful or inappropriate uses of the high-risk artificial intelligence system;
(c) Documentation describing:
(III) The intended inputs and outputs of the high-risk artificial intelligence system;
(3) (a) Except as provided in subsection (6) of this section, a developer that offers, sells, leases, licenses, gives, or otherwise makes available to a deployer or other developer a high-risk artificial intelligence system on or after February 1, 2026 January 1, 2027, shall make available to the deployer or other developer, to the extent feasible, the documentation and information, through artifacts such as model cards, dataset cards, or other impact assessments, necessary for a deployer, or for a third party contracted by a deployer, to complete an impact assessment pursuant to section 6-1-1703 (3).
(4) (a) On and after February 1, 2026 January 1, 2027, a developer shall make available, in a manner that is clear and readily available on the developer's website or in a public use case inventory, a statement summarizing:
(I) The types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and
(II) How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with subsection (4)(a)(I) of this section.
(b) A developer shall update the statement described in subsection (4)(a) of this section (I) as necessary to ensure that the statement remains accurate. and
(II) No later than ninety days after the developer intentionally and substantially modifies any high-risk artificial intelligence system described in subsection (4)(a)(I) of this section.
(5) On and after February 1, 2026, a developer of a high-risk artificial intelligence system shall disclose to the attorney general, in a form and manner prescribed by the attorney general, and to all known deployers or other developers of the high-risk artificial intelligence system, any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system without unreasonable delay but no later than ninety days after the date on which:
(a) The developer discovers through the developer's ongoing testing and analysis that the developer's high-risk artificial intelligence system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination; or
(b) The developer receives from a deployer a credible report that the high-risk artificial intelligence system has been deployed and has caused algorithmic discrimination.
(6) Nothing in subsections (2) to
(5) (4) of this section requires a developer to disclose a trade secret, information otherwise protected from disclosure by applicable state or federal law, or information that would create a security risk to the developer. If a developer withholds information from a disclosure pursuant to this subsection (6), the developer shall notify the person that would otherwise have a right to receive the information, state the basis for withholding the information, and provide all information to which the basis for withholding does not apply. The notification must comply with the requirements of section 6-1-1703 (4)(c).
(7) (a) A developer shall maintain all documentation, disclosures, and other records required by subsections (2) to (4) of this section with respect to each high-risk artificial intelligence system throughout the period during which the developer sells, markets, distributes, or makes available the high-risk artificial intelligence system and for at least three years following the last date on which the developer sells, markets, distributes, or makes available the high-risk artificial intelligence system.
(b) On and after February 1, 2026 January 1, 2027, the attorney general may require that a developer disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the statement or documentation described in subsection (2) of this section or the records maintained pursuant to subsection (7)(a) of this section. The attorney general may evaluate such the statement, or documentation, or records to ensure compliance with this part 17, and the statement, or documentation, is or records are not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure required pursuant to this subsection (7), a developer may designate the statement, or documentation, or records as including proprietary information or a trade secret or information otherwise protected from disclosure by the "Colorado Open Records Act", part 2 of article 72 of title 24. To the extent that any information contained in the statement, or documentation, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.
(8) Subsections (2)(c), (2)(d), and (4) of this section do not apply to a developer that:
(a) Meets the requirements of sections 24-48.5-112 (1)(g)(III) and (1)(g)(IV); and
(b) Sells, distributes, or otherwise makes available to deployers high-risk artificial intelligence systems that deployers use to make:
(I) Beginning April 1, 2027, and before March 31, 2028, ten thousand or fewer consequential decisions in the preceding calendar year;
(II) Beginning April 1, 2028, and before March 31, 2029, five thousand or fewer consequential decisions in the preceding calendar year; and
(III) Beginning April 1, 2029, and before March 31, 2030, two thousand five hundred or fewer consequential decisions in the preceding calendar year.
(9) Nothing in this section applies to a developer of an artificial intelligence system to the extent that:
(a) The artificial intelligence system produces or consists of a score, a model, an algorithm, or similar output that is a consumer report, as defined by and subject to the "Fair Credit Reporting Act", 15 U.S.C. sec. 1681a (d)(1), related regulations, and part 1 of article 18 of title 5; and
(b) The developer adheres to the "Fair Credit Reporting Act", 15 U.S.C. sec. 1681 et seq., including 15 U.S.C. secs. 1681e and 1681g.
SECTION 3. In Colorado Revised Statutes, 6-1-1703, amend (2)(a) introductory portion, (3)(a), (3)(b)(II), (3)(b)(III), (3)(b)(V), (3)(g), (4)(a) introductory portion, (4)(a)(II), (4)(b), (5)(a) introductory portion, (6), (8), and (9); repeal (1), (3)(c), (3)(f), and (7); and add (4)(d), (6.3), (6.5), (6.7), and (10) as follows:
6-1-1703. Deployer duty to avoid algorithmic discrimination - risk management policy and program - definitions. (1) On and after February 1, 2026, a deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought on or after February 1, 2026, by the attorney general pursuant to section 6-1-1706, there is a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required under this section if the deployer complied with this section and any additional requirements or obligations as set forth in rules promulgated by the attorney general pursuant to section 6-1-1707.
(2) (a) On and after February 1, 2026 January 1, 2027, and except as provided in subsection (6) subsections (6) and (8) of this section, a deployer of a high-risk artificial intelligence system shall implement a risk management policy and program to govern the deployer's deployment of the high-risk artificial intelligence system. The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management policy and program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates. A risk management policy and program implemented and maintained pursuant to this subsection (2) must be reasonable considering:
(3) (a) Except as provided in subsections (3)(d), (3)(e), and (6) (3)(d), (3)(e), (5), (6), and (8) of this section, (I) a deployer, or a third party contracted by the deployer, that deploys a high-risk artificial intelligence system on or after February 1, 2026 January 1, 2027, shall complete an impact assessment for the high-risk artificial intelligence system:
(I) Prior to the first deployment of the high-risk artificial intelligence system or January 1, 2027, whichever occurs later; and
(II) Annually for as long as the high-risk artificial intelligence system is deployed.
(II) On and after February 1, 2026, a deployer, or a third party contracted by the deployer, shall complete an impact assessment for a deployed high-risk artificial intelligence system at least annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available.
(b) An impact assessment completed pursuant to this subsection (3) must include, at a minimum, and to the extent reasonably known by or available to the deployer:
(II) An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of:

(A) Algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks;

(B) Limiting accessibility for individuals who are pregnant, breastfeeding, or disabled and, if so, what reasonable accommodations the deployer may provide that would mitigate any such limitations on accessibility;

(C) An unfair or deceptive trade practice described in section 6-1-105;

(D) A violation of state or federal labor laws, including laws pertaining to wages, occupational health and safety, and the right to organize; or

(E) A violation of the "Colorado Privacy Act", part 13 of this article 1, if applicable;

(III) A description of the categories and sources of data that the high-risk artificial intelligence system processes as inputs and the outputs that the high-risk artificial intelligence system produces;

(V) A description of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system, including the system's validity and reliability;
(c) In addition to the information required under subsection (3)(b) of this section, an impact assessment completed pursuant to this subsection (3) following an intentional and substantial modification to a high-risk artificial intelligence system on or after February 1, 2026, must include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with, or varied from, the developer's intended uses of the high-risk artificial intelligence system.

(f) A deployer shall maintain the most recently completed impact assessment for a high-risk artificial intelligence system as required under this subsection (3), all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system.

(g) On or before February 1, 2026, and at least annually thereafter Beginning January 1, 2027, a deployer, or a third party contracted by the deployer, must review the deployment of each high-risk artificial intelligence system deployed by the deployer annually to ensure that the high-risk artificial intelligence system is not causing algorithmic discrimination.
(4) (a) On and after February 1, 2026, and no later than the May 1, 2026, except as provided in subsection (6) of this section, before each time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer shall:

(II) Provide to the consumer a statement disclosing:

(A) The purpose of the high-risk artificial intelligence system and the nature of the consequential decision;

(B) The trade name of the high-risk artificial intelligence system and the name of the developer or developers of the high-risk artificial intelligence system;

(C) The contact information for the deployer;

(D) A description, in plain language, of the high-risk artificial intelligence system, and which description must, at a minimum, include the respective roles of the high-risk artificial intelligence system and any human components of the decision-making process; the personal aspects concerning the consumer's economic situation, health, personal preferences, interests, reliability, behavior, location, or movements that the high-risk artificial intelligence system evaluates, analyzes, or predicts; the method by which the high-risk artificial intelligence system evaluates, analyzes, or predicts those personal aspects; how those personal aspects are relevant to the consequential decisions for which the high-risk artificial intelligence system is used; and information sufficient for consumers with disabilities or other consumers entitled to accommodation under applicable law to determine whether they will require accommodation and, if so, how to request the accommodation; and

(E) Instructions on how to access the statement required by subsection (5)(a) of this section; and
(b) On and after February 1, 2026 May 1, 2026, a deployer that has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer shall, if the consequential decision is adverse to the consumer, provide to the consumer, without unreasonable delay and no later than thirty days after the decision:

(I) A statement disclosing the principal reason or reasons for the consequential decision, single notice that discloses:

(A) The main reason or reasons for the consequential decision, including the degree to which, and manner in which, the high-risk artificial intelligence system contributed to the consequential decision and the categories and sources of data that adversely affected the output of the high-risk artificial intelligence system in making, or being a substantial factor in making, the consequential decision, including any categories and sources of sensitive data, as defined in section 6-1-1303 (24);

(B) The type of data that was Information on whether and how the consumer can exercise their rights described in subsection (4)(b)(II) of this section and, if applicable, subsection (4)(b)(III) of this section and section 6-1-1306 (1)(b) with respect to any personal data processed by the high-risk artificial intelligence system; in making the consequential decision; and

(C) The source or sources A copy of the data described in subsection (4)(b)(I)(B) of this section notice provided to the consumer pursuant to this subsection (4)(b)(I);

(II) An opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision in the same manner as described in section 6-1-1306 (1)(c); and

(III) For a consequential decision that is not a competitive decision, not a time-limited decision, and is adverse based on incorrect personal data or unlawful information or inferences, an opportunity to appeal an the adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review, unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer.

(d) A deployer shall not use a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision if the deployer cannot provide accurate disclosures that satisfy the requirements of subsections (4)(a) and (4)(b)(I) of this section.
(5) (a) On and after February 1, 2026, and January 1, 2027, except as provided in subsection (6) of this section, a deployer shall make available, in a manner that is clear and readily available on the deployer's website, a statement summarizing:

(6) Subsections (2), (3) (4)(b)(II), (4)(b)(III), and (5) of this section do not apply to a deployer if, at the time the deployer deploys a the high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed:

(a) The deployer:

(I) Beginning January 1, 2027, and before March 31, 2028, employs fewer than fifty five hundred full-time equivalent employees and worldwide;

(II) Beginning April 1, 2028, and before March 31, 2029, employs fewer than two hundred fifty full-time equivalent employees worldwide; and

(III) Beginning April 1, 2029, employs fewer than one hundred full-time equivalent employees worldwide;

(II) (a.5) Does The developer and deployer do not use the deployer's own data to train the high-risk artificial intelligence system;

(b) The high-risk artificial intelligence system:

(I) Is used for the intended uses that are disclosed to the deployer as required by section 6-1-1702 (2)(a); and

(II) Continues learning based on data derived from sources other than the deployer's own data; and

(c) The deployer makes available to consumers any impact assessment that:

(I) The developer of the high-risk artificial intelligence system has completed and provided to the deployer; and

(II) Includes information that is substantially similar to the information in the impact assessment required under subsection (3)(b) of this section.
(6.3) Subsections (2), (4)(b), and (5) of this section do not apply to a deployer's use of a high-risk artificial intelligence system to the extent that:

(a) The deployer uses the high-risk artificial intelligence system in consequential decisions solely relating to the recruitment, sourcing, or hiring of external candidates for employment; and

(b) The requirements of subsections (6)(b) and (6)(c) of this section are met with respect to the high-risk artificial intelligence system.

(6.5) For purposes of subsections (6) and (6.3) of this section, if a deployer is part of a unitary business and deploys a high-risk artificial intelligence system that is provided or made available to the deployer through, or that is paid in whole or in part by, another entity within the unitary business, the calculation of the number of full-time equivalent employees with respect to the high-risk artificial intelligence system is based on the total number of employees across the unitary business.

(6.7) (a) Subsections (2), (3), (4)(b)(II), and (4)(b)(III) of this section apply only to high-risk artificial intelligence systems that make, or are the principal basis in making, consequential decisions.

(b) (I) As used in this subsection (6.7), unless the context otherwise requires, "principal basis" means the use of the output of a high-risk artificial intelligence system to make a consequential decision without meaningful human involvement.

(II) As used in this subsection (6.7)(b), "meaningful human involvement" means that a human:

(A) Engages in a meaningful consideration of available data that is used or produced as output by the high-risk artificial intelligence system; and

(B) Has the authority to change or influence the outcome of the consequential decision.
(7) If a deployer deploys a high-risk artificial intelligence system on or after February 1, 2026, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the attorney general, in a form and manner prescribed by the attorney general, a notice disclosing the discovery.

(8) Nothing in subsections (2) to (5) and (7) of this section requires a deployer to disclose a trade secret or information otherwise protected from disclosure by applicable state or federal law. To the extent that a deployer withholds information from a disclosure pursuant to this subsection (8), or section 6-1-1705 (5), the deployer shall notify the consumer and provide a person that would otherwise have a right to receive the information, state the basis for the withholding, and provide all information to which the basis for withholding does not apply. Notification that a deployer provides pursuant to this subsection (8) must satisfy the requirements of subsection (4)(c) of this section.

(9) (a) A deployer shall maintain all documentation, disclosures, and other records required by subsections (2) to (5) of this section throughout the period during which the deployer deploys the high-risk artificial intelligence system and for at least three years following the final deployment of each high-risk artificial intelligence system by the deployer.
(b) On and after February 1, 2026 January 1, 2027, the attorney general may require that a deployer, or a third party contracted by the deployer, disclose to the attorney general, no later than ninety days after the request and in a form and manner prescribed by the attorney general, the risk management policy implemented pursuant to subsection (2) of this section, the impact assessment completed pursuant to subsection (3) of this section, or the records maintained pursuant to subsection (3)(f) (9)(a) of this section. The attorney general may evaluate the risk management policy, impact assessment, or records to ensure compliance with this part 17, and the risk management policy, impact assessment, and records are not subject to disclosure under the "Colorado Open Records Act", part 2 of article 72 of title 24. In a disclosure pursuant to this subsection (9), a deployer may designate the statement, or documentation, or records as including proprietary information or a trade secret or information otherwise protected from disclosure by applicable state or federal law. To the extent that any information contained in the risk management policy, impact assessment, or records includes information subject to attorney-client privilege or work-product protection, the disclosure does not constitute a waiver of the privilege or protection.

(10) Nothing in this section creates a private right of action or provides a consumer with any new or additional rights under any other law, nor does this section limit or restrict any preexisting rights or remedies to consumers or provide any new or additional defenses to deployers, with respect to any other law.
SECTION 4. In Colorado Revised Statutes, 6-1-1704, amend (1) as follows:

6-1-1704. Disclosure of an artificial intelligence system to consumer. (1) On and after February 1, 2026, and January 1, 2027, except as provided in subsection (2) of this section, a deployer or other developer that deploys, offers, sells, leases, licenses, gives, or otherwise makes available an artificial intelligence system that is intended to interact with consumers shall ensure the disclosure disclose to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system.

SECTION 5. In Colorado Revised Statutes, 6-1-1705, amend (1)(f), (1)(h), (3), (6), and (8)(a); repeal (1)(d), (2), (4), and (5); and add (1)(d.5), (1)(j), (1)(k), and (10) as follows:

6-1-1705. Compliance with other legal obligations - definitions. (1) Nothing in this part 17 restricts a developer's, a deployer's, or other person's ability to:
(d) Investigate, establish, exercise, prepare for, or defend legal claims;

(d.5) Prosecute or defend legal claims during ongoing or imminently anticipated legal proceedings, including complying with the rules of procedure, rules of evidence, or other applicable rules or orders before a court, an administrative enforcement agency, or other legal tribunal of competent jurisdiction;

(f) By any means other than the use Except for uses of facial recognition technology otherwise prohibited by applicable law, prevent, detect, protect against, or respond to security incidents or illegal or tortious activity such as identity theft or fraud, harassment, malicious or deceptive activities, or illegal activity; or investigate, report, or prosecute the persons responsible for any such action; or preserve the integrity or security of systems that illegal or tortious activity;

(h) Conduct research, testing, and development activities regarding an artificial intelligence system or model, other than testing conducted under real-world conditions, before the artificial intelligence system or model is used to make, or is used as a substantial factor in making, a consequential decision or is otherwise placed on the market, deployed, or put into service, as applicable; or

(j) Effectuate a product recall; or

(k) Identify and repair technical errors that impair existing or intended functionality.
(2) The obligations imposed on developers, deployers, or other persons under this part 17 do not restrict a developer's, a deployer's, or other person's ability to:

(a) Effectuate a product recall; or

(b) Identify and repair technical errors that impair existing or intended functionality.

(3) The obligations imposed on developers, deployers, or other persons under this part 17 do not apply where compliance with this part 17 by the developer, deployer, or other person would violate an evidentiary privilege An act taken by a developer, a deployer, or other person to comply with their obligations under this part 17 shall not be construed as a waiver of any evidentiary privilege recognized under the laws of this state, and nothing in this part 17 shall be construed as limiting or expanding the scope of any evidentiary privilege recognized under the laws of this state.

(4) Nothing in this part 17 imposes any obligation on a developer, a deployer, or other person that adversely affects the rights or freedoms of a person, including the rights of a person to freedom of speech or freedom of the press that are guaranteed in:
(a) The first amendment to the United States constitution; or

(b) Section 10 of article II of the state constitution.

(5) Nothing in this part 17 applies to a developer, a deployer, or other person:

(a) Insofar as the developer, deployer, or other person develops, deploys, puts into service, or intentionally and substantially modifies, as applicable, a high-risk artificial intelligence system:

(I) That has been approved, authorized, certified, cleared, developed, or granted by a federal agency, such as the federal food and drug administration or the federal aviation administration, acting within the scope of the federal agency's authority, or by a regulated entity subject to the supervision and regulation of the federal housing finance agency; or

(II) In compliance with standards established by a federal agency, including standards established by the federal office of the national coordinator for health information technology, or by a regulated entity subject to the supervision and regulation of the federal housing finance agency, if the standards are substantially equivalent or more stringent than the requirements of this part 17;

(b) Conducting research to support an application for approval or certification from a federal agency, including the federal aviation administration, the federal communications commission, or the federal food and drug administration, or research to support an application otherwise subject to review by the federal agency;

(c) Performing work under, or in connection with, a contract with the United States department of commerce, the United States department of defense, or the national aeronautics and space administration, unless the developer, deployer, or other person is performing the work on a high-risk artificial intelligence system that is used to make, or is a substantial factor in making, a decision concerning employment or housing; or

(d) That is a covered entity within the meaning of the federal "Health Insurance Portability and Accountability Act of 1996", 42 U.S.C. secs. 1320d to 1320d-9, and the regulations promulgated under the federal act, as both may be amended from time to time, and is providing health-care recommendations that:

(I) Are generated by an artificial intelligence system;

(II) Require a health-care provider to take action to implement the recommendations; and

(III) Are not considered to be high risk.

(6) Nothing in this part 17 applies to any artificial intelligence system to the extent that the artificial intelligence system:
(a) Is acquired by or for the federal government or any federal agency or department, including the United States department of commerce, the United States department of defense, or the national aeronautics and space administration; unless the artificial intelligence system is a high-risk artificial intelligence system that is used to make, or is a substantial factor in making, a decision concerning employment or housing.

(b) Is necessary to comply with applicable federal law; or

(c) Has been specifically approved by a federal agency or department for use in making a consequential decision.
(8) (a) A bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or any affiliate or subsidiary thereof is in full compliance with this part 17 if the bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or affiliate or subsidiary is subject to examination by a state or federal prudential regulator under any published guidance or regulations that apply to the use of high-risk artificial intelligence systems, and the guidance or regulations, at a minimum, require the bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or affiliate or subsidiary to:

(I) Impose requirements that are substantially equivalent to or more stringent than the requirements imposed in this part 17; and

(II) At a minimum, require the bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or affiliate or subsidiary to:

(A) (I) Regularly audit the bank's, out-of-state bank's, credit union chartered by the state of Colorado's, federal credit union's, out-of-state credit union's, or affiliate's or subsidiary's use of high-risk artificial intelligence systems for compliance with state and federal anti-discrimination laws and regulations applicable to the bank, out-of-state bank, credit union chartered by the state of Colorado, federal credit union, out-of-state credit union, or affiliate or subsidiary; and

(B) (II) Mitigate any algorithmic discrimination caused by the use of a high-risk artificial intelligence system or any risk of algorithmic discrimination that is reasonably foreseeable as a result of the use of a high-risk artificial intelligence system; and

(III) Notify affected consumers that the high-risk artificial intelligence system is being used and of the categories and sources of personal data it processes when it makes, or is a substantial factor in making, a consequential decision.
(10) If a developer or deployer withholds information pursuant to a provision in this section for which disclosure would otherwise be required by this part 17, the developer or deployer shall notify the person that would otherwise have a right to receive the information, state the basis for withholding the information, cite the provision that authorizes the withholding of the information, and provide all information to which the basis for withholding does not apply. The notification must comply with the requirements of section 6-1-1703 (4)(c).

SECTION 6. In Colorado Revised Statutes, 6-1-1706, amend (1), (2), (3)(a) introductory portion, (3)(a)(III), (3)(b) introductory portion, (3)(b)(III), (4), and (5); repeal (3)(a)(I); and add (3)(a.5) and (3)(c) as follows:

6-1-1706. Enforcement by attorney general. (1) Notwithstanding section 6-1-103, the attorney general has exclusive authority to enforce this part 17 and may investigate and enforce violations of this part 17 beginning on January 1, 2027.
(2) Except as provided in subsection (3) of this section, a each violation of the requirements established in this part 17 constitutes an unfair trade practice pursuant to section 6-1-105 (1)(hhhh).

(3) In any action commenced by the attorney general to enforce this part 17, it is an affirmative defense that the developer, deployer, or other person:

(a) Discovers and cures a Discovered a curable violation of this part 17 as a result of:

(I) Feedback that the developer, deployer, or other person encourages deployers or users to provide to the developer, deployer, or other person;

(III) An internal review process; and

(a.5) Cured the violation described in subsection (3)(a) of this section within seven days after its discovery;

(b) Was at all relevant times otherwise in compliance with this part 17 and:

(III) Any risk management framework for artificial intelligence systems that the attorney general, in the attorney general's discretion, may designate and, if has designated shall and publicly disseminate disseminated; and

(c) Demonstrates that the violation of this part 17 was inadvertent, affected fewer than one thousand consumers, and was not the result of negligence on the part of the developer, the deployer, or other person asserting the defense.

(4) A developer, a deployer, or other person bears the burden of demonstrating to the attorney general that the requirements established described in subsection (3) of this section for establishing an affirmative defense have been satisfied.

(5) Nothing in this part 17, including the enforcement authority granted to the attorney general under this section, preempts or otherwise affects any right, claim, remedy, presumption, or defense available at law or in equity. A rebuttable presumption or An affirmative defense established under this part 17 applies only to an enforcement action brought by the attorney general pursuant to this section and does not apply to any right, claim, remedy, presumption, or defense available at law or in equity.
SECTION 7. In Colorado Revised Statutes, amend 6-1-1707 as follows:

6-1-1707. Rules. (1) The attorney general may promulgate adopt rules as necessary for the purpose of implementing and enforcing this part 17, including:

(a) The documentation and requirements for developers pursuant to section 6-1-1702 (2);

(b) The contents of and requirements for the notices and disclosures required by sections 6-1-1702 (5) and (7) (3); 6-1-1703 (3) and (4) (5), (7), and (9); and 6-1-1704;

(c) The content and requirements of the risk management policy and program required by section 6-1-1703 (2);

(d) The content and requirements of the impact assessments required by section 6-1-1703 (3);

(e) The requirements for the rebuttable presumptions set forth in sections 6-1-1702 and 6-1-1703; and

(f) (e) The requirements for the affirmative defense set forth in section 6-1-1706 (3), including the process by which the attorney general will recognize any other nationally or internationally recognized risk management framework for artificial intelligence systems; and

(f) Clarification of what constitutes a "consequential decision", as defined in section 6-1-1701 (3).
SECTION 8. Act subject to petition - effective date. This act takes effect at 12:01 a.m. on the day following the expiration of the ninety-day period after final adjournment of the general assembly; except that, if a referendum petition is filed pursuant to section 1 (3) of article V of the state constitution against this act or an item, section, or part of this act within such period, then the act, item, section, or part will not take effect unless approved by the people at the general election to be held in November 2026 and, in such case, will take effect on the date of the official declaration of the vote thereon by the governor.