Artificial Intelligence

Colorado Enacts Law Regulating High-Risk Artificial Intelligence Systems

American Bar Association’s Business Law Today – July 23, 2024

On May 17, 2024, Colorado enacted SB 205, broadly regulating the use of high-risk artificial intelligence systems to protect consumers from unfavorable and unlawful differential treatment. The bill, which requires compliance beginning February 1, 2026, provides that both developers and deployers of high-risk artificial intelligence systems must comply with extensive monitoring and reporting requirements to demonstrate that reasonable care has been taken to prevent algorithmic discrimination. A violation of the requirements set forth in Colorado SB 205 constitutes an unfair trade practice under Colorado’s Consumer Protection Act.

What Is an Artificial Intelligence System?

Colorado SB 205 defines “artificial intelligence system” as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

The artificial intelligence system becomes “high risk” when it is deployed to make, or is a substantial factor in making, a consequential decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service.

A high-risk artificial intelligence system does not include, among other exclusions, technology that communicates with consumers in natural language to provide information, make referrals or recommendations, and answer questions, and that is subject to an accepted use policy prohibiting the generation of discriminatory or harmful content.
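
For readers sketching compliance tooling, the definition above reduces to a simple test. The following Python sketch is purely illustrative; the enum, flag names, and function are this article's summary recast as code, not anything SB 205 prescribes, and the exclusion is collapsed into a single boolean for brevity.

```python
from enum import Enum, auto
from typing import Optional

class ConsequentialDomain(Enum):
    """The eight areas in which a decision counts as 'consequential'."""
    EDUCATION = auto()
    EMPLOYMENT = auto()
    FINANCIAL_OR_LENDING = auto()
    ESSENTIAL_GOVERNMENT_SERVICE = auto()
    HEALTH_CARE = auto()
    HOUSING = auto()
    INSURANCE = auto()
    LEGAL_SERVICE = auto()

def is_high_risk(substantial_factor_in_decision: bool,
                 domain: Optional[ConsequentialDomain],
                 qualifies_for_exclusion: bool) -> bool:
    """A system is high risk when it makes, or is a substantial factor in
    making, a consequential decision in a listed area, unless an exclusion
    (such as the accepted-use-policy-bound conversational tool) applies."""
    if qualifies_for_exclusion:
        return False
    return substantial_factor_in_decision and domain is not None

# Example: a tenant-screening model used to decide housing applications.
assert is_high_risk(True, ConsequentialDomain.HOUSING, False)
```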

Affirmative Obligations for Developers

Colorado SB 205 requires a developer of a high-risk artificial intelligence system to make available to the deployer, meaning the user of the artificial intelligence system, the following (an illustrative data model appears after the list):

  1. A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk artificial intelligence system;
  2. Documentation disclosing:
    1. High-level summaries of the type of data used to train the high-risk artificial intelligence system;
    2. Known or reasonably foreseeable limitations of the high-risk artificial intelligence system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk artificial intelligence system;
    3. The purpose of the high-risk artificial intelligence system;
    4. The intended benefits and uses of the high-risk artificial intelligence system; and
    5. All other information necessary to allow the deployer to comply with the requirements of Section 6-1-1703 [Deployer Duty to Avoid Algorithmic Discrimination];
  3. Documentation describing:
    1. How the high-risk artificial intelligence system was evaluated for performance and mitigation of algorithmic discrimination before the high-risk artificial intelligence system was offered, sold, leased, licensed, given, or otherwise made available to the deployer;
    2. The data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases, and appropriate mitigation;
    3. The intended outputs of the high-risk artificial intelligence system;
    4. The measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and
    5. How the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when the high-risk artificial intelligence system is used to make, or is a substantial factor in making, a consequential decision; and
  4. Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitor the performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
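
Taken together, these items amount to a structured disclosure package. The dataclass below is a minimal sketch of that package, assuming a deployer-facing compliance record; every field name is this sketch's own shorthand keyed to the numbered items above, not statutory language.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeveloperDisclosurePackage:
    """Hypothetical record of the items a developer makes available to a
    deployer, keyed to the numbered items in the article's summary."""
    general_statement_of_uses: str           # item 1: foreseeable and known harmful or inappropriate uses
    training_data_summary: str               # item 2.1: high-level summary of training data types
    known_limitations_and_risks: str         # item 2.2: limitations and discrimination risks
    system_purpose: str                      # item 2.3
    intended_benefits_and_uses: str          # item 2.4
    deployer_compliance_information: str     # item 2.5: what the deployer needs for Section 6-1-1703
    evaluation_and_mitigation_summary: str   # item 3.1: pre-release performance and bias evaluation
    data_governance_measures: str            # item 3.2: dataset suitability, bias, and mitigation review
    intended_outputs: str                    # item 3.3
    risk_mitigation_measures: str            # item 3.4
    use_and_monitoring_guidance: str         # item 3.5: how the system should and should not be used
    additional_documentation: List[str] = field(default_factory=list)  # item 4
```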

Affirmative Obligations for Deployers

Colorado SB 205 requires deployers of a high-risk artificial intelligence system to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. Reasonable care is demonstrated by the deployer’s implementation of a risk management policy and program governing the deployment of the high-risk artificial intelligence system, completion of an annual impact assessment, and disclosure to consumers when they are interacting with an artificial intelligence system or when the system has made a decision adverse to the consumer’s interests.

Risk Management Policy and Program

The risk management policy and program must be “an iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates.” It must incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination.

The risk management policy and program must be reasonable considering the following factors (an illustrative record appears after the list):

  1. (a) The guidance and standards set forth in the latest version of the “Artificial Intelligence Risk Management Framework” published by the National Institute of Standards and Technology in the United States Department of Commerce, Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, if the standards are substantially equivalent to or more stringent than the requirements of [the bill]; or (b) Any risk management framework for artificial intelligence systems that the Attorney General, in the Attorney General’s discretion, may designate;
  2. The size and complexity of the deployer;
  3. The nature and scope of the high-risk artificial intelligence systems deployed by the deployer, including the intended uses of the high-risk artificial intelligence systems; and
  4. The sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed by the deployer.
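
As a rough illustration, a deployer's program could be recorded against these four factors as follows; the enum values and field names are assumptions of this sketch, and the framework list simply restates the options named above.

```python
from dataclasses import dataclass
from enum import Enum

class RiskFramework(Enum):
    """The framework options the article names for benchmarking a program."""
    NIST_AI_RMF = "NIST Artificial Intelligence Risk Management Framework"
    ISO_IEC_42001 = "ISO/IEC 42001"
    OTHER_RECOGNIZED = "another recognized framework, substantially equivalent or more stringent"
    AG_DESIGNATED = "a framework designated by the Colorado Attorney General"

@dataclass
class RiskManagementProgram:
    """Hypothetical record pairing a chosen framework with the three
    deployer-specific reasonableness factors listed above."""
    framework: RiskFramework
    deployer_size_and_complexity: str    # factor 2
    systems_nature_scope_and_uses: str   # factor 3
    data_sensitivity_and_volume: str     # factor 4
```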

Impact Assessment

An impact assessment must be completed annually and within ninety days after any intentional and substantial modification to the high-risk artificial intelligence system is made available. The impact assessment must include, at a minimum, and to the extent reasonably known by or available to the deployer (an illustrative schema follows this subsection):

  1. A statement by the deployer disclosing the purpose, intended use cases, and deployment context of, and benefits afforded by, the high-risk artificial intelligence system;
  2. An analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks of algorithmic discrimination and, if so, the nature of the algorithmic discrimination and the steps that have been taken to mitigate the risks;
  3. A description of the categories of data the high-risk artificial intelligence system processes as inputs and the outputs the high-risk artificial intelligence system produces;
  4. If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize the high-risk artificial intelligence system;
  5. Any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;
  6. A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that the high-risk artificial intelligence system is in use when the high-risk artificial intelligence system is in use; and
  7. A description of the post-deployment monitoring and user safeguards provided concerning the high-risk artificial intelligence system, including the oversight, use, and learning process established by the deployer to address issues arising from the deployment of the high-risk artificial intelligence system.

The impact assessment must also include a statement disclosing the extent to which the high-risk artificial intelligence system was used in a manner that was consistent with or varied from the developer’s intended uses of the high-risk artificial intelligence system. A deployer must maintain the most recently completed impact assessment, all records concerning each impact assessment, and all prior impact assessments, if any, for at least three years following the final deployment of the high-risk artificial intelligence system.
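
The cadence and retention rules lend themselves to simple date arithmetic, and the minimum contents to a fixed schema. The sketch below assumes a year of 365 days for illustration; the field and function names are this sketch's own, keyed to the numbered items above.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class ImpactAssessment:
    """Hypothetical schema covering the minimum contents listed above."""
    completed_on: date
    purpose_uses_context_benefits: str        # item 1
    discrimination_risk_analysis: str         # item 2: risks and mitigation steps
    input_and_output_data_categories: str     # item 3
    customization_data_overview: Optional[str]  # item 4: only if the deployer customized the system
    performance_metrics_and_limitations: str  # item 5
    transparency_measures: str                # item 6
    post_deployment_monitoring: str           # item 7
    consistency_with_intended_uses: str       # closing statement on the developer's intended uses

def next_assessment_due(last_completed: date,
                        modified_on: Optional[date] = None) -> date:
    """Annual cadence, accelerated to ninety days after an intentional and
    substantial modification; a year is approximated as 365 days."""
    annual_due = last_completed + timedelta(days=365)
    if modified_on is not None:
        return min(annual_due, modified_on + timedelta(days=90))
    return annual_due

def retention_deadline(final_deployment: date) -> date:
    """Impact-assessment records must be kept at least three years after
    the final deployment of the system."""
    return final_deployment + timedelta(days=3 * 365)
```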

Notice to Consumer

On and after February 1, 2026, and no later than the time that a deployer deploys a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision concerning a consumer, the deployer must do the following (an illustrative notice record appears after the list):

  1. Notify the consumer that the deployer has deployed a high-risk artificial intelligence system to make, or be a substantial factor in making, a consequential decision before the decision is made;
  2. Provide to the consumer a statement disclosing the purpose of the high-risk artificial intelligence system and the nature of the consequential decision; the contact information for the deployer; a description, in plain language, of the high-risk artificial intelligence system; and instructions on how to access the statement . . . ; and
  3. Provide to the consumer information, if applicable, regarding the consumer’s right to opt out of the processing of personal data concerning the consumer for purposes of profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer. . . .
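
A minimal sketch of the pre-decision notice, assuming it is assembled as a single record; the fields track the three items above, with the opt-out information optional because the article says it applies only "if applicable."

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreDecisionConsumerNotice:
    """Hypothetical pre-decision notice; fields track the three items above."""
    system_purpose: str                  # purpose of the high-risk system
    decision_nature: str                 # nature of the consequential decision
    deployer_contact: str                # contact information for the deployer
    plain_language_description: str      # plain-language description of the system
    statement_access_instructions: str   # how to access the fuller statement
    profiling_opt_out_information: Optional[str] = None  # only "if applicable"
```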

The deployer must also comply with substantial notice requirements if the high-risk artificial intelligence system makes a consequential decision that is adverse to the consumer, and must allow the consumer to correct any incorrect personal data that the system processed in making the decision and to appeal the adverse decision.

If a deployer deploys a high-risk artificial intelligence system and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, must send to the Attorney General, in a form and manner prescribed by the Attorney General, a notice disclosing the discovery.

A deployer who uses a high-risk artificial intelligence system that is intended to interact with consumers must ensure that each consumer who interacts with the system is informed that they are interacting with an artificial intelligence system. Disclosure is not required under circumstances in which it would be obvious to a reasonable person that they are interacting with an artificial intelligence system.

Website Disclosures

A developer must make available, in a manner that is clear and readily available on the developer’s website or in a public use case inventory, a statement summarizing:

  1. The types of high-risk artificial intelligence systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer or other developer; and
  2. How the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of the types of high-risk artificial intelligence systems described in accordance with [the above].

Similarly, a deployer must also make available on its website a statement summarizing:

  1. The types of high-risk artificial intelligence systems that are currently deployed by the deployer;
  2. How the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each high-risk artificial intelligence system . . . ; and
  3. In detail, the nature, source, and extent of the information collected and used by the deployer.

Exemptions

These requirements do not apply to a deployer if, at the time the deployer deploys a high-risk artificial intelligence system and at all times while the high-risk artificial intelligence system is deployed, the following conditions are met (an eligibility-check sketch appears after the list):

  1. The deployer:
    1. Employs fewer than 50 full-time equivalent employees; and
    2. Does not use the deployer’s own data to train the high-risk artificial intelligence system;
  2. The high-risk artificial intelligence system:
    1. Is used for the intended uses that are disclosed to the deployer as required by [the developer]; and
    2. Continues learning based on data derived from sources other than the deployer’s own data; and
  3. The deployer makes available to consumers an impact assessment that:
    1. The developer of the high-risk artificial intelligence system has completed and provided to the deployer; and
    2. Includes information that is substantially similar to the information in the impact assessment required [to be submitted by the deployer pursuant to the requirements of the bill].
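
Because the exemption is a conjunction of conditions, it can be stated as a single boolean check. The sketch below is illustrative only; the parameter names are this sketch's own, and real eligibility would require the conditions to hold continuously while the system is deployed, not just at one point in time.

```python
def deployer_exempt(fte_employees: int,
                    trains_with_own_data: bool,
                    used_only_for_disclosed_intended_uses: bool,
                    learns_only_from_non_deployer_data: bool,
                    developer_impact_assessment_available: bool) -> bool:
    """Illustrative conjunction of the exemption conditions listed above:
    the deployer's size and data practices, the system's use and learning
    sources, and availability of the developer's impact assessment."""
    return (fte_employees < 50
            and not trains_with_own_data
            and used_only_for_disclosed_intended_uses
            and learns_only_from_non_deployer_data
            and developer_impact_assessment_available)

# Example: a 30-person deployer meeting every condition would be exempt.
assert deployer_exempt(30, False, True, True, True)
```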

Reprinted with permission from the American Bar Association’s Business Law Today.

