April 11, 2023 - 4 minute read

Regulation Spotlight: Colorado Draft AI Insurance Regulation

The proliferation of AI across industries has regulators working hard to keep pace. Government agencies recognize the security, ethical, and operational risks that AI models pose to the public, and are engaged in an ongoing dialogue with relevant stakeholders to develop a suite of regulatory responses.

As a part of this process, a concerted effort has been made to articulate voluntary guidelines and guardrails for safe and transparent AI systems in the United States, such as:

  1. Principles on AI, National Association of Insurance Commissioners (April 2020)
  2. Blueprint for an AI Bill of Rights, Office of Science and Technology Policy (October 2022)
  3. AI Risk Management Framework, National Institute of Standards and Technology (January 2023)

However, navigating the AI regulatory environment can be difficult. A myriad of government agencies have issued guidance that brings AI under existing regulations, while new laws and regulations have also been proposed at the state, federal, and international levels. Since companies need to comply with the laws and regulations of the jurisdictions where they conduct business, it’s important that they stay ahead of all relevant proposals. Staying on top of this is proving especially difficult for non-technical compliance teams, which typically need to engage with data scientists in a lengthy and inefficient process to satisfy requirements.

In February 2023, the Colorado Division of Insurance (CDOI) broke new ground in the AI regulation space by releasing a draft Algorithm and Predictive Model Governance Regulation to ensure that life insurers are using external consumer data and information sources (ECDIS), algorithms, and AI models responsibly. The Colorado AI regulation draft is a significant step forward for AI governance.

The draft rules impose AI governance and risk management requirements on Colorado-licensed life insurance companies that use AI systems in their insurance practices. The draft was required as a follow-up to state senate bill SB21-169 ("Protecting Consumers from Unfair Discrimination in Insurance Practices"), signed into law in July 2021. While technically specific to Colorado-licensed life insurers, the rules would impact national and regional insurers alike. These AI governance rules are also likely to influence a broad set of state, federal, and even international AI regulations, because for the first time a set of concrete rules is laid out for companies to follow and map to.

CDOI’s draft AI regulation would require life insurers to have robust AI governance and risk management frameworks, as well as meet specific documentation and reporting requirements. A summary of the proposed requirements of the draft is as follows:

1. Governance and Risk Management Framework

Life insurers that use algorithms and predictive models relying on ECDIS must establish a governance and risk management framework that supports policies, procedures, and systems designed to determine that data sources are credible and that insurance practices do not result in unfair discrimination.
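To make the intent of such a framework concrete, the sketch below shows one simple statistical check that a fairness testing process might include: an adverse impact ratio (the "80% rule"). The data, group labels, and threshold are purely illustrative assumptions and are not taken from the CDOI draft.

```python
# Illustrative sketch only -- not language from the CDOI draft. It assumes a binary
# approval outcome and a single protected attribute, and uses the "80% rule"
# adverse impact ratio as one simple proxy for unfair discrimination.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                         reference_group: str) -> pd.Series:
    """Approval rate of each group divided by the reference group's approval rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Hypothetical underwriting decisions (group labels and outcomes are made up).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   1],
})

ratios = adverse_impact_ratio(decisions, "group", "approved", reference_group="A")
flagged = ratios[ratios < 0.8]  # groups falling below the common 80% threshold
print(ratios)
print("Potentially disparate outcomes for:", list(flagged.index))
```

In practice, a governance framework would pair many such tests with documented thresholds, escalation procedures, and periodic re-testing as models and data sources change.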

2. Documentation

Life insurers must maintain comprehensive documentation for all algorithms and/or predictive models they use, keep that documentation under regular review and update, and make it easily accessible.
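As a purely illustrative sketch, documentation of this kind is often kept as a structured record so it can be versioned, reviewed, and produced on request. The fields below are assumptions for illustration; the CDOI draft does not prescribe a schema.

```python
# A hypothetical, minimal documentation record for one predictive model.
# Field names are illustrative assumptions, not requirements from the draft.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    name: str
    purpose: str                      # e.g., underwriting, pricing, claims triage
    ecdis_sources: list[str]          # external consumer data and information sources used
    owner: str                        # accountable team or individual
    last_reviewed: date               # supports the regular-review expectation
    fairness_tests: dict[str, float] = field(default_factory=dict)

record = ModelRecord(
    name="term-life-underwriting-v3",
    purpose="Mortality risk scoring for term life underwriting",
    ecdis_sources=["credit_attributes", "public_records"],
    owner="model-risk@example.com",
    last_reviewed=date(2023, 4, 1),
    fairness_tests={"adverse_impact_ratio_min": 0.92},
)
print(record)
```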

3. Reporting Requirements

Life insurers using algorithms and/or predictive models must submit a report summarizing their approach to, and progress toward, meeting the requirements of this regulation and demonstrating compliance.

A public stakeholder meeting on the Colorado draft AI regulation was held on February 7, and following a subsequent public comment period, the draft is now in its next phase of incorporating suggested edits and changes. The specifics of the requirements have not been finalized, but regardless of the outcome, this ambitious effort is a big step forward for AI governance. In its current form, the draft provides comprehensive requirements for life insurance model governance and risk management frameworks to ensure that the use of AI systems does not unfairly discriminate against protected groups.

Robust Intelligence enables organizations to proactively address AI risk, using a continuous validation approach across the model lifecycle to instill integrity and simplify regulatory compliance. Our comprehensive testing, which includes a Fairness & Bias test suite, ensures that data science teams only deploy production-ready models, and maps statistical test results to compliance requirements. In addition, we offer auto-generated model cards for internal and external documentation and reporting. Through these capabilities, the Robust Intelligence platform allows insurers, and companies across all industries, to meet new and existing AI regulations in an automated and robust manner.

To learn more, request a product demo here.
