June 28, 2024 - 4 minute read

AI Governance Policy Roundup (June 2024)

Regulation Spotlight

As the AI policy landscape continues its rapid evolution, many are having a difficult time navigating the complex patchwork of frameworks, regulations, executive orders, and legislation. We have launched a monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.

We hope this will be a helpful resource as you develop your AI governance strategy. As an end-to-end AI risk management platform, Robust Intelligence can help you automate and operationalize the relevant policies to ensure compliance. Read on to learn more and contact us if you’d like to dive deeper into any specific AI governance policy.

June 2024 Roundup

As the use and development of generative AI models and applications have proliferated over the past year, national governments have moved quickly to respond with guidelines for the safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations over the last month.

Domestic

NIST announced Assessing Risks and Impacts of AI (ARIA), a new evaluation program for safe and trustworthy AI. According to program lead Reva Schwartz, ARIA is an “evaluation program that brings people directly into the equation, and how they use, adapt to or are impacted by AI technology.” It covers three levels of evaluation: model testing, red teaming, and field testing.

A new bipartisan bill (S.4495) was introduced in Congress that, according to Senator Peters, would “require government contracts for AI capabilities to include safety and security terms for data ownership, civil rights, civil liberties and privacy, adverse incident reporting and other key areas.” This adds to the growing activity in Congress around AI regulation.

Over the last month, lawmakers in California have advanced roughly 30 new AI measures in an effort to protect consumers and jobs, making California by far the most active state on AI regulation. Some of these measures are controversial (in particular SB 1047, the AI Safety and Innovation Bill) over concerns that they could stall innovation and restrict the open-source community. California’s legislature is expected to vote on the newly proposed laws by August 31st.

The U.S. Treasury Department is seeking public comment on the use of AI in the financial services sector in order to improve its understanding of the opportunities and risks presented by AI adoption. Treasury Secretary Janet Yellen warns that while the technology holds potential for tremendous benefit, it also comes with “significant risks.”

International

The EU’s AI Office has officially opened – staffed by a team of 140 people, including technology specialists, lawyers, and policy experts – and held its first webinar on the risk management logic of the AI Act. The webinar helps clarify the Act’s requirements and how to navigate overlapping international standards.

Singapore released its Model AI Governance Framework for Generative AI, a voluntary framework organizations can adopt when deploying AI systems to align with best practices for AI risk management.

Governments around the world are increasing AI spending and local development in an effort to build national AI champions and train LLMs in their native languages. Importantly, this means AI systems are trained on local data, which helps safeguard local culture. Avoiding reliance on outsourced AI systems also serves to preserve national security.

How We Can Help

The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance.

At Robust Intelligence, we secure enterprises from AI risk. Our platform protects models in real time and surfaces risk in models and data throughout the AI lifecycle via automated red teaming. Our testing framework maps to policies to help customers streamline and operationalize regulatory compliance.

Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!
