As the AI policy landscape continues its rapid evolution, many organizations are struggling to navigate the complex web of frameworks, regulations, executive orders, and legislation. We have launched a monthly AI Governance Policy Roundup series to help you cut through the noise with a need-to-know snapshot of recent domestic and international updates.
We hope this will be a helpful resource as you develop your AI governance strategy. Robust Intelligence offers an end-to-end AI risk management platform that can help you automate and operationalize relevant policies to ensure compliance. Read on to learn more, and contact us if you’d like to dive deeper into any specific AI governance policy.
April 2024 Roundup
As the use and development of generative AI models and applications have proliferated over the past year, national governments have moved quickly to respond with guidelines for the safe, secure, and trustworthy use and development of this technology. Below is a curated list of notable updates from government agencies and organizations over the last month.
Domestic
Vice President Kamala Harris announced that the White House Office of Management and Budget (OMB) was issuing a “government-wide policy to mitigate risks of AI and harness its benefits.” The document contains OMB’s guidance to federal agencies on risk management best practices for using AI, pursuant to several recent laws. The private sector should pay attention to how the federal government is thinking about AI risk management, in part because the government is a major purchaser of AI systems from private-sector developers, meaning that government purchasing requirements will inevitably influence how these systems are developed.
The National Telecommunications and Information Administration (NTIA) called for independent audits of high-risk AI systems. Specifically, “Federal agencies should require independent audits and regulatory inspections of high-risk AI model classes and systems – such as those that present a high risk of harming rights or safety.” This recommendation was one of several included in the NTIA’s recently released AI Accountability Policy Report.
The Justice Department announced that five new federal agencies have joined a pledge to enforce civil rights laws in AI. “Federal agencies are sending a clear message: we will use our collective authority and power to protect individual rights in the wake of increased reliance on artificial intelligence in various aspects of American life,” said Assistant Attorney General Kristen Clarke.
More than 20 technology and critical infrastructure executives, civil rights leaders, academics, and policymakers have joined the Department of Homeland Security’s new AI Safety and Security Board. Board members include executives and leaders from organizations such as OpenAI, Anthropic, Microsoft, NVIDIA, the White House Office of Science and Technology Policy, and the Brookings Institution. The establishment of this board is yet another government push to protect the economy and society from known AI threats.
International
In a landmark agreement, the UK and US AI Safety Institutes have committed to a partnership to jointly test AI models, share frameworks and AI safety best practices, and exchange expertise. Effective immediately, the two countries will work together to build a common approach to AI safety testing, recognizing the urgency of addressing AI risk. This partnership should help forge additional global partnerships and international alignment on AI safety.
EU and US AI experts from the EU-US Trade and Technology Council (TTC) have developed an updated edition of their AI Taxonomy and Terminology. The shared taxonomy helps align international governance efforts and creates a common understanding of how to effectively secure AI systems. The joint council also announced a new research alliance, AI for Public Good, which will focus on applying AI systems to the most pressing global challenges.
The National Security Agency released a new Cybersecurity Information Sheet (CSI) this month, “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.” The guidance was developed in partnership with other federal agencies, including the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI), as well as international partners: the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre (NCSC-NZ), and the United Kingdom National Cyber Security Centre (NCSC-UK). This guidance is crucial for anyone developing or deploying high-risk AI systems.
How we can help
The rapidly evolving landscape of AI regulation can be difficult to follow. Many of the frameworks and guidelines include vague language, which makes it difficult to ensure compliance.
At Robust Intelligence, we secure enterprises from AI risk. Our platform protects models in real time and surfaces risk in models and data throughout the AI lifecycle via automated red teaming. Our testing framework maps to specific policies, helping customers streamline and operationalize regulatory compliance.
Please reach out if you’d like to learn more about the AI policy landscape or about our product, and stay tuned for next month’s update!