The proliferation of sophisticated, open-source models has been a boon for companies looking to accelerate AI adoption. But in the rush to leverage these resources, companies have largely overlooked AI supply chain risk. Public model repositories like Hugging Face and PyTorch Hub make it simple for anyone to find and download models without first understanding potential vulnerabilities in the third-party software, models, or data behind them. That general lack of awareness makes the AI supply chain a compelling target for bad actors.
We released the AI Risk Database in March 2023 as a free, community-supported resource to help mitigate supply chain risk in open-source models. The database covers over 260,000 models and surfaces supply chain risk information, including file vulnerabilities, risk scores, and vulnerability reports submitted by AI and cybersecurity researchers. It has been well received in the market and has become a trusted resource for many companies building on open-source models.
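To make the file-vulnerability category concrete, here is a minimal sketch of one kind of check involved: flagging model artifacts stored in pickle-based formats, which can execute arbitrary code when deserialized. It uses the public huggingface_hub library; the repo id and the extension list are illustrative examples, not the database's actual scanning logic.

```python
"""Minimal sketch: flag pickle-based model files in a Hugging Face repo.
The repo id and extension list are illustrative, not the AI Risk
Database's actual scanner."""
from huggingface_hub import HfApi

# Pickle-based serialization formats can run arbitrary code on load.
PICKLE_BASED = (".bin", ".pt", ".pkl", ".pickle", ".ckpt")

api = HfApi()
for filename in api.list_repo_files("bert-base-uncased"):  # example repo
    if filename.endswith(PICKLE_BASED):
        print(f"review before loading (pickle-based format): {filename}")
    elif filename.endswith(".safetensors"):
        print(f"safer tensor-only format: {filename}")
```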
To support the continued advancement of the AI Risk Database, Robust Intelligence is proud to partner with MITRE, the not-for-profit operator of federally funded research and development centers renowned for its contributions to cybersecurity (e.g., MITRE ATT&CK™). An enhanced version of the AI Risk Database is now available on GitHub, with a long-term plan to host it under the broader set of MITRE ATLAS™ tools. ATLAS is a globally accessible knowledge base of adversary tactics and techniques based on real-world attack observations and AI red teaming, and it links to other tools for emulating attacks.
The AI Risk Database directly aligns with the ATLAS mission of raising awareness of the unique and evolving security and assurance vulnerabilities of AI, as the global community starts to incorporate AI into more systems.
By jointly incorporating ways to measure risk, such as risk scores, software vulnerabilities, and related CVEs, the teams at Robust Intelligence and MITRE are raising awareness of the risks and vulnerabilities that arise when users download and use specific open-source AI models.
“This collaboration and release of the AI Risk Database can directly enable more organizations to see for themselves how they are directly at risk and vulnerable in deploying specific types of AI-enabled systems,” said Charles Clancy, Ph.D., senior vice president, general manager, MITRE Labs, and chief futurist. “As the latest open-source tool under MITRE ATLAS, this capability will continue to inform risk assessment and mitigation priorities for organizations around the globe.”
In addition to the MITRE partnership enhancing the AI Risk Database, the Data Science and Artificial Intelligence Lab (DSAIL) at Indiana University's Kelley School of Business is helping improve the automated risk assessment tools behind it. University researchers are adding the ability to scan the GitHub repositories used to create models available on third-party platforms, allowing users to spot publicly reported software vulnerabilities and weaknesses that exist upstream of the delivered model artifact, as sketched below.
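As one illustration of what an upstream check can look like (not DSAIL's actual implementation), the sketch below queries the public OSV.dev vulnerability API for known CVEs and advisories affecting pinned Python packages, as one might parse them out of a model repo's requirements file. The pinned dependencies shown are a made-up example.

```python
"""Illustrative upstream check: query the public OSV.dev API for known
vulnerabilities in a model repo's pinned Python dependencies. Assumes the
dependencies were already parsed from the repo; the pins are examples."""
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Hypothetical pinned dependencies extracted from a model's GitHub repo.
requirements = [("tensorflow", "2.4.1"), ("pillow", "8.2.0")]

for name, version in requirements:
    resp = requests.post(
        OSV_QUERY_URL,
        json={"package": {"name": name, "ecosystem": "PyPI"}, "version": version},
        timeout=30,
    )
    resp.raise_for_status()
    for vuln in resp.json().get("vulns", []):
        # OSV ids include CVE and GHSA identifiers where available.
        print(f"{name}=={version}: {vuln['id']} {vuln.get('summary', '')}")
```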
Here’s to the safe use of open-source models! We look forward to hearing your feedback on the enhanced AI Risk Database.