June 21, 2021 - 1 minute read

How To Secure AI Systems @ Stanford MLSys Seminar

As organizations adopt AI technologies, they inherit AI failures. These failures often manifest as AI models producing erroneous predictions that go undetected. In the Stanford MLSys Seminar 2021, Robust Intelligence Co-founder & CEO Yaron Singer discusses the root causes of AI models going haywire and presents a rigorous framework for eliminating risk from AI. He shows how this methodology can serve as the building blocks for continuous testing and firewall systems for AI.
