June 21, 2021 - 1 minute read

How To Secure AI Systems @ Stanford MLSys Seminar

As organizations adopt AI technologies, they inherit AI failures. These failures often manifest as AI models that produce erroneous predictions that go undetected. In the 2021 Stanford MLSys Seminar, Robust Intelligence Co-founder & CEO Yaron Singer discusses the root causes of AI models going haywire and presents a rigorous framework for eliminating risk from AI. He shows how this methodology can serve as a building block for continuous testing and firewall systems for AI.
