From Testing to Red Teaming: What’s Wrong with My AI?
Red teaming is a technique for protecting AI systems against security, ethical, and operational vulnerabilities. Adapted from decades of cybersecurity best practice, it has proven effective at surfacing weaknesses in AI models, and it now appears in the White House Executive Order on AI, the NIST AI Risk Management Framework, and the EU AI Act.
But what does red teaming really mean? Is it more than manual tinkering? Is it more than automated testing? In this webinar, we cover:
• How AI security is co-evolving with decades-old security practices
• Why internal and external testing of AI models is important
• What AI red teaming looks like in the real world
• How automated testing can keep your AI systems safe
Speaker bios:
Hyrum Anderson, PhD, is CTO at Robust Intelligence. He was previously Principal Architect of Trustworthy Machine Learning at Microsoft, where he organized Microsoft’s AI Red Team and, as chair of the AI Red Team governing board, oversaw the first red-team exercises on production AI systems; and Chief Scientist at the cybersecurity company Endgame.
Finn Howell is Engineering Tech Lead at Robust Intelligence, where she directs work on detecting and mitigating AI risk. Before that, she was an early engineer at One Medical, where she built their Electronic Health Record system and NLP models over clinical data. She studied Cognitive Science and Computer Science at UC Berkeley.