AI incidents, audits, and the limits of benchmarks

AI is moving quickly from research into real-world deployment, and when things go wrong, the consequences are no longer hypothetical. In this episode, Sean McGregor, co-founder of the AI Verification & Evaluation Research Institute and founder of the AI Incident Database, joins Chris and Dan to discuss AI safety, verification, evaluation, and auditing. They explore why benchmarks often fall short, what red-teaming at DEF CON reveals about machine learning risks, and how organizations can better assess and manage AI systems in practice.
