Summary
Here’s a concise summary of our ratings and some key findings:
Framework Overview
Structure
- Our framework comprises three main components, mirroring traditional risk management frameworks: risk identification, risk tolerance & analysis, and risk mitigation.
- It aligns with the frameworks used in industry standards (ISO/IEC 23894), regulations (the EU AI Act), and international initiatives (the G7 Hiroshima Process).
- For each company, we report the following (a rough illustrative sketch follows this list):
- Best-in-class: Areas where the company leads the industry
- Highlights: Reasons for the company's current grade
- Weaknesses: Reasons preventing a higher grade
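
To make the reporting structure above concrete, here is a minimal sketch of how a per-company entry could be represented as a data structure. This is purely illustrative: the class name, field names, and example values are assumptions for illustration, not part of the rating framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class CompanyReport:
    """Hypothetical representation of the per-company report described above."""
    company: str
    grade: float  # overall grade out of 5; exact scale bounds are an assumption
    best_in_class: list[str] = field(default_factory=list)  # areas where the company leads the industry
    highlights: list[str] = field(default_factory=list)     # reasons for the company's current grade
    weaknesses: list[str] = field(default_factory=list)     # reasons preventing a higher grade

# Illustrative, entirely hypothetical entry (not a real rating):
example = CompanyReport(
    company="ExampleAI",  # hypothetical company name
    grade=3.0,            # hypothetical grade
    best_in_class=["Detailed capability thresholds"],
    highlights=["Evaluation frequency specified in time and compute"],
    weaknesses=["Risk tolerance left undefined"],
)
```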
Methodology
- Our methodology combines traditional risk management with AI industry best practices.
- We shared our framework and ratings with companies for feedback before publication.
Key Findings
General
- All companies remain far from "strong AI risk management" (a grade above 4 out of 5).
- The leading organizations in risk management also lead in AI capabilities.
- The largest organizations can improve their grade by more systematically integrating the research their employees produce into company-wide risk management practices, publications, and model cards.
Industry Weaknesses
- Acceptable risk levels (risk tolerance) are either left undefined or stated without justification.
- Risk modeling to justify capability thresholds and the corresponding mitigation objectives is largely absent.
Industry Strengths
- Capability thresholds are often established, sometimes in significant detail.
- Best practices for evaluation protocols are industry-led, including specifying evaluation frequency in terms of both time and compute.