We rate frontier AI companies' risk management practices. Our objective is to enhance the accountability of private actors shaping the development of AI.
Our complete methodology can be found here.
Risk Management Maturity of AI companies (as of October 2024)
Scale: Very weak / Weak / Moderate / Substantial / Strong

Anthropic: 1.9/5
OpenAI: 1.6/5
Google DeepMind: 1.5/5
Meta: 0.7/5
Mistral AI: 0.1/5
xAI: 0/5
Who
We selected frontier AI companies based on which companies we expect to develop the most capable models in 2024 and 2025.
What
The first iteration of these ratings focuses on assessing AI companies' risk management practices across three dimensions: Risk Identification, Risk Tolerance & Analysis, and Risk Mitigation.
Why we are rating AI companies
Advanced AI systems are arguably one of the most transformative technologies ever built
Frontier AI companies are advancing the technology at an astonishing rate, leading the three most cited AI researchers (Geoffrey Hinton, Ilya Sutskever, and Yoshua Bengio) to believe it will have a profound impact on societies and could even contribute to their downfall.
Yet, we don’t currently know how to properly manage AI risks. The main barrier that protects us from large-scale harms arising from AI is not our risk management frameworks but the limitation of current AI systems’ capabilities. This situation underscores the urgent need for robust risk management practices in the AI industry.
Public accountability is important
For the purposes of these ratings, we have assessed the quality of AI companies' risk management practices. Considering the potential risks associated with increasingly powerful AI systems, it is crucial that organizations have robust frameworks in place to identify, assess, and mitigate risks.
Given the stakes, society has a right to know whether AI companies are implementing comprehensive risk management strategies. Those who fail to demonstrate them should not be entrusted with developing powerful AI systems. We therefore developed a framework to present what a comprehensive AI risk management approach should entail and achieve.
Companies assessed
Last updated in October 2024
See our previous iterations here
Risk Management Reporting in Three Dimensions
At a high level, an AI company's risk management practices should be assessed across three dimensions: Risk Identification, Risk Tolerance & Analysis, and Risk Mitigation.
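The published methodology defines how these dimension-level assessments combine into the overall 0-5 score. As a rough illustration only, the sketch below assumes a simple, equally weighted average across the three dimensions; the names and weights here (DimensionScores, overall_rating, the 1/3 weights) are hypothetical and are not taken from the methodology.

```python
from dataclasses import dataclass


@dataclass
class DimensionScores:
    """Per-dimension scores on a 0-5 scale (hypothetical structure, for illustration)."""
    risk_identification: float
    risk_tolerance_and_analysis: float
    risk_mitigation: float


def overall_rating(scores: DimensionScores,
                   weights: tuple[float, float, float] = (1/3, 1/3, 1/3)) -> float:
    """Combine dimension scores into a single 0-5 rating.

    An equally weighted average is assumed purely for illustration; the
    actual aggregation rule is the one defined in the published methodology.
    """
    values = (scores.risk_identification,
              scores.risk_tolerance_and_analysis,
              scores.risk_mitigation)
    return round(sum(w * v for w, v in zip(weights, values)), 1)


# Hypothetical dimension scores that would average to a 1.9/5 overall rating.
print(overall_rating(DimensionScores(2.4, 1.5, 1.8)))  # -> 1.9
```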
Conclusion
1. Some AI companies are substantially better than others at risk management.
2. However, even the best AI companies still score relatively poorly.
3. Therefore, a lot of work remains to adequately manage AI risks.
We believe that these ratings and templates will equip companies with the tools to develop more comprehensive and transparent AI risk management procedures and will result in improved safety across the board.
We intend to continually update our ratings as companies improve their policies. Ultimately, we aim for these ratings to be used across the AI deployer and investment community.
FAQ
Why is it relevant to rate AI companies?
Our main concern is to push for transparency and accountability as AI progresses, and we view these ratings as a good first step toward that goal. At this stage, there is no private actor we would feel comfortable having develop AI systems over the next few years without a substantial overhaul of its practices; our ratings make that clear and create an incentive for change.
Won't these ratings encourage safety washing?
This is a challenge we've given a lot of thought to. Among actors developing AI systems with frontier capabilities, there is already significant safety washing; while our ratings are not exempt from this risk, we expect them to substantially improve the current situation. Moreover, to avoid gaming of our ratings, we reserve the right to update the scale over time as industry practices mature.
Won’t this risk management framework prevent us from reaping the benefits of AI?
This risk management framework is designed to promote transparency from AI companies and to ensure that democratically chosen risk preferences are respected throughout AI development and deployment, while accounting for the benefits. As such, it should incentivize AI companies to develop technologies that have the highest chance of delivering AI's benefits safely.
Do you want corporate governance to substitute for regulation?
We want AI companies to improve their transparency. We expect this to complement existing regulation and to support the development of adequate future regulation, providing much-needed data that encourages reasonable trade-offs in the design of present and future rules.