Visual Capitalist, January 4, 2026
Key Takeaways
- Anthropic (creators of Claude) scored highest overall (C+), standing out for not training on user data, leading in alignment research, and structuring itself as a Public Benefit Corporation committed to safety.
- Only three companies—Anthropic, OpenAI, and DeepMind—report any testing for high-risk capabilities like bio- or cyber-terrorism, and even these efforts often lack clear reasoning or rigorous standards.
AI systems are moving from novelty to infrastructure. They write, code, search, and increasingly act on our behalf.
That shift has put a spotlight on a harder question: how seriously are AI companies managing the risks that come with more capable models?
This graphic visualizes and compares the safety scores of major AI companies using data from the AI...