Visual Capitalist January 4, 2026
Niccolo Conte

Key Takeaways

  • Anthropic (creators of Claude) scored highest overall (C+), standing out for not training on user data, leading in alignment research, and structuring itself as a Public Benefit Corporation committed to safety.
  • Only three companies—Anthropic, OpenAI, and DeepMind—report any testing for high-risk capabilities like bio- or cyber-terrorism, and even these efforts often lack clear reasoning or rigorous standards.

AI systems are moving from novelty to infrastructure. They write, code, search, and increasingly act on our behalf.

That pace has put a spotlight on a harder question: how seriously are AI companies managing the risks that come with more capable models?

This graphic visualizes and compares the safety scores of major AI companies using data from the AI...
