VentureBeat January 6, 2025
Louis Columbus

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective at uncovering vulnerabilities that in-house testing techniques may have missed and that might otherwise have made it into a released model.

In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an...
