VentureBeat January 6, 2025
Louis Columbus

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them.

In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an...

Today's Sponsors

Venturous
Got healthcare questions? Just ask Transcarent

Today's Sponsor

Venturous

 
Topics: AI (Artificial Intelligence), Cybersecurity, Technology
Apple Intelligence Comes To Vision Pro With VisionOS 2.4
Mental health provider launches AI initiative to train therapists
Emergence AI’s new system automatically creates AI agents rapidly in realtime based on the work at hand
AI empathy is a good fit for behavioral and mental healthcare
Report: Alibaba to Release Upgraded Qwen 3 AI Model in Late April

Share This Article