VentureBeat January 6, 2025
Louis Columbus

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models using these techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that in-house testing techniques missed and that might otherwise have made it into a released model.

In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an...
