VentureBeat January 6, 2025
Louis Columbus

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them.

In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an...

Today's Sponsors

LEK
ZeOmega

Today's Sponsor

LEK

 
Topics: AI (Artificial Intelligence), Cybersecurity, Technology
The Download: what’s next for AI, and stem-cell therapies
The Race For AI Agents. Who Will Supply Tomorrow’s Workforce?
Fred Hutch spearheads patient confidentiality-focused AI project: What to know
Getting A Colonoscopy? Ask Your Doctor About Using An AI Copilot
Microsoft to Invest $3 Billion in Cloud and AI Infrastructure in India

Share This Article