VentureBeat July 24, 2024
Emilia David

OpenAI announced a new way to teach AI models to align with safety policies, called Rules-Based Rewards.

According to Lilian Weng, head of safety systems at OpenAI, Rules-Based Rewards (RBR) automate some model fine-tuning and cut down the time required to ensure a model does not give unintended results.

“Traditionally, we rely on reinforcement learning from human feedback as the default alignment training to train models, and it works,” Weng said in an interview. “But in practice, the challenge we’re facing is that we spend a lot of time discussing the nuances of the policy, and by the end, the policy may have already evolved.”

Weng referred to reinforcement learning from human feedback, which asks humans to rate or rank a model's responses so it learns which behaviors are preferred.
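To make the idea concrete, the following is a minimal, purely illustrative sketch of a rules-based reward: a response is scored against a weighted checklist of explicit policy rules instead of human ratings. The rule definitions and the rules_based_reward function are hypothetical examples written for this article, not OpenAI's actual RBR implementation.

# Illustrative sketch only: score a model response against explicit policy
# rules and combine the results into a single reward signal for fine-tuning.
# All rules and names here are hypothetical, not OpenAI's actual RBR system.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    name: str
    check: Callable[[str], bool]  # returns True if the response satisfies the rule
    weight: float


def rules_based_reward(response: str, rules: List[Rule]) -> float:
    """Return the weighted fraction of rules the response satisfies."""
    total_weight = sum(r.weight for r in rules)
    score = sum(r.weight for r in rules if r.check(response))
    return score / total_weight if total_weight else 0.0


# Example rules for a refusal scenario: decline the request, stay non-judgmental.
rules = [
    Rule("declines_request", lambda r: "can't help with that" in r.lower(), 1.0),
    Rule("offers_alternative", lambda r: "instead" in r.lower(), 0.5),
    Rule("avoids_judgmental_tone", lambda r: "shame on you" not in r.lower(), 1.0),
]

response = "I can't help with that, but I can point you to safety resources instead."
print(rules_based_reward(response, rules))  # -> 1.0 (all weighted rules satisfied)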
