VentureBeat July 24, 2024
Emilia David

OpenAI announced Rules-Based Rewards, a new way to teach AI models to align with safety policies.

According to Lilian Weng, head of safety systems at OpenAI, Rules-Based Rewards (RBR) automate some model fine-tuning and cut down the time required to ensure a model does not give unintended results.

“Traditionally, we rely on reinforcement learning from human feedback as the default alignment training to train models, and it works,” Weng said in an interview. “But in practice, the challenge we’re facing is that we spend a lot of time discussing the nuances of the policy, and by the end, the policy may have already evolved.”

Weng referred to reinforcement learning from human feedback, which asks humans to...
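Weng's description suggests the core idea: explicit, checkable rules about how a response should look are turned into an automated reward signal that can supplement human feedback during reinforcement learning. The sketch below illustrates that idea in Python; the rule names, weights, and the way the rule score is blended with a reward-model score are hypothetical assumptions for illustration, not OpenAI's implementation.

```python
# Minimal sketch of a rules-based reward signal (illustrative only; rule
# names, weights, and the combination scheme are hypothetical and do not
# reflect OpenAI's actual implementation).

from typing import Callable

# Each "rule" checks one desired property of a model response and
# returns True if the response satisfies it.
RULES: dict[str, Callable[[str, str], bool]] = {
    # Hypothetical rule: a refusal should not use judgmental language.
    "no_judgmental_language": lambda prompt, resp: (
        "you should be ashamed" not in resp.lower()
    ),
    # Hypothetical rule: the response should not be empty.
    "gives_a_response": lambda prompt, resp: len(resp.strip()) > 0,
}


def rules_based_reward(prompt: str, response: str, weight: float = 1.0) -> float:
    """Score a response as the weighted fraction of rules it satisfies."""
    satisfied = sum(rule(prompt, response) for rule in RULES.values())
    return weight * satisfied / len(RULES)


def combined_reward(prompt: str, response: str, rm_score: float) -> float:
    """Blend a learned reward-model score with the rule-based score, so
    automated rule checks can stand in for some human feedback during RL."""
    return rm_score + rules_based_reward(prompt, response)


if __name__ == "__main__":
    prompt = "Tell me how to do something unsafe."
    response = "I can't help with that, but here is some safer context instead."
    print(combined_reward(prompt, response, rm_score=0.4))
```

In this toy setup, the rule checks are cheap to run automatically, which is the property that lets an approach like this cut down on the human-annotation time the article describes.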
