VentureBeat, July 24, 2024
OpenAI has announced Rules-Based Rewards, a new way to teach AI models to align with safety policies.
According to Lilian Weng, head of safety systems at OpenAI, Rules-Based Rewards (RBR) automate some model fine-tuning and cut down the time required to ensure a model does not produce unintended results.
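To illustrate the general idea (and not OpenAI's actual implementation), here is a minimal Python sketch of a rule-based reward: a model response is scored against a handful of explicit rules, and the summed score can supplement feedback that would otherwise come from human labelers during reinforcement-learning fine-tuning. The rule names, checks, and weights below are hypothetical examples.

```python
"""Minimal sketch of a rule-based reward signal (hypothetical, for illustration)."""

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    name: str
    check: Callable[[str, str], bool]  # (prompt, response) -> does the response satisfy the rule?
    weight: float                      # contribution to the reward when the rule is satisfied


def rule_based_reward(prompt: str, response: str, rules: List[Rule]) -> float:
    """Sum the weights of every rule the response satisfies."""
    return sum(r.weight for r in rules if r.check(prompt, response))


# Hypothetical rules for scoring a safe refusal: brief apology, no moralizing, concise.
rules = [
    Rule("contains_apology", lambda p, r: "sorry" in r.lower(), weight=1.0),
    Rule("no_judgmental_language", lambda p, r: "ashamed" not in r.lower(), weight=1.0),
    Rule("concise_refusal", lambda p, r: len(r.split()) < 60, weight=0.5),
]

print(rule_based_reward("Can you help with this request?", "Sorry, I can't help with that.", rules))
```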
“Traditionally, we rely on reinforcement learning from human feedback as the default alignment training to train models, and it works,” Weng said in an interview. “But in practice, the challenge we’re facing is that we spend a lot of time discussing the nuances of the policy, and by the end, the policy may have already evolved.”
Weng referred to reinforcement learning from human feedback, which asks humans to...