VentureBeat September 27, 2021
Kyle Wiggers

In 2019, OpenAI released Safety Gym, a suite of tools for developing AI models that respect certain “safety constraints.” At the time, OpenAI claimed that Safety Gym could be used to compare the safety of algorithms and the extent to which those algorithms avoid making harmful mistakes while learning.

Since then, Safety Gym has been used to measure the performance of algorithms proposed by OpenAI as well as by researchers at the University of California, Berkeley and the University of Toronto. But some experts question whether AI “safety tools” are as effective as their creators purport them to be — or whether they make AI systems safer in any sense.

“OpenAI’s Safety Gym doesn’t feel like ‘ethics washing’ so much as...
