PYMNTS.com May 19, 2024

OpenAI is trying to ease concerns following the departure of two safety executives.

Last week saw the resignations of Ilya Sutskever, co-founder and chief scientist of OpenAI, and Jan Leike, who together led the company's "superalignment team," which focused on the safety of future advanced artificial intelligence (AI) systems. With their departures, the company has effectively dissolved that team.

While Sutskever said his departure was to pursue other projects, Leike wrote on X Friday (May 17) that he had reached a “breaking point” with OpenAI’s leadership over the company’s central priorities.

He also wrote that the company did not give safety enough emphasis, especially in terms of artificial general intelligence (AGI), an as-yet-unrealized form of AI capable of matching or surpassing human performance across a wide range of tasks.
