PYMNTS.com May 19, 2024

OpenAI is trying to ease concerns following the departure of two safety executives.

Last week saw the resignations of Ilya Sutskever, OpenAI's co-founder and chief scientist, and Jan Leike, who together led the company's "superalignment" team, which focused on the safety of future advanced artificial intelligence (AI) systems. With their departures, the company has effectively dissolved that team.

While Sutskever said his departure was to pursue other projects, Leike wrote on X Friday (May 17) that he had reached a “breaking point” with OpenAI’s leadership over the company’s central priorities.

He also wrote that the company did not place enough emphasis on safety, especially regarding artificial general intelligence (AGI), an as-yet-unrealized version of AI that...
