PYMNTS.com May 19, 2024

OpenAI is trying to ease concerns following the departure of two safety executives.

Last week, Ilya Sutskever, co-founder and chief scientist of OpenAI, and Jan Leike both resigned. The two led the company’s “superalignment” team, which focused on the safety of future advanced artificial intelligence (AI) systems. With their departure, the company has effectively dissolved that team.

While Sutskever said his departure was to pursue other projects, Leike wrote on X Friday (May 17) that he had reached a “breaking point” with OpenAI’s leadership over the company’s central priorities.

He also wrote that the company had not placed enough emphasis on safety, especially regarding artificial general intelligence (AGI), an as-yet-unrealized form of AI that could match or surpass human capabilities across a wide range of tasks.
