MIT Technology Review, September 16, 2024
Kevin Frazier

Existing measures to mitigate AI risks aren’t enough to protect us. Here’s what we need to do as well.

In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It’s only a matter of time before labs release another round of models that pose new regulatory challenges. We’re likely just weeks away, for example, from OpenAI’s release of GPT-5, which promises to push AI capabilities further than ever before. As it stands, it seems there’s little anyone can do to delay or prevent the release of a model that poses excessive risks.

Testing AI models before they’re released is a common approach to mitigating...
