MIT Technology Review, September 16, 2024
Kevin Frazier

Existing measures to mitigate AI risks aren’t enough to protect us. Here’s what we need to do as well.

In the past couple of years, regulators have been caught off guard again and again as tech companies compete to launch ever more advanced AI models. It's only a matter of time before labs release another round of models that pose new regulatory challenges. We're likely just weeks away, for example, from OpenAI's release of GPT-5, which promises to push AI capabilities further than ever before. As it stands, there's little anyone can do to delay or prevent the release of a model that poses excessive risks.

Testing AI models before they’re released is a common approach to mitigating...
