AXIOS October 24, 2023
Ryan Heath

A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products — and on legislators and regulators to require such disclosures.

Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.

The big picture: Self-regulation hasn’t moved the field toward transparency. In the year since ChatGPT kicked the AI market into overdrive, leading companies have become more...

