AXIOS October 24, 2023
Ryan Heath

A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products — and on legislators and regulators to require such disclosures.

Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.

The big picture: Self-regulation hasn’t moved the field toward transparency. In the year since ChatGPT kicked the AI market into overdrive, leading companies have become more...
