Computerworld March 5, 2024
Lucas Mearian

More than 150 researchers, ethicists, legal experts and professors have signed a letter asking generative AI companies to open up their technology to outside evaluation for safety reasons.

More than 150 leading artificial intelligence (AI) researchers, ethicists and others have signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems, the lack of which has led to concerns about basic protections.

The letter, drafted by researchers from MIT, Princeton, and Stanford University, called for legal and technical protections for good-faith research on genAI models; the absence of those protections, the signatories said, is hampering safety measures that could help protect the public.

The letter, and a study behind it, were created with the help of nearly two...
