VentureBeat December 18, 2023
Sharon Goldman

According to a new report from the World Privacy Forum, a review of 18 AI governance tools used by governments and multilateral organizations found that more than a third (38%) include "faulty fixes." That is, the tools and techniques meant to evaluate and measure AI systems, particularly for fairness and explainability, were found to be problematic or ineffective. Some lacked the quality assurance mechanisms typical of software, and some relied on measurement methods "shown to be unsuitable" when used outside their original use case.

In addition, some of those tools and techniques were developed or disseminated by companies like Microsoft, IBM and Google, which, in turn, develop many of the AI systems being measured.

For example, the report...

