VentureBeat December 18, 2023
Sharon Goldman

According to a new report from the World Privacy Forum, a review of 18 AI governance tools used by governments and multilateral organizations found that more than a third (38%) include “faulty fixes.” That is, the tools and techniques meant to evaluate and measure AI systems, particularly for fairness and explainability, were found to be problematic or ineffective. Some lacked the quality assurance mechanisms typically found in software, while others used measurement methods “shown to be unsuitable” outside of their original use case.

In addition, some of those tools and techniques were developed or disseminated by companies like Microsoft, IBM and Google, which, in turn, develop many of the AI systems being measured.

For example, the report...
