VentureBeat December 18, 2023
Sharon Goldman

According to a new report from the World Privacy Forum, a review of 18 AI governance tools used by governments and multilateral organizations found that more than a third (38%) include "faulty fixes." That is, the tools and techniques meant to evaluate and measure AI systems, particularly for fairness and explainability, were found to be problematic or ineffective. Some lacked the quality assurance mechanisms typically found in software, and some relied on measurement methods "shown to be unsuitable" when used outside their original use case.

In addition, some of those tools and techniques were developed or disseminated by companies like Microsoft, IBM and Google, which, in turn, develop many of the AI systems being measured.

For example, the report...
