VentureBeat December 18, 2023
A new report from the World Privacy Forum reviewed 18 AI governance tools used by governments and multilateral organizations and found that more than a third (38%) include “faulty fixes.” That is, the tools and techniques meant to evaluate and measure AI systems, particularly for fairness and explainability, were found to be problematic or ineffective. Some lacked the quality assurance mechanisms typically found in software, and some relied on measurement methods “shown to be unsuitable” when applied outside their original use case.
In addition, some of those tools and techniques were developed or disseminated by companies like Microsoft, IBM and Google, which, in turn, develop many of the AI systems being measured.
For example, the report...