VentureBeat December 3, 2021
Kyle Wiggers

This week, the Partnership on AI (PAI), a nonprofit committed to responsible AI use, released a paper addressing how technology — particularly AI — can accentuate various forms of bias. While most proposals to mitigate algorithmic discrimination require the collection of data on so-called sensitive attributes — which usually include things like race, gender, sexuality, and nationality — the coauthors of the PAI report argue that these efforts can actually cause harm to marginalized people and groups. Rather than trying to overcome historical patterns of discrimination and social inequity with more data and "clever algorithms," they say, the value assumptions and trade-offs associated with the use of demographic data must be acknowledged.

“Harmful biases have been found in algorithmic decision-making...

 