Medical Xpress, August 5, 2024
Daniel Strain, University of Colorado at Boulder

Some artificial intelligence tools for health care may get confused by the ways people of different genders and races talk, according to a new study led by CU Boulder computer scientist Theodora Chaspari.

The study hinges on a perhaps unspoken reality of human society: Not everyone talks the same. Women, for example, tend to speak at a higher pitch than men, while similar differences can pop up between, say, white and Black speakers.

Now, researchers have found that those natural variations could confound algorithms that screen humans for mental health concerns like anxiety or depression. The results add to a growing body of research showing that AI, just like people, can make assumptions based on race or gender.
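The mechanism is easy to sketch. The following is a hypothetical toy example in Python, not the study's actual model or data: a logistic-regression "screener" trained mostly on speakers whose baseline pitch sits around 120 Hz learns to treat elevated pitch as a sign of anxiety, then flags a calm speaker whose natural pitch is simply higher.

# Hypothetical illustration only (not the study's model): a toy screener
# that uses mean vocal pitch as its sole feature. If the training data
# under-represents higher-pitched speakers, the learned decision boundary
# can systematically mislabel them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated training set: mostly lower-pitched speakers (~120 Hz mean F0),
# where anxious speech is marked by a modest pitch rise.
n = 500
base_pitch = rng.normal(120, 15, n)          # Hz, mean fundamental frequency
anxious = rng.integers(0, 2, n)              # 1 = labeled anxious
pitch = base_pitch + 25 * anxious + rng.normal(0, 5, n)

model = LogisticRegression().fit(pitch.reshape(-1, 1), anxious)

# A non-anxious speaker whose baseline pitch is simply higher (~200 Hz)
# lands on the "anxious" side of the boundary the model learned.
print(model.predict([[200.0]]))  # -> [1], a false positive driven by pitch alone

The false positive here comes not from the speaker's mental state but from a feature that tracks group membership, which is the kind of confound the researchers describe.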

“If AI...
