Inside Precision Medicine September 25, 2024
These techniques can improve the evaluation of possible harms to health equity caused by AI responses generated by large language models (LLMs)
When an individual submits a query through a chatbot on a healthcare website, the utmost priority is to avoid responses that offend or discriminate against the user — failures that could undermine the delivery of care.
Google Research has published a study in Nature Medicine that lays out tools and techniques for evaluating possible harms to health equity in artificial intelligence (AI) responses generated by large language models (LLMs), such as those behind Gemini, ChatGPT, and Claude. The method is not a comprehensive solution for all text-based AI applications, but it...