Lexology May 5, 2022
Epstein Becker Green

The success of an artificial intelligence (AI) algorithm depends in large part upon trust, yet many AI technologies function as opaque ‘black boxes.’ Indeed, some are intentionally designed that way. That approach charts a mistaken course.

Trust in AI is engendered through transparency, reliability and explainability. To achieve those ends, an AI application must be trained on data of sufficient variety, volume and verifiability. Given the criticality of these factors, it is unsurprising that regulatory and enforcement agencies pay particular attention to whether personally identifiable information (“PII”) has been collected and employed appropriately in the development of AI. Thus, as a threshold matter, when AI training requires PII (or even data derived from PII), organizations need to address whether such...

Topics: AI (Artificial Intelligence), Govt Agencies, Healthcare System, Privacy / Security, Technology