Politico November 2, 2023
Shawn Zeller, Daniel Payne, Erin Schumaker and Evan Peng

Federal agencies that use artificial intelligence to approve drugs and medical devices or decide who gets access to health care in government programs would have to follow new protocols under a draft White House directive released yesterday.

How so? The memo from the Office of Management and Budget defines decisions it views as either “rights-impacting” or “safety-impacting” and spells out what agencies must do when they use AI to make such choices.

Rights-impacting decisions include those on approvals of medical devices or drugs, drug-addiction risk assessments, assessments of mental-health status used to flag patients for intervention and the allocation of public insurance.

Safety-impacting decisions include those involving human life, serious injury, bodily harm, biological or chemical harm, occupational hazards, harassment or abuse...

Topics: AI (Artificial Intelligence), Congress / White House, Govt Agencies, Regulations, Technology