Politico November 2, 2023
Shawn Zeller, Daniel Payne, Erin Schumaker and Evan Peng

Federal agencies that use artificial intelligence to approve drugs and medical devices or decide who gets access to health care in government programs would have to follow new protocols under a draft White House directive released yesterday.

How so? The memo from the Office of Management and Budget defines decisions it views as either “rights-impacting” or “safety-impacting” and spells out what agencies must do when they use AI to make such choices.

Rights-impacting decisions include approvals of medical devices or drugs, drug-addiction risk assessments, mental-health status assessments used to flag patients for interventions, and the allocation of public insurance.

Safety-impacting decisions include those involving human life, serious injury, bodily harm, biological or chemical harm, occupational hazards, harassment or abuse...

Topics: AI (Artificial Intelligence), Congress / White House, Govt Agencies, Regulations, Technology