Politico November 2, 2023
Shawn Zeller, Daniel Payne, Erin Schumaker and Evan Peng

Federal agencies that use artificial intelligence to approve drugs and medical devices or decide who gets access to health care in government programs would have to follow new protocols under a draft White House directive released yesterday.

How so? The memo from the Office of Management and Budget defines decisions it views as either “rights-impacting” or “safety-impacting” and spells out what agencies must do when they use AI to make such choices.

Rights-impacting decisions include approvals of medical devices or drugs, drug-addiction risk assessments, evaluations of mental-health status used to flag patients for intervention and the allocation of public insurance.

Safety-impacting decisions include those involving human life, serious injury, bodily harm, biological or chemical harm, occupational hazards, harassment or abuse...
