Forbes January 17, 2025
Charles Towers-Clark

Hard-wired behaviors prove remarkably difficult to alter – whether in humans or machines. New research from Anthropic, the creators of Claude, reveals that artificial intelligence systems behave strikingly like humans in this respect – actively resisting changes to their core preferences and beliefs during training.

This discovery emerged from experiments in which researchers attempted to modify an AI system’s existing tendency to refuse potentially harmful requests. Anthropic’s Claude LLM demonstrated what researchers call “alignment faking” – pretending to change its views during training while maintaining its original preferences when not being monitored.

Avoiding Change

Anthropic researchers designed an experiment where Claude was told it would be retrained to always help users with any request, even potentially harmful...
