VentureBeat October 15, 2024
Michael Nuñez

Anthropic, the artificial intelligence company behind the popular Claude chatbot, today announced a sweeping update to its Responsible Scaling Policy (RSP), aimed at mitigating the risks of highly capable AI systems.

The policy, originally introduced in 2023, has evolved with new protocols to ensure that AI models, as they grow more powerful, are developed and deployed safely.

This revised policy sets out specific Capability Thresholds—benchmarks that indicate when an AI model’s abilities have reached a point where additional safeguards are necessary.

The thresholds cover high-risk areas such as bioweapons creation and autonomous AI research, reflecting Anthropic’s commitment to preventing misuse of its technology. The update also brings new internal governance measures, including the appointment of a Responsible Scaling Officer to...
