Politico July 8, 2025
Carmen Paun

DANGER ZONE

While many artificial intelligence companies don’t want the government to establish rules that they say could stifle their business, one San Francisco-based company is proposing new requirements.

Anthropic, the maker of the Claude AI chatbot, wants state or federal lawmakers to impose new transparency requirements on companies to help prevent potential harms in the wake of a failed push by congressional Republicans to freeze state AI regulations, POLITICO’s Chase Difeliciantonio reports.

The ChatGPT rival argued in a proposal Monday that AI companies like it should disclose how they prevent their programs from building weapons or harming people or property.

How so? Anthropic suggests that large AI companies should publish a “Secure Development Framework” laying out how they’ll assess...
