Politico July 8, 2025
DANGER ZONE
While many artificial intelligence companies don’t want the government to establish rules that they say could stifle their business, one San Francisco-based company is proposing new requirements.
Anthropic, the maker of the Claude AI chatbot, wants state or federal lawmakers to impose new transparency requirements on AI companies to help prevent potential harms, POLITICO’s Chase Difeliciantonio reports. The push comes in the wake of a failed effort by congressional Republicans to freeze state AI regulations.
In a proposal released Monday, the ChatGPT rival argued that AI companies like it should disclose how they prevent their programs from building weapons or harming people or property.
How so? Anthropic suggests that large AI companies should publish a “Secure Development Framework” laying out how they’ll assess...