VentureBeat August 8, 2024
Michael Nuñez

Anthropic, the artificial intelligence startup backed by Amazon, launched an expanded bug bounty program on Thursday, offering rewards of up to $15,000 for identifying critical vulnerabilities in its AI systems. The initiative marks one of the most aggressive efforts yet by an AI company to crowdsource security testing of advanced language models.

The program targets “universal jailbreak” attacks — methods that could consistently bypass AI safety guardrails across high-risk domains like chemical, biological, radiological, and nuclear (CBRN) threats and cybersecurity. Anthropic will invite ethical hackers to probe its next-generation safety mitigation system before public deployment, aiming to preempt potential exploits that could lead to misuse of its AI models.

AI safety bounties: A new frontier in tech security

This move comes...
