Computerworld August 30, 2024
Anirban Ghoshal

The agreements signed with the US AI Safety Institute also commit both companies to collaborative research on evaluating model capabilities and safety risks, and on methods to mitigate those risks.

Large language model (LLM) providers OpenAI and Anthropic have signed individual agreements with the US AI Safety Institute under the Department of Commerce’s National Institute of Standards and Technology (NIST) in order to collaborate on AI safety research that includes testing and evaluation.

As part of the agreements, both Anthropic and OpenAI will share their new models with the institute for safety checks before releasing them to the public.

“With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science...

