Computerworld August 30, 2024
Anirban Ghoshal

Under the agreements with the US AI Safety Institute, the companies will also engage in collaborative research on evaluating capabilities and safety risks, and on methods for mitigating those risks.

Large language model (LLM) providers OpenAI and Anthropic have signed individual agreements with the US AI Safety Institute under the Department of Commerce’s National Institute of Standards and Technology (NIST) in order to collaborate on AI safety research that includes testing and evaluation.

As part of the agreements, both Anthropic and OpenAI will share their new models with the institute before they are released to the public for safety checks.

“With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science...
