Computerworld August 30, 2024
Anirban Ghoshal

Under the agreements signed with the US AI Safety Institute, the companies will also engage in collaborative research on evaluating AI capabilities and safety risks, and on methods to mitigate those risks.

Large language model (LLM) providers OpenAI and Anthropic have each signed agreements with the US AI Safety Institute, housed within the Department of Commerce's National Institute of Standards and Technology (NIST), to collaborate on AI safety research, including testing and evaluation.

As part of the agreements, both Anthropic and OpenAI will share their new models with the institute for safety checks before releasing them to the public.

“With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science...
