Computerworld August 30, 2024
The agreements signed with the US AI Safety Institute also commit the entities to collaborative research on evaluating capabilities and safety risks, and on methods to mitigate those risks.
Large language model (LLM) providers OpenAI and Anthropic have signed individual agreements with the US AI Safety Institute, housed within the Department of Commerce’s National Institute of Standards and Technology (NIST), to collaborate on AI safety research, including testing and evaluation.
As part of the agreements, both Anthropic and OpenAI will share their new models with the institute for safety checks before releasing them to the public.
“With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science...