Computerworld January 23, 2025
With its MIT license and ultra-low costs, DeepSeek could be an appealing option for enterprise adoption.
Chinese AI developer DeepSeek has unveiled an open-source version of its reasoning model, DeepSeek-R1, featuring 671 billion parameters and claiming performance superior to OpenAI’s o1 on key benchmarks.
“DeepSeek-R1 achieves a score of 79.8% Pass@1 on AIME 2024, slightly surpassing OpenAI-o1-1217,” the company said in a technical paper. “On MATH-500, it attains an impressive score of 97.3%, performing on par with OpenAI-o1-1217 and significantly outperforming other models.”
On coding-related tasks, DeepSeek-R1 achieved a 2,029 Elo rating on Codeforces, outperforming 96.3% of human participants in the competition, the company added.
“For engineering-related tasks, DeepSeek-R1 performs slightly better than DeepSeek-V3 [another model from...