
DeepSeek R1: The AI Model Challenging OpenAI and Google

Chinese AI startup DeepSeek has released a new revision of its R1 model, marking a key moment in the open-source AI ecosystem. The updated model shows significant improvements in complex reasoning, coding, and logic problems, areas that remain difficult even for top-tier models.

Benchmark tests show that DeepSeek-R1 scored a staggering 87.5% on the AIME 2025 test, up from 70% for the prior version.

On the LiveCodeBench coding benchmark, the R1 model’s performance increased from 63.5% to 73.3%. On the incredibly difficult “Humanity’s Last Exam”, it more than doubled its score, moving from 8.5% to 17.7%.

DeepSeek's R1 is open-source under the MIT License, unlike the proprietary models of OpenAI and Google. Open-source licensing allows developers to freely use, modify, and deploy the model in their own applications.

This has enabled the AI community to build applications on the model without licensing fees, which could stimulate innovation at a lower cost. A sketch of what that looks like in practice follows below.
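As an illustration, here is a minimal sketch of how a developer might load one of the openly released R1 checkpoints with the Hugging Face transformers library. The model ID (a distilled variant small enough for a single GPU) and the generation settings are assumptions for demonstration, not an official recipe; the full R1 model is far larger and is typically served through a dedicated inference framework.

```python
# Minimal sketch: running an open DeepSeek-R1 distilled checkpoint locally.
# Model ID and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed distilled variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick an appropriate precision automatically
    device_map="auto",    # place weights on available GPU/CPU
)

# Ask a simple reasoning question using the model's chat template.
messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Strip the prompt tokens and print only the newly generated answer.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights are MIT-licensed, the same checkpoint can be fine-tuned, quantized, or embedded in a commercial product without a separate licensing agreement.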

It is also a relatively efficient model. DeepSeek's previous models were trained in an eye-popping 55 days on approximately 2,000 GPUs at a cost of $5.58 million, considerably below the cost usually associated with training comparable models in the U.S.

The launch of DeepSeek-R1 highlights the increasing competition in the open-source artificial intelligence landscape, putting pressure on larger players like OpenAI and Google. Its enhanced capabilities and accessibility could shape the path of future AI development toward open-source innovation.
