So DeepSeek launched their R1 model, which they claim is on par with OpenAI's o1.
More details here: Link to X post
This is a big update for CP, as the model is open sourced and the chat can be accessed for free. As for the R1 model itself, I think it's great — particularly the chain of thought, which reads like a human's reasoning.
I tested it out on 6 problems in total: 3 problems rated 1700, 1 rated 1800, 1 rated 1900, and one unrated problem from a recent contest (Div. 2, Round 996).
This is what happened:
- Problem 1 — It managed to solve this in the first attempt.
- Problem 2 — Again managed to solve in the first attempt.
- Problem 3 — For this problem, it took an extra prompt.
- Problem 4 — Unable to solve after 5 attempts.
- Problem 5 — Unable to solve after 4 attempts.
- Problem 6 — Unable to solve after 4 attempts.
I'm curious to hear your thoughts on this model: is this something that could affect contests in the near future?