The GPT models have made great advances in competitive programming recently. I saw from Qingyu's blog earlier that the new o3 model claims to achieve a 2727 performance rating. However, this makes me increasingly uncertain about how this 'achieved rating' should be interpreted.
There is very little detail publicly available on how the benchmarks are computed. The only information I could find regarding their benchmark methodology is the following quote from the 'reasoning with LLMs' blog:
Finally, we simulated competitive programming contests hosted by Codeforces to demonstrate this model’s coding skill. Our evaluations closely matched competition rules and allowed for 10 submissions.
This does not really offer any detailed benchmarking methods. Given the limited details, I'm sceptical that a performance rating of 2727 can be interpreted as 'having a fair chance at solving problems close to 2700 rating', for the following reasons:
- Codeforces contests have a time-decay factor in the scoring system; human solvers at rating x usually only solve an x-rated problem in the latter half of the contest, earning less than 60% of the problem's maximum score;
- LLMs, on the other hand, might solve a problem very quickly if they can solve it at all. Therefore, the model doesn't need to solve as many problems as a human contestant to achieve the same 'performance rating' on Codeforces contests.
To quantify the above, I've collected some data from the recent Division 1 contests. For each contest, we take the score of the contestant whose performance rating is closest to 2727, and compute how many of the first n problems the LLM would need to solve to reach that score. The following assumptions are made:
- The LLM only solves problems in increasing order of difficulty (i.e. not a rainboy model);
- If the LLM is able to solve a problem, then it solves it on its first try, incurring no penalty for wrong submissions;
- If the LLM is able to solve a problem, then it solves it within the first minute, and therefore incurs no time-decay penalty. Note that relaxing this to, e.g., 5 minutes changes the outcome very little, as it only translates to a ~1% score decay.
The scores of the first n problems usually do not sum exactly to the score required for the performance rating. As a result, the requirements are reported in the format 'solving up to problem X with probability p', where p is the value that makes the score of the first X-1 problems plus p times the score of problem X equal to the required score.
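The computation above can be sketched as a few lines of Python. The problem scores below are made up for illustration; they are not the actual scores from any of the contests in the table.

```python
def threshold_problem(scores, required):
    """Given per-problem max scores (in increasing difficulty order)
    and the score required to match the target performance rating,
    return (index of threshold problem X, probability p) such that
    sum(scores before X) + p * scores[X] == required."""
    total = 0
    for i, s in enumerate(scores):
        if total + s >= required:
            p = (required - total) / s
            return i, p
        total += s
    return None  # required score exceeds the sum of all problems

# Example with hypothetical scores: the first four problems sum to 5000,
# so the remainder 984 must come from the fifth problem (worth 2500).
idx, p = threshold_problem([500, 1000, 1500, 2000, 2500, 3000], 5984)
print(idx, round(p, 3))  # threshold problem index 4, p = 0.394
```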
The results are illustrated in the following table:
| Round Name | Score Required | Problems Needed to be Solved | Rating of the Threshold Problem | Link to the Threshold Problem |
| --- | --- | --- | --- | --- |
| Codeforces Global Round 28 (Div 1+2) | 5984 | Solving F with 61.7% chance | — | 2048F |
| Codeforces Round 990 (Div 1) | 2492 | Solving C with 82.8% chance | 2100 | 2046C |
| Rayan Programming Contest 2024 — Selection (Codeforces Round 989, Div. 1 + Div. 2) | 7252 | Solving F1 with 50.1% chance | 2500 | 2034F1 |