Hi!
As many have noticed, problem ratings were sometimes assigned in a strange way that did not match expectations. For example, ratings for the hardest problems of Div. 3 rounds were often overestimated. This happened mainly because high-rated unofficial participants simply did not attempt such problems: the system saw that a highly rated participant had not solved the problem, and that pushed the problem's rating up. At the same time, taking into account only official participants is not entirely correct either, since ratings for difficult problems are sometimes determined more accurately by unofficial participants.
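To make the effect concrete, here is a minimal sketch of how such an inflation can arise. This is not Codeforces' actual formula; the Elo-style solve-probability model, the `estimate_difficulty` helper, and all the ratings below are assumptions chosen purely for illustration:

```python
# A minimal sketch (NOT Codeforces' actual formula) of an Elo-style
# problem-difficulty estimate. It illustrates how counting a strong
# participant's non-attempt as a failure inflates the estimate.

def solve_probability(user_rating: float, difficulty: float) -> float:
    """Elo-style chance that a user of this rating solves the problem."""
    return 1.0 / (1.0 + 10 ** ((difficulty - user_rating) / 400.0))

def estimate_difficulty(results: list[tuple[float, bool]]) -> float:
    """Binary-search the difficulty at which the expected number of
    solvers matches the observed number of solvers."""
    observed = sum(solved for _, solved in results)
    lo, hi = 0.0, 4000.0
    for _ in range(100):
        mid = (lo + hi) / 2
        expected = sum(solve_probability(r, mid) for r, _ in results)
        if expected > observed:
            lo = mid  # too many expected solvers: the problem must be harder
        else:
            hi = mid
    return (lo + hi) / 2

# Official Div. 3 participants: a hard problem solved only by the strongest.
official = [(1200, False), (1300, False), (1450, True), (1550, True)]

# Two 2400-rated unofficial participants who never opened the problem,
# but are recorded as "did not solve".
skipped = [(2400, False), (2400, False)]

print(round(estimate_difficulty(official)))            # plausible estimate
print(round(estimate_difficulty(official + skipped)))  # much higher estimate
```

Under this toy model, the two recorded "failures" by 2400-rated users force the estimated difficulty far above what the official results alone suggest, which is exactly the kind of distortion described above.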
Somewhere in the comments, I've read that problem ratings are set manually. Of course, this is not the case. The process is automated, but I trigger it manually (I will fix that at some point).
I changed the formulas for calculating problem ratings, and they now match expectations somewhat better. The new problem ratings are already available on the website. I don't think they are perfect (but I hope they are much better). If some ratings are obviously wrong, it would be great to see such examples in the comments.
Thanks!
UPD 1: Thank you for the examples of unexpected problem ratings. I'll try to fix them (I don't think it is possible to fix all of them without manual work) and will return with an update.
UPD 2 [May 2]: I made another attempt to adjust the coefficients and to account for some factors differently. The ratings have been recalculated again. I carefully went through most of the comments and posted the new ratings in replies. Things look a little better now. I'm afraid there are still issues with some problems. Try to find them and point them out in the comments. Thanks!