It generally seems that many GMs and even LGMs (and tourists!) still participate unofficially in all rated ranges (even div4 contests for some reason). It also seems that the hardest problems of many div2 rounds are way harder than anything a reasonable "actual div2 participant" should be able to solve (hello, 3000+ rated problems in div2).
However there is very little gamification around such unofficial participation:
- Unrated, duh.
- Under your profile, the rank is shown as "—"
- So can't even track your historical best rank and compete with that etc.
- Even in the standings of a specific contest, there is only one checkbox, "show unrated". Virtual participants are lumped into the same category as official-but-out-of-bounds div1 contestants, so if you want to hide that idiot gray virtual participant who submitted somebody else's code for all problems at 00:00 and took first place, you also hide all div1 participants (including yourself).
- So can't even reliably check what rank you got after some time.
Now yes, even as an unofficial participant, you can still participate and enjoy all the problems for their intrinsic beauty and for the joy of problem solving. And yes, you can still compete, obviously you still see the scoreboard, and it might take some time for that gray virtual participant to appear on the top, so it's actually perfectly valid for quite a while too. And some people keep saying you shouldn't care much about rating anyway.
But that doesn't mean it couldn't be made even more fun. Just curious — how interested is the community in either a separate rating that doesn't care about rating bounds or perhaps that partially rated contests idea?
(In theory I think this could also be a browser extension and not necessarily an official CF feature. But rating is kinda like currency — it feels more desirable the more people care about it. So a third party extension might not feel quite the same. Also not sure how much it would spam the API or how easy it would be to implement reliably.)
Please thumb your opinions (mutually exclusive):
- Please have a separate rating that ignores rating bounds
- Please have partially rated contests as described by TheScrasse here
- Maybe some minor improvements would be cool but don't add/change ratings
- Please don't waste time doing any of the above
- I'll never be Div.1 so I don't care
Agreed! It would be great if unofficial participation were separated from virtual participation and our unofficial ranks were displayed on our contests page. Now that we have unrated participation as well, it makes sense to club it with the unofficial rank list.
Make separate rating changes for every division, so that in every division participants have to compete within their division only. This can be done for every contest.
IDK about separate rating.
The point of unrated participation is to remove the competition, and with a separate rating we still have that competition going; this separate rating will likely become as important as, or even more important than, the main rating if it gets added.
I don't think such a rating would ever come close to being as important as the real rating, because it measures an entirely different skillset. Just think about competing against an LGM in a 3h div1 vs in a div4 contest. In the former case you might get stuck and feel entirely powerless to solve any of the next problems and just have them solve 3 more problems than you, but in the latter case they might just be twice as fast but you'll both likely solve all the problems during contest. (basically SOLVE vs FAST from that blog)
Also, why is the point of unrated participation to "remove competition"? I think the main point of rating bounds is simply that you literally can't let LGMs be rated in div4 contests, because then the rating would start reflecting speed instead of problem-solving ability. All the other rating bounds similarly protect the integrity of the rating system; they are just less extreme examples on the same spectrum. (Another issue is the originality and uniqueness of the problems, but that again comes back to protecting the integrity of the rating system — rating should reward problem solving, not having memorized something you've seen before.)
We are also talking about people who chose to unofficially participate in the contest — if they preferred to "remove competition", they could've just solved the problems a bit later out of contest instead (with distinct advantages like doing it whenever they preferred and perhaps only solving a subset of problems most interesting to them).
Create a plag check for virtual participation and these grays won't bother anyone.
TLDR: we should only make a rating that does not fall.
I believe I should have some say given that I am frequently a player in div2/3/4:
A rating system that is an actual rating and can rise and fall is a horrible idea. It would force us to commit completely to a round.
The reality is that div2/3/4 rounds are less polished than the div1 counterparts, and they are never intended to balance for div1 participants. They never should. They also may not always entertain div1 participants. Sometimes the div2F is just not fun, and it is only fun to play unofficially if I know I can give up at any point (and of course forfeit any achievements in this round). A rated variant is way too much pressure for something of this kind. Sometimes, the whole round is bad (at least to my liking).
Having this will also accidentally hurt people who just want to submit one problem in a round unofficially; it means you would have to explicitly register for the speed rating.
Speed rating is also far more volatile than the regular one, so rating changes must be lower than in the usual version. Speed rounds are mostly hit or miss: either you solve a problem in a second because you already know all the prerequisites and tricks (and then just implement it), and if you do that 6 times you get a good result; or you get stuck on one problem, which usually costs about 10 minutes, with the penalty multiplied by how many problems are left, so it is most disastrous on D2C/D.
The only way a rating system could work is a rating that does not fall. It might sound silly if we keep thinking of it as a "rating", but it is really a numeric aggregate of achievements. Simply put, it is usually a weighted average of your best performances.
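(Purely as an illustration of the "weighted average of your best performances" shape, here is a minimal sketch of a non-falling aggregate; the slot count and decay factor are made-up parameters, not anything official.)

```python
def achievement_rating(performances, slots=10, decay=0.9):
    """Non-falling aggregate: the best `slots` performances fill fixed
    weight slots (weights decay geometrically); empty slots count as 0.
    A new performance can only fill an empty slot or displace a smaller
    value, so the result never decreases."""
    weights = [decay ** i for i in range(slots)]
    best = sorted(performances, reverse=True)[:slots]
    return sum(w * p for w, p in zip(weights, best)) / sum(weights)


history = [2600, 2750, 2500]
a = achievement_rating(history)
b = achievement_rating(history + [1900])  # a weak round still cannot hurt you
assert b >= a
```

With few contests the aggregate starts low and climbs as the slots fill, which seems acceptable for an achievement-style number as opposed to a skill estimate.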
I once considered making a rating system of this sort: mapping div2 rank 1 to 3200, rank 2 to 3100, etc. as performance. Unfortunately this is very bad: rank 1 becomes almost immediately unachievable for many LGMs (myself included) as soon as tourist participates in the round, so a nontrivial mathematical calculation is needed, and at that point I gave up on making such a formula.
You reminded me how many times I've quit contests halfway through as soon as I didn't like one of the problems. I also agree that it happens way more often in div2/3/4. Perhaps a monotone achievement aggregate would indeed be best.
Anyhow, on designing a performance calculator, I think this would be OK in theory:
This way, whether or not a tourist participates shouldn't affect you too much (unless you rank higher), because you'd be expected to lose anyway based on rating difference.
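(My guess at how such a tourist-invariant performance could be computed, not necessarily what the calculator below actually does: binary search for the rating whose expected rank, under the usual Elo-style (10, 400) win probability, equals the actual rank. A minimal sketch:)

```python
def win_prob(ra, rb):
    """Elo-style probability that a player rated `ra` beats a player rated `rb`,
    with the usual (10, 400) constants."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))


def expected_rank(x, opponent_ratings):
    """Expected rank of a hypothetical participant rated `x`:
    1 plus the expected number of opponents who beat them."""
    return 1.0 + sum(win_prob(r, x) for r in opponent_ratings)


def performance(actual_rank, opponent_ratings, lo=-2000.0, hi=6000.0):
    """Bisection for the rating whose expected rank equals the actual rank
    (expected_rank is decreasing in x, so this converges)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if expected_rank(mid, opponent_ratings) > actual_rank:
            lo = mid  # guess too weak: expected rank is worse than the actual one
        else:
            hi = mid
    return (lo + hi) / 2.0


# Toy field of opponents; finishing 2nd behind the 3800 gives a perf a bit above 3100.
others = [3800, 2400, 2100, 1900, 1600]
print(round(performance(2, others)))
```

With this definition, an extra 4000-rated participant ahead of you raises your actual rank by 1, but also raises the expected rank of any realistic rating by almost exactly 1, so the result barely moves.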
I made this calculator. It's annoyingly slow though, maybe that whole running in the browser without a server thing wasn't such a great idea...
It seemed pretty good, in fact I am not sure if it can be done any better:
It seems that performance rating calculated this way must be interpreted carefully: I am quite sure an X performance in a usual div1 round is a much higher accomplishment than an X performance here (looking mostly at the top of the standings). As long as we understand that these performances are on a separate scale, though, I think the numbers are perfectly sensible. (It is also worth noting that performance computed this way may not have the log-coefficient 400 of usual Elo; just another point about the different scale.)
This blog claims the win probability formula has constants 10 and 400. If I picked different constants, then I'd change the definition of win probability.
I assumed that one important property is that the perf should be tourist-invariant. That is, your perf should stay roughly the same whether or not tourist participates. That shouldn't be possible without taking ratings into account at least to some degree.
The current implementation should achieve this, because the formula mentioned above gives only ~0.3% chance to win against tourist even at 3000 rating.
$$$1 / (1 + 10^{(4000 - 3000)/400}) \approx 0.00315$$$
I agree, I think top perf is higher and bottom perf is lower than it should be compared to official perf right now. Not sure why.
It's not super far off, but indeed quite a bit (e.g. from round 2006):
this calculator -> (old_rating + 4 * rating change)
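(For reproducing that comparison, here is a rough sketch that pulls the official rating changes via the public Codeforces API and computes the old_rating + 4 * rating_change proxy from the line above; contestId 2006 is assumed to be the round referenced, and the proxy itself comes from this thread, not from an official definition of performance.)

```python
import requests

# Official rating changes for the referenced round.
resp = requests.get(
    "https://codeforces.com/api/contest.ratingChanges",
    params={"contestId": 2006},
    timeout=10,
)
rows = sorted(resp.json()["result"], key=lambda r: r["rank"])

for row in rows[:10]:
    delta = row["newRating"] - row["oldRating"]
    approx_perf = row["oldRating"] + 4 * delta  # the rough proxy used above
    print(f'{row["rank"]:>4} {row["handle"]:<24} {approx_perf}')
```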
For the base 400, what I mean is that under the two assumptions
Then it would result in speed ratings that predict win probability with constants that are not necessarily (10, 400) but something else (intuitively, something that says it is more luck-dependent). This is OK: I think the first assumption is critical and the second is proven to be mostly true. It may just feel weird if we are used to a (10, 400) scale, but it is fine once I remind myself this is a different scale.
As for iterations: I think it should definitely be based on CF ratings, but if at some point a speed rating is available (from previous contests), then we may be able to take $$$p \cdot (\text{usual CF}) + (1-p) \cdot (\text{speed})$$$ instead, with $$$p$$$ somewhere around 0.5–0.8. Just an idea though; I am not sure whether this makes the rating more or less useful.
About the difference with the official perf: what you said seems to be how Codeforces implemented it. Just a total guess, but maybe you accidentally implemented the same bug as in First time tourist 4000.
While we're on custom achievement systems, I'd also like to suggest a factor of $$$0.994$$$, applied for each contest skipped by a user. That's kind of a forgiving streak measure, which decreases only slowly: 10 skips would move a user from $$$1700$$$ to roughly $$$1600$$$.
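(Quick check of that arithmetic:)

```python
DECAY = 0.994          # suggested per-skipped-contest factor
rating = 1700
for _ in range(10):    # 10 skipped contests
    rating *= DECAY
print(round(rating))   # 1601, i.e. roughly 1700 -> 1600
```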
Maybe you can just make it so that the option under "unrated", when you look at people's ratings, displays a score that takes unrated contests into consideration. This is a great idea actually.