Hello! Today I virtually participated in BalticOI 2015 day 1 and had quite an unpleasant surprise with task 1 (bowling). I had two solutions for it: one that performed around 2e8 operations in the worst case but also in the best case, largely independent of the subtask, and one that could reach maybe 2e9 operations but with a very good constant factor. The first one scored 0 on CSES, while the second one scored 42. After the contest ended, I wanted to see the running times of my sources and submitted them on oj.uz, and guess what: the first one scored 39 points, while the second one scored 82, failing on test 96. The exact same source codes.

Now, of course, I wonder what my score would have been during a real contest. And I can't stop thinking: "should I count this problem as 82 or as 42 when evaluating my virtual participation?" I suspect this is something that has happened to many coders over the years.
I see a very simple solution to this problem: adapting time limits to each online judge's speed. For instance, take the model solution's running time on the contest server and compute the ratio between the time limit and that running time; then set the time limit on the online judge so that the same ratio holds there. In the example I gave, assuming the contest was held on CMS with a 1 s time limit (I'm just guessing, correct me if I'm wrong), the TL should probably have been a bit larger on oj.uz (say 1.2 s), since in my experience it runs slightly slower (again, correct me if I'm wrong), and maybe 1.4 or 1.5 s on CSES, since it is clearly slower. If every OJ did this, virtual participations would be much more meaningful, and there wouldn't be huge 40-point differences between two judges.
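To make the rule concrete, here is a minimal sketch of the computation I have in mind, in Python. The function name and all the numbers are hypothetical, chosen only to match the 1 s → 1.2 s example above; real judges would measure the model solution's time on their own hardware.

```python
def scaled_time_limit(contest_tl: float,
                      model_time_contest: float,
                      model_time_judge: float) -> float:
    """Scale the contest time limit so that the ratio
    time_limit / model_running_time is the same on the new judge.

    contest_tl         -- time limit on the original contest server (s)
    model_time_contest -- model solution's running time there (s)
    model_time_judge   -- model solution's running time on the online judge (s)
    """
    ratio = contest_tl / model_time_contest
    return ratio * model_time_judge

# Made-up numbers: a 1 s contest TL with a model solution that ran in
# 0.5 s on the contest server and runs in 0.6 s on the online judge
# yields a 1.2 s time limit there.
print(scaled_time_limit(1.0, 0.5, 0.6))  # -> 1.2
```

The point of keeping the ratio rather than the absolute limit is that it preserves the intended slack between the model solution and the cutoff, whatever the hosting hardware.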
Thanks for reading, and if you are an OJ admin, please consider this when setting your time limits!