Is it possible to hack authors' solutions?
Sometimes it is possible to hack the authors' solution, as seen here. However, it is generally rare, as there are many people (the coordinator and the testers) who read the problem and the solution and verify that they are correct.
Another way that authors ensure that the solutions are correct is by writing an exponential brute force, and checking their solutions against thousands of small cases. This is usually enough to catch most errors.
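This brute-force cross-checking is usually called stress testing. A minimal sketch of the idea, using a made-up example problem (maximum subarray sum; the problem and function names are illustrative, not from any particular contest):

```python
import random

def brute_force(a):
    # Slow but obviously correct reference: try every subarray.
    return max(sum(a[i:j]) for i in range(len(a))
               for j in range(i + 1, len(a) + 1))

def fast(a):
    # Intended solution: Kadane's algorithm.
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def stress(iterations=10000, max_n=8, max_abs=10):
    # Thousands of tiny random cases: small inputs are where
    # corner-case bugs are easiest to trigger and to debug.
    for _ in range(iterations):
        n = random.randint(1, max_n)
        a = [random.randint(-max_abs, max_abs) for _ in range(n)]
        expected, got = brute_force(a), fast(a)
        assert expected == got, f"mismatch on {a}: brute={expected}, fast={got}"

stress()
```

Keeping the inputs small is the point: a failing case with eight elements can be debugged by hand, while a failing case with $$$10^5$$$ elements usually cannot.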
So will the contest be unrated if this happens?
It depends on how much it affects the contest. For example, if the issue is fixed quickly enough (by changing the solution or the problem statement), or if the error in the solution only appears on system tests and a correct solution exists, the contest will usually remain rated.
Authors also write reference solutions with deliberately added bugs, to make sure the tests actually catch those bugs.
For Codeforces, the problem selection procedure is high quality, so the chance of this happening is very low. What follows is only from my experience with problemsetting and coordinating for my university contests (about a dozen contests and roughly a hundred problems).
Whenever I coordinate contests for my university CP club, I ask problemsetters to submit a formal mathematical proof of correctness of the intended solution, as well as of the implementation (line by line, if it isn't standard), before setting the problem on Polygon: for instance, proofs of the claims needed for greedy, DP, and binary search problems, or structural properties for graph problems. Every problem I have approved using this strategy has had a correct solution.
In my experience, the more formal (and low-level) the proofs are, the less likely one is to make mistakes and miss corner cases. For instance, while a high-level proof might miss the fact that there are no cycles of length $$$\le 2$$$ in a simple undirected graph but there are such cycles in directed graphs, a low-level proof will definitely not miss that detail. Maybe this arises out of my mathematical inclination, but I've observed that the more "machine-verifiable" a plausible proof seems, the more likely it is to be correct.
An interesting thing we've been considering is formalizing the test-generation pipeline (since the judge has the responsibility to minimize false positives and eliminate false negatives), which is overlooked by quite a few new problemsetters. Testing against a brute-force solution also helps detect an incorrect author's solution, so it is imperative that the tests comprise a diverse enough subset of all permissible inputs to expose false positives and negatives in most sane approaches, including the author's solution. Some of the things we currently think about while making tests are as follows, but they are just heuristics and not totally formal:
If there are some more test-generation ideas that I missed out on, or some strategies that can "formalize" this procedure to a good extent, please let me know!
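As one concrete baseline, a generator can mix the usual case families: exhaustive small inputs, random inputs at several scales, and extremes. This is a hedged sketch (the case families and the array-input format are my illustration, not a complete or canonical list):

```python
import itertools
import random

def gen_tests(max_exhaustive_n=3, max_value=2, n_random=5, big_n=10**5):
    """Yield test inputs (integer arrays) for a hypothetical array problem."""
    # 1. Exhaustive small cases: deterministically cover every tiny input,
    #    where corner-case bugs are most likely to hide.
    for n in range(1, max_exhaustive_n + 1):
        for a in itertools.product(range(max_value + 1), repeat=n):
            yield list(a)
    # 2. Random cases at several scales, so both mid-sized and
    #    near-maximal behavior gets exercised.
    for scale in (10, 100, 1000):
        for _ in range(n_random):
            n = random.randint(1, scale)
            yield [random.randint(0, 10**9) for _ in range(n)]
    # 3. Structured extremes: maximum size, all-equal, sorted, reverse-sorted.
    yield [10**9] * big_n
    yield list(range(big_n))
    yield list(range(big_n, 0, -1))
```

The exhaustive family doubles as the input set for brute-force cross-checking, while the extremes target time-limit and overflow issues that random data rarely hits.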