Hello, Codeforces!
I would like to invite you to participate in a new csacademy contest. Round #9 will take place on Tuesday, July/26/2016 16:00 (UTC). I will be one of the problem setters of this round, together with mgch and the CSA team. This round will also feature some cash prizes, as follows:
- First place: 125$
- Second place: 100$
- Third place: 75$
- Fourth and fifth place: 50$
- Two more special prizes of 50$ each. The criteria for the special prizes have not been chosen yet, but we will make them public after the contest.
This round will also feature problem statements in Russian. Many thanks to Yury_Bandarchuk for doing all the translation work!
Contest format:
- You will have to solve 7 tasks in 2 hours and 30 minutes.
- There will be full feedback throughout the entire contest.
- Tasks will not have partial scoring, so you need to pass all test cases for a solution to count (ACM-ICPC-style).
- Tasks will have dynamic scores. Depending on the number of users who solve a problem, its score will vary between 100 and 1000.
- Besides the score, each user will also get a penalty that is going to be used as a tie breaker.
About the penalty system:
- Computed using the following formula: the minute of the last accepted solution, plus a penalty for each solved task. The penalty for a solved task is equal to log2(no_of_submissions) * 5 (see the worked example after this list).
- Solutions that don't compile or don't pass the example test cases are ignored.
- Once you solve a task you can still resubmit. All the following solutions will be ignored for both the score and the penalty.
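To make the penalty formula concrete, here is a small, unofficial worked example; the function and variable names are mine, not CS Academy's:

```cpp
#include <cmath>
#include <cstdio>

// Unofficial sketch of the penalty formula described above.
// Per-task penalty: log2(number of counted submissions for that task) * 5.
double taskPenalty(int submissions) {
    return std::log2(submissions) * 5.0;
}

int main() {
    // Example: the last accepted solution was sent at minute 97, and the two
    // solved tasks took 1 and 3 counted submissions respectively.
    double total = 97
                 + taskPenalty(1)    // log2(1) * 5 = 0
                 + taskPenalty(3);   // log2(3) * 5 = about 7.92
    std::printf("total penalty: %.2f\n", total);  // prints about 104.92
    return 0;
}
```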
The CSA team still recommends using an updated version of Google Chrome. If you find any bugs please email them at [email protected]
hoping to have a good problem set
They always do :)
Wow, nice... but isn't it a bit dodgy to announce the 'special criteria' after the contest? :P
"Special criteria" just means anything that's impressive to the judges. It's meant to be subjective and should be considered more of a bonus.
You can't really enumerate in advance all the criteria like:
If nothing really impresses us, then we'll just give it to the first place. :D
If nothing really impresses us, maybe we'll just give the prizes to random people who submit at least one solution.
Give it to Java coders, because I believe it was much harder for them to solve the Array Removal problem. Did anybody solve it in Java at all? Personally, I tried a ton of optimizations to make my Java solution pass the 1 second time limit, but it constantly exceeded it on test 10. After I rewrote my solution in C++, it got accepted and passed test 10 in 178 ms, although both versions had the same O(N log N) time complexity...
I solved the Array Removal problem in Java, and I was using a fairly heavy O(N log N) solution rather than the O(N * inverse_ackermann(N)) one from the editorial. But sure, C++ is basically always going to be faster; that is the risk of Java :)
Sure, but I very rarely face unfair time limits for Java on Codeforces or TopCoder...
Probably your solutions passed because you implemented some custom data structures. As for me, I tried to use only the standard TreeSet and TreeMap. In my C++ solution I replaced them with set and multiset respectively, and it passed easily...

The intended solution for this task was not O(N log N). We knew that some O(N log N) solutions would pass in C++ and maybe in Java, but as it was a medium-difficulty problem we decided not to be so strict about the time limits.
So our logic was like this: if you find the intended solution, you pass easily in both Java and C++. Otherwise, you can still pass, but at your own risk.
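For readers curious what a roughly O(N * inverse_ackermann(N)) approach could look like, here is a minimal sketch of the reverse-order DSU idea referenced in the comments above. The problem statement is not restated in this thread, so the task and input format below are my assumptions: an array of N non-negative values, the order in which positions are removed, and, after each removal, the maximum sum of a contiguous block of the remaining elements.

```cpp
// Sketch only: the exact statement and input format are assumptions, not taken
// from this thread. With non-negative values, the best contiguous sum among the
// remaining elements is always a whole maximal block, so DSU is enough.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    if (scanf("%d", &n) != 1) return 0;
    vector<long long> a(n);
    for (auto &x : a) scanf("%lld", &x);
    vector<int> order(n);              // assumed: 1-based removal order
    for (auto &p : order) { scanf("%d", &p); --p; }

    vector<int> par(n, -1);            // -1 = position not inserted yet
    vector<long long> sum(n, 0);       // block sum stored at each DSU root
    function<int(int)> root = [&](int x) {
        return par[x] == x ? x : par[x] = root(par[x]);
    };
    auto unite = [&](int x, int y) {
        x = root(x); y = root(y);
        if (x != y) { par[y] = x; sum[x] += sum[y]; }
    };

    // Process removals backwards: re-insert elements and merge with neighbours.
    vector<long long> ans(n);
    long long best = 0;
    for (int i = n - 1; i >= 0; --i) {
        ans[i] = best;                 // answer for the state after the i-th removal
        int p = order[i];
        par[p] = p;
        sum[p] = a[p];
        if (p > 0 && par[p - 1] != -1) unite(p, p - 1);
        if (p + 1 < n && par[p + 1] != -1) unite(p, p + 1);
        best = max(best, sum[root(p)]);
    }
    for (long long v : ans) printf("%lld\n", v);
    return 0;
}
```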
I think that if you knew an O(N log N) solution would pass in C++, the right thing to do was to relax the time limit a bit more so that the same solution would also pass in Java.
Dear admins, which solutions were you afraid would pass, so that you chose such strict constraints in Sum of Squares?))) Your choice doesn't seem reasonable to me. Could you explain, please?
The too strict limit for Sum of Squares and the strict time limit for Java in Dictionary Pagination were mistakes and we're sorry about them. We'll try to be more considerate in the future, it's unfortunately the style we've gotten used to in our internal competitions.
I think this applies to Array Removal as well, see my comment above.
FatalEagle's solution for the last problem is really interesting; do you plan to do something with such approaches in future contests?
It looks like it's a mistake to have full feedback for tasks that can be brute-forced and have a constant-sized input.
We worked out that, in the worst case, a user can extract 3-4 bytes of the input with every submission (one channel is using X MB of RAM, another is looping for Y ms and then returning a non-zero exit status :D).
If we ever use problems like this again, we'll probably only give feedback on pretests and just have 1-2 tests added at the end of the contest.
Why not reserve the right to mark it as WA because it does not solve the given problem, but only passes the tests? That's what should be done in this case and in case of cheating (e.g. two identical submissions).
UPD, counting downvotes minus replies: 10.
But sometimes code that is not completely correct (there exists a case on which it would give WA) still gets AC on the tests. You would have to mark that as WA too, because it doesn't solve the problem, right?
But then again, if you mark these nearly correct solutions as WA, it's unfair to the contestant (CS Academy uses ICPC rules), because he could have debugged the solution and gotten it AC. Even on CF they don't add tests after system testing, because it's not fun getting AC and then getting a WA. System tests should be final. Any further tests added should be for practice mode.
A good solution is to have only pretests for these problems.
You're talking about nearly correct solutions which do solve the described problem. I'm talking about solutions which solve a different problem: "guess the correct number for each triple of numbers in the input".
Just read what I wrote here using common sense instead of lawyering.
UPD, counting downvotes minus replies: 12.
You're right, we do need to reserve the right to disqualify sources based on our subjective criteria (e.g. sources that are almost identical, with just different variable names). Since we didn't explicitly say this in the contest rules, it would be wrong to retroactively enforce something like this; all we can do now is learn our lesson and have a more detailed rule-set. The way to go for problems with constant-sized input that can be brute-forced is to give feedback on pretests only.
Sure, just add it to the rules before the next contest.
Instead of pretests, you can also limit the info per submission -- e.g. if it fails on a non-sample test, then give only the verdict (WA/TL/RE...) without test number.
And speed up your system: I spent about 10 minutes trying to submit on F, with only 300 competitors.
But how do you know whether it solves the problem?
What if (as an example) there is a probabilistic solution to a problem which passes all the test cases 30% of the time? I submit once and get AC, so I think it's correct, but after the contest my solution is judged incorrect, so I didn't get the correct feedback during the competition. From what I heard from my team leader at IOI, this is why they "never take away 100 from a contestant after the contest is over", and it is the same here, isn't it?
Of course I agree something has to be done about gaming the submission feedback system for information, but I think it is the responsibility of the contest holder to make the problem and the system hard to game. (Which is not a new thing, of course; some examples are showing only the first incorrect submission, or randomizing the order in which test cases appear in the feedback. And of course the problem setter must always be thinking about strong test cases.)
I don't think punishing the contestant for using the system is the way to do it.
Regarding solutions that only pass x% of the time: if you can't use a fixed seed, that's your problem (see the short seeding sketch after this exchange). Rejudging should always be allowed by the rules, and it's explicitly allowed at the IOI, see this comment.
If the 100 was gained by taking advantage of a security hole in the grader on IOI, would you still be for not taking it away?
If a group of reds brainstorms ideas during a CF contest (and then do the implementation individually), that's also just using the system, is it not? :D
Eventually, you'll be able to use "using the system" as a euphemism for "cheating", in the spirit of Orwellian Newspeak. The line has to be drawn somewhere.
In the specific case of a probabilistic solution, yes, the contestant 'should' use a fixed seed. But my point was more that it is unfair to take away 100 from the contestant after the contest is over, because the contestant was misled into believing they had a correct solution, so they didn't get to fix it. (And the probabilistic solution was only an example; you could also consider solutions which are barely within the time limit.)
You can fix this by taking away the 100 during the contest with sufficient time remaining (which is an okay solution at the IOI and an undesirable one in timed contests), but my point is that it should be the contest setter's responsibility to prevent or fix these things.
And 'taking advantage of a security hole' or collusion is a strawman, thanks. Looking at the memory usage when you are given the memory usage is not the same as colluding when the rules say "don't collude".
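As a small aside on the fixed-seed point raised above, this is the kind of thing being suggested (my own illustration, not anyone's actual submission): seed the RNG with a constant so that a randomized solution behaves identically every time the same test is rejudged.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Fixed seed: the sequence of random numbers is the same on every rejudge,
    // so the verdict on a given test cannot flip between judgings.
    mt19937 rng(123456789);

    // Clock-based seed (commented out): behaviour may differ between rejudges
    // of the exact same test, which is the situation discussed above.
    // mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());

    uniform_int_distribution<int> dist(1, 6);
    for (int i = 0; i < 5; ++i) printf("%d ", dist(rng));
    printf("\n");
    return 0;
}
```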
I don't understand his approach; how did he guess the test cases? Please enlighten me! :)
He allocated exponentially different amounts of memory depending on several bits of k. This way he was able to recover each test with ~4 submissions.
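To illustrate the trick, here is a toy reconstruction of my own (not FatalEagle's actual submission, and it assumes the hidden test is a single integer k): a few bits of k are encoded into the amount of allocated memory, which the judge reports back in the submission feedback.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long k;
    cin >> k;  // assumed: the hidden test consists of a single integer k

    // Encode bits 0..3 of k into the memory footprint: allocate (value + 1)
    // blocks of 16 MB. The feedback shows memory usage per test, so reading
    // that figure recovers these 4 bits; repeating the submission with a
    // different bit offset reconstructs the whole number in a few tries.
    int bits = (int)(k & 15);
    size_t bytes = (size_t)(bits + 1) * 16 * 1024 * 1024;
    vector<char> leak(bytes, 1);   // fill it so the pages are actually committed

    // A non-zero exit status (runtime error verdict) can carry one extra bit,
    // as mentioned earlier in the thread.
    return leak[0];
}
```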
I could not understand the approach to Sum of Squares as mentioned in the editorial (the last part where we remove the third dimension from the DP). Can someone please elaborate?
I will try to rewrite the editorial in the coming days, with code snippets included.
Please add a little more detail on F too; explaining the state and the recurrence would be greatly appreciated.
I've rewritten the editorial for Sum of Squares and 101 Palindromes. I hope it's better now.
I am unable to comment on the blogpost on csacademy, and I don't know why.
Who were the winners of those extra 2 prize slots you had kept, and what were the criteria?
Besides that, in Array Removal we can also solve the problem with negative values, using ideas similar to Spoj-GSS3, with DSU only.
We fixed the comments bug.
Check out this blogpost for the winners.
Something is wrong with your testing system.
You can check my last ~20 submissions to problem F. They're identical, yet I got a bunch of different verdicts.
Some experiments show that the execution time can change by 50% without any apparent reason.
We're on it.