This contest starts in less than 20 minutes, and I found no blog about it, so I decided to write one. Here is the link:
Code Jam 2022 Round 2 link
Last year I ranked 1672 and failed to qualify for Round 3. This year I feel I have improved a lot, and I hope I can get a T-shirt. Good luck everybody, and any discussion is welcome.
(Update: I finally ranked 484 and won a T-shirt, so excited, see you in Round 3!)
Imagine qualifying to round 2 :(
A master didn't qualify for Round 2? Then I can relax as an expert who didn't qualify.
As a pupil, I qualified to R2 :)
You had 1761 points. It doesn't count :)
I think I should try every problem's subtasks before trying to solve a full problem next time...
The bitmask dp and n^2 dp brutes for Jelly and I, O Bot respectively are practically free 21 points.
I opened last problem with 10 mins left and somehow coded the subtask (and fixed a WA) in the last 8 mins...
How does the DP brute for I, O Bot work? I only thought about it for the last 10 minutes.
Realize that the positive and negative positions are independent problems, so assume all positions are positive. Sort a list of zeroes and a list of ones by distance. Any operation you do is beneficial to apply to the closest remaining numbers. So the DP is a[i][j] = the minimal cost to deliver the i closest zeroes and the j closest ones. From every position there are at most 5 transitions.
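For concreteness, here is a minimal sketch of that DP for one side (the function name is mine, and I'm assuming the I, O Bot rules: the bot carries at most one ball of each shape per trip, and paying C lets two same-shaped balls share a trip):

```python
def min_cost_one_side(zeros, ones, C):
    """Sketch of the O(n^2) brute for one sign of positions.
    zeros / ones: distances of 0-shaped / 1-shaped balls from the origin."""
    zeros, ones = sorted(zeros), sorted(ones)
    n, m = len(zeros), len(ones)
    INF = float("inf")
    # dp[i][j] = minimal cost to deliver the i closest zeros and the j closest ones
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for i in range(n + 1):
        for j in range(m + 1):
            cur = dp[i][j]
            if cur == INF:
                continue
            if i < n:            # deliver the next zero alone
                dp[i + 1][j] = min(dp[i + 1][j], cur + 2 * zeros[i])
            if j < m:            # deliver the next one alone
                dp[i][j + 1] = min(dp[i][j + 1], cur + 2 * ones[j])
            if i < n and j < m:  # one zero and one one share a trip
                dp[i + 1][j + 1] = min(dp[i + 1][j + 1],
                                       cur + 2 * max(zeros[i], ones[j]))
            if i + 1 < n:        # two zeros share a trip, pay C to re-shape one
                dp[i + 2][j] = min(dp[i + 2][j], cur + 2 * zeros[i + 1] + C)
            if j + 1 < m:        # two ones share a trip, pay C to re-shape one
                dp[i][j + 2] = min(dp[i][j + 2], cur + 2 * ones[j + 1] + C)
    return dp[n][m]
```

The five `min` updates are exactly the five transitions mentioned above.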
Basically my performance during this round:
This was before the final standings :)(
Was enough to qualify though!
Oh, the $$$O(n^2)$$$ solution failed A.
I'm disqualified and my day is ruined.
sobs
I think a lot of people got AC with O(n^2) on the 3rd test set in A, because the test set did not represent the actual worst case.
The analysis for test set 1 in "Saving the Jelly" is incredibly terse.
"With N≤10, recursively enumerating all N! orders that Mr. Jolly could call out the children's names is sufficient. The total complexity is roughly O(N!×N)."
How do we handle sweets that are at equal distance from a given child? Backtracking? I guess the worst case is probably all children having 2 sweets at equal distance, which is just a factor of 2^(N/2) ~ 32. I wonder if a Python solution would pass.
You can fix any order of equally distant sweets (and try only the first in that order). It follows from the proof of the correct solution for the large test set. It would make sense to mention that in the analysis, though.
I thought it is provable only if one knows how to solve the large test set. That's weird...
I agree, I don't understand how to provably solve small without solving also large :)
I did bitmask DP for C small in the last half hour after bricking very hard. The complexity is $$$O(4^n n \sqrt{n})$$$. The dp keeps track of which subsets of kids and sweets are left.
The direct bitmask DP brute ($$$dp[childmask][candymask] = \text{is reachable}$$$) that might appear to be $$$4^n \cdot n^2$$$ actually has a much smaller constant factor and works within the time limit.
Since $$$popcount(childmask) = popcount(candymask)$$$ for any valid state we can reach, the actual complexity is something like
$$$2^{2n + 1} + \sum_{i=0}^{n} {n \choose i} \cdot {n + 1 \choose i} \cdot i \cdot n$$$
which is around $$$2 \cdot 10^7$$$ operations for $$$n = 10$$$. So even with an increased constant factor this runs comfortably in 10s for $$$T=100$$$.
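For reference, a minimal sketch of that brute, written as a memoized recursion over (served children, taken non-blue sweets); the function and helper names are mine, and it assumes sweets[0] is the blue candy and that ties may be broken in the children's favour:

```python
from functools import lru_cache

def small_jelly_feasible(children, sweets):
    """dp[childmask][candymask] brute: can we reach a state where all children
    are served and only the blue sweet (sweets[0]) remains?"""
    n = len(children)

    def dist2(a, b):  # squared Euclidean distance is enough for comparisons
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    @lru_cache(maxsize=None)
    def reachable(child_mask, sweet_mask):
        # child_mask: served children; sweet_mask: taken non-blue sweets (bit s-1 for sweet s)
        if child_mask == (1 << n) - 1:
            return True
        remaining = [0] + [s for s in range(1, n + 1) if not sweet_mask >> (s - 1) & 1]
        for c in range(n):
            if child_mask >> c & 1:
                continue
            best = min(dist2(children[c], sweets[s]) for s in remaining)
            # the called child walks to a nearest remaining sweet; it must not be the blue one
            for s in remaining:
                if s != 0 and dist2(children[c], sweets[s]) == best:
                    if reachable(child_mask | 1 << c, sweet_mask | 1 << (s - 1)):
                        return True
        return False

    return reachable(0, 0)
```

Reconstructing the actual calling order is then just a matter of recording the (child, sweet) choice at each accepting step.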
I did the same thing, though I thought it was $$$O(4^{n}n^2)$$$. How did you get the square root in the complexity?
I did the same thing, but I also roughly analyzed the runtime benefit of only reaching states that have the same number of kids as sweets. I calculated:
Nice, thank you!
Thanks for the pointer, that makes sense. Also, I wonder how to do it in O(N!xN). My implementation pre-computes the ordered list of sweets by distance for each child. Then, for each child in each permutation, it iterates over the sweets until it finds the first one that has not been "taken" yet. This is O(N!xN^2) and TLEs with Pypy.
I did the same, which didn't pass. I then tried pruning, which passed the first set. Code
I was also confused after reading this, but one of my friends gave an explanation that sounds intuitively correct to me.
Suppose there exists a participant $$$P_1$$$ for whom two chocolates $$$a$$$ and $$$b$$$ are equally close; both are equivalent choices for $$$P_1$$$, but suppose there exists some remaining participant $$$P_2$$$ for whom only $$$a$$$ is closer than the blue candy.
In this case, it's fine if we wrongly say it's impossible, since there will be some other permutation where $$$P_2$$$ comes before $$$P_1$$$ and this isn't a problem. More formally, the permutation where $$$P_1$$$ is moved to just after $$$P_2$$$ in the initial permutation should suffice.
The first problem was the worst among all Code Jam problems this year!
No, the second was worse.
Analysis of the third one was worse.
Most GCJ Finals problems last year were worse than the first one today :)
I feel the GCJ problems are getting worse and worse these years. Enjoy them.
Adhoc Jam
KEKL, got AC on B thanks to OEIS (https://oeis.org/A022846)
If ans[r] is the number of black cells in the "incorrect" circle of radius r, then ans[r] - ans[r-1] = 4*A022846[r]; the number of black cells in the "correct" circle is easy to count in O(n).
How? :)
Just brute-force x from -R to R, write the inequality, and move y to the left-hand side. Then you will have y <= sqrt((R+0.5)*(R+0.5) - x*x).
Where does the 0.5 come from?
The round function from the pseudo-code rounds a number down to <= X whenever it is <= X + 0.5.
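To make that concrete, here is a small sketch of the O(R) count (the function name is mine; it assumes, per the rounding rule above, that a cell (x, y) is black in the correct filled circle iff x*x + y*y <= (R + 0.5)^2):

```python
import math

def black_cells_correct_circle(R):
    # Multiply x*x + y*y <= (R + 0.5)^2 by 4 so everything stays integral:
    # 4*x*x + 4*y*y <= (2*R + 1)^2, hence |y| <= isqrt((2R+1)^2 - 4x^2) // 2.
    total = 0
    for x in range(-R, R + 1):
        y_max = math.isqrt((2 * R + 1) ** 2 - 4 * x * x) // 2
        total += 2 * y_max + 1  # y ranges over -y_max .. y_max
    return total
```

Since 4x^2 + 4y^2 is even and (2R+1)^2 is odd, the boundary case of the rounding rule never actually occurs.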
Holy shit, totally forgot oeis
I got the same sequence without OEIS, but didn't know how to use it during the contest.
Ugly A and B. And problems were too difficult to properly choose the top 1000.
What's the brute force for $$$N \leq 10$$$ in C? Trying every order of children is not enough because of ties. Do we just say that there can't be many ties? What's the proof? I implemented $$$O(2^{2n})$$$ dp. EDIT: solved in comments above https://codeforces.net/blog/entry/102849?#comment-912506
I think I have a different bash than the one described in the comments? I iterated over all possible pairings of children to sweets. Then I made a graph based on whether each child had to be before another child based on their distances and ran topological sort. Overall this takes $$$\mathcal O(N^2 \cdot N!)$$$.
You can try every possible assignment of children to sweets and then reduce the problem to a toposort problem, where you add an edge from a sweet to a child if that sweet is closer to the child than the child's assigned sweet.
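A minimal sketch of that brute, with hypothetical names and the assumption that ties may be broken in the children's favour (so only strictly closer sweets force an ordering):

```python
from itertools import permutations

def brute_force_small(children, blue, sweets):
    """Try every assignment of the N children to the N non-blue sweets; accept it
    iff the 'must be called earlier' constraints form an acyclic graph."""
    n = len(children)

    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    for assign in permutations(range(n)):          # assign[i] = sweet given to child i
        # the blue sweet never disappears, so it must not be strictly closest to anyone
        if any(d2(children[i], blue) < d2(children[i], sweets[assign[i]]) for i in range(n)):
            continue
        # child j must be called before child i if j's sweet is strictly closer to i
        before = [set(j for j in range(n) if j != i and
                      d2(children[i], sweets[assign[j]]) < d2(children[i], sweets[assign[i]]))
                  for i in range(n)]
        indeg = [len(before[i]) for i in range(n)]
        order = [i for i in range(n) if indeg[i] == 0]
        for i in order:                             # Kahn's algorithm
            for k in range(n):
                if i in before[k]:
                    indeg[k] -= 1
                    if indeg[k] == 0:
                        order.append(k)
        if len(order) == n:
            return [(c, assign[c]) for c in order]  # valid calling order + sweets
    return None                                     # IMPOSSIBLE
```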
Ugh failed C because my construction method was bad.
But if I keep the bad construction method (basically just look through all people and assign each one its minimum edge), and when I detect a bad state I just shuffle everything and redo the matching, it passes.
(The bad construction worked for small :( so I thought it might work, since I was sorting edges by distance, and I thought the algorithm would do something to ensure the construction works, but that was clearly wrong lol)
Ah, even just redoing the matching seems to work, no need to shuffle. I think it's because the matching algorithm finds the lexicographically minimal matching, and since I sorted by distance there's going to be at least one edge that's a minimum in the cycle that includes 1 (since it's lexicographically smaller).
Did anybody else solve B (Pixelated Circle) by precomputing prefix sums over draw_circle_perimeter in $$$O(R^2)$$$ and hardcoding them?
I didn't hardcode solutions into the source, but I did run a quadratic-time pre-computation at runtime. It was a bit more complex than the official solution, checking whether every possible (x, y) pair (up to some symmetries) would be left as a hole by draw_circle_filled_wrong.
Let me explain how I experienced this contest.
I finished all of problem 1 and the small test set of problem 2 very quickly. Then I proceeded to problem 3. Luckily, I came up with a solution for the small test set (just bitmask DP), but had no idea about the large test set. Then I decided to go for problem 2's large test set. I had 75 minutes, and I thought that was enough. The bad thing is that problem 2's large test set is so hard, and I could not finish it by the end.
Then it took more than 6 minutes to unveil the hidden test sets; if more than 1000 people had solved the large test set of problem 2, I would have been doomed, so it was the longest 6 minutes I have ever experienced.
Luckily, only 200+ people solved problem B's large test set, and my final ranking is 484. I am so excited; it is the first time I have advanced to Round 3 and won a Code Jam T-shirt! The only regret I have is that I should have tried Q4's small test set instead of struggling with Q2's large, since it's much easier.
So, my solution for the 3rd problem:
For each child, sort the allowed edges from closest to farthest (ties in any order) and run Kuhn's algorithm (without the heuristic of pre-allocating an arbitrary matching). The idea is to claim that it's valid to call these edges in some order.
Can somebody prove or disprove it?
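If I understood it right, here is a minimal sketch of that approach (names are mine; "allowed" is my assumption that a sweet must not be strictly farther from the child than the blue one, otherwise the blue sweet would be grabbed first):

```python
def jelly_matching(children, blue, sweets):
    """Plain Kuhn's algorithm where each child's adjacency list is sorted from
    closest to farthest allowed sweet; returns match_sweet[s] = child for sweet s,
    or None if no perfect matching exists."""
    n = len(children)

    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    adj = []
    for c in children:
        limit = d2(c, blue)
        cand = sorted((s for s in range(n) if d2(c, sweets[s]) <= limit),
                      key=lambda s: d2(c, sweets[s]))
        adj.append(cand)

    match_sweet = [-1] * n

    def try_kuhn(child, used):
        for s in adj[child]:                 # closest sweets are tried first
            if not used[s]:
                used[s] = True
                if match_sweet[s] == -1 or try_kuhn(match_sweet[s], used):
                    match_sweet[s] = child
                    return True
        return False

    for child in range(n):
        if not try_kuhn(child, [False] * n):
            return None                      # IMPOSSIBLE
    return match_sweet
```

For the actual constraints you would want an iterative Kuhn's (or a raised recursion limit), but the edge ordering is the important part.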
Did it pass? I did the same and it only got me the first subtask.
Yes, it passed.
Solution is here
I also had the same solution. The proof is based on the claim that in a min-cost perfect matching, there will be some student who is matched to their closest jelly. Then we can use induction.
Let best[x] denote the closest jelly to student x, and match[x] be the assigned jelly for student x. Also, for a jelly y let match[y] be the student who gets y.
Then, if you keep going from x to best[x] for each student x, and from y to match[y] for each jelly y, you'll eventually get a cycle, and augmenting along it will result in a lower cost. (For each student x in the cycle you're replacing match[x] by best[x].)
I think I understand from your explanation why "there exists a matching" is equivalent to "there exists a valid order of calling children".
But what's not clear to me is why the unmodified Kuhn algorithm will find it (and whether it always will). For example, if we had a black box which just finds any matching if it exists, that wouldn't be enough. E.g., for the case
we are allowed to print
or
but not
which is also a valid matching.
Ah! I confused Kuhn's with Kuhn–Munkres. In my solution, I had costs equal to the rank in priority order, instead of just sorting the adjacent edges and running Kuhn's.
I thought about that, but I thought O(n^3) would be too slow. I know Kuhn's is also technically cubic, but it's hard to have a large number of edges and hit the NM complexity at the same time.
A matching is invalid if and only if there is an augmenting cycle that gives every student in it a closer jelly.
Kuhn's with your edge ordering will never produce such augmenting cycles: if an augmenting path were to produce such a cycle, it could instead have gone the other way around the cycle, which would use a lower-index edge at the vertex where it first enters the cycle. In particular, Kuhn's with your edge ordering would find this augmenting path instead of the original one. (In other words, we could xor the augmenting path with the cycle to get a different augmenting path that Kuhn's would find first.)
Undoubtedly one of the CP contests of all time
worst
one of the what?
one of contests
Not a well-balanced set at all. A huge portion of people qualifying did so just or mostly from partials (including myself). That wouldn't be an issue if the subtasks weren't so terribly bland. I don't recall doing anything smart at all the entire round. It seems it all came down to who decided to skip the full solutions sooner.
How does $$$O(N^2)$$$ TLE/MLE on A's hidden test set???
Because $$$T$$$ is $$$100$$$ and $$$N$$$ can be as big as $$$10^4-1$$$. So $$$T\cdot N^2$$$ can be as big as $$$10^{10}$$$, which is too slow.
My $$$O(K)$$$ solution passed.
Same lmao
If they didn't want it to pass, they shouldn't have given us 20 seconds!
Or they should have given the usual $$$N\leq 10^5$$$ if they wanted $$$\mathcal{O}(N)$$$. Then no one would have even risked $$$\mathcal{O}(T\cdot N^2)$$$ for 20 seconds.
One of the worst rounds I ever solved:
A is simply trash: you can't even easily fill the board to brute-force. I got the idea, but it's really hard to implement (look at tourist's 138 lines, for example). I thought that last year's problem with clocks was bad, but forget it, that was a brilliant task.
I was 15 minutes late, but when I opened the standings, I saw only about 5 people who had gotten AC. It's awful. The first problem should be easier.
B is trash too: no idea in the first subtask (just copy the code from the statement and count the difference); the second is hard math with some hand-waving proofs and praying that round and sqrt work fine:
Note that the result is different when using the NORMAL check of whether a point is in the circle.
C: could be a nice problem. But how to prove that N! works for the small??? I saw N = 10 and thought about permutations, but a moment later I found an example with equal distances that makes this solution run in N!^2. So do you solve it rigorously, or just guess and get AC because there's nothing else?
D: the first subtask is nice. But the large is some DP optimization with hard and hand-waving proofs again.
I liked problem D, but one (the last!) out of four?
Also, the use of subtasks was really strange: you either code some easy brute force or solve the full problem.
For A I think I came up with a reasonably easy-to-implement solution. If N <= 5, just hardcode all solutions. For a larger side N+2, all path lengths can be achieved by first reaching the top-left corner of the inner N×N spiral, either by not using shortcuts at all or by going immediately right and down; so we check whether we can still finish while avoiding shortcuts on this layer, and reduce the problem to the smaller one.
Still didn't enjoy the task though.
It's possible to code it fairly cleanly, without requiring any hardcoding of small $$$n$$$.
Basically realize that the difference when we move from a cell to the "inner" adjacent cell depends only on
If you count these, for a given side length $$$s$$$ you get:
Now we can realize that all odd numbers from $$$1$$$ to $$$4n - 5$$$ exist in this set. Since all skips are odd length, we can only save an even distance over the normal path.
So now we can treat this as having a decreasing set $$$\{x, x - 1, x - 2, \ldots, 1\}$$$ (just multiplied by 2), and we want to select some of these values so that their sum is $$$y$$$. Clearly the easy greedy of taking the largest value we can at any stage works fine.
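That greedy step, as a tiny sketch (the name is hypothetical; it assumes $$$0 \le y \le x(x+1)/2$$$, in which case it always succeeds):

```python
def pick_skips(x, y):
    """Choose distinct values from {x, x-1, ..., 1} summing to y by always taking
    the largest value that still fits."""
    chosen = []
    for v in range(x, 0, -1):
        if v <= y:
            chosen.append(v)
            y -= v
    return chosen if y == 0 else None
```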
As for the actual cells we move through, we can realize the middle element of each initial layer works on any layer. Their value for the first layer is just $$$\frac{n + 1}{2}$$$ for the top side plus $$$n - 1$$$ for each of the remaining sides on the layer. Then calculating this as we move to inner layers is trivial using the previously calculated side length values.
Though I also did not like problems A and B, what you described was not the case for me:
In problem A, after some thought, I ended up with something like 20 lines of meaningful code, which involved no casework (I do not consider printing IMPOSSIBLE when needed to be "handling cases"). In problem B there was no need to pray for precision; it was not so difficult to rewrite inequalities of the type round(sqrt(x)) < y until all sides are integers.
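For instance, assuming the rounding rule mentioned elsewhere in the comments (round(d) <= Y exactly when d <= Y + 0.5), a check like round(sqrt(x)) < y can be made fully integral like this (a sketch, not the author's exact code; it assumes x >= 0 and y >= 1):

```python
def rounded_sqrt_less_than(x, y):
    # round(sqrt(x)) < y  <=>  round(sqrt(x)) <= y - 1  <=>  sqrt(x) <= y - 0.5
    #                     <=>  4*x <= (2*y - 1)**2   (multiply by 4 to clear the 0.5).
    # 4*x is even and (2*y - 1)**2 is odd, so equality never occurs and '<' is safe.
    return 4 * x < (2 * y - 1) ** 2
```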
Honestly, I do not know how to guess the solution to C without proving it. It probably depends on the solution itself. The only unproven thing about ABC I had during the contest was whether my solution for C was fast enough (Hungarian).
You may refer to https://youtu.be/AaweAsTZT8o for details of my solutions, once it is uploaded.
Such an $$$O(n^2)$$$ solution passed in I, O Bot.
Had cleared Round 2 with 100 points, in my dream :)
It could have been worse... (no pun intended)
Round 2 last year was actually much worse. There were 2 cakewalks, 284 full scores and solving 3/4 tasks was not enough to qualify.
giga plus
I thought this year's round 2 was much worse than last year's. At least last year, the problems were ok. This year, I did not enjoy any problem I attempted (for the same reasons that many other people have commented above). Halfway through this contest, I realized I wasn't even having fun anymore, even though I eventually qualified.
My contest experience is summed up by me repeatedly asking myself "What is happening???" for 150 minutes straight. This is despite having the right ideas for problems A and B early on and eventually implementing it. My family probably thought I was delirious. Well, they are right.
The problems were so bad that after 2 minutes of the contest I lost all motivation to solve them. (Though, as jqdai0815 said, it could have been worse.)
Overcoming competition
I am astonished as to why highly experienced contestants could not solve both test sets for B. One could brute-force the solution locally to precompute all possible answers from $$$1$$$ to $$$10^5$$$ and simply look up the answer in an array in the actual submission.
The limit on the source code is 100KB. You only get 1 byte per possible answer for precomputation.
Oh, I totally missed this point. Although you can still use some tricks to compress the data (Python's zlib, for example).
Compression doesn't help much when the numbers are mostly random. E.g., simply zipping a file containing all the answers only reduces the file size by 2x, which is not enough for this problem.
There is a nice precomputation solution by dacin21, which uses a clever trick (if I understand correctly, he only pasted prefix sums for n divisible by 20 and then computes the rest when needed).
Yes, I used that approach too. I felt that computing the whole table was quite easy, but you have to compute the rest (the answer for a small interval) for each test as well; I think that part is pretty tedious to write.
Each wrong circle, starting with $$$r = 2$$$, has either 4 or 8 more black cells than the previous one; therefore, it can be encoded using 1 bit per layer. Here is the working code; I grouped the bits into sextuplets encoded with the alphabet [0-9A-Za-z.,].
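In case it helps, a small sketch of that packing (the helper names are hypothetical): one bit per layer (0 for a +4 layer, 1 for a +8 layer), six bits per character of the 64-symbol alphabet.

```python
ALPHABET = ("0123456789"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz.,")   # 64 symbols: [0-9A-Za-z.,]

def pack_bits(bits):
    """Pack a list of 0/1 layer flags into a string, 6 bits per character."""
    bits = bits + [0] * (-len(bits) % 6)      # pad to a multiple of 6
    out = []
    for i in range(0, len(bits), 6):
        value = 0
        for b in bits[i:i + 6]:
            value = value * 2 + b
        out.append(ALPHABET[value])
    return "".join(out)

def unpack_bits(s):
    """Inverse of pack_bits (the padding zeros at the end are kept)."""
    bits = []
    for ch in s:
        value = ALPHABET.index(ch)
        bits.extend((value >> k) & 1 for k in range(5, -1, -1))
    return bits
```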
Shit round!! It wasn't even a CP round. People qualified for the "global top 1000" using partial points, which involved translating the given pseudocode or some kind of stupid brute force.
Horrible problemset
Interestingly enough, I disagree with "no easy points on the table" in both cases. Last year had a lot of full-scorers, and this year has at least B-small.
During the contest: made a lot of mistakes, got bogged down in implementation, barely got through problems A and B, saw my rank was 2000, gave up... After system tests: jumped up to rank 406.
Thanks, problem B! Finally getting that GCJ T-shirt
Svlad_Cjelli I had the exact same experience! I was so despondent when my rank was 2000+ while struggling with problem B, thinking way over 1000 people already had full submissions. I kept trying to go for B in full, and when I submitted problem B and my rank was still 1800, I gave up hope. Totally shocked after the score reveal.
Nice comeback bro
Congrats on qualifying! Will be there next year too :)
I was rather surprised that I succeeded on D large: my solution is quadratic time (basically the official solution for test set 1), just optimised to use linear space.
Man. I get the whole visible/hidden thing adding excitement but this was a contest that just rewarded giving up on large solves. I was bitten in the backside for sticking too long with B large because it appeared highly likely I'd need it; turns out I could've just brute forced C small and been fine. I don't feel like brute force solving should be deciding things at this stage in the competition (though I wish I had done it).
Brute force solving shouldn't, but understanding that you are better off reading all problems and submitting all of the easy subtasks probably should
Disagree. That entirely depends on the contest, and is only knowable with hindsight of how many large solves each question has. Last year (when I did qualify), had I followed that advice, I would have wasted valuable time submitting small subtasks and failed. This year, it transpired after the contest that such an approach would have sufficed, but when the hidden scoreboard said > 2000 solves on B, there was no way of knowing that.
This contest rewarded giving up easily on large solutions. Last year's rewarded sticking at the complete problems. It feels like going to the effort of learning more complex techniques and honing implementation skills essentially counted for nothing — the cheap points won the day. To me that feels hollow, and not how a CP contest should be settled.
It does depend on the contest and I completely agree with your second paragraph, but if you wanted to maximize your result you should've noticed that 10 + 11 easy points is greater than 16 probably harder points on a weird problem and you should've prioritized the easy ones.