Thank you for participating in this round! Problem I was authored by orzdevinwang and the rest by me.
Sorry for the weak pretest in problem B.
Hints and code are WIP.
Hint 1
Solution
Hint 1
Hint 2
Hint 3
Solution
Hint 1
Hint 2
Solution
Hint 1
Hint 2
Hint 3
Solution
Hint 1
Hint 2
Hint 3
Solution
2061F1 - Kevin and Binary String (Easy Version)
2061F2 - Kevin and Binary String (Hard Version)
Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Solution
Hint 1
Hint 2
Hint 3
Solution
2061H1 - Kevin and Stones (Easy Version)
2061H2 - Kevin and Stones (Hard Version)
Solution
Hint 1
Hint 2
Hint 3
Hint 4
Solution
Congrats jiangly on becoming jiangly :))
Fun fact for problem D: when I was testing this round, this problem was Div. 2 B with $$$n \le 500$$$. (=・ω・=)
Fun fact: it is still a Div. 2 B level problem.
Someone please tell me why this code 302137216 for D got accepted and this one 302137051 didn't, even though the time complexity is the same.
The base case should be a=1, b=1; otherwise you will try to split 1 into halves and one of them will be 1 again.
Can you please tell me why it's giving TLE for E: https://codeforces.net/contest/2061/submission/302147220
If v[0] = 0, isn't it obvious that he is telling the truth, as there is nobody on the left?
He could still be a liar even if what he says is correct.
E is a greed-bait.
but.... greedy passes.... my greedy solution
Tourist -> <-
Congrats jiangly
Thanks to the authors for the amazing problems! I really enjoyed the round!
For G, a good way to convince yourself that $$$\frac{n + 1}{3}$$$ is attainable is to fix three vertices a, b, c. Say the edge a-b is 1; then by induction, among the other vertices there is a matching of size $$$\frac{n + 1}{3} - 1$$$: if it is of color 1 we are done, and otherwise the edges a-c and b-c have to be 1 as well (or we would get a color-0 matching). But now we can make another choice for the vertex c, and since a-b is 1 we will still get a-c and b-c to be 1. So every edge from a or b is 1, as is every edge from c for any c... and finally the whole graph is color 1, a contradiction.
I've found this helpful, since knowing the answer gives you the idea of getting one edge from every 3 vertices, hence the mixed-color triplets from the solution, from which you can find a linear construction.
nvm
nvm
C is a nice problem. Thank you
Same same but different
On problem D, I found a solution that actually does build A into B rather than the reverse, by recursively trying smaller elements to build into B. Definitely way more complicated than the intended solution though.
Let the rank name be both alternatives: "tourist" and "jiangly". Now tourist will have to reach "jiangly" rating xD
Hmm... So it was not intentional? I found this a very good kind of problem to have weak pretests. Some 2017 Codeforces vibes :) There is one simple correct solution and plenty of ways to get an "almost-correct" solution.
Just wondering, is there a way to tell whether main test 11 existed before the round or not? If yes, was it deliberately excluded from pretests?
I did not realize that pretests were this weak, so I just assumed the hacks were all related to hashmaps. :P
Take a look at the first hack; it is indeed that test, so it was added by hacks.
Oh ok. I assume all of these wrong solutions would have passed if it were not for hacks? So then "weak pretest" should actually be "weak tests."
I was thinking of the same idea for E but couldn't figure out that this is a convex function, hence I couldn't reach the conclusion that taking the $$$k$$$ largest reductions works. Can someone share how they got the intuition for this being a convex function, or is there any other solution that doesn't use this?
You could do the same thing with brute force up to storing all "improvements"; say arr[i][j] = num(i, j+1) - num(i, j). Then you could maintain a multiset or some other sorted list data structure, where initially you add all arr[i][0] to the multiset. You can remove the maximum element, say arr[i][j], and insert arr[i][j+1] back into the multiset (since now that move became available). Repeat k times and you can get the final sum.
Edit: nvm, this is completely wrong.
But without knowing that this is convex, this greedy can't be proved correct. In a scenario where it isn't convex, there could be a case where, for some $$$i_1$$$, the first improvement is not as good as the first improvement for some other $$$i_2$$$, but the second improvement for $$$i_1$$$ is huge. If $$$k=2$$$, the greedy algorithm will pick the first improvement for $$$i_2$$$ and then the first improvement for $$$i_1$$$, whereas the correct answer should be the first and second improvements of $$$i_1$$$.
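To make that concrete (the numbers are my own hypothetical illustration, not from the problem): suppose the improvements for $$$i_1$$$ are $$$(2, 100)$$$ and for $$$i_2$$$ are $$$(3, 1)$$$, with $$$k = 2$$$. The greedy takes $$$3$$$ (first improvement of $$$i_2$$$), then compares $$$2$$$ against $$$1$$$ and takes $$$2$$$ (first improvement of $$$i_1$$$), for a total of $$$5$$$, whereas taking both improvements of $$$i_1$$$ gives $$$2 + 100 = 102$$$. Convexity rules out exactly this shape of input.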
wait yeah you're totally right. I completely fakesolved it
Consider the list of bits that are turned off by using the AND operation multiple times. Say the first operation turns off the bit in position 8. After that, no set of operations should modify anything in bit positions 8 or above, because otherwise we would have a better first move. Now, note that $$$2^x \ge \sum 2^y$$$ over any subset of $$$y \in \{0, 1, 2, \ldots, x-1\}$$$. Thus, convexity.
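As a tiny worked instance of that inequality (my own, just to spell it out): with $$$x = 3$$$, $$$2^3 = 8 > 2^2 + 2^1 + 2^0 = 7$$$, so the gain from the step that clears the highest bit exceeds the combined gains of all later steps, which only touch lower bits; applying the same argument at every step is why the per-index gains are non-increasing.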
it's over
it's over
E: wtf bruteforce?! Using the facts that a sum of convex functions is convex and $$$2^b > \sum_{k=0}^{b-1} 2^k$$$ doesn't justify it being E, and I think it would've had a lot more solutions with a lower constraint on M and a lower placement in the contest. It's really hard to see why this simple nested loop should be fast when many other ones with the same number of repetitions aren't, especially without knowledge of cache efficiency and assembly instructions (specifically that min/max are their own instructions, not compare+branch). This just isn't a very good algorithmic problem.
Note that you don't need to prove convexity. Think about it as greedy decisions: operations on different $$$i$$$ are independent and it's always better to remove the largest possible bits from any $$$i$$$, so the first $$$k$$$ operations on any $$$i$$$ will "do more" than anything we could possibly do with the $$$(k+1)$$$-th and on. This gives enough intuition and then bruteforce goes brrr.
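For what it's worth, here is a minimal C++ sketch of the kind of bruteforce being described, under my reading of the statement pieced together from this thread (each of at most k operations replaces some a[i] with a[i] & b[j] for one of the m given values, and we minimize the final sum); the input format and all names are my assumptions, not taken from any submission here.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Assumed input (single test case for brevity): n, m, k, then a[0..n-1], then b[0..m-1].
    int n, m;
    long long k;
    cin >> n >> m >> k;
    vector<long long> a(n), b(m);
    for (auto &x : a) cin >> x;
    for (auto &x : b) cin >> x;

    // Precompute the AND of every subset of b in O(2^m).
    int full = 1 << m;
    vector<long long> andOf(full, -1);              // -1 = all bits set, identity for AND
    for (int mask = 1; mask < full; mask++) {
        int low = __builtin_ctz(mask);              // index of the lowest set bit
        andOf[mask] = andOf[mask ^ (1 << low)] & b[low];
    }

    // best[i][j] = smallest value a[i] can reach with exactly j operations,
    // i.e. AND-ed with the best subset of size j. O(n * 2^m) in total.
    vector<vector<long long>> best(n, vector<long long>(m + 1, LLONG_MAX));
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        sum += a[i];
        for (int mask = 0; mask < full; mask++) {
            int j = __builtin_popcount(mask);
            best[i][j] = min(best[i][j], a[i] & andOf[mask]);
        }
    }

    // Per-index gains best[i][j-1] - best[i][j] (non-increasing in j by the
    // convexity argument above); greedily take the k largest overall.
    vector<long long> gains;
    for (int i = 0; i < n; i++)
        for (int j = 1; j <= m; j++)
            gains.push_back(best[i][j - 1] - best[i][j]);
    sort(gains.rbegin(), gains.rend());
    for (int t = 0; t < (int)gains.size() && t < k; t++) sum -= gains[t];

    cout << sum << "\n";
}
```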
I don't understand your comment? The complexity is of the order 10^8, and expecting it to pass is not at all unreasonable??
I agree with you on this part. I was also very hesitant about using __builtin_popcount inside the loop and thought of precomputing it, but it is fast enough.
I think these days it's easier to assume the solution is fast enough instead of using the proper idea in many problems. I was thinking N <= 1e6 and M <= 5 would have been a better choice too.
Can you please check this solution for E? It's giving TLE on test 19: https://codeforces.net/contest/2061/submission/302147220
Your code is O(N * M * 2^M); make it O(N * 2^M) with precomputation.
The speed of __builtin_popcount is pretty hardware dependent (and, for me, compiler dependent too), so I would not dare to put it through 1e8 loop iterations. This is why I went for the second option below.
That said, if it does not pass, it is easy to change to a pre-computation. So the options are like
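Just to illustrate the kind of options meant (this is my own hypothetical snippet, not the code the comment refers to): either call the builtin inside the loop, or precompute the popcounts of all masks once with a small DP.

```cpp
#include <vector>
#include <cstdio>

int main() {
    const int M = 10;                        // assumed small M, as in this problem
    std::vector<int> cnt(1 << M, 0);

    // Option 1: call the builtin for each mask (speed depends on hardware/compiler).
    for (int mask = 0; mask < (1 << M); mask++)
        cnt[mask] = __builtin_popcount(mask);

    // Option 2: precompute with a DP on masks, avoiding the builtin entirely.
    for (int mask = 1; mask < (1 << M); mask++)
        cnt[mask] = cnt[mask >> 1] + (mask & 1);

    std::printf("%d\n", cnt[(1 << M) - 1]);  // should print M
}
```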
I think it is a fair and balanced (even interesting) decision. It would be very different if you were only allowed 1 submission with no chance to correct it if it TLEs.
How much lower? I feel that even if the brute force were more evident, it would still be a harder problem than C and D. I also don't understand the part where you say we don't need to prove convexity. We can't prove that greedily taking the best decisions across all $$$i$$$ is correct without convexity, right? (comment regarding the same)
The part where you mention it is better to remove the largest bits first, that is indirectly the same argument that convexity makes, isn't it? The convexity proof in the editorial basically says the same: when we go from $$$g_{i-1}$$$ to $$$g_{i+1}$$$, the largest bit is removed when we first go from $$$g_{i-1}$$$ to $$$g_{i}$$$, and the smaller bits are removed subsequently.
I don't think it is bad to test understanding of big constant-factor differences, especially when that constant factor is as big as a factor of 10; I am very sure that an iteration of this kind with 1e9 would have passed (in C++) too. (Not all implementations are equal; for example, bit counts have to be precomputed.)
In Problem C, can someone please explain why "If they are a liar, since two liars cannot stand next to each other, the (i−1)-th person must be honest. In this case, there are a(i−1) + 1 liars to their left."
Shouldn't it be a(i-1) liars to the left of the i-th person (the i-th person being a liar)? Because the (i-1)-th person is honest, there are a(i-1) liars up to position i-1, so there are a(i-1) liars to the left of the i-th person, right?
Yeah, shouldn't that be a(i-1)+1 liars to the left of the (i+1)-th person? (That's what makes the dp transition for the liar.)
D was a good exercise in the use of heaps.
Is the Codeforces problemset really this hard? I have to concentrate hard on B and C problems for a long time in order to arrive at the solution.
My D TLE'd due to hash collisions smh....
If you could not understand the editorial solution for C:
Let dp[i][0] and dp[i][1] denote the number of valid sequences ending at i where the i-th person is honest or a liar, respectively.
The (i-1)-th person and the i-th person can be:
honest-honest: in this case the condition is that a[i-1] must equal a[i], since both are telling the truth.
honest-liar: we do not need any condition in this case.
liar-honest: in this case the (i-2)-th person must be honest, so the condition is a[i-2] + 1 = a[i].
Overall, dp[n-1][0] + dp[n-1][1] gives us the answer.
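A minimal C++ sketch of this dp (my own code, not the commenter's; I'm assuming a[i] is the claimed number of liars to the left of person i, that liars may claim anything but cannot be adjacent, and that the count is taken modulo 998244353, which is a guess):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long MOD = 998244353;        // assumed modulus
    int n;
    cin >> n;                               // single test case for brevity
    vector<long long> a(n);
    for (auto &x : a) cin >> x;

    // dp[i][0]: valid prefixes where person i is honest,
    // dp[i][1]: valid prefixes where person i is a liar.
    vector<array<long long, 2>> dp(n, {0, 0});
    dp[0][0] = (a[0] == 0);                 // an honest first person sees no liars
    dp[0][1] = 1;                           // a liar may claim anything
    for (int i = 1; i < n; i++) {
        // honest-honest: both truthful, so their counts must agree
        if (a[i] == a[i - 1])
            dp[i][0] = (dp[i][0] + dp[i - 1][0]) % MOD;
        // liar-honest: person i-2 must be honest and contributes one extra liar
        // (for i == 1 the liar at position 0 is the only liar to the left)
        if (i >= 2 ? a[i] == a[i - 2] + 1 : a[i] == 1)
            dp[i][0] = (dp[i][0] + dp[i - 1][1]) % MOD;
        // honest-liar: no condition on what the liar claims
        dp[i][1] = dp[i - 1][0];
    }
    cout << (dp[n - 1][0] + dp[n - 1][1]) % MOD << "\n";
}
```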
I had the same structure for my dp but couldn't find the transition... Thank you!!
Hi jqdai0815, why does the 302120893 solution for D get TLE while 302121766 passes comfortably? The time complexity should be about the same.