jqdai0815's blog

By jqdai0815, history, 4 hours ago, In English

Thank you for participating in this round! Problem I is authored by orzdevinwang, and the rest are by me.

Sorry for the weak pretests in problem B.

Hints and code are a work in progress.

2061A - Kevin and Arithmetic

Hint 1
Solution

2061B - Kevin and Geometry

Hint 1
Hint 2
Hint 3
Solution

2061C - Kevin and Puzzle

Hint 1
Hint 2
Solution

2061D - Kevin and Numbers

Hint 1
Hint 2
Hint 3
Solution

2061E - Kevin and And

Hint 1
Hint 2
Hint 3
Solution

2061F1 - Kevin and Binary String (Easy Version)

2061F2 - Kevin and Binary String (Hard Version)

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Solution

2061G - Kevin and Teams

Hint 1
Hint 2
Hint 3
Solution

2061H1 - Kevin and Stones (Easy Version)

2061H2 - Kevin and Stones (Hard Version)

Solution

2061I - Kevin and Nivek

Hint 1
Hint 2
Hint 3
Hint 4
Solution

»
4 hours ago, # |

Auto comment: topic has been updated by jqdai0815 (previous revision, new revision, compare).

»
4 hours ago, # |

Congrats jiangly on becoming jiangly :))

»
4 hours ago, # |

Fun fact for problem D: when I was testing this round, this problem was a Div. 2 B with $$$n \le 500$$$. (=・ω・=)

  • »
    »
    3 hours ago, # ^ |

Fun fact: it is still a Div. 2 B level problem.

»
3 hours ago, # |

Can someone please tell me why this code (302137216) for D got accepted while this one (302137051) didn't, even though the time complexity is the same?

»
3 hours ago, # |

If v[0] = 0, isn't it obvious that he is telling the truth, since there is nobody on his left?

  • »
    »
    3 hours ago, # ^ |

He could still be a liar even if what he says is correct.

»
3 hours ago, # |

E is greedy-bait.

»
3 hours ago, # |


Tourist -> <-

Congrats jiangly

»
3 hours ago, # |

Thanks to the authors for the amazing problems! I really enjoyed the round!

For G, a good way to convince yourself that $$$\frac{n + 1}{3}$$$ is attainable is to fix three vertices a, b, c. Say the edge a-b is 1; then, by induction, among the other vertices there is a matching of size $$$\frac{n + 1}{3} - 1$$$: if it is of color 1 we are done, and otherwise the edges a-c and b-c have to be 1 as well (or we get a 0-colored matching). But now we can make another choice for the vertex c, and since a-b is 1 we will still get a-c and b-c to be 1. So every edge from a or b is 1, as is every edge from any c... and finally the whole graph is 1, which is absurd.

I've found this helpful, since knowing the answer gives you the idea of getting one edge out of every 3 vertices, hence the mixed-color triplets from the solution, from which you can find a linear construction.

»
3 hours ago, # |

nvm

»
3 hours ago, # |

C is a nice problem. Thank you!

»
3 hours ago, # |


Same same but different

»
3 hours ago, # |

On problem D, I found a solution that actually builds A into B, rather than the reverse, by recursively trying smaller elements to build into B. Definitely way more complicated than the intended solution, though.

»
3 hours ago, # |

Let the rank name alternate between "tourist" and "jiangly". Now tourist will have to reach "jiangly" rating xD

»
2 hours ago, # |

Sorry for the weak pretest in problem B

Hmm... so it was not intentional? I found this to be the kind of problem that is very well suited to weak pretests. Some 2017 Codeforces vibes :) There is one simple correct solution and plenty of ways to get an "almost-correct" solution.

  • »
    »
    90 minutes ago, # ^ |

Just wondering, is there a way to tell whether main test 11 existed before the round or not? If so, was it deliberately excluded from pretests?

    I did not realize that pretests were this weak, so I just assumed the hacks were all related to hashmaps. :P

    • »
      »
      »
      84 minutes ago, # ^ |

Take a look at the first hack; it is indeed that test, so it was added by hacks.

      • »
        »
        »
        »
        81 minute(s) ago, # ^ |

Oh, OK. I assume all of these wrong solutions would have passed if it were not for hacks? So then "weak pretests" should actually be "weak tests."

»
2 hours ago, # |

I was thinking of the same idea for E but couldn't figure out that this is a convex function, and hence couldn't reach the conclusion that taking the $$$k$$$ largest reductions works. Can someone share how they got the intuition for this being a convex function, or is there another solution that doesn't use this fact?

  • »
    »
    100 minutes ago, # ^ |

You could do the same thing with brute force up to storing all "improvements"; say arr[i][j] = num(i, j+1) - num(i, j). Then you could maintain a multiset or some other sorted-list data structure, where initially you add all arr[i][0] to the multiset. You can remove the maximum element, say arr[i][j], and insert arr[i][j+1] back into the multiset (since that move has now become available). Repeat k times and you get the final sum.

    Edit: nvm this is completely wrong

    • »
      »
      »
      90 minutes ago, # ^ |

But without knowing that the function is convex, this greedy can't be proved correct. In a scenario where it isn't convex, there could be a case where, for some $$$i_1$$$, the first improvement is not as good as the first improvement for some other $$$i_2$$$, but the next improvement for $$$i_1$$$ is very large. If $$$k=2$$$, the greedy algorithm will pick the first improvement for $$$i_2$$$ and then the first improvement for $$$i_1$$$, whereas the correct answer should be the first and second improvements of $$$i_1$$$.

      • »
        »
        »
        »
        87 minutes ago, # ^ |

Wait, yeah, you're totally right. I completely fakesolved it.

  • »
    »
    44 minutes ago, # ^ |

Consider the list of bits that are turned off by applying the AND operation multiple times. Say the first operation turns off the bit in position 8. After that, no set of operations should modify anything in bit positions 8 or above, because otherwise we would have a better first move. Now note that $$$2^x \ge \sum 2^y$$$ over any subset of $$$y \in \{0, 1, 2, \dots, x-1\}$$$. Thus, convexity.

»
2 hours ago, # |

it's over

»
2 hours ago, # |

E: wtf, bruteforce?! The facts that a sum of convex functions is convex and that $$$2^b > \sum_{k=0}^{b-1} 2^k$$$ don't justify it being E, and I think it would have had a lot more solutions with a lower constraint on M and a lower placement in the contest. It's really hard to see why this simple nested loop should be fast when many other loops with the same number of iterations aren't, especially without knowledge of cache efficiency and assembly instructions (specifically, that min/max are their own instructions, not compare+branch). This just isn't a very good algorithmic problem.

Note that you don't need to prove convexity. Think about it as greedy decisions: operations on different $$$i$$$ are independent, and it's always better to remove the largest possible bits from any $$$i$$$, so the first $$$k$$$ operations on any $$$i$$$ will "do more" than anything we could possibly do with the $$$(k+1)$$$-th and onwards. This gives enough intuition, and then bruteforce goes brrr.
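For concreteness, the "k largest improvements" greedy discussed here can be sketched roughly as follows. This is a hypothetical sketch, not anyone's actual submission: `solveE` and all names are mine, and the input convention (n values a[i], m masks b[j], at most k operations, each replacing some a[i] with a[i] AND b[j]) is assumed from the discussion.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical sketch: enumerate every subset of masks, record the
// per-element improvement of each extra operation, then greedily take
// the k largest improvements overall.
long long solveE(const vector<long long>& a, const vector<long long>& b, long long k) {
    int m = b.size(), full = 1 << m;
    vector<long long> andv(full, -1);              // -1 = all bits set
    for (int S = 1; S < full; ++S) {
        int j = __builtin_ctz(S);                  // peel one mask off S
        andv[S] = andv[S ^ (1 << j)] & b[j];
    }
    long long total = 0;
    vector<long long> gains;                       // value dropped by each extra op
    for (long long x : a) {
        total += x;
        vector<long long> best(m + 1, LLONG_MAX);  // best[t]: min value using t masks
        for (int S = 0; S < full; ++S) {
            int t = __builtin_popcount(S);
            best[t] = min(best[t], x & andv[S]);
        }
        for (int t = 0; t < m; ++t)                // convexity: diffs are nonincreasing
            gains.push_back(best[t] - best[t + 1]);
    }
    sort(gains.rbegin(), gains.rend());            // take the k largest improvements
    for (long long t = 0; t < (long long)gains.size() && t < k; ++t)
        total -= max(0LL, gains[t]);
    return total;
}
```

Taking the $$$k$$$ largest improvements globally is justified exactly by the convexity debated in this thread: each element's improvement sequence is nonincreasing, so a global greedy over all improvements is safe.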

  • »
    »
    2 hours ago, # ^ |

I don't understand your comment. The complexity is of the order 10^8, and expecting it to pass is not at all unreasonable.

  • »
    »
    74 minutes ago, # ^ |

I agree with you on this part. I was also very hesitant about using __builtin_popcount inside the loop and thought of precomputing it, but it is fast enough.

I think these days it's easier to assume the solution is fast enough instead of using the proper idea in many problems. I was thinking N <= 1e6 and M <= 5 would have been a better choice too.

    • »
      »
      »
      68 minutes ago, # ^ |

Can you please check this solution for E? It's giving TLE on test 19: https://codeforces.net/contest/2061/submission/302147220

      • »
        »
        »
        »
        41 minute(s) ago, # ^ |

Your code is O(N * M * 2^M); precompute so that it becomes O(N * 2^M).
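One way this precomputation is typically done (a hypothetical illustration; `precompute` and the variable names are mine, not from the submission): the popcount of every mask and the AND over every subset of b are each built once in O(2^M), so the per-element loop then does O(1) work per subset instead of an O(M) inner loop.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build both lookup tables in a single O(2^M) pass.
pair<vector<int>, vector<long long>> precompute(const vector<long long>& b) {
    int M = b.size(), full = 1 << M;
    vector<int> pc(full, 0);              // pc[S] = number of set bits in S
    vector<long long> andv(full, -1);     // andv[S] = AND of b[j] for j in S (-1 = all ones)
    for (int S = 1; S < full; ++S) {
        pc[S] = pc[S >> 1] + (S & 1);     // reuse the count for S without its low bit
        int j = __builtin_ctz(S);         // reuse the AND for S without bit j
        andv[S] = andv[S ^ (1 << j)] & b[j];
    }
    return {pc, andv};
}
```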

    • »
      »
      »
      63 minutes ago, # ^ |

The speed of __builtin_popcount is pretty hardware-dependent (and, for me, compiler-dependent too), so I would not dare to put it through 1e8 loops. This is why I went for the second option below.

That said, if it does not pass, it is easy to change to a precomputation. So the options are:

  • do the simple thing: -50 and a few minutes if wrong, nothing else if right
  • do the correct thing: lose maybe a minute, but definitely OK

I think it is a fair and balanced (even interesting) decision. It would be very different if you were only allowed one submission with no chance to correct a TLE.

  • »
    »
    73 minutes ago, # ^ |

How much lower? I feel that even if the brute force were more evident, it would still be a harder problem than C and D. I also don't understand the part where you say we don't need to prove convexity. We can't prove that greedily taking the best decisions across all $$$i$$$ is correct without convexity, right? (See this comment about the same.)

The part where you mention that it is better to remove the largest bits first is indirectly the same argument that convexity makes, isn't it? The convexity proof in the editorial basically says the same thing: when we go from $$$g_{i-1}$$$ to $$$g_{i+1}$$$, the largest bit is removed when we first go from $$$g_{i-1}$$$ to $$$g_{i}$$$, and the smaller bits are removed subsequently.

  • »
    »
    70 minutes ago, # ^ |

I don't think it is bad to test understanding of big constant-factor differences, especially when that constant factor is as large as a factor of 10. I am very sure that an iteration of this kind with 1e9 would have passed (in C++) too. (Not all implementations are equal; for instance, bit counts have to be precomputed.)

»
2 hours ago, # |

In problem C, can someone please explain why "If they are a liar, since two liars cannot stand next to each other, the (i−1)-th person must be honest. In this case, there are a(i−1) + 1 liars to their left."?

Shouldn't it be a(i−1) liars to the left of the i-th person (the i-th person being a liar)? Because the (i−1)-th person is honest, there are a(i−1) liars up to position i−1, and therefore a(i−1) liars to the left of the i-th person, right?

  • »
    »
    98 minutes ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Yeah, shouldn't that be a(i−1)+1 liars to the left of the (i+1)-th person? (That's what makes the dp transition for a liar.)

»
99 minutes ago, # |

D was a good application of heaps.
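A sketch of the reverse/heap idea this comment alludes to (hypothetical code: `canTransform` and the interface are mine; it assumes positive values and that the operation merges two values x, y with |x − y| <= 1 into x + y, so in reverse the largest element of B either matches the largest remaining element of A or must be split into floor(z/2) and ceil(z/2)):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Process B from largest to smallest with a max-heap, undoing merges.
bool canTransform(const vector<long long>& a, const vector<long long>& b) {
    if (accumulate(a.begin(), a.end(), 0LL) !=
        accumulate(b.begin(), b.end(), 0LL)) return false;  // merges preserve the sum
    multiset<long long> A(a.begin(), a.end());
    priority_queue<long long> B(b.begin(), b.end());        // max-heap over B
    while (!B.empty()) {
        long long z = B.top(); B.pop();
        auto it = A.find(z);
        if (it != A.end()) { A.erase(it); continue; }  // z matches an element of A
        if (z <= *A.rbegin()) return false;  // can never recover A's largest element
        if (z == 1) return false;            // 1 cannot be split further
        B.push(z / 2);                       // undo one merge: z -> floor, ceil
        B.push(z - z / 2);
    }
    return A.empty();
}
```

For example, {1, 2, 2} can reach {5} (merge 2+1, then 3+2), which this sketch confirms by splitting 5 into 3 and 2, then 3 into 2 and 1.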

»
90 minutes ago, # |

Is the Codeforces problemset really this hard? I had to concentrate hard on problems B and C for a long time in order to arrive at the solutions.

»
89 minutes ago, # |

My D TLE'd due to hash collisions smh....

»
52 minutes ago, # |

If you could not understand the editorial solution for C:

Let dp[i][0] and dp[i][1] denote the number of valid sequences ending at i, where the i-th person is honest or a liar, respectively.

The (i−1)-th person and the i-th person can be:

  1. honest-honest: the condition is that a[i−1] must equal a[i], since both statements are true.

  2. honest-liar: we do not need any condition in this case.

  3. liar-honest: the condition is that the (i−2)-th person must be honest, so a[i−2] + 1 = a[i].

Overall, dp[n−1][0] + dp[n−1][1] gives us the answer.
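The transitions above can be sketched as follows (a hypothetical sketch: `countWays` is my name; it assumes the usual reading of the problem, namely that a[i] is the number of liars the i-th person claims are to their left, honest people tell the truth, liars may say anything, no two liars are adjacent, and counts are taken mod 998244353):

```cpp
#include <bits/stdc++.h>
using namespace std;

long long countWays(const vector<long long>& a) {
    const long long MOD = 998244353;
    int n = a.size();
    long long dp0 = (a[0] == 0);   // first person honest: claim must be true (0 liars)
    long long dp1 = 1;             // first person a liar: claim unconstrained
    for (int i = 1; i < n; ++i) {
        long long nd0 = 0;
        if (a[i] == a[i - 1]) nd0 = dp0;               // case 1: honest after honest
        long long need = (i >= 2 ? a[i - 2] : 0) + 1;  // the liar at i-1 is counted
        if (a[i] == need) nd0 = (nd0 + dp1) % MOD;     // case 3: honest after liar
        long long nd1 = dp0;                           // case 2: liar after honest
        dp0 = nd0; dp1 = nd1;
    }
    return (dp0 + dp1) % MOD;
}
```

For n = 1 with a = {0} this gives 2: the single person may be honest (the claim of 0 liars is true) or a liar, since, as noted elsewhere in this thread, a liar's statement may happen to be correct.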

  • »
    »
    10 minutes ago, # ^ |

    I had the same structure for my dp but couldn't find the transition... Thank you!!

»
12 minutes ago, # |

Hi jqdai0815, why does solution 302120893 for D get TLE while 302121766 passes comfortably? The time complexity should be about the same.