jqdai0815's blog

By jqdai0815, history, 7 hours ago, In English

Thank you for participating in this round! Problem I was authored by orzdevinwang, and the rest by me.

Sorry for the weak pretest in problem B.

Hints and code are a work in progress.

2061A - Kevin and Arithmetic

Hint 1
Solution

2061B - Kevin and Geometry

Hint 1
Hint 2
Hint 3
Solution

2061C - Kevin and Puzzle

Hint 1
Hint 2
Solution

2061D - Kevin and Numbers

Hint 1
Hint 2
Hint 3
Solution

2061E - Kevin and And

Hint 1
Hint 2
Hint 3
Solution

2061F1 - Kevin and Binary String (Easy Version)

2061F2 - Kevin and Binary String (Hard Version)

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Solution

2061G - Kevin and Teams

Hint 1
Hint 2
Hint 3
Solution

2061H1 - Kevin and Stones (Easy Version)

2061H2 - Kevin and Stones (Hard Version)

Solution

2061I - Kevin and Nivek

Hint 1
Hint 2
Hint 3
Hint 4
Solution

»
7 hours ago, # |
  Vote: I like it +9 Vote: I do not like it

Auto comment: topic has been updated by jqdai0815 (previous revision, new revision, compare).

»
7 hours ago, # |
  Vote: I like it +25 Vote: I do not like it

Congrats jiangly on becoming jiangly :))

»
6 hours ago, # |
  Vote: I like it +8 Vote: I do not like it

Auto comment: topic has been updated by jqdai0815 (previous revision, new revision, compare).

»
6 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

Fun fact for problem D: when I was testing this round, this problem was a Div. 2 B with $$$n \le 500$$$. (=・ω・=)

  • »
    »
    6 hours ago, # ^ |
      Vote: I like it +21 Vote: I do not like it

    Fun fact: it is still a Div. 2 B level problem.

»
6 hours ago, # |
  Vote: I like it +8 Vote: I do not like it

Can someone please tell me why this code 302137216 for D got accepted while this one 302137051 didn't, even though the time complexity is the same?

»
6 hours ago, # |
  Vote: I like it +8 Vote: I do not like it

If v[0] = 0, isn't it obvious that he is telling the truth, since there is nobody to his left?

  • »
    »
    6 hours ago, # ^ |
      Vote: I like it +7 Vote: I do not like it

    He could still be a liar even if what he says is correct.

»
6 hours ago, # |
  Vote: I like it +3 Vote: I do not like it

E is a greed-bait.

»
6 hours ago, # |
  Vote: I like it +20 Vote: I do not like it


Tourist -> <-

Congrats jiangly

»
6 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

Thanks to the authors for the amazing problems! I really enjoyed the round!

For G, a good way to convince yourself that $$$\frac{n + 1}{3}$$$ is attainable is to fix three vertices a, b, c. Say the edge a-b is 1; then by induction there is a matching of size $$$\frac{n + 1}{3} - 1$$$ among the other vertices. If it is of color 1 we are done, and otherwise the edges a-c and b-c both have to be 1 as well (or we would get a 0-colored matching of the required size). But now we can make another choice for the vertex c, and since a-b is 1 we will still get that a-c and b-c are 1. So every edge from a or b is 1, as well as every edge from c for any c... and finally the whole graph is 1, which is absurd.

I found this helpful, since knowing the answer gives you the idea of getting one edge out of every 3 vertices, hence the mixed-color triplets from the solution, from which you can find a linear construction.

»
6 hours ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

nvm

»
6 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

C is a nice problem. Thank you

»
6 hours ago, # |
  Vote: I like it -18 Vote: I do not like it


Same same but different

»
6 hours ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

On problem D, I found a solution that actually builds A into B, rather than the reverse, by recursively trying to build smaller elements into B. Definitely way more complicated than the intended solution, though.

»
5 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

Let the rank name be both alternatives, "tourist" and "jiangly"; now tourist will have to reach "jiangly" rating xD

»
5 hours ago, # |
Rev. 2   Vote: I like it +13 Vote: I do not like it

Sorry for the weak pretest in problem B

Hmm... So it was not intentional? I found this a very good kind of problem to have weak pretests. Some 2017 Codeforces vibes :) There is one simple correct solution and plenty of ways to get an "almost-correct" solution.

  • »
    »
    4 hours ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Just wondering, is there a way to tell whether main test 11 existed before the round or not? If yes, was it deliberately excluded from pretests?

    I did not realize that pretests were this weak, so I just assumed the hacks were all related to hashmaps. :P

    • »
      »
      »
      4 hours ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      Take a look at the first hack: it is indeed that test, so it was added by hacks.

      • »
        »
        »
        »
        4 hours ago, # ^ |
        Rev. 2   Vote: I like it +15 Vote: I do not like it

        Oh ok. I assume all of these wrong solutions would have passed if it were not for hacks? So then "weak pretest" should actually be "weak tests."

»
5 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

I was thinking of the same idea for E but couldn't figure out that this is a convex function, hence I couldn't reach the conclusion that taking the $$$k$$$ largest reductions works. Can someone share how they got the intuition for this being a convex function, or is there another solution that doesn't use it?

  • »
    »
    4 hours ago, # ^ |
    Rev. 3   Vote: I like it +2 Vote: I do not like it

    You could do the same thing with brute force up to storing all "improvements"; say arr[i][j] = num(i, j+1) - num(i, j). Then you could maintain a multiset or some other sorted list data structure, where initially you add all arr[i][0] to the multiset. You can remove the maximum element, say arr[i][j], and insert arr[i][j+1] back into the multiset (since now that move became available). Repeat k times and you can get the final sum.

    Edit: nvm this is completely wrong

    • »
      »
      »
      4 hours ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      But without knowing that this is convex, this greedy can't be proven correct. For instance, in a scenario where it isn't convex, there could be a case where for some $$$i_1$$$ the first improvement is not as good as the first improvement for some other $$$i_2$$$, but the next improvement for $$$i_1$$$ is very large. If $$$k=2$$$, the greedy algorithm will pick the first improvement for $$$i_2$$$ and then the first improvement for $$$i_1$$$, whereas the correct answer is the first and second improvements of $$$i_1$$$.

      • »
        »
        »
        »
        4 hours ago, # ^ |
          Vote: I like it +3 Vote: I do not like it

        wait yeah you're totally right. I completely fakesolved it

  • »
    »
    4 hours ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    Consider the list of bits that are turned off by using the AND operator multiple times. Say the first operation turns off the bit in position 8. After that, no set of operations should modify anything in bit positions 8 or above, because otherwise we would have had a better first move. Now note that $$$2^x \ge \sum 2^y$$$ over any subset of $$$y$$$ in $$$\{0,1,2,\dots,x-1\}$$$. Thus, convexity.
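
    To spell this out: by the geometric series, $$$2^x = 1 + \sum_{y=0}^{x-1} 2^y > \sum_{y \in S} 2^y$$$ for every subset $$$S \subseteq \{0, 1, \dots, x-1\}$$$, so clearing one higher bit always saves strictly more than clearing any combination of strictly lower bits. That exchange step is exactly what makes the sequence of per-index gains non-increasing, i.e. convex.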

»
5 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

it's over

»
5 hours ago, # |
  Vote: I like it +1 Vote: I do not like it

E: wtf bruteforce?! Using the facts that a sum of convex functions is convex and that $$$2^b > \sum_{k=0}^{b-1} 2^k$$$ doesn't justify it being E, and I think it would have had a lot more solutions with a lower constraint on M and a lower placement in the contest. It's really hard to see why this simple nested loop should be fast when many other loops with the same number of iterations aren't, especially without knowledge of cache efficiency and assembly instructions (specifically that min/max are their own instructions, not compare+branch). This just isn't a very good algorithmic problem.

Note that you don't need to prove convexity. Think about it as greedy decisions: operations on different $$$i$$$ are independent, and it's always better to remove the largest possible bits from any $$$i$$$, so the first $$$k$$$ operations on any $$$i$$$ will "do more" than anything we could possibly do from the $$$(k+1)$$$-th on. This gives enough intuition, and then the brute force goes brrr (rough sketch below).
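
For concreteness, here is a rough sketch of that brute force, under my reading of the problem (n values a, m magic values b, and at most k operations of the form a_i := a_i AND b_j; the input format is an assumption). It relies on the convexity/exchange argument discussed above to justify taking the $$$k$$$ largest reductions globally:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // Rough sketch only: the input format (t test cases; n, m, k; the array a;
    // the magic numbers b) and the exact constraints are assumptions, not
    // taken from the statement. Assumes m is small (subsets fit in a bitmask).
    int t;
    scanf("%d", &t);
    while (t--) {
        int n, m;
        long long k;
        scanf("%d %d %lld", &n, &m, &k);
        vector<long long> a(n), b(m);
        for (auto &x : a) scanf("%lld", &x);
        for (auto &x : b) scanf("%lld", &x);

        int full = 1 << m;
        // andOf[mask] = AND of the chosen subset of b, built in O(2^m) total
        vector<long long> andOf(full, ~0LL);
        for (int mask = 1; mask < full; mask++) {
            int low = __builtin_ctz(mask);
            andOf[mask] = andOf[mask ^ (1 << low)] & b[low];
        }

        long long total = 0;
        vector<long long> gains;  // marginal reduction of each extra operation
        for (int i = 0; i < n; i++) {
            total += a[i];
            // g[j] = smallest value of a[i] reachable with at most j operations
            vector<long long> g(m + 1, a[i]);
            for (int mask = 1; mask < full; mask++) {
                int c = __builtin_popcount(mask);
                g[c] = min(g[c], a[i] & andOf[mask]);
            }
            for (int j = 1; j <= m; j++) {
                g[j] = min(g[j], g[j - 1]);        // using fewer operations is allowed
                gains.push_back(g[j - 1] - g[j]);  // convex => non-increasing in j
            }
        }
        // greedily spend the k operations on the largest reductions overall
        sort(gains.rbegin(), gains.rend());
        for (long long j = 0; j < k && j < (long long)gains.size(); j++)
            total -= gains[j];
        printf("%lld\n", total);
    }
    return 0;
}
```

The loop over masks is the part whose constant factor the rest of this thread argues about; if __builtin_popcount turned out to be slow on the judge, the popcounts of all masks could be precomputed once.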

  • »
    »
    5 hours ago, # ^ |
    Rev. 2   Vote: I like it +1 Vote: I do not like it

    I don't understand your comment. The complexity is on the order of 10^8, and expecting that to pass is not at all unreasonable??

    • »
      »
      »
      3 hours ago, # ^ |
        Vote: I like it +6 Vote: I do not like it

      Complexity of order 10^8 can easily hit the TL. Constructing 1e7 2D vectors by randomly appending 3e7 elements takes 2 seconds on custom invocation because of cache effects and indirection, and that's still an order of magnitude fewer loop executions than in this case.

      There's a rule of thumb, yes, but it doesn't say 10^8, it says 10^7 (or 10^6 in older times, but CPU performance hasn't been increasing all that much), and it still comes with caveats: even if it's plenty in 99% of cases, there can be many hidden instructions behind C++ expressions. You can't predict real runtime from a single number without looking at specific code features, like the ones I mentioned above.

      Examples: at IOI 2012, I burned myself by reconstructing an std::queue ~10^6 times; literally adding the static keyword made the code several times faster. In 1975G - Zimpha Fan Club, complexity on the order of 10^9 exceeded the TL simply because division with a dynamic modulo can't be optimised by the compiler and therefore uses divq, which has a high CPI. Many times I've sped up complicated DFS processing of a tree by doing a simple preorder renumbering at the start and the end. Then there's this meme I made simply because it's such an efficient and simple optimisation.

  • »
    »
    4 hours ago, # ^ |
      Vote: I like it +13 Vote: I do not like it

    I agree with you on this part. I was also very hesitant about using __builtin_popcount inside the loop and thought of precomputing it, but it is fast enough.

    I think these days it's easier to assume the solution is fast enough instead of finding the proper idea in many problems. I was also thinking N <= 1e6 and M <= 5 would have been a better choice.

    • »
      »
      »
      4 hours ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Can you please check my solution for E? It's giving TLE on test 19: https://codeforces.net/contest/2061/submission/302147220

      • »
        »
        »
        »
        4 hours ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Your code is O(N * M * 2^M); make the precomputation O(N * 2^M).

    • »
      »
      »
      4 hours ago, # ^ |
      Rev. 2   Vote: I like it +13 Vote: I do not like it

      The speed of __builtin_popcount is pretty hardware-dependent (and, for me, compiler-dependent too), so I would not dare to put it through 1e8 loop iterations. This is why I went for the second option below.

      That said, if it does not pass, it is easy to change it to a precomputation. So the options are roughly:

      • do the simple thing, -50 and a few minutes if wrong, nothing else if right
      • do the correct thing, down maybe a minute but definitely OK

      I think it is a fair and balanced (even interesting) decision. It would be very different if you were only allowed one submission, with no chance to correct a TLE.

  • »
    »
    4 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    How much lower? I feel that even if the brute force were more evident, it would still be a harder problem than C and D. I also don't understand the part where you say we don't need to prove convexity. We can't prove that greedily taking the best decisions across all $$$i$$$ is correct without convexity, right? (See the comment regarding the same.)

    As for the part where you mention it is better to remove the largest bits first: isn't that indirectly the same argument that convexity makes? The convexity proof in the editorial basically says the same thing: when we go from $$$g_{i-1}$$$ to $$$g_{i+1}$$$, the largest bit is removed when we first go from $$$g_{i-1}$$$ to $$$g_{i}$$$, and the smaller bits are removed subsequently.

    • »
      »
      »
      2 hours ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I mean we don't need to formally prove convexity, or even anything remotely close to that. What I described points intuitively towards the proof in the editorial, but it isn't itself a proof, just intuition: the kind of thinking that makes sense but could easily overlook something. A solution can be presented in full to demonstrate a problem's difficulty or handwaved to make it seem easier; I know, I've done it.

      I think with more obvious constraints, C/D/E are basically interchangeable. C and D have basically equal number of solutions and there's always a dropoff for later problems, though it's not too steep early on.

  • »
    »
    4 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I don't think it is bad to test understanding of big constant-factor differences, especially when the constant factor is as big as a factor of 10; I am quite sure that an iteration of this kind with 1e9 would have passed (in C++) too. (Not all implementations are equal, though; for example, the bit counts have to be precomputed.)

    • »
      »
      »
      2 hours ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      It's not too bad, and it's a useful skill that helps from time to time in various problems anyway, but I don't think it should play a major role in not-too-hard problems. Sometimes you screw up the implementation, don't pay attention to the constant factor, and your code is slow. Sometimes you can squeeze an inefficient algorithm through with a very efficient implementation, even with tricks of its own. If one of the last problems requires constant-factor optimisation on top of other stuff, that's fine, I'll just take it as a juicy bone. But in this case I feel like it's artificially inflating the difficulty, and at the level it was placed, it's measuring the weight of balls more than that specific skill.

      It's a valid contest problem, but there ought to be a tendency to use such problems only rarely, and even then preferably at a higher level.

»
5 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem C, can someone please explain why "If they are a liar, since two liars cannot stand next to each other, the (i−1)-th person must be honest. In this case, there are a(i−1) + 1 liars to their left."

Shouldn't it be a(i-1) liars to the left of the i-th person (the i-th person being the liar)? Because the (i-1)-th person is honest, there are a(i-1) liars up to position i-1, so there are a(i-1) liars to the left of the i-th person, right?

  • »
    »
    4 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Yeah, shouldn't that be a(i-1)+1 liars to the left of the (i+1)-th person? (That's what makes the dp transition for the liar case.)

»
4 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

D was a good exercise in the use of heaps.

»
4 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

Is the Codeforces problemset really this hard? I have to concentrate hard on B and C problems for a long time to arrive at the solution.

»
4 hours ago, # |
  Vote: I like it 0 Vote: I do not like it

My D TLE'd due to hash collisions smh....

»
4 hours ago, # |
Rev. 2   Vote: I like it +6 Vote: I do not like it

In case you could not understand the editorial solution for C:

Let dp[i][0] and dp[i][1] denote the number of configurations of the first i people where the i-th person is honest or a liar, respectively.

The (i-1)-th person and the i-th person can be:

  1. honest-honest: in this case the condition is that a[i-1] must equal a[i], since both statements are true.

  2. honest-liar: we do not need any condition in this case.

  3. liar-honest: in this case the condition is that the (i-2)-th person must be honest, so a[i-2] + 1 = a[i].

Overall, dp[n-1][0] + dp[n-1][1] gives the answer; a small sketch of this DP follows.
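
Here is a minimal implementation sketch of the DP above. The modulus 998244353 and the input format (t test cases, then n and the array a) are assumptions on my part, not taken from the statement:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const long long MOD = 998244353;   // assumed modulus
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        vector<long long> a(n);
        for (auto &x : a) scanf("%lld", &x);

        // dpH[i]: configurations where person i is honest,
        // dpL[i]: configurations where person i is a liar.
        vector<long long> dpH(n, 0), dpL(n, 0);
        dpH[0] = (a[0] == 0);   // honest: nobody stands to the left
        dpL[0] = 1;             // liar: statement is unconstrained
        for (int i = 1; i < n; i++) {
            // honest-honest: both statements are true, so they must agree
            if (a[i] == a[i - 1]) dpH[i] = (dpH[i] + dpH[i - 1]) % MOD;
            // liar-honest: person i-2 (if any) is honest and liar i-1 adds one
            long long liarsSeen = (i >= 2 ? a[i - 2] : 0) + 1;
            if (a[i] == liarsSeen) dpH[i] = (dpH[i] + dpL[i - 1]) % MOD;
            // honest-liar: no condition on a[i], but person i-1 must be honest
            dpL[i] = dpH[i - 1];
        }
        printf("%lld\n", (dpH[n - 1] + dpL[n - 1]) % MOD);
    }
    return 0;
}
```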

  • »
    »
    3 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I had the same structure for my dp but couldn't find the transition... Thank you!!

    • »
      »
      »
      3 hours ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      same, I panicked, went into random ideas and stopped seeing things clearly during the contest :(

      Lesson learnt!

»
3 hours ago, # |
Rev. 2   Vote: I like it -7 Vote: I do not like it

Hi, why does this solution 302120893 for D get TLE while 302121766 passes comfortably? The time complexity should be about the same.

  • »
    »
    2 hours ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Suppose you have $$$2^{30}$$$ in a query.

    You query $$$2^{29}$$$ two times. Then, for each of those, you query $$$2^{28}$$$, four times in total.

    Similarly, you end up reaching 1 about $$$2^{30}$$$ times (see the small illustration below).
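
    In code, the shape of that blow-up looks roughly like this (a toy illustration, not either of the actual submissions); memoisation, or an early exit once the required value is available, is what avoids the doubling:

    ```cpp
    #include <bits/stdc++.h>
    using namespace std;

    // Splitting x into its two halves with no memoisation and no early exit
    // doubles the number of calls at every level of the recursion.
    long long calls = 0;

    void split(long long x) {
        ++calls;
        if (x <= 1) return;      // leaf of the recursion tree
        split(x / 2);            // each call spawns two children
        split(x - x / 2);
    }

    int main() {
        split(1LL << 26);
        printf("%lld\n", calls); // prints 2^27 - 1; for 2^30 it would be ~2*10^9 calls
        return 0;
    }
    ```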

»
2 hours ago, # |
Rev. 9   Vote: I like it 0 Vote: I do not like it

Can anyone tell me why, in problem D, my submission 302152821 causes SIGTERM on this test case (given in the problem statement):

1
1 1
1
1000000000

while everything else works fine.

In my other submission 302153118 I added just one line (mentioned in a comment), and it got accepted.

»
2 hours ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

In problem D, I'm trying to understand the time complexity of this solution. This straightforward DFS using std::map runs in 300ms:

https://codeforces.net/contest/2061/submission/302153025

For each target x, I appear to be doing $$$2^{\log x}$$$ recursive calls in the worst case, each with a $$$\log n$$$ map operation, but the actual runtime looks much better. What am I missing? Is it just that the test cases are not strict enough?

Also, I don't understand why std::map is so much faster here. If I use unordered_map with the exact same solution, it TLEs.

Edit: ok, nvm, I got it. The number of recursive calls is bounded by the input size, since the input gets consumed, and unordered_map is simply being targeted with collisions; it works fine with a custom hash function (for example, the one below).
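
For reference, a common shape of such a custom hash (a generic snippet, not the commenter's code) is the randomized splitmix64-style hash widely used on Codeforces:

```cpp
#include <bits/stdc++.h>
using namespace std;

// A randomized hash so that anti-hash tests cannot precompute collisions.
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        // per-run random offset, unknown to hackers preparing tests
        static const uint64_t FIXED_RANDOM =
            chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};

int main() {
    unordered_map<long long, long long, custom_hash> cnt;
    cnt[1000000000] += 1;   // usage is identical to a plain unordered_map
    printf("%lld\n", cnt[1000000000]);
    return 0;
}
```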