jqdai0815's blog

By jqdai0815, history, 5 weeks ago, In English

Thank you for participating in this round! Problem I was authored by orzdevinwang, and the rest by me.

Sorry for the weak pretest in problem B.

2061A - Kevin and Arithmetic

Hint 1
Solution

2061B - Kevin and Geometry

Hint 1
Hint 2
Hint 3
Solution

2061C - Kevin and Puzzle

Hint 1
Hint 2
Solution

2061D - Kevin and Numbers

Hint 1
Hint 2
Hint 3
Solution

2061E - Kevin and And

Hint 1
Hint 2
Hint 3
Solution

2061F1 - Kevin and Binary String (Easy Version)

2061F2 - Kevin and Binary String (Hard Version)

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Solution

2061G - Kevin and Teams

Hint 1
Hint 2
Hint 3
Solution

2061H1 - Kevin and Stones (Easy Version)

2061H2 - Kevin and Stones (Hard Version)

Hint 1
Hint 2
Hint 3
Solution

Draft in Chinese

2061I - Kevin and Nivek

Hint 1
Hint 2
Hint 3
Hint 4
Solution
  • Vote: I like it
  • +78
  • Vote: I do not like it

»
5 weeks ago, # |
  Vote: I like it +9 Vote: I do not like it

Auto comment: topic has been updated by jqdai0815 (previous revision, new revision, compare).

»
5 weeks ago, # |
  Vote: I like it +72 Vote: I do not like it

Congrats jiangly on becoming jiangly :))

»
5 weeks ago, # |
  Vote: I like it +8 Vote: I do not like it

Auto comment: topic has been updated by jqdai0815 (previous revision, new revision, compare).

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Fun fact for problem D: when I was testing this round, this problem was Div. 2 B with $$$n \le 500$$$. (=・ω・=)

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +58 Vote: I do not like it

Fun fact: it is still a Div. 2 B level problem.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +11 Vote: I do not like it

    Whatever $$$O(n^3)$$$ solution was intended is probably harder to find than the current solution.

»
5 weeks ago, # |
  Vote: I like it +8 Vote: I do not like it

Can someone please tell me why this code 302137216 for D got accepted and this one 302137051 didn't, even though the time complexity is the same?

»
5 weeks ago, # |
  Vote: I like it +8 Vote: I do not like it

If v[0] = 0, isn't it obvious that he is telling the truth, since there is nobody to his left?

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +13 Vote: I do not like it

He could still be a liar even if what he says is correct.

»
5 weeks ago, # |
  Vote: I like it +13 Vote: I do not like it

E is a greed-bait.

»
5 weeks ago, # |
  Vote: I like it +45 Vote: I do not like it


Tourist -> <-

Congrats jiangly

»
5 weeks ago, # |
  Vote: I like it +33 Vote: I do not like it

Thanks to the authors for the amazing problems! I really enjoyed the round!

For G, a good way to convince yourself that $$$\frac{n + 1}{3}$$$ is attainable is to fix three vertices a, b, c. Say the edge a-b is 1; then by induction there is a matching of size $$$\frac{n + 1}{3} - 1$$$ among the other vertices: if it is of color 1 we are done, and otherwise the edges a-c and b-c have to be 1 as well (or we get a 0-color matching). But now we can make another choice for the vertex c, and since a-b is 1 we will still get a-c and b-c to be 1. So every edge from a or b is 1, as well as every edge from c for any c... and finally the whole graph is all 1s, which is absurd.

I've found this helpful, since knowing the answer gives you the idea of extracting one edge from every 3 vertices, hence the mixed-color triplets from the solution, from which you can find a linear construction.

»
5 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

nvm

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

C is a nice problem. Thank you

»
5 weeks ago, # |
  Vote: I like it -70 Vote: I do not like it


Same same but different

»
5 weeks ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

On problem D, I found a solution that actually does build A into B rather than the reverse, by recursively trying smaller elements to build into B. Definitely way more complicated than the intended solution though.

»
5 weeks ago, # |
  Vote: I like it -6 Vote: I do not like it

Let the top rank name alternate between "tourist" and "jiangly". Now tourist will have to reach the "jiangly" rating xD

»
5 weeks ago, # |
Rev. 2   Vote: I like it +38 Vote: I do not like it

Sorry for the weak pretest in problem B

Hmm... So it was not intentional? I found this a very good kind of problem to have weak pretests. Some 2017 Codeforces vibes :) There is one simple correct solution and plenty of ways to get an "almost-correct" solution.

  • »
    »
    5 weeks ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

Just wondering, is there a way to tell whether main test 11 existed before the round or not? If yes, was it deliberately excluded from pretests?

    I did not realize that pretests were this weak, so I just assumed the hacks were all related to hashmaps. :P

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +34 Vote: I do not like it

Take a look at the first hack; it is indeed that test, so it was added by hacks.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
        Rev. 2   Vote: I like it +47 Vote: I do not like it

        Oh ok. I assume all of these wrong solutions would have passed if it were not for hacks? So then "weak pretest" should actually be "weak tests."

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I was thinking of the same idea for E but couldn't figure out that this is a convex function, hence couldn't reach the conclusion that taking $$$k$$$ largest reductions works. Can someone share how they got the intuition for this being a convex function, or is there any other solution that doesn't use this?

  • »
    »
    5 weeks ago, # ^ |
    Rev. 3   Vote: I like it +3 Vote: I do not like it

    You could do the same thing with brute force up to storing all "improvements"; say arr[i][j] = num(i, j+1) - num(i, j). Then you could maintain a multiset or some other sorted list data structure, where initially you add all arr[i][0] to the multiset. You can remove the maximum element, say arr[i][j], and insert arr[i][j+1] back into the multiset (since now that move became available). Repeat k times and you can get the final sum.

    Edit: nvm this is completely wrong

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      But without knowing the fact that this is convex, this greedy can't be proved right. Like in a scenario when this isn't convex, maybe there can be a case where for some $$$i_1$$$, the first improvement is not as good as the first improvement for some other $$$i_2$$$, but say the next improvement for $$$i_1$$$ is very huge. If $$$k=2$$$, then the greedy algorithm will pick the first improvement for $$$i_2$$$, and then the first improvement for $$$i_1$$$, whereas the correct answer should be first and second improvement of $$$i_1$$$.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it +3 Vote: I do not like it

        wait yeah you're totally right. I completely fakesolved it

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +9 Vote: I do not like it

Consider the list of bits that are turned off by using the AND operator multiple times. Say that the first operation turns off the bit in position 8. After that, no set of operations should modify anything in bit position 8 or above, because otherwise we would have a better first move. Now, note that $$$2^x \ge \sum_{y \in S} 2^y$$$ for any subset $$$S \subseteq \{0,1,2,\ldots,x-1\}$$$. Thus, convexity.
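For completeness, the inequality above is just the geometric sum: $$$2^x = 1 + \sum_{y=0}^{x-1} 2^y > \sum_{y \in S} 2^y$$$ whenever $$$S \subseteq \{0, 1, \ldots, x-1\}$$$, so turning off one high bit always outweighs turning off any combination of lower bits.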

  • »
    »
    5 weeks ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    1

    3 3 2

    15

    5 6 9

-- trap: greedily taking the largest reduction at each step gives 15 -> 5 -> 1 -> 0 (three operations), but we can do 15 -> 6 -> 0 optimally (two operations).

»
5 weeks ago, # |
  Vote: I like it -8 Vote: I do not like it

it's over

»
5 weeks ago, # |
  Vote: I like it +33 Vote: I do not like it

E: wtf bruteforce?! Using the facts that a sum of convex functions is convex and that $$$2^b > \sum_{k=0}^{b-1} 2^k$$$ doesn't justify it being E, and I think it would've had a lot more solutions with a lower constraint on M and a lower placement in the contest. It's really hard to see why this simple nested loop should be fast when many other loops with the same number of repetitions aren't, especially without knowledge of cache efficiency and assembly instructions (specifically that min/max are their own instructions, not compare+branch). This just isn't a very good algorithmic problem.

Note that you don't need to prove convexity. Think about it as greedy decisions: operations on different $$$i$$$ are independent and it's always better to remove the largest possible bits from any $$$i$$$, so the first $$$k$$$ operations on any $$$i$$$ will "do more" than anything we could possibly do from the $$$(k+1)$$$-th operation onward. This gives enough intuition and then bruteforce goes brrr.
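For concreteness, here is a minimal sketch of that brute force. It is not the author's code: it assumes the usual input layout (t test cases, then n, m, k, then the arrays a and b) and that values fit in 64-bit integers. It precomputes the AND of every subset of b in $$$O(2^m)$$$, runs the $$$n \cdot 2^m$$$ hot loop discussed in this thread, and finally takes the k largest reductions.

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int t;
        scanf("%d", &t);
        while (t--) {
            long long n, m, k;
            scanf("%lld %lld %lld", &n, &m, &k);
            vector<long long> a(n), b(m);
            for (auto &x : a) scanf("%lld", &x);
            for (auto &x : b) scanf("%lld", &x);

            // andv[mask] = AND of all b_j selected by mask, built in O(2^m)
            vector<long long> andv(1 << m, ~0LL);
            for (int mask = 1; mask < (1 << m); mask++) {
                int low = __builtin_ctz(mask);
                andv[mask] = andv[mask ^ (1 << low)] & b[low];
            }

            long long total = 0;
            vector<long long> deltas;      // improvement of the c-th operation on each element
            vector<long long> best(m + 1);
            for (int i = 0; i < n; i++) {
                fill(best.begin(), best.end(), a[i]);
                // the n * 2^m hot loop: best value of a[i] per number of operations used
                for (int mask = 1; mask < (1 << m); mask++)
                    best[__builtin_popcount(mask)] =
                        min(best[__builtin_popcount(mask)], a[i] & andv[mask]);
                for (int c = 1; c <= m; c++) best[c] = min(best[c], best[c - 1]);
                total += a[i];
                for (int c = 1; c <= m; c++) deltas.push_back(best[c - 1] - best[c]);
            }
            // per element the deltas are non-increasing, so the k largest deltas
            // globally always correspond to a valid prefix of operations per element
            sort(deltas.rbegin(), deltas.rend());
            for (long long i = 0; i < min(k, (long long)deltas.size()); i++) total -= deltas[i];
            printf("%lld\n", total);
        }
        return 0;
    }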

  • »
    »
    5 weeks ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

I don't understand your comment. The complexity is of the order of $$$10^8$$$, and expecting it to pass is not at all unreasonable?

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Complexity of the order of 10^8 can easily hit the TL. Constructing 1e7 2D vectors by randomly appending 3e7 elements takes 2 seconds on custom invocation because of cache misses and indirection; that's still an order of magnitude fewer loop executions than this case.

      There's a rule of thumb, yes, but that doesn't have 10^8, it has 10^7 (or 10^6 in older times, but CPU performance hasn't been increasing all that much) and it's still with caveats — even if it's plenty in 99% of cases, there could be many hidden instructions behind C++ expressions sometimes. You can't base real runtime on a number without specific code features, like the ones I mentioned above.

Examples: At IOI 2012, I burned myself by reconstructing an std::queue ~10^6 times — literally adding the static keyword made the code several times faster. In 1975G - Zimpha Fan Club, complexity in the order of 10^9 exceeded the TL simply because division with a dynamic modulus can't be optimised by the compiler and therefore uses divq, which has high CPI. Many times I've sped up complicated DFS processing of a tree by doing a simple preorder renumbering at the start and the end. Then there's this meme I made simply because it's such an efficient and simple optimisation.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it -22 Vote: I do not like it

        302209959 This code does $$$N \cdot 2^M \cdot 7$$$ operations, 7 per loop for no point at all. It still passes.

Your claims are bogus. Yes, indeed $$$10^8$$$ isn't always safe. However, here it clearly should be. A bitwise AND is one of the fastest operations. Also, it's not like we are measuring the algorithmic complexity here, but the exact number of operations.

        You could make an argument for popcount not being fast (even though in my experience, it is also fast), however that is simple to avoid.

Clearly, they wanted to cut (most) $$$O(N \cdot 2^M \cdot M)$$$ solutions. I don't support this decision, but neither am I against it. If you really failed to assume that the intended complexity would pass, I am sorry for you.

        There's a rule of thumb, yes, but that doesn't have 10^8, it has 10^7

Speak for yourself; my rule of thumb is: < 10^8 mostly safe, 10^8 to 10^9 unsure, depends on the operations.

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ |
            Vote: I like it +24 Vote: I do not like it

          Fuck off with condescending statements like "your claims are bogus" or "I'm sorry for you", ok? Nobody appreciates passive aggressiveness. Stay direct and to the point.

          The exact number of operations isn't 10^8. My code is $$$O(N 2^M)$$$ with 2e8 loop executions and 7 instructions per loop, like this:

          	movl	(%rdx), %esi
          	andl	%edi, %esi
          	cmpl	%esi, %ecx
          	cmovg	%esi, %ecx
          	addq	$4, %rdx
          	cmpq	%rdx, %r8
          	jne	.L115
          

          The code you link has 36 instructions per equivalent loop (not counting labels and noops). In addition, one of them on my system (likely different from CF) is call __popcountdi2@PLT. We're at 7e9 for exact number or a bit over an order of magnitude above the number of loop executions, which is as expected. That's far above even your rule of thumb.

          Not all operations are equal. CPI matters, branch prediction matters, avoiding branching by proper assembly matters. AND is one of the fastest, but it's only one among many. Compare, branch and memory access are the important ones, not AND. Then there's a whole another factor — pipelining.

          Correctly predicting that the code you linked would fit in TL is nigh impossible as the exact number of operations is well above even your optimistic rule of thumb and there are some slow operations. Predicting that my code would fit is easier since it's intentionally written to be more constant-optimal, but still unreliable when compared to other problems, and much better to just write and measure. That's what I did.

          The issue is that this bruteforce is intended to pass, not that it can pass. Unintended real-time-efficient alternatives to an optimal algorithm happen often enough but this is a completely different situation.

          • »
            »
            »
            »
            »
            »
            5 weeks ago, # ^ |
              Vote: I like it -19 Vote: I do not like it

            We're at 7e9 for exact number or a bit over an order of magnitude above the number of loop executions, which is as expected. That's far above even your rule of thumb.

Precisely my point. If operation counts non-trivially higher than your rule of thumb are still OK, maybe it is reasonable to reevaluate it?

The code is not designed to show you that it should pass, but that this can also pass, and thus the argument that the intended solution is not supposed to pass (or that we can't tell whether it should) is unreasonable. You are not supposed to predict that the code I linked passes, but the actual intended code should?

            The issue is that this bruteforce is intended to pass, not that it can pass.

It is not; the editorial clearly states $$$O(N \cdot 2^M)$$$ complexity, not $$$O(N \cdot 2^M \cdot M)$$$ [which is what my code is trying to mimic, but actually doing that TLEs, at least without other optimizations].

There are several problems I can link you to where the intended complexity is greater than the order of 10^8 operations per second, and sometimes not even with simple operations. (I am not saying that this is always right, but in this example I see nothing wrong.) The only difference, I suppose, as you pointed out, is that they are harder. OK, so? Looking at submissions, the TL certainly doesn't seem to have been tight.

Meanwhile, take a look at I. The AC solution in-contest has a complexity of $$$O(N \sqrt{N \log N})$$$ with $$$N = 3 \cdot 10^5$$$. Even with my estimates, it is a stretch to believe it. Also, btw, $$$10^9$$$ simple operations passing is not exactly a new thing.

You mentioned vector push_backs and division operations, which are somewhat well known to be costly. Comparing them with min and AND operations is ridiculous. It's like comparing indexed_set (ordered_set) with priority_queue. (OK, comparing division isn't that extreme.)

302227228 I put bitwise operators and max operations into a code snippet and ran it 10^9 times (10^4 test cases, each of size 10^5). I don't know how compilers work, so maybe it's just being optimized.

            • »
              »
              »
              »
              »
              »
              »
              5 weeks ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

Follow-up: for about 30 seconds in contest, I considered whether $$$N \cdot 2^M \cdot M$$$ would pass. If I had not found the optimization soon enough, I would probably also have submitted it.

I would not have been completely wrong either, since it does come close. Some others also did submit that complexity.

            • »
              »
              »
              »
              »
              »
              »
              4 weeks ago, # ^ |
              Rev. 2   Vote: I like it -20 Vote: I do not like it

To be clear, by bruteforce I mean anything that checks all subsets of operations for each element. Not amortised, not $$$O(N 2^M)$$$ operations, but exactly $$$N \cdot 2^M$$$ repetitions of AND and min/max, with no optimisation of which subsets are checked. The AND values can be precomputed, the popcount values can be precomputed instead of using an intrinsic in case the intrinsic would be slow on the CF judge (a valid concern raised before, though it doesn't seem to be the case here), but those are implementation details with many possible effects, and the core idea is still that we can afford to check everything as long as we don't do it in some stupid way.

              There are several problems I can link you to where intended complexity is greater than the order of 10^8 per second, and sometimes not even simple operations.

Also, btw $$$10^9$$$ simple operations passing is not exactly a new thing.

              There are also many problems where the intended complexity is much smaller than the order of 10^8. It's exactly part of my point that it massively depends on problem, it's quite a high level skill to predict specific code's real runtime beyond "DP tends to be fast, tree processing tends to be slower..." since it requires some understanding of hardware. I don't care that many operations pass here, but that it's artificially increasing the problem's difficulty and making it measure balls rather than skill, since it's much easier to just submit bruteforce as a hail mary approach than to have the required skills.

              Comparing them with min and AND operations is ridiculous

              Again with the snide remarks. I'm comparing them on purpose to show that CPI matters a lot, not just number of operations. You don't have a better reply than trying to ridicule it. Knowing which operations are slow and which are fast, again beyond general patterns like "float arithmetic is slow compared to int", is quite a high level skill and it ties into the argument I made above.

The code you sent is... running the same asm in the loop that I posted, plus or minus a few operations.

              There is no rule of thumb number of operations that'd make someone decide that my submission will easily fit in TL without also causing a lot of TLEs on many (though not all) other problems. That's not debatable, it's simply (10^9 > 10^8) == true. That requires additional knowledge, which I believe is largely lacking even among people who solved this problem. It should've been treated as either easier or harder, but it isn't like actually hard algorithmic problems that require some of this skill, so decreasing the limit on $$$M$$$ and score would've been a good decision.

              If the intent was to prevent solutions with an additional log-factor ($$$M$$$) from passing, then that's not following a fairly common guidance in problemsetting that we shouldn't try to separate strictly passing and strictly failing solutions based on one log-factor because it's too difficult, and this problem shall stand as another example why it's a good guidance.

          • »
            »
            »
            »
            »
            »
            5 weeks ago, # ^ |
              Vote: I like it +1 Vote: I do not like it

For simple loops like this, you should really evaluate it as if the loop were unrolled. (Of course, I get a cost-free way of doing this via JIT.) This brings the operations down to basically 4. In fact, it is lower because of executing ahead of time (there are no dependencies preventing the data from being loaded ahead of the current AND).

It is just a special rule and a special case that, for simple, independent, branchless, sequential array-access solutions, 1e9 operations can be achieved.

It feels like you are just stating reasons why it could be hard for a normal person with too pessimistic a runtime estimate to recognize that this passes, without seeing that you also provided all the reasons why it easily passes: branching, memory, and instruction throughput/latency are all as good as they can be in this solution. It is not that you can come to this conclusion without considering branching, memory, and instruction types; it is that doing so is very easy in this case, with lots of margin for error, both from first principles and from experience (if you have ever submitted a solution of a similar brute-force nature before; if you felt 1e8 is not fast enough in this case, you should have been surprised by runtimes on many occasions).

            • »
              »
              »
              »
              »
              »
              »
              4 weeks ago, # ^ |
                Vote: I like it -10 Vote: I do not like it

              For simple loops like this , you should really evaluate it as loops unrolled.

The assembly I posted isn't what I thought up, it's the exact assembly of my code. The jump at the bottom goes to the top. Sometimes loops can be unrolled, but not in this case. It's exactly 7 operations per loop. It runs in 600 ms along with the $$$O(T 2^M M)$$$ preprocessing.

              In fact, it is lower because of executing ahead of time (there is no dependencies to not load the data ahead of the current AND).

              Yes, I mentioned pipelining.

A normal person doesn't have a runtime estimate. A normal person finds A somewhat challenging. All that you mention is additional, harder knowledge that just demonstrates how difficult it truly is to estimate running time without just writing and running the code. In this case, it's not strictly needed: sequential memory access and cheap operations (AND, max) mean this should be pretty fast and is worth trying. However, that's not a very common CP skill, and confidently deciding that it will be really fast needs much more skill, so it boils down to "I solved ABCD, stuck on E, might as well try bruteforce, maybe it'll work". If it was a problem that fewer people try to seriously solve because there's a bunch of mid-level problems before it, that wouldn't be a concern.

              It is a statement that this is very easy to do on this case

For me. For you. Not for the 1400 who solved E, and who knows how many who didn't try.

              if you have sent any similar solution and a similar brute force nature before

              I don't think I've done that for years. They're more of a hallmark of low quality ICPC subregionals.

              If you felt 1e8 is not fast enough in this case, you should have been surprised by runtimes on so many occasions.

              I'm most often negatively surprised by runtimes. TLE would've been in line with my experiences — I gave a few examples far above that do more complicated stuff, but much fewer operations. Problems with 10^8 non-simple operations tend to TLE, problems with well over 10^9 simple operations tend to barely pass or TLE. Depends on the problem. In this case, I concluded that it could pass, but if it doesn't then I know how to optimise it, which isn't always an option.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +16 Vote: I do not like it

I agree with you on this part. I was also very hesitant about using __builtin_popcount inside the loop and thought of precomputing it, but it is fast enough.

I think these days it's easier to assume that a solution is fast enough instead of using the proper idea in many problems. I was thinking $$$N \le 10^6$$$ and $$$M \le 5$$$ would have been a better choice too.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Can you please check this solution for E? It's giving TLE on test 19: https://codeforces.net/contest/2061/submission/302147220

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Your code's precomputation is O(N * M * 2^M); make it O(N * 2^M).

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

I think it's 2^m * (n + m) for the precomputation; can you please recheck?

          • »
            »
            »
            »
            »
            »
            5 weeks ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

Seems like a constant factor issue; just optimize it. For example, a single vector arr of differences instead of one per i, j is enough.

            • »
              »
              »
              »
              »
              »
              »
              5 weeks ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

Did that, but even then it's only just passing, at around 1850 ms.

              • »
                »
                »
                »
                »
                »
                »
                »
                5 weeks ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

You can also do arr.reserve(n * m) to make it a little bit faster.

    • »
      »
      »
      5 weeks ago, # ^ |
      Rev. 2   Vote: I like it +18 Vote: I do not like it

The speed of __builtin_popcount is pretty hardware dependent (and for me, compiler dependent too), so I would not dare to put it through 1e8 loops. This is why I went for the second option below.

      That said, if it does not pass, it is easy to change to a pre-computation. So the options are like

      • do the simple thing, -50 and a few minutes if wrong, nothing else if right
      • do the correct thing, down maybe a minute but definitely OK

      I think it is a fair and balanced (even interesting) decision. It is very different if you are only allowed 1 submission and no chances to correct it if TLE.
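For illustration, a minimal sketch of that pre-computation option. This is not the commenter's code; m here stands for the number of b values, so the table covers all 2^m masks and replaces the intrinsic in the hot loop.

    #include <vector>

    // popcount table for all masks of m bits, built in O(2^m)
    std::vector<int> build_popcount_table(int m) {
        std::vector<int> pc(1 << m, 0);
        for (int mask = 1; mask < (1 << m); mask++)
            pc[mask] = pc[mask >> 1] + (mask & 1);
        return pc;
    }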

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

How much lower? I feel that even if the brute force were more evident, it is still a harder problem than C and D. I also don't understand the part where you say we don't need to prove convexity. We can't prove that greedily taking the best decisions across all $$$i$$$ is correct without convexity, right? (See the comment regarding the same.)

As for the part where you mention that it is better to remove the largest bits first: isn't that indirectly the same argument that convexity makes? The convexity proof in the editorial basically says the same, that when we go from $$$g_{i-1}$$$ to $$$g_{i+1}$$$, the largest bit is removed when we first go from $$$g_{i-1}$$$ to $$$g_{i}$$$ and the smaller bits are removed subsequently.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I mean we don't need to formally prove convexity — or even anything remotely close to that. What I described is intuitively directing towards the proof in editorial, but isn't itself a proof, just intuition. The kind of thinking that makes sense but could easily overlook something. A solution can be presented in full to demonstrate a problem's difficulty or handwaved to make it seem easier — I know, I've done it.

      I think with more obvious constraints, C/D/E are basically interchangeable. C and D have basically equal number of solutions and there's always a dropoff for later problems, though it's not too steep early on.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +2 Vote: I do not like it

All one needs to handwave is that when we do subsequent operations, the amount by which $$$a_i$$$ decreases gets smaller. Here is a handwavy exchange argument.

Let's say we have a sequence of operations A, A&B1, A&B1&B2, A&B1&B2&B3, and so on. Let's say D1 = A - A&B1, D2 = A&B1 - A&B1&B2, and so on (the amount each operation decreases the value by).

If D3 < D4, it means that using B3 on A&B1&B2 decreases the value by less than using B4 on A&B1&B2&B3 does. If this happens, why not just use B4 instead of B3? You will definitely remove all the bits that get turned off between A&B1&B2&B3 and A&B1&B2&B3&B4, and potentially some more bits.

      Spoiler
  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

I don't think it is bad to test understanding of big differences in constant factor, especially when that constant factor is as large as a factor of 10 — I am very sure that an iteration of this kind with 1e9 operations would have passed (in C++) too. (Not all implementations are equal; for example, the bit counts have to be precomputed.)

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it -10 Vote: I do not like it

      It's not too bad and it's a useful skill that helps from time to time in various problems anyway, but I don't think it should have a major role in not-too-hard problems. Sometimes you screw up implementation, don't pay attention to constant factor and your code is slow. Sometimes you can squeeze an inefficient algorithm through very efficient implementation, even with tricks of its own. If it's one of the last problems that requires constant optimisation on top of other stuff — that's fine, I'll just take it as a juicy bone. But in this case I feel like it's artificially inflating complexity and at the level it was placed, it's measuring weight of balls more than that specific skill.

It's a valid contest problem, but there ought to be a tendency to use such problems only rarely, and even then preferably at a higher level.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

For my understanding: you don't even need the fact that some removal has a lower reduction than the sum of all previous reductions, right? Just having a lower reduction than the last removal is sufficient (implying that the sequence of reductions is non-increasing)?

  • »
    »
    4 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I agree with you. It's not the end of the world, but that's how I felt when I was solving E lol

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

In Problem C, can someone please explain why "If they are a liar, since two liars cannot stand next to each other, the (i−1)-th person must be honest. In this case, there are a(i−1) + 1 liars to their left."

Shouldn't it be a(i-1) liars to the left of the i-th person (the i-th person being a liar)? Because the (i-1)-th person is honest, there are a(i-1) liars up to position i-1, so there are a(i-1) liars to the left of the i-th person, right?

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Yeah, shouldn't that be a(i-1)+1 liars to the left of the (i+1)-th person? (That's what makes the dp transition for a liar.)

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Exactly, that's what I'm thinking: if the i-th person is a liar and the (i-1)-th is not, then to the left of the i-th person there will be a(i-1) liars, not a(i-1)+1. I'm also confused by this +1 in the editorial. Are they counting the i-th liar too? But in the question it says strictly to the left, not including the current person.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

It's a misprint in the editorial; it should actually be a[i-2]+1.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

D was a good exercise in the use of heaps.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Is the Codeforces problemset really this hard? I have to concentrate hard on B and C problems for a long time in order to arrive at the solution.

  • »
    »
    4 weeks ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    you registered 6 years ago lil bro

    • »
      »
      »
      4 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Fair. I guess I forgot. Back after a long time

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

My D TLE'd due to hash collisions smh....

»
5 weeks ago, # |
Rev. 2   Vote: I like it +15 Vote: I do not like it

If you could not understand the editorial solution for C, here is another way to look at it.

Let dp[i][0] and dp[i][1] denote the number of valid configurations of the first i people where the i-th person is honest or a liar, respectively.

The (i-1)-th person and the i-th person can be:

  1. honest-honest: the condition is that a[i-1] must equal a[i], since both are telling the truth.

  2. honest-liar, we do not need any condition in this case.

  3. liar-honest: the condition is that the (i-2)-th person must be honest, so a[i-2]+1 = a[i].

Overall, dp[n-1][0] + dp[n-1][1] gives us the answer; a short sketch follows below.
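To make the transitions concrete, here is a minimal sketch of this dp. It is not taken from any submission in this thread; it assumes 0-indexed input and that the answer is printed modulo 998244353 (the modulus is an assumption, check the statement):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        const long long MOD = 998244353;  // assumed modulus
        int t;
        scanf("%d", &t);
        while (t--) {
            int n;
            scanf("%d", &n);
            vector<long long> a(n);
            for (auto &x : a) scanf("%lld", &x);
            // dp[i][0]: i-th person honest, dp[i][1]: i-th person is a liar
            vector<array<long long, 2>> dp(n, {0, 0});
            dp[0][0] = (a[0] == 0); // an honest first person sees no liars
            dp[0][1] = 1;           // a liar may claim anything
            for (int i = 1; i < n; i++) {
                // honest-honest: both counts must agree
                if (a[i] == a[i - 1]) dp[i][0] = dp[i - 1][0];
                // liar-honest: the person before the liar is honest (or absent),
                // and the liar adds exactly one to the count
                long long before = (i >= 2 ? a[i - 2] : 0);
                if (a[i] == before + 1) dp[i][0] = (dp[i][0] + dp[i - 1][1]) % MOD;
                // honest-liar: no two adjacent liars, no constraint on a[i]
                dp[i][1] = dp[i - 1][0];
            }
            printf("%lld\n", (dp[n - 1][0] + dp[n - 1][1]) % MOD);
        }
        return 0;
    }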

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I had the same structure for my dp but couldn't find the transition... Thank you!!

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      same, I panicked, went into random ideas and stopped seeing things clearly during the contest :(

      Lesson learnt!

      • »
        »
        »
        »
        3 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        could you elaborate alittle on how you set the initial conditions in your code?

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

Helps a lot! Can you explain this sentence from the editorial: "The final answer is given by dp[n]+dp[n-1]"?

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +4 Vote: I do not like it

I think it's the same as what he meant: dp[n] is the case where the n-th person tells the truth, and dp[n-1] is the case where the (n-1)-th person tells the truth, which is when the n-th person lies. So it's basically the same as "dp[n-1][0]+dp[n-1][1]" or "dp[n][0]+dp[n][1]".

  • »
    »
    4 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    excellent approach

»
5 weeks ago, # |
Rev. 2   Vote: I like it -8 Vote: I do not like it

Hi, why does this solution 302120893 for D get TLE while 302121766 passes comfortably? The time complexity should be about the same.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Suppose you have 2^30 in a query.

You query 2^29 two times. Then, for each of them, you query 2^28, four times in total.

Similarly, you end up reaching 1 about 2^30 times.

»
5 weeks ago, # |
Rev. 9   Vote: I like it 0 Vote: I do not like it

Can anyone tell me why, in problem D, my submission 302152821 gets SIGTERM on this test case (given in the problem):

1
1 1
1
1000000000

while everything else is fine.

In the other submission 302153118 I added only one line (mentioned in a comment) and it got accepted.

»
5 weeks ago, # |
Rev. 2   Vote: I like it +6 Vote: I do not like it

In problem D, I'm trying to understand the time complexity of this solution. This straightforward DFS using std::map runs in 300ms:

https://codeforces.net/contest/2061/submission/302153025

For each target x, I appear to be doing 2^(log x) recursive calls in the worst case, each with a log n map operation, but the actual runtime looks much better. What am I missing? Is it just that the test cases are not strict enough?

Also, I don't understand why std::map is much faster here. If I use unordered_map with the exact same solution, it TLEs.

Edit: ok, nvm, I got it. The number of recursive calls is bounded by the input size, since it gets consumed, and unordered_map was simply targeted by anti-hash tests; it works fine with a custom hash function.
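For anyone hitting the same TLE, a minimal sketch of the standard fix (a randomized splitmix64 hash, so hacks can no longer force collisions); the struct name is just illustrative, this is not the commenter's code:

    #include <bits/stdc++.h>
    using namespace std;

    // splitmix64-based hash; the random offset makes collisions untargetable by hacks
    struct custom_hash {
        static uint64_t splitmix64(uint64_t x) {
            x += 0x9e3779b97f4a7c15ULL;
            x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
            x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
            return x ^ (x >> 31);
        }
        size_t operator()(uint64_t x) const {
            static const uint64_t FIXED_RANDOM =
                chrono::steady_clock::now().time_since_epoch().count();
            return splitmix64(x + FIXED_RANDOM);
        }
    };

    // usage: unordered_map<long long, int, custom_hash> cnt;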

»
5 weeks ago, # |
  Vote: I like it +16 Vote: I do not like it

Problem C can be solved (overcomplicated) with 2-sat.

Let's figure out what the array should look like. It should look something like:

$$$0 \ \ 0 \ \ 0 \ \ \texttt{LIAR} \ \ 1 \ \ 1 \ \ 1 \ \ 1 \ \ \texttt{LIAR} \ \ 2 \ \ \texttt{LIAR} \ \ 3 \ \ 3$$$

The algorithm consists of iterating over the array from $$$1$$$ to $$$n$$$. Let's maintain the current variable $$$\texttt{sync}$$$, initially $$$0$$$. An element $$$a_i$$$ is out of sync if it does not equal $$$\texttt{sync}$$$. Naturally, a liar is found nearby and we do $$$\texttt{sync++}$$$. Formally, what this means is that at least one of the $$$i$$$ or $$$(i-1)$$$ is a liar. We can write $$$(p_{i-1} \ \lor \ p_i)$$$. Additionally, we write $$$(\neg p_{i-1} \ \lor \ \neg p_i)$$$, since at least one of them tells the truth. There are a few cases. For example, if $$$a_i > \texttt{sync}+1$$$, then $$$i$$$ is surely a liar. We write down $$$(p_i \ \lor \ p_i)$$$.

A good way to check if a solution exists is to run a standard SCC 2-sat. I personally avoided doing so during the contest and chose to do tedious casework. Afterward, when you are sure at least one valid solution exists, find adjacent clause components. If there are intersections or singular clauses like $$$(p_i \ \lor \ p_i)$$$, the component has exactly $$$1$$$ way to be filled with LIARs. If the component has no intersections, such as in $$$(4,5), (6,7), (8,9)$$$, it has $$$k+1$$$ ways to be filled with LIARs, where $$$k$$$ is the total number of clauses in the component. Let me show an example:

$$$(\textbf{*},\ \cdot \ ), (\textbf{*},\ \cdot\ ), (\textbf{*},\ \cdot\ )$$$
$$$(\textbf{*}, \ \cdot\ ), (\textbf{*},\ \cdot\ ), (\ \cdot\ ,\textbf{*})$$$
$$$(\textbf{*},\ \cdot\ ), (\ \cdot\ ,\textbf{*}), (\ \cdot\ ,\textbf{*})$$$
$$$(\ \cdot\ ,\textbf{*}), (\ \cdot\ ,\textbf{*}), (\ \cdot\ ,\textbf{*})$$$

Since the components are independent, just multiply the results of each.

Here is my submission: 302126337

»
5 weeks ago, # |
  Vote: I like it +3 Vote: I do not like it

FST contest

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Maybe solution of G can be intuitively understood considering induction proof of this: https://math.stackexchange.com/questions/4603759/show-that-rmk-2-mk-2-3m-1

»
5 weeks ago, # |
  Vote: I like it +8 Vote: I do not like it

Hi, could someone explain F2 in greater detail?

I don't really understand this "Let the distance between the i-th block and the nearest preceding 1-block be p. The number of 0s between blocks j and i cannot exceed p."

Thanks a lot

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

302161510 (https://codeforces.net/contest/2061/submission/302161510)

Can anyone help? Why is my code getting WA on test 6?

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Hello everyone, I tried to solve problem C using dp. First, I thought that we could make a dp state $$$dp[i][k][isLiar]$$$, which represents the number of ways up to the $$$i^{th}$$$ person if there are $$$k$$$ liars so far and whether the last person was a liar or not. Since this can go up to $$$O(n \cdot n/2 \cdot 2)$$$ states, it is not feasible. Then, I changed the solution to $$$dp[i][isLiar] = \{ways, k\}$$$, where I store the number of ways and the number of liars so far in the dp value rather than in the state, and (surprisingly to me) it passed.

Now, in the editorial a dp solution with only $$$O(n)$$$ states is given. So my question is: is there any way or observation by which we can see that some variables are not necessary in the dp state, so that the state can be made smaller?

  • »
    »
    4 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

The constraints. The constraints should force you to think better; it was obvious O(n*n) won't pass.

    • »
      »
      »
      4 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Well, from the constraints I could have wrongly concluded that DP might not be possible, so I should try some greedy approach.

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

It should at least have hinted that if a dp works, it needs to be <= n log(n); then it's up to your dp skills whether you can fit it into that or not.

»
5 weeks ago, # |
Rev. 2   Vote: I like it +25 Vote: I do not like it

Can someone explain F2 in detail?

»
5 weeks ago, # |
  Vote: I like it +3 Vote: I do not like it

In E, why doesn't the greedy method for finding num(i, j) (going iteratively) work?

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    not sure I can visualize this well, but imagine that the "best" operation has two highest bits: A, B (and nothing else on). The second best operation has only bit A on, but also some other bits CD, the third-best operation has bit B and other bits EF.

    If you can use exactly two operations on that number, it's better to choose second and third best, skipping the top one (as together they have the bits ABCDEF which is the most you can get with two operations).
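A concrete instance of this failure mode (the numbers here are chosen for illustration, they are not from the problem's tests): take $$$a = 63 = 111111_2$$$, $$$k = 2$$$, and $$$b = \{15, 19, 44\}$$$, i.e. $$$001111_2$$$, $$$010011_2$$$, $$$101100_2$$$. The single best operation is $$$a \& 15 = 15$$$ (reduction 48), and after it the best follow-up is $$$15 \& 19 = 3$$$, so the step-by-step greedy ends at 3. But $$$63 \& 19 = 19$$$ and $$$19 \& 44 = 0$$$, so skipping the "best" first operation reaches 0 with the same two operations.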

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

It is possible to solve E without proving convexity or really any property of the function g for that matter.
Submission

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can we solve problem C with combinatorics (stars and bars)?

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain the DP solution of C in a better way? Can't the last person be a liar? Why is that case not considered in the transition in the editorial?

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +2 Vote: I do not like it

If the last person was a liar, then the second-to-last person must be honest. Hence the extra $$$dp_{n-1}$$$.

»
5 weeks ago, # |
Rev. 2   Vote: I like it +53 Vote: I do not like it

Just upsolved F2. It is one of the best problems I have seen. On the other hand, I have no clue how the editorial just skips to the following claim:

Let the distance between the i-th block and the nearest preceding 1-block be p. The number of 0s between blocks j and i cannot exceed p. There is a restriction: j≥li.

I do get there eventually, but the steps are definitely non-trivial. I decided to write my own solution for both F1 and F2, explaining the intermediate steps.

F1 Solution
F2 Solution
  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Great explanation!

    The ending characters need to be handled properly (and evaluating if the ranges $$$(0,x), (x,n+1)$$$ are valid). One way is to brute-force on the final values of $$$S_1$$$ and $$$S_n$$$ but that too has a few cases (in my way at least).

    Could you elaborate more on why it's necessary to "brute-force the final values of $$$S_1$$$ and $$$S_n$$$"? Couldn't we just add dummy characters ("fixed" block) on both ends of $$$S$$$ and $$$T$$$? Thanks a lot.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Yeah, that should be fine too, I think. I think I overkilled fixing this issue.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Hmm. I read jiangly's submission (same as yours) and he both added dummy characters and brute-forced both ends. I think there must be a reason and I'm missing something.

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

Right, I remembered the issue now.

You will need to calculate whether the interval (0, x) is good regardless of whether you brute-force over the first/last character or not, but the goodness of (0, x) depends on the choice of the first character.

Any alternate way to calculate the goodness of (0, x) should be fine.

          • »
            »
            »
            »
            »
            »
            5 weeks ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            ... but (0, x) good-ness depends on the choice of the first character

            Could you give an example? Appreciate!

            • »
              »
              »
              »
              »
              »
              »
              5 weeks ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

Well, it is specific to the method used for calculating r_i and l_i (because I used the same method to calculate r_0 and l_(n+1)).

To calculate r_i for S[i] = 1, I found the position of the next 0, say p, and limited the number of 1s in the interval (i, j) to at most (p - i).

For S[i] = 0, the algorithm is the same, but it depends on the position of the next 1, and limits the number of 0s. Hence the difference.

          • »
            »
            »
            »
            »
            »
            4 weeks ago, # ^ |
              Vote: I like it +10 Vote: I do not like it

Can't you add, before s and t, a symbol different from s[0]? And the same at the end. It means that you are guaranteed to have 2 fixed positions, at 0 and n+1. I did this in my solution.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I solved A, B, C, D, F1; all the problems felt below 1600, and I got a delta of +200.

»
5 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Hello everyone, I am trying to solve problem D of this contest. My idea is to build two multisets from a and b (maps storing frequencies), and then check, for every element of set b:

  • if this element is there in set a, remove it from a
  • if this element is not present in set a
    • if this number is odd, find $$$x = num / 2 - 1$$$ and $$$y = x + 1$$$, and then try to find x and y in set a, if any is present then remove it from set a, otherwise add that to set b
    • if this number is even, find $$$x = num / 2$$$ and $$$y = x$$$, and do the same procedure as above
    • if in any case x or y is less than the minimum element in set a, print "No"

At the end, if set a is also empty, print "Yes".

According to my understanding, it should take $$$O(n \cdot \log(\max(b_i)))$$$, which should be feasible, but it is giving a TLE verdict. Can anyone tell me whether I am missing something important?

Thanks

UPD: I got to know that this approach is the same as the editorial's.
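For reference, a minimal sketch of this reverse direction (splitting the largest element of b and matching it against a multiset of a). This is not the editorial's code; it assumes all values are positive, which the splitting/matching argument relies on, and input reading is omitted:

    #include <bits/stdc++.h>
    using namespace std;

    // returns true if multiset a can be merged into multiset b
    bool canTransform(const vector<long long>& a, const vector<long long>& b) {
        if (accumulate(a.begin(), a.end(), 0LL) != accumulate(b.begin(), b.end(), 0LL))
            return false;                                 // merges preserve the total sum
        multiset<long long> A(a.begin(), a.end());
        priority_queue<long long> B(b.begin(), b.end());  // always handle the largest piece
        while (!B.empty()) {
            long long x = B.top(); B.pop();
            auto it = A.find(x);
            if (it != A.end()) { A.erase(it); continue; } // exact match, consume it
            if (x <= 1) return false;                     // cannot be a sum of two positive numbers
            if (B.size() + 2 > A.size()) return false;    // would leave more pieces than targets
            B.push(x / 2);                                // undo one merge: split into halves
            B.push(x - x / 2);                            // the two halves differ by at most 1
        }
        return A.empty();
    }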

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Yeah, I did it this way too.

    submission: 302094211

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

What will happen if x is 3? It will search for 0 and 1. And what will happen when x is 0? It will search for 0 and 0, so it will never stop. This may be one of the reasons.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

I am checking whether x is less than the lowest number present in set a, so it should stop at 0 at least, because 0 can never be present in set a.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Fixed Code

You are not decreasing the count of num in brr when its current frequency is more than 1.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Thanks a ton, gogo13. I was thinking that maybe, since I am inserting numbers back into set b, it is increasing the complexity beyond $$$\log(n)$$$. Thanks for pointing out that mistake.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

My submission for problem E: https://codeforces.net/contest/2061/submission/302182528. Can anyone tell me why this is wrong?

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

I've got a question about problem I. I implemented the $$$O(n\sqrt{n\log n})$$$ solution, but the second part ($$$j-(l-j) < -(r-l)$$$) seems not to be convex, because the transition point is not monotonic (only on very rare occasions, but it does occur).

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

In my friends list, a pupil, a specialist, an expert (me before this contest ;-;), a candidate master and a master all got FST on B. B destroyed everyone equally XD.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem E, I thought that at each move I could greedily decide which magic to use on which number. I know this code would receive TLE, but it is getting WA. Submission

Can anyone please help me out here? Is it the assumption that is wrong?

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    The assumption is wrong because using the most effective number first is not always optimal. For example, if the numbers are 1100, 1010, and 0101 in binary, a greedy algorithm would pick 1100 and 1010 instead of the optimal 1010 and 0101.

    • »
      »
      »
      3 weeks ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

I think you got it wrong, aksLolCoding.

If we check, for each number with each magic, what the maximum reduction is at each step, then this observation looks correct to me.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain problem E in simpler words? Couldn't grasp the editorial

  • »
    »
    4 weeks ago, # ^ |
    Rev. 3   Vote: I like it 0 Vote: I do not like it

Look at the code by Dominater069, and to understand why it passes you can look at the conversation between Dominater069 and Xellos above in this section. The editorial is way more complicated than the problem itself imo. You can also refer to my code (shameless plug) here

»
4 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

An interesting fact: is jqdai0815 trying to hide the editorial of H? Why did he post it on Google in Chinese, when it can't be accessed from China and others can't understand Chinese?

»
4 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

How do I solve C in O(n^2)? I am a noob.

»
4 weeks ago, # |
  Vote: I like it +4 Vote: I do not like it

Where is the editorial for problems H1 and H2?

»
4 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem E, why doesn't just a greedy approach work for each A[i]? For example, why is this approach wrong, and why do we need to iterate over each subsequence of B?

vector<int> d;

for (int i = 0; i < n; i++) {
    int g = a[i];
    int h = g;
    while (true) {
        for (int j = 0; j < m; j++) {
            h = min(h, g & b[j]);
        }
        if (h == g) break;
        d.push_back(g - h);
        g = h;
    }
}
»
4 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

The Editorial is not good :(

»
4 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain why this solution for problem D, https://codeforces.net/contest/2061/submission/302985376, is giving TLE on test 22?