maomao90's blog

By maomao90, 11 months ago, In English

1919A - Wallet Exchange

Author: maomao90

Hint 1
Solution
Code

1919B - Plus-Minus Split

Author: maomao90

Hint 1
Solution
Code

1919C - Grouping Increases

Author: maomao90

Hint 1
Solution 1
Hint 1
Hint 2
Hint 3
Solution 2
Code (Solution 1)
Code (Solution 2)
Bonus

1919D - 01 Tree

Author: maomao90

Hint 1
Hint 2
Solution
Code

1919E - Counting Prefixes

Author: maomao90

Hint 1
Hint 2
Solution
Code
Bonus

1919F1 - Wine Factory (Easy Version)

Author: maomao90

Hint 1
Solution 1
Solution 2
Code (Solution 1)

1919F2 - Wine Factory (Hard Version)

Author: maomao90

Hint 1
Hint 2
Hint 3
Solution
Code

1919G - Tree LGM

Author: maomao90

Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Solution
Code

1919H - Tree Diameter

Author: maomao90
Full solution: dario2994

Background
Hint 1
Hint 2
Hint 3
Hint 4
Hint 5
Hint 6
Solution
Code
Hello 2024 Editorial
  • Vote: +760

»
11 months ago, # |
  Vote: +19

Thanks for the fast editorial.

»
11 months ago, # |
  Vote: -12

Thank you for the contest!

»
11 months ago, # |
  Vote: +45

C was tough

  • »
    »
    11 months ago, # ^ |
      Vote: 0

    Yeah, definitely. I think it should be rated around 1700. Actually, I thought of finding the longest non-increasing subsequence and removing it from the array, then calculating the cost of the remaining portion of the array. That should be the answer. But I could not implement it.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: +1

      people are saying that they tried this and it gives WA too!

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: +6

        Yes, I implemented that solution and got WA on pretest 2.

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: +11
        LIS-approach counter test
        • »
          »
          »
          »
          »
          11 months ago, # ^ |
            Vote: 0

          What if we first list the elements where a[i] < a[i+1], then find the longest non-increasing subsequence among them and remove it from array a into another array b?
          Take your test case: the elements with a[i] < a[i+1] are 2 3 3 9 1 10 3 6, and the longest non-increasing subsequence is 3 3 3 or 3 3 1 (note that the first two 3's refer to the same element of the array). Removing either of them from array a, we get (if we remove the former):
          3 3 and 2 2 9 8 3 1 10 6, which gives the answer 2, which is right.
          Please provide a counterexample if this is wrong. I checked it on many test cases and got correct answers; sadly I couldn't implement it during the contest :(

          • »
            »
            »
            »
            »
            »
            11 months ago, # ^ |
              Vote: 0
            Counter
            • »
              »
              »
              »
              »
              »
              »
              10 months ago, # ^ |
                Vote: 0

              I am getting 0 but still fail pretest 2.

              I find the longest decreasing subsequence and compute the penalty on the remaining elements.

            • »
              »
              »
              »
              »
              »
              »
              5 months ago, # ^ |
                Vote: 0

              Thanks, I got why it doesn't work.

        • »
          »
          »
          »
          »
          11 months ago, # ^ |
            Vote: 0

          Do you know the link to the version of problem C with K subsequences? Could you share it?

      • »
        »
        »
        »
        5 months ago, # ^ |
          Vote: 0

        Same, I got WA too.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: +1

      agree :))

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      IMO, C was somewhat tricky to implement but the concept isn't that hard. I think D was pretty tricky, conceptually.

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: +1

        Can you explain the concept of C to me?

        • »
          »
          »
          »
          »
          11 months ago, # ^ |
            Vote: +5

          You just greedy it. Maintain the two sets and keep on adding them in order. You do have to be careful with how to handle each case though.

    • »
      »
      »
      11 months ago, # ^ |
      Rev. 3 | Vote: +18

      Longest non-increasing subsequence can fail sometimes, for example 4 3 2 1 8 4 2 1.

      If your code picks 4 3 2 2 1, the penalty is 1, but the optimal answer is 0 (4 3 2 1 and 8 4 2 1).

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: 0

        Oh right!

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: +6

        this case is invalid

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: 0

        I was trying to do it with LIS for a whole 2 hours. If only I had been able to observe this faster.

      • »
        »
        »
        »
        10 months ago, # ^ |
          Vote: 0

        Okay so I randomly thought of a testcase: 8 9 10 2 11 7 4 3

        The answer for this one is 2 if we split this into 8 2 and 9 10 11 7 4 3

        However, when I ran the codes submitted by some of the top coders for problem C,

        I got different outputs from them. Some gave 1, some gave 2 and some 3.

        I tried to run Tourist's code and his code's answer was 3.

        Now this is really confusing as hell for a newbie like me, was the problem faulty or am I tripping?

      • »
        »
        »
        »
        10 months ago, # ^ |
          Vote: 0

        I find the longest decreasing subsequence and I am getting 0, but still got WA on pretest 2.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      I first implemented it this way using the longest subsequence, but it was WA. Then I implemented the greedy, which was accepted.

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: 0

        Can you pls share your approach?

        • »
          »
          »
          »
          »
          11 months ago, # ^ |
            Vote: 0

          Accepted approach is about the same as in editorial

        • »
          »
          »
          »
          »
          11 months ago, # ^ |
            Vote: 0

          Basically: any item goes into either s or t, and it doesn't matter which one, so keep the last item inserted into each one in two variables t and s; if (s < t) swap(s, t), so t is the minimum (it doesn't matter which, we just want to know the minimum).

          You'll agree that the optimal choice is for the current element of the array to go to the one with the minimum last element: if it's less than the minimum, that's the best case; if it's larger than the minimum but smaller than the other one, you put it into that one, which is still free since it's smaller.

          Otherwise put it into whichever of them you like (it doesn't matter, since you'll swap them if that wasn't good); in either of them the cost will be increased by one.
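
          For reference, a minimal sketch of this greedy as I understand it (my own illustration, not a linked submission); the two variables start at infinity since appending to an empty subsequence never costs a penalty:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int T;
    scanf("%d", &T);
    while (T--) {
        int n;
        scanf("%d", &n);
        long long s = LLONG_MAX, t = LLONG_MAX; // last elements of the two subsequences
        long long penalty = 0;
        for (int i = 0; i < n; i++) {
            long long a;
            scanf("%lld", &a);
            if (s < t) swap(s, t);      // keep t as the smaller of the two last elements
            if (a <= t) t = a;          // fits after the smaller end: no penalty
            else if (a <= s) s = a;     // fits only after the bigger end: still free
            else { t = a; penalty++; }  // bigger than both: pay 1 and overwrite the smaller end
        }
        printf("%lld\n", penalty);
    }
}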

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      I did something similar and received WA. Time to try to solve it another way.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      I thought of that too and implemented it, got WA on test 2. But for any kind of TC like this: 1 2 3 4 5 6 7 8 9, where the answer is n-1, it'd output n.

      But just working it out on paper for 5 seconds, it clicked and I instantly got the exact approach of the editorial.

      It was nice.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      I tried removing the longest decreasing subsequence from the array and calculating the rest of the array. I was sure it was right but wasn't able to implement it.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      i thought the same but could not implement it

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      Very same

  • »
    »
    11 months ago, # ^ |
      Vote: 0

    I wrote DP code for part B and it ran out of memory, then I came up with a simpler approach. Figured the same would happen for part C, so I didn't even try using DP and ran out of time trying to think of a better solution. Oops.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      I was going for the same approach, but in my head I calculated the value for the output of the last sample case as 5. The simpler approach was my only guess.

  • »
    »
    11 months ago, # ^ |
      Vote: 0

    greedy worked

  • »
    »
    11 months ago, # ^ |
      Vote: 0

    Way too hard, I spent like 10 minutes trying to understand it.

»
11 months ago, # |
Rev. 2 | Vote: +1

Very good competition!

2024 will be a good year, it seems to me because the competition was cool!

Thanks maomao90 for the competition.

»
11 months ago, # |
  Vote: 0

thank you for this fast and organized editorial.

»
11 months ago, # |
  Vote: 0

FastEditorialForces!

»
11 months ago, # |
  Vote: -6

russian???

»
11 months ago, # |
  Vote: +35

F1 statement: "There are n wizard's..."

Me: "Well.... okay, there are n Jesuses..."

P.s. really cool problems and thank you for super fast editorial!

»
11 months ago, # |
  Vote: +19

proofByAcForces

»
11 months ago, # |
  Vote: +3

I solved F1 with sqrt decomposition. Why is there no mention of it in the editorial?

  • »
    »
    11 months ago, # ^ |
      Vote: +24

    For the same reason the segtree solution to C and the DP solution to B weren't mentioned.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      wait but the segtree solution to C is literally mentioned as solution 2

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: +5

        Oh, my bad. A non-sarcastic answer to your question: it's overkill. Also, probably not foreseen or intended, because N = 2 * 10^5.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      The DP solution to B will probably exceed the time limit. Could you tell me your approach for it? Maybe I got it wrong.

  • »
    »
    11 months ago, # ^ |
      Vote: +3

    Side note, calling a magic number variable in your code "magic" is hysterical.

  • »
    »
    6 months ago, # ^ |
    Rev. 2 | Vote: 0

    I also solved F2 with sqrt decomposition; much simpler than the editorial:

    For each block (of size sqrt(n)), save the following information:
      • if no (additional) water flows into the first water tower (of the block), how much wine would the block produce?
      • how much water flows out of the last tower (considering capacity limits)
      • how much of the water that flows into the first water tower can be converted into wine (considering intermediate capacity limits)
      • the maximum capacity left, i.e. how much water can simply flow from the first to the last tower (in F1 this is around 10^18)

    Then you can simply "simulate" the process on the blocks after updating one block, i.e. the complexity is $$$O(n \sqrt{n} + q \sqrt{n})$$$

    Edit: Submission: https://codeforces.net/contest/1919/submission/261156832

»
11 months ago, # |
  Vote: 0

Did anyone solve problem C with a Cartesian tree? I tried to, but I couldn't get accepted. My idea was to build the max Cartesian tree and count the total number of left children, minus one. I took the tree code from GeeksforGeeks. Also note that if two elements are equal, the one on the right becomes the child. Here's my approach: 240585200.

»
11 months ago, # |
  Vote: -48

Do better. B and C have 6-page-long proofs.

  • »
    »
    11 months ago, # ^ |
      Vote: 0

    Can you provide me with a proof for why the greedy approach they gave works in the third case where x < ai < y ? I did not understand the following part:

    This is because if we consider making the same choices for the remaining elements ai+1 to an in both scenarios, there will be at most one time where the former scenario will add one penalty more than the latter scenario as the former scenario has a smaller last element after inserting ai . After that happens, the back of the arrays in both scenarios will become the same and hence, the former case will never be less optimal.

    • »
      »
      »
      9 months ago, # ^ |
      Rev. 3 | Vote: 0

      Recall that,

      If we have b < a[i] then we get a penalty.
      If we have b >= a[i] then we get NO penalty. 
      

      Suppose we appended a[i] to x, then a[i] becomes the new x and we have that a[i] <= y. Recall that since x < a[i], we get a penalty. Let's start by observing all the possible cases in the future. Suppose we have some a[j] where i < j,

      (1) If y < a[j], then no matter where we append it, we get a penalty. It is always optimal if we append it to the smaller element.
      
      (2) If a[i] < a[j] <= y, then appending it to a[i] gives us another penalty, BUT appending it to y, gives us NO penalty.
      
      (3) If a[j] <= a[i] <= y then no matter where we append it, we get NO penalty. It is always optimal if we append it to the smaller element.
      

      With these 3 possible cases in hand, notice (1) and (3) have a trivial result. From here, it is easy to see the optimal option for (2) is to append it to y since we get no penalty. Sometimes, appending to x may also work but it can be proven that it is no better than appending to y.

      Delving a bit more, we see that Case (1) and (3) are intuitive, we are always replacing the smaller element with something larger thereby making it less likely that we get penalties in the future.

      On the other hand, Case (2) seems a bit counterintuitive, since appending a[i] on the larger element y makes Case (1) more likely to happen. At the same time, it makes sense that having one less penalty in the future is better or even.

      Lets fix x < a[i], and list all the possible inequalities with y and a[j] for i < j
      Suppose we have x < a[i] <= y < a[j], case 2 -> case 1
      I.   (x y) -> (a[i] y) -> (a[j]   y ) --> penalty = 2
      II.  (x y) -> (x a[i]) -> ( x   a[j]) --> penalty = 1
      III. (x y) -> (a[i] y) -> (a[i] a[j]) --> penalty = 2
      IV.  (x y) -> (x a[i]) -> (a[j] a[i]) --> penalty = 1 (optimal), same as II but a[i] > x
      Suppose we have x < a[i] <= a[j] <= y, case 2 -> case 2
      V.   (x y) -> (a[i] y) -> (a[j]   y ) --> penalty = 1 or 2
      VI.  (x y) -> (x a[i]) -> ( x   a[j]) --> penalty = 0 (optimal)
      VII. (x y) -> (a[i] y) -> (a[i] a[j]) --> penalty = 1
      VIII.(x y) -> (x a[i]) -> (a[j] a[i]) --> penalty = 1
      

      Notice that we can reach the same penalty with different states, but we always prefer the state with larger x and larger y. If you imagine several case 2s and case 1s stacked together, we will always get a better or equal result by greedily choosing IV. and VI., which is exactly what always appending to y in case 2 does.

»
11 months ago, # |
  Vote: +5

Does anyone have any material on ReLU Segment Trees? I solved F1 (and I will try to do F2 as well) using a slightly different segment tree than the first solution, and I don't know if it is just the second solution (although I don't think it is). Thanks in advance.

  • »
    »
    11 months ago, # ^ |
      Vote: +13

    My F1 solution uses a ReLU segment tree. I wasn't able to adjust it to solve F2 in time though. I thought the problem was really terrible at first because it's an ugly mathy solution, but after seeing that the intended solution was optimized flow instead, I like the problem much more.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      Do you know the link to the version of problem C with K subsequences? Could you give it to me? I really need it!

  • »
    »
    11 months ago, # ^ |
      Vote: +16

    I updated the editorial to have slightly more details about the ReLU segment tree. Hope it helps!

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      Do you know the link to the version of problem C with K subsequences? Could you give it to me? I really need it!

  • »
    »
    10 months ago, # ^ |
      Vote: 0

    Can someone share the intuition behind the observation that the remaining water equals the maximum suffix sum?

»
11 months ago, # |
  Vote: 0

Great contest & fast editorial.

I'm glad that 2024 has had such a perfect start. Thank you!

»
11 months ago, # |
  Vote: +11

Thank you for the contest! Best wishes for 2024.

»
11 months ago, # |
  Vote: +49

This contest is a perfect example of how to set and prepare rounds. Well done to the author and coordinator!

The tasks were pleasurable to solve, balancing math and algorithms. I thought that every problem was on point in terms of difficulty, quality, and their respective subjects (not every task was "constructive math"). Overall, the round seemed thoroughly tested and well-prepared.

»
11 months ago, # |
  Vote: +1

I think this contest could've benefited from weaker samples on A and B. They're very proof-by-AC able.

Other than that, best contest I've ever had the joy of participating in.

»
11 months ago, # |
  Vote: +2

I was not sure about C, so I tried other approaches, and when they all failed I tried the above greedy approach and, to my surprise, it worked. Greedy is hard to prove!

»
11 months ago, # |
  Vote: +12

Proof for D?

»
11 months ago, # |
  Vote: 0

Can someone tell me what's wrong with my solution for F1?

240597499

»
11 months ago, # |
  Vote: 0

so fast tutorial

»
11 months ago, # |
  Vote: +6
»
11 months ago, # |
  Vote: +3

F1 can also be solved by first considering a D&C solution (maintain the sum of A, the sum of B, and the sum of the answer for each node; when merging (L, R), try to match L.A with R.B as much as possible), then putting this DP into a segment tree.
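
A sketch of that merge as I read it (hypothetical field names, F1 only, where the pipes never limit the flow): each node keeps the water not yet turned into wine, the wizard capacity not yet used, and the wine already produced, and merging matches the left child's leftover water against the right child's leftover capacity.

#include <bits/stdc++.h>
using namespace std;

struct Node {
    long long A;   // water in this range not yet turned into wine
    long long B;   // wizard capacity in this range not yet used
    long long ans; // wine already produced inside this range
};

Node leaf(long long a, long long b) {
    long long w = min(a, b);
    return {a - w, b - w, w};
}

Node merge(const Node &L, const Node &R) {
    long long m = min(L.A, R.B); // leftover water flows right and meets unused wizards
    return {L.A + R.A - m, L.B + R.B - m, L.ans + R.ans + m};
}

Build a segment tree of these nodes; a point update redoes O(log n) merges, and the answer is the root's ans.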

»
11 months ago, # |
  Vote: +6

greedy on D was quite unexpected

»
11 months ago, # |
  Vote: 0

C and D are really hard.

»
11 months ago, # |
  Vote: 0

Could someone help me figure out why I got Wrong Answer on C?

https://codeforces.net/contest/1919/submission/240597831

»
11 months ago, # |
  Vote: -10

nice contest

»
11 months ago, # |
  Vote: +3

The editorial proof of the greedy algorithm correctness in Problem $$$C$$$ is obviously not complete. It is not clear why the optimal splitting for a prefix coincides with the restriction of the optimal splitting of the whole array to this prefix, while this claim is implicitly used.

Does anybody understand a complete proof?

  • »
    »
    11 months ago, # ^ |
      Vote: 0

    Was C solved completely on intuition?

  • »
    »
    11 months ago, # ^ |
      Vote: -11

    I'm not sure what you mean by that. Why is this proof not complete? My understanding is that because it's a subsequence, and every element must be inserted into either array b or c, it is a complete proof.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: +6

      The question is why if you have the optimal splitting of $$$[a_1, \dots, a_k]$$$ to subsequences $$$B$$$ and $$$C$$$, then the optimal splitting of $$$[a_1, \dots, a_k, a_{k+1}]$$$ can be obtained by back inserting of $$$a_{k+1}$$$ either into $$$B$$$ or into $$$C$$$. In general, the optimal splitting for $$$[a_1, \dots, a_k, a_{k+1}]$$$ may have nothing to do with $$$B$$$ and $$$C$$$.

      The proof is written such that it looks like this fact is used, but may be I am just misunderstanding something.

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: 0

        I think the idea is that the split doesn't really matter. The only thing that matters is the final number in each array. So suppose that X and Y are the final two numbers in array A and B respectively, then regardless of any of the prior decisions for splitting the numbers, based solely on the values of X and Y we can determine whether a number should go into array A or B. You might argue that X and Y could have been chosen to be greater which might lead to a more optimal result, but the algorithm maximizes the value of X and Y in the first two cases, and in the third case the editorial lays out an argument for why it is at least just as good putting it in the other array.

        Does this sound right?

  • »
    »
    11 months ago, # ^ |
      Vote: 0

    If you do, let me know; that's the same reason I wasn't able to give this greedy approach a try.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      I believe, the following was implied.

      Assume that you have any splitting of $$$[a_1, \dots, a_k]$$$ to subsequences $$$(B, C)$$$, which can be extended to a splitting $$$(B', C')$$$ of the whole array $$$[a_1, \dots, a_n]$$$ with penalty $$$x$$$. Then the splitting $$$(\tilde{B}, \tilde{C})$$$ of $$$[a_1, \dots, a_k, a_{k+1}]$$$ constructed greedily from $$$(B, C)$$$ also can be extended to some other splitting $$$(\tilde{B}', \tilde{C}')$$$ of the whole array $$$[a_1, \dots, a_n]$$$ with the same penalty $$$x$$$ or less.

      This works, and in order to prove it you need to construct these $$$(\tilde{B}', \tilde{C}')$$$ from $$$(\tilde{B}, \tilde{C})$$$, which is more or less done in the editorial.

  • »
    »
    11 months ago, # ^ |
    Rev. 4 | Vote: +5

    I agree with you. The proof is not a standard greedy proof. It only says: when the splitting of the first k elements is fixed, we can obtain the optimal (optimal under this condition, not globally optimal) solution of the splitting of the first k+1 elements. Edit: Actually there is a mistake, when the first k elements are fixed, then out of all optimal splittings of the remaining n-k elements, it's always not worse to choose the splitting (in the editorial way) of the (k+1)th element, so it's always chosen that way.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: +3

      Can you please elaborate on why optimal choices at each particular step eventually lead to optimal partitioning globally? Still don't understand :(

      • »
        »
        »
        »
        10 months ago, # ^ |
          Vote: 0

        I'm guessing we should appreciate that there is not one unique optimal solution. The proof in the editorial is a greedy-stays-ahead (or catches up) argument and not an exchange-argument proof. So, consider the first point of difference between the optimal and the greedy. We have proved that the greedy isn't any worse. And for the rest of the sequence of moves in the optimal solution, I found that it is always possible to directly append those moves to the greedy prefix from this point on without introducing an extra penalty. And so now we move to the second point of difference and the same argument applies. Does it make sense, or am I stuck in circular reasoning?

»
11 months ago, # |
  Vote: 0

fast editorial, thx

»
11 months ago, # |
Rev. 2 | Vote: 0

good contest

»
11 months ago, # |
  Vote: 0

Hey, can anyone share their intuition for problem D? I didn't understand the idea from the editorial, and I would appreciate it if anyone could share their stack-based idea.

  • »
    »
    11 months ago, # ^ |
    Rev. 2 | Vote: 0

    For any element of the array, we need to make sure it can be absorbed into another element. This can only be done if eventually it can become adjacent to an element with value exactly one smaller than it.

    How can we check this? Like in the solution, we can take the larger element and trivially check if that's the case (it can either have a minus-one element directly adjacent to it, or have elements with equal value on its left and right that eventually reach a minus-one element). This way, all max elements are deleted. The same thing can be done for the batch of the next biggest elements, and so on until we are done. Make sure that only a single 0 element is present initially.

    Now, this process can be automated. By the above, it is enough to check that all the elements can eventually become adjacent. To assure this, we need to make sure that, for every element, the closest element to the left or right that has a smaller value than our element has a value smaller by exactly one and not less.

    This works because we make sure this condition is true for every element. As in the above analysis, we can use it to get rid of elements of progressively smaller value, and all the bigger elements will be deleted. Then, this element will be ready to be absorbed by the chosen smaller target. Of course, if this target's value is smaller than that, the absorption will fail and this element will be stuck there forever, so no tree can be found in this case. You can work out some examples to see why a sequence of elements with the same value doesn't impede this solution.

    Here is where the stack idea comes. We can use stacks to solve this problem, like outlined here: https://www.geeksforgeeks.org/find-the-nearest-smaller-numbers-on-left-side-in-an-array. Do that for both directions and make sure that the element found on the stack is exactly one smaller.

    Also, there should be exactly 1 zero in the array. Check that as well and you have the solution.
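
    If it helps, here is a compact sketch of that check as I understand it (my own code, not from the thread): a monotonic stack gives the nearest strictly smaller value on each side, and every positive a[i] must see a[i] - 1 on at least one side, with exactly one zero overall.

#include <bits/stdc++.h>
using namespace std;

bool possible(const vector<int> &a) {
    int n = a.size();
    if (count(a.begin(), a.end(), 0) != 1) return false;  // exactly one leaf at distance 0
    vector<int> L(n, -1), R(n, -1);                       // value of nearest strictly smaller element
    stack<int> st;
    for (int i = 0; i < n; i++) {                         // nearest smaller to the left
        while (!st.empty() && a[st.top()] >= a[i]) st.pop();
        L[i] = st.empty() ? -1 : a[st.top()];
        st.push(i);
    }
    while (!st.empty()) st.pop();
    for (int i = n - 1; i >= 0; i--) {                    // nearest smaller to the right
        while (!st.empty() && a[st.top()] >= a[i]) st.pop();
        R[i] = st.empty() ? -1 : a[st.top()];
        st.push(i);
    }
    for (int i = 0; i < n; i++)
        if (a[i] > 0 && L[i] != a[i] - 1 && R[i] != a[i] - 1) return false;
    return true;
}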

  • »
    »
    10 months ago, # ^ |
      Vote: 0

    Consider "growing" the tree node by node. Consider the list of current leaves.

    In one move, we can replace the subsegment of a single element [x] with [x, x+1] or [x+1, x]

    In other words, choose any element x in the array, and insert to the left or right of it x+1.

    The problem is now to determine whether the final array of leaves can be generated via these moves.
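
    For instance, the first sample [2, 1, 0, 1, 1] mentioned elsewhere in this thread can be generated as [0] -> [1, 0] -> [2, 1, 0] -> [2, 1, 0, 1] -> [2, 1, 0, 1, 1], each step inserting some x+1 next to an existing x (my own worked example of the moves above).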

»
11 months ago, # |
  Vote: +1

thank you , but I am sb

»
11 months ago, # |
  Vote: 0

HELLO 2024. THANK YOU

»
11 months ago, # |
Rev. 2 | Vote: 0

For problem C I used the greedy approach explained in this editorial, but I did it in reverse. I iterated from N-1 down to 0 and tried to add the element x to the array whose first element is larger, unless x is smaller than that but larger than the smaller first element of the two arrays. I implemented this but it gives WA; here is my submission: 240582496

Is it a solution error or an implementation error?

UPD: My bad, it was >= and not only > :(

»
11 months ago, # |
  Vote: +1

Again, C was too much for me.

»
11 months ago, # |
  Vote: 0

For problem C I tried to separate the longest decreasing subsequence (by indices) from all elements, then I tried to count a[i] < a[i+1] over the rest of the elements, but it couldn't pass. Can anyone explain why?

»
11 months ago, # |
Rev. 2 | Vote: +42

I had a very different solution to D, I think.

Note that there must be exactly one leaf node with value $$$0$$$. Call this node $$$i$$$ and consider the path from the root to $$$i$$$. Then, for each node $$$v$$$ along this path, let $$$S_v$$$ be its subtree on the side not including $$$i$$$ (the other edge). Then, if we subtract $$$1$$$ from the distance for each node $$$j\in S_v$$$, this must also form a valid dfs ordering of distances.

This gives the following recursive definition: a height-$$$h$$$ tree has DFS ordering $$$[L, h, R]$$$ where $$$L$$$, $$$R$$$ are concatenations of height-$$$h + 1$$$ trees' DFS orderings.

For example, consider the first sample $$$[2, 1, 0, 1, 1]$$$. Then, $$$L=[2, 1]$$$ must be a concatenation of height $$$1$$$ trees and $$$R=[1, 1]$$$ must also be a concatenation of height $$$1$$$ trees.

Checking if $$$[2, 1]$$$ is a concatenation of height $$$1$$$ trees is the same as checking if $$$[1, 0]$$$ is a concatenation of height $$$0$$$ trees. Recursing gives that this is indeed valid.

Similarly, $$$[1, 1]$$$ being a concatenation of height $$$1$$$ trees is the same as checking if $$$[0, 0]$$$ is a concatenation of height $$$0$$$ trees (which it obviously is).

So, we can use Divide & Conquer to check if an array corresponds to a concatenation of height $$$h+1$$$ trees, by (for example) using a RMQ to find all positions with value $$$h + 1$$$ and recursing on their subranges. We also check that the root only has one $$$0$$$.

Overall, the code looks like this:

def dnc(i, j, h):
    # is a[i:j] (a is the global array) a concatenation of DFS orders of height-h trees?
    if min(a[i:j]) != h:                 # an RMQ makes this check O(1) for the stated bound
        return False
    if j - i == 1:
        return True
    pos = [k for k in range(i, j) if a[k] == h]   # the height-h roots; concatenation is basically unique
    starts, ends = [i] + [p + 1 for p in pos], pos + [j]
    return all(dnc(lo, hi, h + 1) for lo, hi in zip(starts, ends) if lo < hi)

Since we look at each position as a root exactly once (utilizing the RMQ for this speedup, or some sort of binary search structure), the runtime is $$$O(n\log n)$$$ (coming from building the RMQ).

My code: 240589892

»
11 months ago, # |
  Vote: 0

If $$$a_i < x$$$, insert $$$a_i$$$ to the back of the array with the smaller last element. If $$$y < a_i$$$, insert $$$a_i$$$ to the back of the array with the smaller last element. If $$$x < a_i \le y$$$, insert $$$a_i$$$ to the back of the array with the bigger last element.

Can someone help with an example here? Which arrays are they referring to?

»
11 months ago, # |
  Vote: 0

The samples for D were too weak...

»
11 months ago, # |
  Vote: +5

Finally, decent contest! Could've given more examples in D though:(

»
11 months ago, # |
  Vote: 0

I think this approach works for C but I couldn't figure out how to implement it efficiently during the contest:

  1. Reverse the array. e.g 5 6 7 0 9 4 3 8 0 10 4 8 becomes [8, 4, 10, 0, 8, 3, 4, 9, 0, 7, 6, 5].
  2. Starting from the beginning of the reversed array, find increasing subsequences by finding the lower_bound for a given number later in the array, until no such lower bound exists. Then go to the next index and start again. All these numbers are in one array, the rest are in the other.
  3. Calculate the penalty directly from these arrays, but either reverse them or calculate it "backwards" from what the problem asks.

For example, for this approach you would get [8, 8, 9], then since no number is >= 9, you would start from the next number which is 0, and get [0, 5]. So one array is [5,0,9,9,8] (the elements we saw in this process, but reversed since we reversed the array to do this), and the other one is the remaining elements.

Question: is it possible to implement something like this efficiently (even if it is not correct for all cases of C?) I tried to use a set of pairs with {value, index} and finding the lower_bound of the current pair, but couldn't figure out a sorting function to get the exact behavior I wanted. Thanks!

»
11 months ago, # |
  Vote: +55

For F2 I feel you can just imagine a substring of towers as some mechanism that:

  1. produces ans units of wine for free.

  2. produces water units of water at the end.

  3. accepts at most magic units of water at the front to turn into wine.

  4. additionally accepts cap units of water at the front and routes them to the end.

Then it is just one segtree and no need for flow...

My sol: 240574390
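
In case it helps, here is how I imagine two adjacent such mechanisms combine, reusing the names ans/water/magic/cap from the list above (my own derivation; the linked solution may organise it differently):

#include <bits/stdc++.h>
using namespace std;

struct Mech {
    long long ans;   // wine produced with no incoming water
    long long water; // water leaving the back with no incoming water
    long long magic; // incoming water at the front that can still become wine
    long long cap;   // additional incoming water that can be routed all the way to the back
};

Mech combine(const Mech &L, const Mech &R) {
    long long w = min(L.water, R.magic);             // L's outgoing water feeds R's wizards first
    long long rem = L.water - w;                     // the rest tries to pass through R
    Mech res;
    res.ans   = L.ans + R.ans + w;
    res.water = R.water + min(rem, R.cap);
    res.magic = L.magic + min(L.cap, R.magic - w);   // extra front water: converted in L, then in R
    res.cap   = min(L.cap - min(L.cap, R.magic - w), // pass-through capacity left in L...
                    R.cap - min(rem, R.cap));        // ...and in R
    return res;
}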

»
11 months ago, # |
  Vote: +16

Why is the TL of E 1 second?

»
11 months ago, # |
  Vote: 0

Easy A,B,C problems.

»
11 months ago, # |
  Vote: +6

2024 gave me:

Spoiler
»
11 months ago, # |
  Vote: 0

Beautiful contest, enjoyed it.

Thanks to the creators, coordinators and testers for such a great contest

thanks for the fast editorial too

»
11 months ago, # |
  Vote: 0

Finally I'm green! Thanks for the great contest!

»
11 months ago, # |
  Vote: +16

I had a square root decomposition solution for F1.
Submission.

I made blocks which store three values.

struct node {
    int ans = 0;
    int need = 0;
    int give = 0;
};

ans is the value of how much wine we get just from this block.
need is the value of how much extra water we need from the previous block to utilise the full capacity of wizards.
give is the value of how much extra water we have left (which we can give to the next block)
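
With those three values, a query can then be answered by one left-to-right pass over the blocks, carrying the spare water; something like this sketch (my reading of the idea, F1 only, where the channel capacities never bind):

#include <bits/stdc++.h>
using namespace std;

long long query(const vector<node> &blocks) {
    long long total = 0, carry = 0;                      // carry = spare water from earlier blocks
    for (const node &b : blocks) {
        long long used = min<long long>(carry, b.need);  // spare water fills the unused wizards
        total += b.ans + used;
        carry = b.give + (carry - used);
    }
    return total;
}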

»
11 months ago, # |
  Vote: 0

Loved problem C. Although I couldn't do it myself, it was a good one. The tutorial explanation was good too.

»
11 months ago, # |
  Vote: 0

For problem C, with regard to the 3rd scenario x < a[i] <= y:

If we insert a[i] to the back of the array with the smaller last element, although there will be an additional penalty of 1, we have optimised the endpoints of the pair (x, y) so that there will be less chance of a penalty in the future. So isn't it better to insert a[i] at the back of that array? Any counterexamples?

Also, for the part:

This is because if we consider making the same choices for the remaining elements a[i + 1]
 to a[n] in both scenarios, there will be at most one time where the former scenario will add one penalty more than the latter scenario as the former scenario has a smaller last element after inserting a[i]. After that happens, the back of the arrays in both scenarios will become the same and hence, the former case will never be less optimal.

Can someone give an example where the backs become the same and the former case is never less optimal?

»
11 months ago, # |
  Vote: +3

The "NO" tests on G were pretty weak, looks like two of the three submissions in contest didn't check enough conditions. (It's pretty easy to fix, you can just write the DP to compute the matrix corresponding to your tree and make sure it's equal to the input.)

»
11 months ago, # |
  Vote: 0

thanks for the fast editorial

»
11 months ago, # |
  Vote: +18

Why was I solving D as if each edge is 0 or 1, rather than one of the edges being 0 and the other 1 :'(

»
11 months ago, # |
Rev. 2 | Vote: 0

My intuitive greedy solution for C: let's keep the last element of both subsequences as large as possible. So, e.g., if one subsequence ends with 4 and the other with 2, we always append to the second, no matter what we append. BUT: if appending to one subsequence avoids the penalty while appending to the other does not, we do that append greedily (for a proof of why this works, just consider some cases).

»
11 months ago, # |
  Vote: 0

In case someone wants an easy implementation for C: https://codeforces.net/contest/1919/submission/240529072. The idea is to maintain the endpoints of the two sets with the minimum answer and maximum endpoints. Suppose the current element is A(i) and the endpoints are a, b (note: a >= b). If b >= A(i), then assign A(i) to b and there is no need to increase the answer. Else if a >= A(i), then assign A(i) to a and there is no need to increase the answer. If A(i) > both a and b, then replace the smaller endpoint (i.e. b) with A(i) and increase the answer by 1. Easy to implement, though it might be slightly tricky to prove correctness.

  • »
    »
    11 months ago, # ^ |
      Vote: 0

    If b < A(i) <= a, then although assigning A(i) to b increases the penalty by 1, it maintains the maximum endpoints and might lead to a better answer in the future, right?

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      At a later time, the maximum endpoint can decrease the answer by at most 1, since as soon as we replace the max endpoint it will no longer be used (it will have been replaced). Hence we have to greedily decrease the answer wherever we can... Since b < A(i) <= a, assigning A(i) to a avoids a penalty of 1, which is also the maximum decrease that the endpoint a can contribute.

»
11 months ago, # |
Rev. 8 | Vote: 0

.

»
11 months ago, # |
  Vote: 0

In problem C, I used a different approach: calculating the LIS (not the LDS), then halving the LIS length to get the 2 splits and their respective lengths. Weirdly it gives WA on TC 2, at around the 3600th test case, so can someone help find a counter case for this approach?

Submission: 240610899

»
11 months ago, # |
Rev. 2 | Vote: 0

Intuition in C:

We process from $$$1$$$ to $$$n$$$ and maintain the final element of each subsequence. When processing $$$a_i$$$:

  • The subsequence we add $$$a_i$$$ to obviously ends with $$$a_i$$$.
  • The key is to maximize the ending of the subsequence we didn't add to.
    • But, we must prioritize minimizing the penalty first.
  • »
    »
    11 months ago, # ^ |
    Rev. 2 | Vote: 0

    Can you explain what this means?

    This is because if we consider making the same choices for the remaining elements ai+1 to an in both scenarios, there will be at most one time where the former scenario will add one penalty more than the latter scenario as the former scenario has a smaller last element after inserting ai . After that happens, the back of the arrays in both scenarios will become the same and hence, the former case will never be less optimal.

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      Think of these two cases.

      1) The two sequences end with 10 and 8.

      2) The two sequences end with 10 and 20.

      And you still have some elements to put into them.

      The optimal answers for the two cases will differ by at most 1.

  • »
    »
    11 months ago, # ^ |
      Vote: +3

    I think people get confused about why prioritizing minimizing the penalty first gives the optimal result.

    I believe the reasoning is that, no matter which choice you make, the final elements will always contain $$$a_i$$$, so the two final elements differ in at most one element. And in both cases, the future penalties can only differ by 1.

    So avoiding a penalty now and taking a penalty later is always at least as good.

»
11 months ago, # |
Rev. 2 | Vote: 0

(Problem C) I still don't understand why the optimal choice (in a greedy algorithm) at each step eventually leads to the optimal answer at the end.

  • »
    »
    11 months ago, # ^ |
      Vote: +1

    Read the explanation for Case 3. Case 1 and Case 2 are trivial but Case 3 is where greedy is truly proved.

»
11 months ago, # |
  Vote: +13

rainboy has solved problem H. You can update the editorial.

240612222

»
11 months ago, # |
  Vote: 0

Can someone share their O(n) approach for D?

  • »
    »
    11 months ago, # ^ |
      Vote: +1

    This runs in O(n) for D. I maintain and update nxt[i] and bef[i] to store the next and previous undeleted indices for each element. There's no logn factor of time lost in searching for indices.

    240624316
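
    The linked-list part is the standard O(1) splice; roughly (a sketch, not the code from that submission, assuming sentinel entries at both ends so bef[i] and nxt[i] always exist):

#include <bits/stdc++.h>
using namespace std;

// nxt[i] / bef[i]: next and previous still-undeleted indices, as described above.
void erase(int i, vector<int> &nxt, vector<int> &bef) {
    nxt[bef[i]] = nxt[i];   // splice index i out of the doubly linked list
    bef[nxt[i]] = bef[i];
}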

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0

      thank you!

    • »
      »
      »
      11 months ago, # ^ |
        Vote: 0
      if(!help){
         if(a[nxt[x]]==i){
      	continue;
         }
      }
      

      Can you explain this part? What does it check?

      • »
        »
        »
        »
        11 months ago, # ^ |
          Vote: +1

        So that part was honestly a very inefficient way of checking that the condition for i is satisfied. The bool help is "True" if the previous instance of i had an instance of i-1 immediately before it.

        The lines you mentioned say that if the previous instance of 'i' did not have an i-1 before it, then there must be an 'i' following the current value (if not an i-1), as otherwise there's no way this block of i's could have been generated.

        A much cleaner way to do this would have been to iterate one element at a time and keep updating the nxt[x] and bef[x] values.

»
11 months ago, # |
Rev. 6 | Vote: -16

:')

»
11 months ago, # |
  Vote: +34

What is a ReLU segment tree? There seem to be no resources about it on the English side of the internet... (I would appreciate resources in any language though)

»
11 months ago, # |
  Vote: 0

nice contest

»
11 months ago, # |
  Vote: -8

Can someone please explain why the greedy approach works for PROBLEM C?

I did not get the third case, where x < ai <= y. What is the proof that adding to y to avoid the +1 penalty is always optimal?

Part where I did not understand:

This is because if we consider making the same choices for the remaining elements ai+1 to an in both scenarios, there will be at most one time where the former scenario will add one penalty more than the latter scenario as the former scenario has a smaller last element after inserting ai . After that happens, the back of the arrays in both scenarios will become the same and hence, the former case will never be less optimal.

  • »
    »
    11 months ago, # ^ |
      Vote: 0

    Say you take the penalty now, and you have to take at least X more penalty in the future.

    If you avoid the penalty now, then you will have to take at most X + 1 penalty in the future.

    So there is no benefit to taking the penalty now.

»
11 months ago, # |
  Vote: 0

I had the same solution for D, but AC is quite tough in Python

»
11 months ago, # |
  Vote: 0

a b:^v^ c:@-@?

»
11 months ago, # |
  Vote: 0

Editorial code for D says YES to this test case: N=9, A=[2,1,1,1,1,1,1,1,0]. It seems incorrect to me?

»
11 months ago, # |
  Vote: 0

My solution for E. I think it works in $$$O(n)$$$ time.

»
11 months ago, # |
  Vote: -8

Why do all of A-D have no algo :(((((((((((((((

»
11 months ago, # |
  Vote: +5

Proof for $$$D$$$:

First note that any most-distant leaf will have a parent edge weight of $$$1$$$, because otherwise there would exist a more distant node under the other child (the edge-1 child) of its parent. Now we need to prove that, for any $$$2$$$ adjacent array values where one of them is a maximum and the other is smaller by $$$1$$$, if there is a binary tree solution for that array (but it does not have these $$$2$$$ leaves sharing the same parent), we can still convert it to another binary tree conforming to the same array in which the $$$2$$$ leaves do share the same parent:

Using the same principles as before, the maximum leaf will be the edge-1 child. We can always cut its parent's edge-1, remove the other edge-0 and merge the $$$2$$$ nodes connected by it, then go to the other leaf adjacent in the array (with distance smaller by $$$1$$$) and add $$$2$$$ edges under that node: one of them will lead to the moved maximum node (under edge-1), and the other to the smaller-by-1 node.

»
11 months ago, # |
  Vote: +3

Great problems D and F1! Finally became International Master. E is also excellent; it's a pity that I struggle with counting.

»
11 months ago, # |
  Vote: 0

Can anyone explain why the answer for F1 uses the maximum suffix sum? I can't figure it out :<

  • »
    »
    11 months ago, # ^ |
    Rev. 3 | Vote: +33

    Here's my understanding. Let's use this case I just came up with.

    a = [3, 3, 7, 5, 6, 4]
    b = [2, 2, 10, 3, 10, 3]
    
    pA = [3, 6, 13, 18, 24, 28] (prefix sums of A)
    pB = [2, 4, 14, 17, 27, 30] (prefix sums of B)
    

    Let's say that all $$$pB[i] <= pA[i]$$$ for some prefix. Then the answer would just be the last $$$pB$$$ of the prefix. For example let's take the the first two items in the array. Since $$$pB[0] <= pA[0]$$$ the wizard can take as much water as he wants from the first tower and since $$$pB[1] <= pA[1]$$$ as well, the wizard can take as much water as he wants from the first two towers. So the answer for that prefix is $$$pB[1] = 4$$$.

    But now $$$pB[2] > pA[2]$$$. This means that the wizard is trying to take out more water than is possible. In effect, from positions $$$[0, 2]$$$ the wizard can only take out as much water as is in $$$pA[2]$$$. So let's decrease $$$B[2]$$$ by an amount so that $$$pB[2] == pA[2]$$$. That is, $$$pB[2] - pA[2] = 1$$$. Now the prefix sums would effectively become this:

    pA = [3, 6, 13, 18, 24, 28]
    pB = [2, 4, 13, 16, 26, 29]
    

    Now we repeat this same process, looking for the next time that the wizard tries to take out more than is possible, and reduce the $$$B$$$-value again. In this case, $$$pB[4] > pA[4]$$$ so we need to reduce $$$B[4]$$$ by $$$pB[4] - pA[4] = 2$$$:

    pA = [3, 6, 13, 18, 24, 28]
    pB = [2, 4, 13, 16, 24, 27]
    

    Now we find that the answer would be 27. But to simplify this process, we know that the effects compound in such a way that the final $$$pB$$$ value will be decreased by the highest $$$pB[i] - pA[i]$$$ before any modification. So we just need to find the global maximum of a segment tree formed by $$$pB[i] - pA[i]$$$ and subtract it from $$$pB[n-1]$$$ as long as it is positive.

    To check this, let's return to the original $$$pB$$$:

    pA      = [3, 6, 13, 18, 24, 28]
    pB      = [2, 4, 14, 17, 27, 30]
    pB - pA = [-1, -2, 1, -1, 3, 2]
    

    And $$$pB[n-1] - 3 = 27$$$ so it does indeed work! I hope this makes sense!
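
    To connect this to code: for a fixed array, the whole thing is just prefix sums; here is a sketch of the static case (the actual problem then keeps the values pB[i] - pA[i] in a max segment tree so that point updates work):

#include <bits/stdc++.h>
using namespace std;

long long wine(const vector<long long> &a, const vector<long long> &b) {
    long long pa = 0, pb = 0, worst = 0;   // worst = largest overdraft pB[i] - pA[i], at least 0
    for (size_t i = 0; i < a.size(); i++) {
        pa += a[i];
        pb += b[i];
        worst = max(worst, pb - pa);
    }
    return pb - worst;                     // total pB minus the part that can never be served
}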

»
11 months ago, # |
Rev. 2 | Vote: +22

I have some questions about problem F1

For the conclusion "the remaining amount of water remaining at tower $$$n$$$ is the maximum suffix sum of $$$v$$$", firstly here is my proof:

proof

Then here is my question:

Is there a more universal understanding of the solution? Or are there some general ideas to take away from the problem? I didn't come up with the solution because I always think about the legal situations, and the suffix of $$$v$$$ may seem "contradictory" to the actual situation, which didn't even come to my mind.
However, I have also seen some problems with a solution like "the answer is exactly the best one, so any illegal case cannot cover the answer, so just take all cases into consideration". I don't know whether there are some underlying principles behind these problems?
Sorry for my poor English, I have tried to express myself as clearly as possible :)

»
11 months ago, # |
  Vote: 0

How do I improve my poor data structure skills QwQ

»
11 months ago, # |
  Vote: 0

C be like: DP is a bluff, greedy is the approach.

»
11 months ago, # |
  Vote: 0

For problem C, I guess I followed the greedy solution perfectly, but I still could not pass the pretests.

wrong answer 9174th numbers differ - expected: '115', found: '121'

Debugged my code for hours but could not find the bug. Can anyone help? Thanks heaps in advance!

Here is my code:


import sys

lines = []
is_stdin = True
if is_stdin:
    for line in sys.stdin:
        lines.append(line.strip())
else:
    with open('./myin.txt', 'r') as f:
        for line in f.readlines():
            lines.append(line.strip())

N = int(lines.pop(0))
while N > 0:
    N -= 1
    n = int(lines.pop(0))
    cur_line = lines.pop(0)
    pieces = cur_line.split(' ')
    s = []
    t = []
    total_p = 0
    for p in pieces:
        if len(s) == 0:
            s.append(p)
        else:
            if len(t) == 0:
                if p > s[-1]:
                    # avoid penalty
                    t.append(p)
                else:
                    s.append(p)
            else:
                # both non-empty
                if p > s[-1] and p > t[-1]:
                    # penalty anyway, choose the smaller one
                    if s[-1] < t[-1]:
                        s.append(p)
                        total_p += 1
                    else:
                        t.append(p)
                        total_p += 1
                elif p > s[-1] and p <= t[-1]:
                    # avoid penalty
                    t.append(p)
                elif p > t[-1] and p <= s[-1]:
                    # avoid penalty
                    s.append(p)
                else:
                    # smaller than or equal to both, no penalty anyway
                    # choose smaller one
                    if s[-1] < t[-1]:
                        s.append(p)
                    else:
                        t.append(p)
    # print('s')
    # print(s)
    # print('t')
    # print(t)
    print(total_p)
  • »
    »
    10 months ago, # ^ |
      Vote: 0

    Solved. I needed 'p = int(p)' to convert str to int. Don't know why it passed so many cases with this bug in it, LOL.

»
11 months ago, # |
  Vote: +1

C was good. I initially thought that I had to find the longest increasing subsequence. I was expecting this type of question to be D

»
11 months ago, # |
  Vote: +37

Interesting alternative idea of D.

We can associate every (binary 0-1 tree with n leaves) with (a simple, not necessarily binary, tree with n vertices and all edge weights 1). This is because we can decompress every (vertex with k children) into (a right-hand bamboo of depth k), where each of its left children is a child from the original tree (attached with an edge of weight 1) and the final right child leads back to the parent.

So, you need to check whether there exists a usual tree with this DFS-order list of vertex depths (where the parent's depth can be placed between any two of its children, or at the beginning/end).

Here is my O(n) solution.

  • »
    »
    10 months ago, # ^ |
      Vote: +8

    I believe the idea of your solution is equivalent to the editorial's but from a different perspective.

    Assuming the maximum distance in a binary tree is $$$max$$$, the equivalent N-vertices tree you are looking for is a tree where every consecutive run of leaves with distance $$$max$$$ are children of the leaf with distance $$$max-1$$$ which is adjacent to the run's left or right, then after excluding the leaves with distance $$$max$$$, every run of leaves with distance $$$max-1$$$ are children of the leaf with distance $$$max-2$$$ which is adjacent to the run's left or right, and so on.

»
11 months ago, # |
  Vote: +3

In D, if I have $$$[\dots, x-1, x, x-1, \dots]$$$, I can't see why it's okay to choose either left or right.
Can someone pls explain?

  • »
    »
    11 months ago, # ^ |
    Rev. 3 | Vote: +1

    It doesn't matter which one you choose, because after the operation the array will be $$$[..., x - 1, x - 1, ...]$$$ either way.

»
11 months ago, # |
  Vote: 0

Can someone explain the kind of segment tree that is used to solve F1 (Solution 1, not the ReLU segtree)?

I.e. you need to compute range max as well as allow range updates; are there any resources for studying such segment trees?

  • »
    »
    11 months ago, # ^ |
    Rev. 3 | Vote: +10

    This is a segment tree which uses lazy propagation. It's a pretty common technique used in lots of problems involving range queries, you can even use it in conjunction with other techniques on trees and such (Although I'm not sure if it's worth spending time learning about a lazy segment tree below expert or even CM, if your aim is to gain rating).

    If you want to learn about it, there are lots of resources online. I personally used this site. Then I used CSES to practise a bunch; I just completed all the problems under 'Range Queries'.
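
    In case a concrete reference helps, here is a minimal lazy segment tree with range add and range max, the flavour of structure being discussed here (a generic sketch, not the solution to F1 itself):

#include <bits/stdc++.h>
using namespace std;

struct LazySeg {
    int n;
    vector<long long> mx, lz;
    LazySeg(int n) : n(n), mx(4 * n, 0), lz(4 * n, 0) {}
    void push(int v) {                       // move the pending addition down to the children
        for (int c : {2 * v, 2 * v + 1}) { mx[c] += lz[v]; lz[c] += lz[v]; }
        lz[v] = 0;
    }
    void add(int v, int l, int r, int ql, int qr, long long x) {
        if (qr < l || r < ql) return;
        if (ql <= l && r <= qr) { mx[v] += x; lz[v] += x; return; }
        push(v);
        int m = (l + r) / 2;
        add(2 * v, l, m, ql, qr, x);
        add(2 * v + 1, m + 1, r, ql, qr, x);
        mx[v] = max(mx[2 * v], mx[2 * v + 1]);
    }
    long long query(int v, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return LLONG_MIN;
        if (ql <= l && r <= qr) return mx[v];
        push(v);
        int m = (l + r) / 2;
        return max(query(2 * v, l, m, ql, qr), query(2 * v + 1, m + 1, r, ql, qr));
    }
    void add(int l, int r, long long x) { add(1, 0, n - 1, l, r, x); }      // add x on [l, r]
    long long query(int l, int r) { return query(1, 0, n - 1, l, r); }      // max on [l, r]
};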

»
11 months ago, # |
Rev. 2 | Vote: 0

For C, in the third case, it says that if you always insert an element x into the one with the larger last element, and you shift any other arrangement from this point so that x is in the one with the larger last element (with the remaining choices unchanged), then it cannot be any worse. But is that true?

Say A and B up to a point were [8] and [12] respectively. And say x is 10 at that point (8 < x < 12, so we are in case 3), and one possible final arrangement is [8,10,9] and [12,11] (assume 9 and 11 are future elements). But if we shift 10 to the second array instead (which the algorithm would do), the new arrangement is [8,9] and [12,10,11]. This has an extra penalty overall, since [8,9] and [10,11] are 2 new penalties being created, and only [8,10], 1 penalty, was destroyed (you can also count directly: there was only 1 penalty before and there are 2 penalties now).

I'm not saying the algorithm is incorrect, I just have a problem with the way the proof is written.

  • »
    »
    7 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    I encountered the exact same question and here's the solution I came up with.

    We're only interested in the last case, when there is an inequality $$$x < a[i] \le y$$$. We know that with the current arrays $$$A$$$ and $$$B$$$, we can complement them in such a way as to achieve the optimal solution. Let this optimal solution be achievable if we add $$$a[i]$$$ after $$$x$$$, not $$$y$$$. Let's demonstrate that we can obtain a solution no worse by adding $$$a[i]$$$ after $$$y$$$.

    For better understanding, the optimal solution looks like: $$$[\ldots,\ y,\ \ldots],\ [\ldots,\ x,\ a[i],\ \ldots]$$$.

    First, try adding all numbers from $$$a[i + 1]$$$ to $$$a[n]$$$ into the same arrays (as in the editorial). The penalty will differ only by the penalty between $$$y$$$ and the next number added after $$$y$$$ (if it exists) and by the penalty between $$$a[i]$$$ and the next number added after $$$a[i]$$$ (if it exists), because all other numbers go in exactly the same order into the same arrays.

    Let's denote the number after $$$y$$$ as $$$next_y$$$ and the number after $$$a[i]$$$ as $$$next_{a[i]}$$$ in the optimal solution. Thus, the penalty may differ from the optimal one by no more than 1, and only in the case where $$$y \ge next_y$$$ and $$$a[i] \ge next_{a[i]}$$$, but $$$a[i] < next_y$$$ and $$$x < next_{a[i]}$$$, because the optimal solution has $$$1$$$ extra penalty from $$$x < a[i]$$$.

    For better understanding, the optimal solution looks like: $$$[\ldots,\ y,\ next_y,\ \ldots],\ [\ldots,\ x,\ a[i],\ next_{a[i]},\ \ldots]$$$, our solution looks like: $$$[\ldots,\ y,\ a[i],\ next_y,\ \ldots],\ [\ldots,\ x,\ next_{a[i]},\ \ldots]$$$.

    Such a situation can indeed occur: it's exactly what was described by flakes24. So if we try to use the method described above, our arrays will look like this: $$$[\ldots,\ y,\ a[i],\ next_y,\ \ldots],\ [\ldots,\ x,\ next_{a[i]},\ \ldots]$$$. Inequalities arise: $$$a[i] < next_y$$$ and $$$x < next_{a[i]}$$$.

    In this case, we'll swap the suffixes of the arrays $$$A$$$ and $$$B$$$, without changing the relative order of the numbers from $$$a[i + 1]$$$ to $$$a[n]$$$. Since $$$a[i] \ge next_{a[i]}$$$, we will achieve no more than $$$1$$$ penalty (maybe $$$x < next_y$$$) in our solution. Optimal solution has at least $$$1$$$ penalty ($$$x < a[i]$$$), so our solution is no worse.

»
11 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can anyone explain the dp solution for the problem C. I tried reading it but not able to understand it :(

»
11 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can someone explain to me why my LIS approach for C isn't correct? :v

»
11 месяцев назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

For E: can someone explain why the method from the editorial works? Basically:

  • Why could each array with the sum $$$s$$$ and the given prefix sums be constructed this way?

UPD: it is true that any such array can be constructed this way, because any such array can be converted into $$$[1, 1, 1, \ldots, 1, -1, -1, \ldots, -1]$$$ by repeatedly removing adjacent $$$(-1, 1)$$$ pairs.
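
To make the UPD above concrete, here is a tiny brute-force sketch that performs exactly the reduction described: repeatedly delete an adjacent $$$(-1, 1)$$$ pair and check whether what remains is of the form $$$1, \ldots, 1, -1, \ldots, -1$$$. The function name is mine and this is only for experimenting with small arrays, not part of any solution.

#include <bits/stdc++.h>
using namespace std;

// Brute-force check of the reduction above: repeatedly delete an adjacent
// (-1, 1) pair; the remainder should be all 1s followed by all -1s.
bool reducesToBase(vector<int> a) {
    bool changed = true;
    while (changed) {
        changed = false;
        for (size_t i = 0; i + 1 < a.size(); i++) {
            if (a[i] == -1 && a[i + 1] == 1) {
                a.erase(a.begin() + i, a.begin() + i + 2);
                changed = true;
                break;
            }
        }
    }
    // non-increasing <=> 1, ..., 1, -1, ..., -1
    return is_sorted(a.rbegin(), a.rend());
}

int main() {
    printf("%d\n", (int)reducesToBase({1, -1, 1, 1, -1, -1})); // prints 1
}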

»
11 месяцев назад, # |
  Проголосовать: нравится +5 Проголосовать: не нравится

maomao90 What was the original solution to H?

  • »
    »
    10 месяцев назад, # ^ |
      Проголосовать: нравится +10 Проголосовать: не нравится

    I used $$$n$$$ type 1 queries to find all the edges on the diameter, then I group the edges based on which component they belong to if all the edges on the diameter are removed, using $$$3n$$$ type 1 queries. Then, I solve for each component separately by first determining each vertex depth using $$$n$$$ type 2 queries, then finding the parent of each edge using $$$n$$$ type 1 queries. If I remember correctly, I eventually managed to optimise it to use a total of $$$3n$$$ type 1 queries, but I don't think there is any way for me to get to $$$2n$$$ or even $$$n$$$ type 1 queries as my solution consists of 3 steps.

    Code
»
11 месяцев назад, # |
  Проголосовать: нравится +5 Проголосовать: не нравится

I have a doubt about the Wine Factory question. After each update, we can just calculate the sum of the water array (sum_water) and the sum of the wizard array (sum_wizard); if sum_wizard >= sum_water, the answer is sum_water, otherwise the answer is sum_wizard. I don't see anything wrong with this approach. Am I missing something in understanding the question?

  • »
    »
    11 месяцев назад, # ^ |
    Rev. 5   Проголосовать: нравится +18 Проголосовать: не нравится
    • Water array: $$$a = [1, 100]$$$
    • Wizard array: $$$b = [100, 1]$$$

    According to your logic, the answer is $$$\min(1+100, 100+1) = 101$$$. In reality, the answer is $$$2$$$.

    Let's follow the statement carefully:

    $$$i = 1$$$:

    • Wizard $$$1$$$ removes at most $$$b_1 = 100$$$ liters of water from tower $$$1$$$ and converts it to wine. Tower $$$1$$$ has only $$$a_1 = 1$$$ liters of water, so only $$$1$$$ liter of water gets converted to wine. Now tower $$$1$$$ has $$$0$$$ liters of water left, and we have created $$$1$$$ liter of wine in total.
    • Tower $$$1$$$ has $$$0$$$ liters of water, so no water gets transferred to tower $$$2$$$.

    $$$i = 2$$$:

    • Wizard $$$2$$$ removes at most $$$b_2 = 1$$$ liters of water from tower $$$2$$$ and converts it to wine. Tower $$$2$$$ has $$$a_2 = 100$$$ liters of water (no water got transferred from tower $$$1$$$), so $$$1$$$ liter of water gets converted to wine. Now tower $$$2$$$ has $$$99$$$ liters of water left, and we have created $$$2$$$ liters of wine in total.
    • $$$i = n$$$, so no water gets moved forward.

    In total, we created $$$2$$$ liters of wine.
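
    If it helps, the walkthrough above is just the following one-pass simulation of the statement (ignoring pipe capacities, as in the easy version). The variable names are mine; this only restates the rules, it is not the intended solution for handling updates.

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        vector<long long> a = {1, 100}; // water
        vector<long long> b = {100, 1}; // wizards
        long long carry = 0, wine = 0;
        for (size_t i = 0; i < a.size(); i++) {
            long long water = a[i] + carry;    // water currently in tower i
            long long used = min(water, b[i]); // wizard i converts at most b[i] liters
            wine += used;
            carry = water - used;              // the rest flows on to tower i + 1
        }
        printf("%lld\n", wine); // prints 2 for this example
    }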

»
10 месяцев назад, # |
Rev. 2   Проголосовать: нравится +5 Проголосовать: не нравится

On problem E, could anyone provide a proof on why every legal sequence with sum $$$s$$$ can be generated by adding consecutive pairs of $$$(-1,1)$$$ from the base $$$a=[1,1,...,-1,-1]$$$? I understand that all generated sequences are legal, and that legal sequences with sum $$$s$$$ must contain $$$a$$$ as a subsequence, but I can't seem to prove that all legal sequences with sum $$$s$$$ can be reduced to $$$a$$$ by removing consecutive pairs of $$$(-1,1)$$$.

»
10 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

O(n) solution for problem D soln

»
10 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Problem C was very tough for me, but it's a nice problem.

»
10 месяцев назад, # |
Rev. 3   Проголосовать: нравится 0 Проголосовать: не нравится

Okay so I randomly thought of a testcase: 8 9 10 2 11 7 4 3

The answer for this one is 2 if we split this into 8 2 and 9 10 11 7 4 3

however when I ran the codes submitted by some of the top coders for problem C

I got different outputs from them. Some gave 1, some gave 2 and some 3.

I tried to run Tourist's code and his code's answer was 3.

Now this is really confusing as hell for a newbie like me, was the problem faulty or am I tripping? (nvm, I didn't notice that the elements have to be less than or equal to the size of the array)

»
10 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

My approach for F2: for each range of towers $$$[l, r]$$$, let $$$f(l, r, x)$$$ be the total amount of wine the towers in that range will make if we add $$$x$$$ liters of water to the $$$l$$$-th tower, and $$$g(l, r, x)$$$ the amount of water the $$$r$$$-th tower will pass on to the next one. We can prove that $$$f(l, r, x)$$$ and $$$g(l, r, x)$$$ are both of the form $$$\min(\max(x + a, b), c)$$$ with $$$a, b, c$$$ integers (that is, they consist of three "parts": two constant parts and a linear middle part). Then we can store the functions $$$f(l, r, x)$$$ and $$$g(l, r, x)$$$ in each segment tree node. After each query, we just need to plug $$$x = 0$$$ into the function $$$f(1, n, x)$$$ and we'll have the answer.

I think this approach is a bit more natural for someone who doesn't know anything about flows (like me). (P.S.: sorry for my bad English.)
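
To illustrate the key closure property this relies on (for example when composing the $$$g$$$ parts of two segment tree nodes), here is a small self-checking sketch showing that functions of the form $$$\min(\max(x + a, b), c)$$$ compose into a function of the same form. The struct and function names are mine, and combining the $$$f$$$ parts additionally needs the argument mentioned above.

#include <bits/stdc++.h>
using namespace std;

// h(x) = min(max(x + add, lo), hi), with lo <= hi.
struct Clamp {
    long long add, lo, hi;
    long long eval(long long x) const { return min(max(x + add, lo), hi); }
};

// g after f: g(f(x)) is again of the same form.
Clamp compose(const Clamp& f, const Clamp& g) {
    long long add = f.add + g.add;
    long long hi = min(max(f.hi + g.add, g.lo), g.hi);
    long long lo = min(max(f.lo + g.add, g.lo), hi);
    return {add, lo, hi};
}

int main() {
    Clamp f{3, 0, 10};  // f(x) = min(max(x + 3, 0), 10)
    Clamp g{-2, 1, 5};  // g(x) = min(max(x - 2, 1), 5)
    Clamp h = compose(f, g);
    for (long long x = -20; x <= 20; x++)
        assert(h.eval(x) == g.eval(f.eval(x))); // composed parameters match direct evaluation
    printf("composition ok\n");
}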

»
10 месяцев назад, # |
  Проголосовать: нравится +5 Проголосовать: не нравится

Is there any solution to D that involves splitting the distance sequence into two parts?

»
10 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

240827156

int solve(int ind, int arr[], int x, int y, int n, vector<int>& a, vector<int>& b) {
    if (ind == n) return 0;
    int ans = INT_MAX;
    if (dp[ind][x][y] != -1) return dp[ind][x][y];

    a.push_back(arr[ind]);
    if (ind > 0 && a[a.size() - 1] > a[a.size() - 2])
        ans = min(ans, 1 + solve(ind + 1, arr, 1, y, n, a, b));
    else
        ans = min(ans, solve(ind + 1, arr, 1, y, n, a, b));
    a.pop_back();

    b.push_back(arr[ind]);
    if (ind > 0 && b[b.size() - 1] > b[b.size() - 2])
        ans = min(ans, 1 + solve(ind + 1, arr, x, 1, n, a, b));
    else
        ans = min(ans, solve(ind + 1, arr, x, 1, n, a, b));
    b.pop_back();

    return dp[ind][x][y] = ans;
}

Can someone help? My local setup (Sublime) gives the correct output on the test case where I got wrong answer.

»
10 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can someone please explain how this part works? (From problem E's code)

"for (int i = 2; i < MAXN * 2; i++) { ifact[i] = MOD — MOD / i * ifact[MOD % i] % MOD; } for (int i = 2; i < MAXN * 2; i++) { ifact[i] = ifact[i — 1] * ifact[i] % MOD; } "

»
10 месяцев назад, # |
  Проголосовать: нравится +8 Проголосовать: не нравится

In D, when you delete the maximum, there can be a case when the maximum doesn't have a leaf sibling; it has a further subtree instead of a leaf, i.e. the case 2, 1, 2. I am confused.

»
10 месяцев назад, # |
Rev. 2   Проголосовать: нравится +5 Проголосовать: не нравится

I can't understand problem C's solution 2. There is a $$$dp_{i - 1, a_{i - 1}}$$$ term in the formula $$$dp_{i, a_{i - 1}} = \min(dp_{i - 1, a_{i - 1}} + [a_{i - 1} < a_i], \min_{1\le x\le n}(dp_{i - 1, x} + [x < a_i]))$$$.

According to the solution, this means splitting the array into two subarrays where the last element of one subarray is $$$a_{i-1}$$$ and the last element of the second subarray is also $$$a_{i-1}$$$. What does this mean?

  • »
    »
    10 месяцев назад, # ^ |
    Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

    Array $$$a$$$ can contain duplicate elements, so it is possible that the last element of one subarray has a value equal to $$$a_{i - 1}$$$ but does not come from index $$$i - 1$$$, while the last element of the other subarray is the element at index $$$i - 1$$$.

    Not having the first term will still result in AC because of the greedy argument, but if we do not want to prove any greedy, having both terms is more "correct".

    • »
      »
      »
      10 месяцев назад, # ^ |
      Rev. 3   Проголосовать: нравится 0 Проголосовать: не нравится

      Thanks for explaining, I got that part.

      But why can the answer be obtained from querymn(1, n)? In my opinion, the values $$$v$$$ that do not appear in the array are meaningless and should not be considered, so after the dp calculation we should traverse the dp array to get the final answer in O(n). Is there also some greedy idea behind this?

      To be clear, I wonder why all $$$v$$$ in $$$[1, n]$$$, even those that don't appear in the array $$$a$$$, need to be considered and can lead to the answer.
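
      For what it's worth, here is a quadratic sketch of the recurrence (the exact formulation is mine, not necessarily the editorial's code): dp[v] is the minimum penalty when the subsequence not containing a[i] ends with value v, and v = n doubles as "still empty", which is safe because a[i] <= n means v = n never counts as a smaller last element. Values that never occur in a simply stay at infinity, so keeping the whole range [1, n] in the structure (and hence something like querymn(1, n)) is harmless; presumably the segment tree replaces the inner loops to reach $$$O(n \log n)$$$.

      #include <bits/stdc++.h>
      using namespace std;

      int main() {
          // hard-coded example: a = [1, 2, 3, 4, 5] (1-indexed), answer 3
          int n = 5;
          vector<int> a = {0, 1, 2, 3, 4, 5};
          const int INF = 1e9;
          // dp[v] = min penalty for a[1..i] when the subsequence NOT containing
          // a[i] ends with value v; v = n also stands for "still empty".
          vector<int> dp(n + 1, INF);
          dp[n] = 0; // after placing a[1], the other subsequence is empty
          for (int i = 2; i <= n; i++) {
              vector<int> ndp(n + 1, INF);
              int inc = a[i - 1] < a[i] ? 1 : 0;  // penalty if a[i] follows a[i-1]
              int best = INF;
              for (int v = 1; v <= n; v++) {
                  if (dp[v] == INF) continue;
                  ndp[v] = min(ndp[v], dp[v] + inc);            // a[i] joins a[i-1]'s side
                  best = min(best, dp[v] + (v < a[i] ? 1 : 0)); // a[i] joins the side ending with v
              }
              ndp[a[i - 1]] = min(ndp[a[i - 1]], best);         // that side now ends with a[i-1]
              dp = ndp;
          }
          printf("%d\n", *min_element(dp.begin() + 1, dp.end())); // prints 3
      }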

»
10 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For D, sure, any two leaves sharing the same parent will be adjacent in the dfs order and their depths will differ by 1. But not every adjacent pair, even if one of them has the largest $$$a_i$$$ value, can be siblings, right? For example, with $$$a = [1, 0, 1]$$$ and the tree you construct by merging the first two leaves, the last two leaves in the dfs order won't be siblings even though they were adjacent in the original dfs order and one of them had the largest $$$a_i$$$ value.

I understand that merging the last two leaves first in this example constructs a different tree and the answer returned by the algorithm will still be yes. My point is: what if in some instance we pick a pair whose leaves can never be siblings in any tree satisfying the same dfs traversal? Then the algorithm would return NO instead of a possible YES. Maybe such faulty pairs can never exist, but the editorial doesn't seem to consider or prove the non-existence of such pairs, and frankly I was (during the contest) and still am stuck on this. Can someone help by providing a proof? Thanks

  • »
    »
    10 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    I’m also struggling to convince myself why that solution is correct. That’s why I really appreciate editorials which show proofs for their solutions (which isn’t the case for D unfortunately)…

  • »
    »
    10 месяцев назад, # ^ |
    Rev. 3   Проголосовать: нравится 0 Проголосовать: не нравится

    Edit: that was wrong unfortunately

  • »
    »
    10 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    Having the same problem with the provided solution, if someone has a proof that would be nice :)

»
10 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Is it not $$$ T′_{p} \geq OT_{et+1,p} $$$ in cases 1 and 2A in the bonus problem for C? Also, can someone explain the proof for 2B? I do not understand what $$$r$$$ is.

»
9 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Is problem H really rated 2000? Or is it some bug?

»
5 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Very helpful. Thank you for that!

»
4 месяца назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

In problem C, my approach was to find the length of the longest increasing subsequence, say $$$m$$$;

then the answer would be $$$\max(0, m - 2)$$$,

but it is giving WA. Can anyone help?

»
4 месяца назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

In problem C, why do we need to swap t1 with t2 if t1 is greater than t2?