ch_egor's blog

By ch_egor, 3 years ago, translation, In English

Thanks for participating!

1602A - Two Subsequences was authored and prepared by napstablook

1602B - Divine Array was authored and prepared by Anti-Light

1601A - Array Elimination was authored and prepared by isaf27

1601B - Frog Traveler was authored and prepared by KiKoS

1601C - Optimal Insertion was authored and prepared by isaf27

1601D - Difficult Mountain was authored and prepared by Tikhon228

1601E - Phys Ed Online was authored by jury and prepared by fedoseev.timofey

1601F - Two Sorts was authored by cdkrot and prepared by cdkrot, LHiC

  • Vote: I like it
  • +133
  • Vote: I do not like it

»
3 years ago, # |
  Vote: I like it -140 Vote: I do not like it

Wow! Editorials of previous contests are all very fast! Thanks a lot!

»
3 years ago, # |
  Vote: I like it -21 Vote: I do not like it

That's the fastest editorial I've seen yet. I struggled with D, but a nice round nonetheless!

»
3 years ago, # |
  Vote: I like it +14 Vote: I do not like it

We will use $$$dp_i$$$ — maximal number of moves needed to travel from $$$i$$$ to $$$0$$$.

shouldn't it be minimal number of moves needed to travel from $$$i$$$ to $$$0$$$? (div1.B)
nice round btw!

»
3 years ago, # |
  Vote: I like it +18 Vote: I do not like it

Problem 1C/2E appeared on stackexchange.

»
3 years ago, # |
  Vote: I like it +7 Vote: I do not like it

For problem B, can someone prove that it requires at most log(n) steps for the array to become constant?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +20 Vote: I do not like it

    The array becomes constant once the frequency of each element differs from that of every other element, and the maximum possible value of a frequency is n. If the array has NOT become constant, there are at least two elements with the same frequency. In the worst case, if that shared frequency equals f, it will become 2f, then 4f, 8f and so on. Even if f starts at 1, it takes at most log(n) steps for f to reach n. Done!!!
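The doubling argument above can be sanity-checked by direct simulation; a minimal sketch (my own code, not from the editorial):

```cpp
#include <bits/stdc++.h>
using namespace std;

// One step of the process: replace every element by its number of occurrences.
vector<int> step(const vector<int>& a) {
    map<int, int> cnt;
    for (int x : a) cnt[x]++;
    vector<int> b;
    b.reserve(a.size());
    for (int x : a) b.push_back(cnt[x]);
    return b;
}

// Number of steps until the array stops changing.
int steps_to_constant(vector<int> a) {
    int steps = 0;
    for (vector<int> b = step(a); b != a; b = step(a)) {
        a = b;
        steps++;
    }
    return steps;
}
```

For example, `steps_to_constant({1, 2})` is 2, and on small random arrays the count stays within the log(n)-ish bound discussed here.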

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    If a_i changes, the number of integers equal to a_i multiplies. When a_i stops changing and at the same time is the minimum (has the fewest occurrences), it is locked and never changes again.

    Also, on each step where the array does change, the minimum number of occurrences among the unlocked elements multiplies. So after at most log(n) steps no element remains unlocked.

»
3 years ago, # |
  Vote: I like it +6 Vote: I do not like it

1602E - Optimal Insertion is super similar to a recent contest's 1579E2 - Array Optimization by Deque.

For some reason I decided not to use Fenwick trees like I did before and switched to the Order Statistics Tree from GNU C++ PBDS. The solution should be O((n + m)(log(n) + log(m))). Can someone help me understand why I get TLE?

The idea is as follows:

  1. Sort b.
  2. Do coordinate compression (okay, maybe I don't need this for the order statistics tree but I was thinking about Segment Trees and Fenwick Tree at first).
  3. Iterate from 1 to n + m and "pop" (choose) an element from either a or b at each step based on the following: for the next element in a count the number of items in the remainder of b that are less than this element (this will be the number of the inversions we "add" by choosing an element from a at this step) and do the same for the next element in b.
  4. Iterate over the resulting array and count the number of inversions (I used Ordered Statistics Tree again even though I could do the Merge Sort modification or something else).

The asymptotic seems right: there are n + m steps, at each of them there are at most log(n) + log(m) operations, so this should really fit the limitations. I can't figure out why I get TLE :(

Here's the code: 133029362

  • »
    »
    3 years ago, # ^ |
    Rev. 3   Vote: I like it +9 Vote: I do not like it

    Okay, it looks like the time limits are really tight, so the constant factor of using the Order Statistics Tree twice on top of the $$$log(n) + log(m)$$$ gets a TLE.

    I've changed the solution to avoid the TLE, but now I get WA on Test #3 (133042951), which means my solution is wrong. I need to think more about the ideas. I was under the impression that the greedy additions to the result would be fine, but somehow it's not the correct approach.

    I was under the impression that greedily adding the element from $$$a$$$ or $$$b$$$ by choosing which one will have the least number of inversions with the remainders of $$$a$$$ AND $$$b$$$ should be right but the result is somehow not correct in some cases. Any idea how to solve this with Segment Tree/Order Statistics/Fenwick tree?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +22 Vote: I do not like it

      Here is my solution to 1601C - Optimal Insertion. I had no time to debug it during round.

      First, assume without proof that we can greedily pick the first place with the best outcome and insert there. I don't have a proof yet, so my solution is not proved in my eyes, and perhaps it might be hacked.

      code snippet of main idea

      Details under spoiler.

      Wall of text
      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it -8 Vote: I do not like it

        ORZ, thanks so much.

      • »
        »
        »
        »
        3 years ago, # ^ |
        Rev. 2   Vote: I like it +8 Vote: I do not like it

        Let's define $$$p_i$$$ as the optimal position to insert $$$b_i$$$ (after sorting). To prove the conclusion, we just prove that for any $$$i<j$$$, $$$p_i>p_j$$$ doesn't hold. All we have to consider is the interval between $$$p_j$$$ and $$$p_i$$$. Let's assume there are $$$r_i$$$ elements less than $$$b_i$$$ and $$$s_i$$$ elements greater than $$$b_i$$$ in that interval. The position is optimal for $$$b_i$$$, so $$$r_i \ge s_i$$$, otherwise $$$p_j$$$ would be a better position for $$$b_i$$$. Since $$$b_i<b_j$$$, $$$r_j \ge s_j$$$ holds too. Therefore, $$$p_i$$$ would be a better position for $$$b_j$$$. So there is a contradiction.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          What conclusion do you prove? If you only prove that $$$p_i$$$ is increasing, that won't be enough for my solution. I insert values at the first best place: 1) there could be plenty of those, 2) I take already-inserted values into account. Also, I think you can't say something would be better so easily, and you didn't mention the case $$$b_i = b_j$$$.

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |
              Vote: I like it +8 Vote: I do not like it

            In fact, you don't have to construct a solution with $$$p_i$$$ increasing. I proved that $$$p_i$$$ is increasing, even if just in one case, but that's enough. It turns out that if we insert values at the first best places as $$$p_i$$$, the answer will be optimal. So even if we insert the elements at other first best places, the answer won't change.

            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              I don't see the connection with your proof. What if I pick any increasing $$$p_i$$$? You don't use anything related to the idea of picking the first best place at the current state.

              • »
                »
                »
                »
                »
                »
                »
                »
                3 years ago, # ^ |
                  Vote: I like it +8 Vote: I do not like it

                Why would picking the first best place not be optimal? Because a smaller $$$b_i$$$ might be inserted after a bigger $$$b_j$$$ and produce a cost for the pair $$$(b_j,b_i)$$$. But by my first conclusion, we can move $$$b_i$$$ to $$$p_j$$$, so picking the first best place is optimal.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  3 years ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

                  That's not the only case: it may be that $$$b_i$$$ is inserted before $$$b_j$$$ but at the wrong place. I'll stop replying unless I feel it's necessary.

»
3 years ago, # |
  Vote: I like it +7 Vote: I do not like it

Can anyone please share their thought process for solving problem D (div2)? I am not able to understand the editorial's solution.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +21 Vote: I do not like it

    Firstly, let's notice that if we have already stood at some position, there is no need to go there again; this is the main idea. Let us assume that we are standing at depth 0 and want to go to depth n; for that, just reverse arrays a and b (it is not needed, but it was more comfortable for me). We will do a kind of BFS: if we are at x now, we try to go to all positions that are not visited yet. To do this fast enough, we keep a set of positions that we have not rested at yet. Suppose we are currently at position x; we are interested in all positions that we did not rest at in the range [x, x + a[x]]. We can find them using our set. Suppose y is one such position; after the rest it becomes y - b[y], and if we have not visited y - b[y], we put it into the queue. This works in O(n log n), because there are at most n "edges", and the log comes from the set.

    P.S. Sorry for my poor English, feel free to ask any questions.
    

    P.S.S here is my code
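A self-contained sketch of this set-based BFS (my own illustration, kept in the original non-reversed orientation: start at depth n, finish at depth 0; a[i], b[i] given for depths 1..n and a[0] = b[0] = 0):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimum number of jumps from depth n to depth 0, or -1 if unreachable.
// a[y] = maximum jump height from depth y, b[y] = slip depth after a jump
// lands at depth y; both are (n+1)-sized, and y + b[y] <= n is assumed.
int frog_traveler(int n, const vector<int>& a, const vector<int>& b) {
    set<int> unvisited;                 // depths never landed on yet
    for (int i = 0; i < n; i++) unvisited.insert(i);
    vector<int> dist(n + 1, -1);
    queue<int> q;
    dist[n] = 0;
    q.push(n);
    while (!q.empty()) {
        int x = q.front(); q.pop();
        // every unvisited depth in [x - a[x], x) is one jump away
        auto it = unvisited.lower_bound(x - a[x]);
        while (it != unvisited.end() && *it < x) {
            int y = *it;
            if (y == 0) return dist[x] + 1;   // surface reached
            it = unvisited.erase(it);         // never land on y again
            int z = y + b[y];                 // slip after the jump
            if (dist[z] == -1) { dist[z] = dist[x] + 1; q.push(z); }
        }
    }
    return -1;
}
```

Each depth is erased from the set at most once, which gives the O(n log n) bound mentioned above.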

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +32 Vote: I do not like it

    My thought process was more BFS-like than DP. If you were to do a BFS, it's easy to see that you would visit every height at most once. Now, the only part left to optimize is adding nodes into the queue.

    If you were to linearly traverse all possible jumps, the time complexity would be O(N*N). Upon closer inspection we see that you are basically trying to find some numbers in an interval. We can build a minimum range query segment tree from an array C where C[i] = i + b[i], that is, the place you will end up after jumping to height i and then slipping.

    Coming back to our BFS function: when adding nodes to the queue, we simply query the desired interval of possible jumps and check if there are any that are not INF. If we find a value that is not INF, we add its height to the queue and set its value in the segment tree to INF. To clarify the INF part: it's basically just a mark of whether a height has been visited. Similarly, if you were to use a maximum range query, you would set the value of marked heights to -INF.

    To get the traversal order we can keep track of each node's parents.

    Here is my code if you want to take a closer look:

    seggy is my segment tree; wher is the aforementioned array C. https://codeforces.net/contest/1602/submission/133023221

    Hope this helps. :)

  • »
    »
    3 years ago, # ^ |
      Vote: I like it -8 Vote: I do not like it

    You can Have a look at my approach here :

    Video Solutions to A,B,C and D (div 2)

»
3 years ago, # |
  Vote: I like it +15 Vote: I do not like it

Fast editorial, thanks! I got a lot of REs on problem D but finally solved it, and fortunately the conclusion I guessed in problem E was right.

»
3 years ago, # |
Rev. 3   Vote: I like it +33 Vote: I do not like it

My O(n) solution to Div2D/Div1B: let dp[i] be the minimal number of steps from n to i. Notice that we can't just go with a for-loop from n to 0 and call it a day, because some dp[x] states can only be calculated from dp[y] (y < x) states.

My idea was to propagate the DP states: from the current dp[x] state, we try to calculate all states reachable from x. For now, we will use a heap/priority_queue to maintain, in increasing order of the dp value, all the states that haven't been propagated yet. Notice that from position x we reach a subarray of positions x-a[x]...x, before slipping. But because of our priority_queue, it means that if a part of this x-a[x]...x subarray was already visited, then by propagating dp[x] you get, in the best case scenario, the same cost from before (in the already visited part of x-a[x]...x). To be more precise, the already visited part of x-a[x]...x forms a suffix of this subarray. So we can maintain a variable lim, which shows the leftmost visited position in the propagation. This is one of the key parts of the O(n) solution: you have to visit each dp state only once.

Then, I realised that we can replace the heap with a queue. My gut feeling was that, sort of like in a BFS, the first time you propagate to a certain dp state creates the optimal cost for that state.

AC submission
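A sketch of this propagation in its final queue form, with the `lim` pointer (my own code, not the linked submission; it assumes a[x] <= x and y + b[y] <= n as guaranteed by the problem):

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(n) BFS for Div2D/Div1B: dist[x] = min jumps from depth n to depth x.
// `lim` is the lowest depth inspected so far; since the already-visited
// part of every jump range is a suffix of it, each depth is inspected
// at most once over the whole run.
int min_jumps(int n, const vector<int>& a, const vector<int>& b) {
    vector<int> dist(n + 1, -1);
    queue<int> q;
    dist[n] = 0;
    q.push(n);
    int lim = n;
    while (!q.empty()) {
        int x = q.front(); q.pop();
        for (int y = min(lim, x) - 1; y >= x - a[x]; y--) {
            if (y == 0) return dist[x] + 1;   // surface reached
            int z = y + b[y];                 // land on y, slip to z
            if (dist[z] == -1) { dist[z] = dist[x] + 1; q.push(z); }
        }
        lim = min(lim, x - a[x]);
    }
    return -1;
}
```

The total number of loop iterations is O(n), since `lim` only decreases and the loop never revisits depths above it.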

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +3 Vote: I do not like it

    I think the O(n) code is easier to write than the O(n log n) code.

    Using BFS, each height is pushed into the queue once, which is O(n).

    And the first time we reach a height is the best way to reach it.

    here is the code

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    Note that this problem can be transformed into a shortest path problem in which all edge weights are 1.

    So using BFS, I think we can calculate the answer directly, without using dp.

    My submission

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

      In retrospect, my solution is indeed a bit overcomplicated, as it isn't so much a DP solution as a BFS. Though the DP idea definitely helped me find the solution.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    Nice explanation!!

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    I like your code! It's exactly like my style.

»
3 years ago, # |
  Vote: I like it +22 Vote: I do not like it

Problem B (Div. 1) is so similar to this.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For div1B/div2D, I understand that we want to find all j's that can land on u, but how is this achieved? I don't get the segment tree thing mentioned in the editorial.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    When we are looking at possible landing spots, it's always a continuous interval, which leads us to the thought of using a segment tree. For simplicity's sake let's say that we are using min range query. Initially the segment tree will have all elements set to something that is not INF. When we visit a height we set it to INF in the segment tree. When we want to look for possible landing spots we query the segment tree for the desired interval, that being from i to i - a[i], where i is our current height. It's easy to see that if there is a value that is not visited the query function will return it as it will definitely be smaller than INF. If our query function returns INF then we know that we cannot make a jump as every value in the desired interval would have been visited. Hope this helps.
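A compact sketch of the structure this describes: a min segment tree over (value, index) pairs, where `mark` sets a visited height's leaf to INF and `query` returns the range minimum together with its position (my own illustration, not the actual submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Iterative min segment tree over (value, original index) pairs. Marking
// an index "visited" replaces its value with INF, so a range query both
// gives the minimum and tells us whether any unvisited position remains.
struct MinTree {
    static constexpr int INF = INT_MAX;
    int n;
    vector<pair<int, int>> t;
    MinTree(const vector<int>& c) : n((int)c.size()), t(2 * c.size()) {
        for (int i = 0; i < n; i++) t[n + i] = {c[i], i};
        for (int i = n - 1; i > 0; i--) t[i] = min(t[2 * i], t[2 * i + 1]);
    }
    void mark(int i) {                  // mark position i as visited
        t[i + n] = {INF, i};
        for (int p = (i + n) / 2; p >= 1; p /= 2)
            t[p] = min(t[2 * p], t[2 * p + 1]);
    }
    pair<int, int> query(int l, int r) {  // min over [l, r] inclusive
        pair<int, int> res = {INF, -1};
        for (l += n, r += n + 1; l < r; l /= 2, r /= 2) {
            if (l & 1) res = min(res, t[l++]);
            if (r & 1) res = min(res, t[--r]);
        }
        return res;
    }
};
```

In the BFS, as long as `query(i - a[i], i).first` is below INF, you take the returned index, `mark` it, and push it into the queue.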

»
3 years ago, # |
  Vote: I like it +6 Vote: I do not like it

Err... Sorry for that but the editorial for Div2B shows that a tight upper limit is $$$\log(n)$$$, and I think a more accurate limit is $$$\log(n) + 1$$$ ? Maybe I was wrong but the data below shows $$$\log(n)$$$ has some little problems:

n = 2
a[] = 1, 2

In this case we need $$$2$$$ rounds but $$$\log(n) = 1$$$.

Probably, the editorial means the upper limit is around $$$\log(n)$$$ but I didn't get the point. Sorry for my poor English again.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Usually when people talk about time complexities or bounds they don't mean exactly N, N log N or log N; it is approximate. So if it is written log N, it may be log N + 1, 5*log N, etc.

»
3 years ago, # |
  Vote: I like it +10 Vote: I do not like it

Div2 D / Div1 B: a linear-time and intuitive solution (you can check the code in my submissions). First, let's say that the max jump from the n-th level is x. Then all nodes from n-x to n can be reached in one jump (without considering slipping). Now see where you can slip back to from these x nodes (only one node for each). While looking at the nodes we can slip back to, calculate the highest node you can jump to from these slip-back nodes. Let's say the highest such node is p.

If ever p <= x, return -1 (you are stuck after this). Otherwise, continue to iterate over the nodes from x+1 to p to see where you can slip back to. Keep going until we reach 0. It has linear time complexity.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Could anyone please give me some hints on why my BFS solution gets TLE for div2 D (div1 B)?

The complexity is still O(n*log(n)), where the number of edges E = n*log(n).

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Can you post the link to your submission that got you TLE?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      This one, Thanks

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it +3 Vote: I do not like it

        So basically, for each iteration of the while loop, the for loop runs for a[i] iterations, making the complexity O(N * max(A)) in the worst case, which gives you a TLE. Instead, try running the for loop in reverse, i.e. from a[i] down to 1, and add a break statement whenever you find a visited depth.

        The difference: suppose you are at depth 'i' now; you would visit i down to i - a[i]. Now suppose that after a few iterations of the while loop you are at depth 'j', where i > j >= i - a[i]. Your for loop runs from 1 to a[j], i.e. checks positions from j - 1 down to j - a[j]. Recall that in a previous iteration, when we were at depth 'i', we already visited i - a[i] to i, which includes these positions. We can avoid this unnecessary revisit by running the for loop in reverse, so that it iterates only over unvisited depths.

        Below is a link to my solution: Solution

  • »
    »
    3 years ago, # ^ |
      Vote: I like it -11 Vote: I do not like it

    i use dp + segment tree lmao =))

»
3 years ago, # |
  Vote: I like it +37 Vote: I do not like it

I'm having a hard time understanding the solution of Div1C/Div2E... I just don't get why we can find the optimal p_mid only considering the inversions of b_mid, as choosing p_mid influences choosing the other p values. Does someone have a clear explanation?

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +11 Vote: I do not like it

    I guess it relates to a DP optimization technique (you can see it here: https://codeforces.net/blog/entry/8219).

    We want to prove $$$best[i] \le best[i + 1]$$$, where $$$best[i]$$$ is the best position to put $$$b[i]$$$.

    But how can we prove it? I don't know...

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +11 Vote: I do not like it

    Oh, I realized something just a few minutes ago.

    I guess you did the same as me: observe that $$$b$$$ should be sorted, and then find a place to put each $$$b[i]$$$ into $$$a$$$. But we actually skip a few things.

    Let $$$best[i]$$$ be the best position to put $$$b[i]$$$ into array $$$a$$$, which means: if we put $$$b[i]$$$ at $$$best[i]$$$, the number of inversions of $$$b[i]$$$ with $$$a$$$ is minimal (notice that we do not count inversions within array $$$b$$$).

    Assume we find a pair $$$i$$$ and $$$j$$$ such that $$$b[i] < b[j]$$$ and $$$best[i] > best[j]$$$; this is a contradiction. Why? Because why not move $$$best[j]$$$ to $$$best[i]$$$? By answering this question, we get what we want. (Why not write some examples on paper to see it?)

    The important fact is: $$$best[i] \le best[i + 1]$$$ (for sorted $$$b$$$).

»
3 years ago, # |
  Vote: I like it +12 Vote: I do not like it

A problem in our school's private contest is similar to 1D/2F, and even stronger: the difference is that every climber has a weight and you need to maximize the total weight. There is a quite simple solution (but I wrote another one in the CF contest).

An important conclusion is that there exists an answer with $$$a_i+b_i$$$ increasing. It can be proved by case analysis, and then any data structure works.
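Assuming the claimed ordering holds, the greedy reduces to a sort plus one pass. A sketch with my own names (ties broken by smaller $$$s_i$$$, as suggested in the replies; this illustrates the construction, it is not a proof):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy sketch for 1601D: process climbers in non-decreasing s_i + a_i
// (ties by smaller s_i); take a climber whenever the current difficulty d
// allows it, then the mountain's difficulty becomes max(d, a_i).
struct Climber { long long s, a; };   // s = skill, a = neatness

int max_climbers(vector<Climber> c, long long d) {
    sort(c.begin(), c.end(), [](const Climber& x, const Climber& y) {
        if (x.s + x.a != y.s + y.a) return x.s + x.a < y.s + y.a;
        return x.s < y.s;
    });
    int taken = 0;
    for (const Climber& cl : c) {
        if (cl.s >= d) {              // skilled enough for current difficulty
            taken++;
            d = max(d, cl.a);         // climbing may raise the difficulty
        }
    }
    return taken;
}
```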

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +2 Vote: I do not like it

    Can you prove this conclusion for 1D? Thanks!

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      Sorry I can't..

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      It seems that the solution you showed is based on the classical "Interval Covering Problem"? Each alpinist can be seen as an interval $$$[s_i,max(s_i,a_i)]$$$ (it is not guaranteed that $$$a_i\ge s_i$$$), and our goal is to select as many disjoint intervals as we can. We can solve the problem with a simple greedy algorithm.

      (Just my opinion... maybe incorrect)

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    This is also how I solved it: sort by (a_i + s_i) increasing and break ties by s_i. May I ask if the solution to the weighted version is any different?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Sorry, did you mean that there exists an answer with $$$a_i + s_i$$$ non-decreasing? And if so, why? Thanks.

»
3 years ago, # |
  Vote: I like it +10 Vote: I do not like it

Another easy solution for problem E is:

There are two cases. The first case is $$$K > sqrt(N)$$$:

In this case it's obvious that any student will buy at most $$$sqrt(N)$$$ tickets so the answer is $$$ (\sum_{i=l}^{n} min(A_l,\dots,A_i))$$$ where $$$i \equiv l\ (\textrm{mod}\ K)$$$.

This can be done easily by using sparse table to get min query in $$$O(1)$$$.

In the second case, for each $$$m \le K$$$ do the following algorithm:

We process the queries offline and iterate over the elements in reverse, building a monotonic stack. Having element $$$j$$$ in this stack means that $$$min(A_l,\dots,A_i) = A_{stk_j}$$$ for all $$$stk_j \leq i < stk_{j+1}$$$.

I can easily calculate the contribution of element $$$i$$$ to the answer by counting the number of indexes with $$$i \equiv m \space (mod K)$$$.

The rest is just prefix sum on the contribution and binary search to handle stack elements at the end of the range.
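For the $$$K > sqrt(N)$$$ case, the O(1) range-min structure can be a standard sparse table; a generic self-contained sketch (my own names, not the commenter's code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sparse table for O(1) range minimum queries after O(N log N) build.
struct SparseMin {
    vector<vector<long long>> st;   // st[j][i] = min of a[i .. i + 2^j - 1]
    SparseMin(const vector<long long>& a) {
        int n = (int)a.size(), lg = 32 - __builtin_clz(n);
        st.assign(lg, a);
        for (int j = 1; j < lg; j++)
            for (int i = 0; i + (1 << j) <= n; i++)
                st[j][i] = min(st[j - 1][i], st[j - 1][i + (1 << (j - 1))]);
    }
    long long query(int l, int r) {  // min over [l, r], inclusive
        int j = 31 - __builtin_clz(r - l + 1);
        return min(st[j][l], st[j][r - (1 << j) + 1]);
    }
};
```

Then, for a query starting at l, you walk l, l+K, l+2K, ... (at most about sqrt(N) stops) and add `query(l, i)` at each stop.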

»
3 years ago, # |
Rev. 6   Vote: I like it +89 Vote: I do not like it

I have an easy-to-understand greedy method for Div1 D.

In fact, we can divide all the alpinists into two parts: a <= s and a > s.

For the first part, we can sort it in the order of $$$s$$$.

All the alpinists whose $$$a$$$ is greater than $$$d$$$ can be chosen in this order.

When we consider the a > s part, we can find that the alpinists we choose can be transformed into some disjoint segments, which means that if we choose one from a segment, we may give up another segment.

We can insert those segments into the first part in the order of $$$a$$$.

If we can insert a segment without any conflict, we insert it.

Otherwise, we can prove that the segment affects at least one alpinist in the first part, but it can't give us a bigger contribution.

So we can solve the problem with two pointers.

The algorithm has time complexity $$$O(n \log n)$$$.

Sorry for poor English.

If you want to see the code: 133058942

Maybe a little similar to the tutorial. (C2020jzm told me.)

»
3 years ago, # |
Rev. 2   Vote: I like it +6 Vote: I do not like it

"This sum differs by a constant from" — may someone please explain this part of the solve function for problem div2E / div1C — Optimal Insertion, I don't really get it so I can't understand how the first sum reduces to the second one?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain div2 C?

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +3 Vote: I do not like it

    Consider an operation on k elements: if all of these elements have the i-th bit equal to 1, their i-th bits are set to 0. So each operation deletes either 0 or k set bits at every position. Let cnt(i) = the number of a[j] that have the i-th bit set. The solution is to iterate g from 1 to n and output g if $$$cnt(i)$$$ is divisible by $$$g$$$ for all $$$0 \le i \le 29$$$.
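That check can be written directly; a short sketch (my own code, assuming $$$a_i < 2^{30}$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns every k in [1, n] such that, for each bit position, the number
// of array elements with that bit set is divisible by k (the condition
// described above for 1601A).
vector<int> valid_ks(const vector<int>& a) {
    int n = (int)a.size();
    array<int, 30> cnt{};              // cnt[bit] = how many a_i have `bit` set
    for (int x : a)
        for (int bit = 0; bit < 30; bit++)
            if (x >> bit & 1) cnt[bit]++;
    vector<int> ks;
    for (int k = 1; k <= n; k++) {
        bool ok = true;
        for (int bit = 0; bit < 30; bit++)
            if (cnt[bit] % k != 0) { ok = false; break; }
        if (ok) ks.push_back(k);
    }
    return ks;
}
```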

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

There is a typo in E: $$$min(b_{l+k}, b_{l+2k}+\dots+b_{l+tk})$$$ should be $$$min(b_{l+k}, b_{l+2k}, \dots, b_{l+tk})$$$.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

In the first problem, why can't we give the answer a = empty string, b = given string, since the empty string is a subsequence of any string?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Read again: the question clearly says that both should be non-empty strings. That's why we can't.

»
3 years ago, # |
  Vote: I like it +21 Vote: I do not like it

A HUGE thanks to KiKoS. Problem 1601B - Frog Traveler is totally adorable. The best part is that I didn't have to IF-check every border violation, thanks to the perfect input data in this task. Ohhhh my, it's absolutely perfect!

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Isn't $$$O(n \log^2 n)$$$ supposed to work for Div. 1 C? I have a solution using DP with divide and conquer and I get TLE.

»
3 years ago, # |
  Vote: I like it +1 Vote: I do not like it

Shouldn't the complexity of 1601A - Array Elimination be $$$O(n \log C + \sqrt{C})$$$? Or can you find all divisors faster than $$$O(\sqrt{C})$$$ without too-complicated number-theory algorithms?

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +21 Vote: I do not like it

    Just iterate over k and check whether cnt[i] % k == 0, where cnt[i] means the count of numbers in which the i-th bit is on.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +6 Vote: I do not like it

      Oh, indeed, I'm dumb. We only need the divisors of a value that is not greater than n.

»
3 years ago, # |
  Vote: I like it +1 Vote: I do not like it

Can anyone please advise why this submission for Div2 D gets WA on test 6? I tried to implement what is written in the editorial, and was hoping to get a TL so that I could do further optimizations (I just wanted to make sure that I got the idea right). Thank you!

»
3 years ago, # |
  Vote: I like it +7 Vote: I do not like it

big thanks for the round! great problems, i liked Div2E the most

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    Can you tell us your solution for E, please? I can't understand the one described in the editorial.

»
3 years ago, # |
  Vote: I like it +5 Vote: I do not like it

What would be the solution for div2C/div1A if the operation were OR or XOR instead of AND?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For Div1D I sorted by max(s_i, a_i); if two indices i and j are such that max(s[i],a[i]) == max(s[j],a[j]), then the one with lower skill goes first. Then I just greedily try to climb according to the sorted order above. Can anyone prove/disprove my approach? I got AC but can't actually prove my solution.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Bonus $$$O(N)$$$ solution for Div2 D/Div1 B here

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Did anyone use the method in the editorial to solve Div. 1 C? Would it be OK to share an implementation? I got WA on test 8. Thanks.

»
3 years ago, # |
  Vote: I like it +3 Vote: I do not like it

For 1C/2E, in the editorial, could you tell me what "This sum differs by a constant" means in the "solve" section? I can't find the constant... please.

»
3 years ago, # |
  Vote: I like it +15 Vote: I do not like it

I have another solution for 1602D - Frog Traveler which works in $$$O(N \log N)$$$. I created a graph in the form of a segment tree and ran a 0/1 BFS on it. Have a look at it here: 133096348.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I tried implementing div2 C / div1 A, but it gives a runtime error. Can someone please suggest where I went wrong? Your text to link here...

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Your divisors array is only of size 31, but valu can be anywhere in 0 <= valu <= N, so you are accessing elements out of the bounds of 31. You should calculate the answer only for valu, not for all possible values of "valu".

»
3 years ago, # |
  Vote: I like it +1 Vote: I do not like it

Is there a video tutorial for div2 D? If yes, please share the link; if not, ak2006, could you please make a video tutorial for div2 D? I am not able to understand anything in this one. Help please, and thanks in advance!

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Is there any proof of the observation that the array remains constant after n operations? (Div2 B)

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +22 Vote: I do not like it

    Note that if array $$$a$$$ remains constant after step $$$i$$$, it will remain constant after any other step $$$j > i$$$. Now note that if array changes after step $$$i$$$ then at least two groups of equal numbers merged in one. Since there are no more than $$$n$$$ groups initially, there won't be more than $$$n - 1$$$ merges. (But $$$+1$$$ step for the first step, since it's the only step when $$$a_i$$$ may differ from the size of its group.)

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I got a wrong result from __lg in C++. What was going on? When I calculate it by myself, it works right. Can somebody explain what happened? Here 133143224 is my code for 1E; a runtime error occurs. But when I replaced it with the two lines below it, I got accepted.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Does the linear time solution to D involve monotonic queue/stack?

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +3 Vote: I do not like it

    No, it involves noticing that if you made a jump to some height, you should not jump to that height again. You go through every jump from the bottom and you push unique places where you slid to the queue. Now, when you process the next level, you should not repeat those jumps which you already did (from the bottom), you should only jump to the lower heights if possible and again, push to the queue unique places after you slid. So you record the lowest height you jumped so far and make all the jumps that go past that height.

    When you start jumping from the bottom, it would be a continuous segment. When you process the next place, it is impossible to create a new disjoint continuous segment (you start within the segment, which was covered by the initial jumps and you make another continuous segment of jumps). So the new jumps would extend that continuous segment and so on.

    The amortized complexity would be O(n) as the total number of jumps would be n.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      Wow that's beautifully explained! Thanks!

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Sorry, I can't understand div2 D's tutorial. How do we build the minimum segment tree, and how do we use this segment tree to find all u for every j?

»
3 years ago, # |
  Vote: I like it +1 Vote: I do not like it

Can anyone explain how to solve C — Optimal Insertion using divide and conquer or share some code?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

My simple O(n) solution for div2 D: submission

»
3 years ago, # |
Rev. 3   Vote: I like it +3 Vote: I do not like it

In problem Frog Traveler, how does the editorial traverse through all j's at most once?

For any particular value of $$$j-a_j$$$ there can also be values of $$$j$$$ with $$$j-a_j\leq u$$$ but $$$j\lt u$$$.
So, we can't simply use all indices with a fixed value of $$$j-a_j$$$.
I tried maintaining a map such that $$$map[x]$$$ holds all indices $$$j$$$ with $$$j-a_j=x$$$, sorted in increasing order, but it leads to TLE. Any solution similar to the editorial's might help.

Also there must be a correction in the editorial:
$$$dp_i = 1+min(dp_j+b_j)$$$ must be replaced with $$$dp_i=1+min(dp_{j+b_j})$$$

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it +11 Vote: I do not like it

    There are two facts:

    First, the described solution uses a Segment Tree which keeps, for each index $$$j$$$, the value $$$j - a_j$$$. For $$$u$$$ you ask range minimums only on the suffix $$$[u, n]$$$. If the minimum $$$m = j - a_j \le u$$$, you make a move at such $$$j$$$ and replace $$$j - a_j$$$ with $$$\mathit{INF}$$$. That's why you take each $$$j$$$ exactly once.

    Second (more interesting) fact: actually, you can ignore the condition on $$$j$$$, i.e. you can always take a non-visited $$$j$$$ with minimum $$$j - a_j$$$. So you can keep the segments $$$[j - a_j, j]$$$ either in a set, or just sorted in a vector. It can be proven that making prohibited steps (from $$$u$$$ to $$$j < u$$$) is never optimal.

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      you can always take non-visited j with minimum j−aj

      But $$$j\lt u$$$ would mean $$$u$$$ can't be reached from $$$j$$$ in one step. So, updating $$$dp[j]$$$ as $$$d+1$$$ would be incorrect.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For C: Optimal Insertion, am I missing something or does the editorial not explain: how is it that we can be sure that the greedy choice of p_mid will result in an optimal solution?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    The greedy choice of p_mid is optimal as you covered the whole available range. Then you just limit those ranges based on the fact that p[i] <= p[i+1].

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Hm, I still don't totally understand. Perhaps more specifically, why is it not possible that there is some other solution out there that chooses a different p_mid value which gives more inversions for b_mid but makes up for it with fewer inversions in the recursive calls?

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Because the ranges would be limited suboptimally. If the optimal choice is p[mid] = p_mid and instead you would choose p_mid-1, some of the values in the left call might get worse because the max value for them would be p_mid-1. However, it may turn out that it would be better if they used p_mid instead. In the right call, they would not use the p_mid-1 because the optimal value is p_mid and we know that p[mid+1] >= p[mid] = p_mid > p_mid-1.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          After mulling on this for a while I think I understand. Thanks for helping me.

          For future readers, here's how I convinced myself. Consider placing b_mid somewhere in the array A, particularly, the position that minimizes inversions between the elements of A and b_mid. So we have [ (elements of a), b_mid, (elements of a) ].

          Now we want to place some B > b_mid. We will prove that even disregarding inversions between B and b_mid, it is never optimal to place B to the left of wherever b_mid is.

          The number of inversion for b_mid is (# of elems to my left that are bigger than me) + (# of elems to my right that are smaller than me). Consider marching b_mid to the left. The former quantity may decrease and the latter quantity cannot decrease. But regardless of how much the former quantity decreases, we know that it is never "worth it" since we have found the current placement of b_mid to overall minimize inversions.

          Now consider what happens when B is in place of b_mid. If (# of elems to my left that are bigger than me) decreases as we march left, then it must have also done so for b_mid, since such element X > B would also be X > b_mid since B > b_mid. But we knew it was never "worth it" for b_mid, so it will also never be "worth it" for B. And as with b_mid, the latter quantity (# elems to my right that are smaller than me) again will not decrease as we march left. Thus, the optimal placing of B must be to the right of b_mid.

»
3 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Would someone please offer a proof for B? Specifically,

1) Prove that at most $$$n$$$ steps after transformation, $$$a$$$ becomes repetitive.

2) Prove that at most $$$\log n$$$ steps after transformation, $$$a$$$ becomes repetitive.

If both 1) and 2) are unknown, how do you come up with either of these 2 conjectures?

Thanks a lot.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it
    1. After the first step, each number can only become larger or stay the same. The numbers cannot become larger than $$$n$$$, so this proves that it takes no more than $$$n$$$ steps.
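
    A quick way to sanity-check the $$$\log n$$$ bound is to simulate the transformation until it reaches a fixed point (my own sketch, assuming the transformation from the statement: each element is replaced by the number of its occurrences in the current array):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns every state of the array up to and including the first stable
// one; states.size() - 1 is the number of steps until the fixed point.
vector<vector<int>> simulate(vector<int> a) {
    vector<vector<int>> states{a};
    while (true) {
        map<int, int> cnt;                 // occurrences of each value
        for (int v : a) cnt[v]++;
        vector<int> nxt(a.size());
        for (size_t i = 0; i < a.size(); i++) nxt[i] = cnt[a[i]];
        if (nxt == a) break;               // fixed point reached
        a = nxt;
        states.push_back(a);
    }
    return states;
}
```

    For example, [2, 1, 1, 4, 3, 1, 2] stabilizes at [4, 3, 3, 4, 4, 3, 4] after 3 steps, consistent with the roughly-$$$\log n$$$ bound.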
»
3 years ago, # |
Rev. 2   Vote: I like it +3 Vote: I do not like it

Could KiKoS provide a sample implementation of the logic given in the editorial of Frog Traveler?

It's really a pain trying to decipher what the editorial wants to say, and even to believe that it will work correctly if we try to implement it after reading.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Hi! I'm sorry you didn't like the editorial. Still, the solution (in my opinion) has two main ideas:

    1. Calculate dp using BFS:

    ~~~~~
    queue<int> q;
    q.push(0);
    while (!q.empty()) {
        int x = q.front(); q.pop();
        FOR EACH i WHERE (i > x && i - a_i <= x) IS TRUE {
            for (int j : gr[i]) {  // gr is the reversed "i -> i + b_i" fall graph
                if (dp[j] > dp[x] + 1) {
                    dp[j] = dp[x] + 1;
                    q.push(j);
                }
            }
        }
    }
    ~~~~~

    2. The indices "FOR EACH i WHERE (i > x && i - a_i <= x) IS TRUE" can be found in O(n log n) total if we find each i with segtree.index_of_min([x, maxn]), where the segtree is built on the array i - a_i.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Thanks I got your idea.

      Thanks again for replying me :).

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I have found an $$$O(N)$$$ solution for 1602D - Frog Traveler. I couldn't do it during the contest, but after some thinking I came to this idea: perform a BFS from the bottom of the well to all nodes it can reach above it, and store the maximum height reached in an integer $$$start$$$. Then, when exploring all nodes one can visit from a new source node, we only check nodes at heights greater than $$$max(start, source$$$ $$$node)$$$, and we keep updating $$$start$$$ as the height increases.

I might not be very clear about how or why this works so here's my submission for the same.

code : 133240574

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can somebody tell me the key point for 1F? I don't understand the editorial.

»
3 years ago, # |
  Vote: I like it +3 Vote: I do not like it

In problem B div 1, you can do without a segment tree by storing a std::set of the indices that BFS has not yet visited. Then, to find an index reachable from i, take j = lower_bound(i - a[i]) and check that j exists and *j <= i. Like this: https://codeforces.net/contest/1602/submission/133263356
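
A minimal sketch of this set-based approach (my own reconstruction, not the linked submission; I assume the usual statement: the frog starts at depth n, from depth x it may land on any depth in [x - a[x], x], after landing on depth j > 0 it slides to j + b[j], and depth 0 is the goal):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns the minimum number of jumps to reach depth 0, or -1.
// a and b are 1-indexed (index 0 is unused); slides satisfy j + b[j] <= n.
int minJumpsSet(int n, const vector<int>& a, const vector<int>& b) {
    vector<int> dist(n + 1, -1);   // dist[x]: jumps to reach depth x after sliding
    set<int> unvisited;            // landing depths BFS has not tried yet
    for (int i = 0; i < n; i++) unvisited.insert(i);
    deque<int> q;
    dist[n] = 0;
    q.push_back(n);
    while (!q.empty()) {
        int x = q.front(); q.pop_front();
        if (x == 0) break;                        // surface reached
        while (true) {
            // smallest untried landing depth in [x - a[x], x]
            auto it = unvisited.lower_bound(x - a[x]);
            if (it == unvisited.end() || *it > x) break;
            int j = *it;
            unvisited.erase(it);
            int y = (j == 0) ? 0 : j + b[j];      // slide after landing
            if (dist[y] == -1) {
                dist[y] = dist[x] + 1;
                q.push_back(y);
            }
        }
    }
    return dist[0];
}
```

Each landing depth is erased from the set at most once, and BFS order guarantees the first landing on a depth uses the minimal number of jumps, which gives the O(n log n) bound.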

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    What's the time complexity of this?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      $$$O(n \log n)$$$. Each item is removed from the set at most once (each removal costs $$$O(\log n)$$$), and for each index we make at most one extra lower_bound call, because BFS can visit each index only once.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Hello, it was a great contest and a nice editorial, but the solution for problem div1 F is not clear from the editorial. Can you please post a code solution for it? Thanks ch_egor cdkrot LHiC

»
3 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

There is a typo in the editorial of the problem 1602D - Frog Traveler:

It should be $$$dp[i] = min(dp[j + b[j]] + 1)$$$ instead of $$$dp[i] = min(dp[j] + b[j] + 1)$$$

Please correct me if I am wrong and the editorial is correct

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I think you are right. I suppose it should be $$$\min\lbrace dp_{j + b_j}\rbrace + 1$$$, too.

»
3 years ago, # |
  Vote: I like it +51 Vote: I do not like it

I think the divide and conquer method of question 1601C is very difficult to think about.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +7 Vote: I do not like it

    I disagree: coming to the conclusion that $$$p_1 \le p_2 \le \ldots \le p_m$$$ is fairly straightforward, and after that it is a well-known trick that if the optimal answer satisfies the index condition, we can use divide and conquer.

    Maybe this was your first time seeing this trick, so you should remember it now. The same trick is used in the divide and conquer DP optimization, so you should definitely read about that, as that was where I saw this trick for the first time.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

1601B Frog Traveler can be solved in O(n) time: just regard it as BFS, i.e. a shortest-path problem https://codeforces.net/contest/1602/submission/136659435

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I solved the Frog Traveler problem in O(n) time and constant additional space. The approach is very simple; the inspiration was taken from the similar problem "Minimum number of jumps to reach end" on GfG. The idea is to keep two variables, Reach and NextReach, and a counter for the jumps taken so far. With the current value of the counter we can climb up to Reach, and with one additional jump we can climb to NextReach. As soon as we are at a level that is less than our Reach so far, we increment the counter to take an additional jump, and now we can climb to NextReach; while going from Reach to NextReach we find the next value of NextReach. When NextReach reaches 0, we simply terminate and output the result.

here is my code for the same Problem B/D Frog Jumps Solution

Also the gfg problem link Simpler Similar Problem
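
For reference, the classic greedy from the cited problem (minimum jumps to reach the last index of an array, where jump[i] is the maximum jump length from index i) can be sketched as follows. This is a sketch of the simpler inspiration problem only, not of the Frog Traveler solution itself:

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(n) time, O(1) extra space; returns -1 if the last index is unreachable.
int minJumpsToEnd(const vector<int>& jump) {
    int n = jump.size();
    if (n <= 1) return 0;
    int reach = 0;      // farthest index coverable with `count` jumps
    int nextReach = 0;  // farthest index coverable with one more jump
    int count = 0;
    for (int i = 0; i < n - 1; i++) {
        nextReach = max(nextReach, i + jump[i]);
        if (i == reach) {                       // must spend another jump here
            if (nextReach == reach) return -1;  // cannot move forward
            count++;
            reach = nextReach;
        }
    }
    return count;
}
```

The same two-variable pattern (current frontier plus next frontier) is what the comment above adapts to the well, with depths decreasing instead of indices increasing.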

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Does anyone know why this (143363464) solution for Div 1 B doesn't work?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

The bounds on problem $$$C$$$ were really tight, unfortunately :(. My solution, which differs from the editorial's, is $$$\mathcal{O} ((N + M) \log (N + M))$$$. It took me lots of tries to get it to pass because of the high constant factor.

First, it's best to insert the elements of $$$b$$$ in increasing order (this is the same observation as the editorial's). The main idea is that we build a segment tree where the $$$i$$$th entry is the cost incurred by inserting the element $$$0$$$ into the array at the $$$i$$$th position. We can update for inserting the $$$1$$$st element by updating the positions holding elements $$$0, 1$$$. To transition from $$$1$$$ to $$$2$$$, update the positions holding $$$1, 2$$$, and so forth. The main point is that very few updates are needed per transition.

Code: 145697883

»
2 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone help me figure out why 167665277 gets TLE?

»
20 months ago, # |
Rev. 4   Vote: I like it 0 Vote: I do not like it

I have an alternate solution to div1 D using a segment tree; I solved the problem by considering the climbers in increasing order of a[i] rather than s[i]. Link to submission

Detailed Explanation
»
15 months ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

D1-E can be solved without the observation that "such a sum is independent of the remainder of the division of l by k".

Instead, we can build a tree on the indices (the parent of $$$i$$$ is $$$nxt_i$$$) and weight the edges according to the value of $$$a_i$$$ and the separation between $$$i$$$ and $$$nxt_i$$$ (some casework is involved here).

We can then process the queries offline (answering all queries with the same value of $$$l \% k$$$ together) using binary lifting and any tree path-update path-sum structure. This solution takes $$$O(n\log{n})$$$ time.

Implementation: link

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it