
Pyqe's blog

By Pyqe, history, 2 years ago, In English

1740A. Factorise N+M

Author: Pyqe
Developer: Pyqe

Tutorial

1740B. Jumbo Extra Cheese 2

Author: Pyqe
Developer: errorgorn, Pyqe

Tutorial

1740C. Bricks and Bags

Author: Pyqe
Developer: Pyqe

Tutorial

1740D. Knowledge Cards

Author: Nyse
Developer: steven.novaryo

Tutorial

1740E. Hanging Hearts

Author: Pyqe
Developer: yz_

Tutorial

1740F. Conditional Mix

Author: Pyqe
Developer: errorgorn

Tutorial

1740G. Dangerous Laser Power

Author: Pyqe
Developer: Pyqe

Tutorial

1740H. MEX Tree Manipulation

Author: steven.novaryo
Developer: steven.novaryo, Pyqe

Tutorial

1740I. Arranging Crystal Balls

Author: NeoZap
Developer: errorgorn, Pyqe

Tutorial
Editorial of Codeforces Round 831 (Div. 1 + Div. 2)

»
2 years ago, # |

I solved E without using dp. If someone could prove why it works, that would be great; or maybe someone could provide a counter testcase.

Approach

  • »
    »
2 years ago, # ^ |

Interesting. You constructed the array s, right?

  • »
    »
2 years ago, # ^ |

Can you explain your approach? I tried this question by building the array just like you.

I noticed that if we take a given node "X", then the nodes on the longest path from X to a leaf in the subtree rooted at "X" can be made to have the same value.

So I came up with an approach like this: for each level, for each node Y in that level, I added up the length of the longest path from node Y to a leaf.

After getting the answer for all levels, I took the max of them, but I am missing something.

    • »
      »
      »
2 years ago, # ^ |

My approach: instead of going to an arbitrary subtree, I choose the one with the deepest node; when we have visited all vertices in its subtree, I assign a value, push the subtree minimum into the sequence, and finally calculate its LNDS.

      • »
        »
        »
        »
2 years ago, # ^ |

Your explanation is not clear. Whose subtree? What do you do with the subtree you choose? What value do you assign?

        • »
          »
          »
          »
          »
2 years ago, # ^ |
Rev. 6

Sorry if my explanation wasn't clear enough; I will explain it more thoroughly.

I maintain a timer variable, initialized to 1, that assigns a value to each node. We traverse the tree in dfs order, but we assign a value to a node only once we have traversed all nodes in its subtree, and the value that gets pushed into s is the subtree minimum of that node; then we increment the timer.

Now for the traversal, say we are at a node x. Instead of visiting its subtrees in arbitrary order, we start with the subtree containing the deepest node (maximum height). For that I calculated the depth of each vertex, took the maximum depth (height) for each node, and sorted the adjacency list of each node by height in non-increasing order.

Finally, I calculate the LNDS of the sequence s that we created.

          178446226
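The approach described above can be sketched in code like this. This is only my reading of the comment, with helper names of my own invention, and as noted in the thread its correctness is unproven:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the approach from the comment above (correctness unproven):
// visit children ordered by subtree height, deepest first; when a node's
// whole subtree is finished, push that subtree's minimum post-order label
// into s; the answer is the LNDS of s. All names here are mine.
struct Solver {
    vector<vector<int>> adj;   // children lists, 0-indexed, root = 0
    vector<int> height;        // height[v] = longest path v -> leaf, in edges
    vector<int> s;
    int timer = 0;

    int computeHeight(int v) {
        height[v] = 0;
        for (int c : adj[v]) height[v] = max(height[v], computeHeight(c) + 1);
        return height[v];
    }
    // returns the minimum label assigned inside v's subtree
    int dfs(int v) {
        sort(adj[v].begin(), adj[v].end(),
             [&](int a, int b) { return height[a] > height[b]; });
        int mn = INT_MAX;
        for (int c : adj[v]) mn = min(mn, dfs(c));
        ++timer;               // label v in post-order
        mn = min(mn, timer);
        s.push_back(mn);       // push the subtree minimum
        return mn;
    }
    int solve(int n, const vector<vector<int>>& children) {
        adj = children;
        height.assign(n, 0);
        computeHeight(0);
        dfs(0);
        // LNDS via the classic tails + upper_bound trick
        vector<int> tails;
        for (int x : s) {
            auto it = upper_bound(tails.begin(), tails.end(), x);
            if (it == tails.end()) tails.push_back(x);
            else *it = x;
        }
        return (int)tails.size();
    }
};
```

For example, a chain 1→2→3→4 yields 4, and a root with two leaf children yields 2, matching the dp answers discussed elsewhere in this thread.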

»
2 years ago, # |

How to prove the first statement in the editorial for problem F?

  • »
    »
2 years ago, # ^ |
  • »
    »
2 years ago, # ^ |

    Well, the first thing that helps for this problem is to consider that all multisets have the same size, which is $$$n$$$. Simply, add additional zeros representing empty sets until you have $$$n$$$ numbers in the multiset.

One possible way in which we realize that such a condition is equivalent, and can prove it, is to note the following: if we are able to create a multiset configuration $$$(M_i)_{i \in [0, n)}$$$, we can always move elements from a bigger set $$$M_i$$$ to a smaller one $$$M_j$$$ (with $$$M_j < M_i$$$) and still maintain a valid configuration. The reason is that in $$$M_i$$$ there are at most $$$M_j$$$ elements that are already in $$$M_j$$$, so the rest, $$$M_i - M_j$$$ of them, can be moved to $$$M_j$$$ if necessary.

This implies that if we consider the multisets sorted, which is what makes more sense, we can always move elements from $$$M_i$$$ to $$$M_{i+1}$$$. Thus, if a configuration $$$(M_i)_{i \in [0, n)}$$$ is valid, so is any configuration $$$(N_i)_{i \in [0, n)}$$$ with $$$\sum_{j = 0}^i N_j \le \sum_{j = 0}^i M_j$$$. And this is the key to the problem. We do not need to consider all multisets, only those that are maximal in this sense, and then we can count how many configurations lie below them.

By the way, once we get that formula, it makes no sense to keep considering the multisets in the way we are doing it. Instead, use the sequence of accumulated sums to represent the multiset. Then the condition becomes $$$N_i \le M_i$$$ for all $$$i$$$, which reads much better. Hence, our multisets are represented as increasing sequences $$$(M_i)_{i \in [0, n)}$$$ with decreasing $$$M_{i+1} - M_i$$$. Or even better, as increasing sequences $$$(M_i)_{i \in [0, n]}$$$ with $$$M_0 = 0$$$, $$$M_n = n$$$ and decreasing $$$M_{i+1} - M_i$$$.

The question is, which multisets are maximal? Two multisets are incomparable if $$$N_i < M_i$$$ and $$$M_j < N_j$$$ for some indices $$$i$$$ and $$$j$$$. But... well, this was of little help for me. So, I thought: at the very least I know that the lexicographically biggest is maximal. Which $$$(K_i)_{i \in [0, n]}$$$ is the lexicographically biggest? Well, if we try to fit as many numbers as possible into the first set, we will have $$$K_1 = \sum_{j = 0}^n \min(\operatorname{cnt}_j, 1)$$$. Then, we will subtract $$$1$$$ from every $$$\operatorname{cnt}$$$ and repeat. In the end, we get the multiset $$$K_i = \sum_{j = 0}^n \min(\operatorname{cnt}_j, i)$$$. (And the right hand side is precisely the formula of the statement. This, along with our previous observation, proves that the condition is sufficient.)

However, now that I know the formula I also know that we cannot make another maximal multiset, because for $$$M_i$$$ we can only use at most $$$\min(\operatorname{cnt}_j, i)$$$ occurrences of each number $$$j$$$. That means $$$N_i \le \sum_{j = 0}^n \min(\operatorname{cnt}_j, i)$$$ for all configurations $$$(N_i)_{i \in [0,n)}$$$. Thus, the lexicographically biggest multiset is, in fact, the only maximal one.

    Ending note: Yes, we can prove necessity from the very beginning and sufficiency if we consider the lexicographically biggest multiset and the first observation, without any notion of maximal elements. However, I am not sure one would get there magically. Instead, I believe the first observation must lead us to think of maximal elements and then we can either guess that there is just one or simply deduce it as it has been shown.

    • »
      »
      »
2 years ago, # ^ |

Why do we get $$$\min(\operatorname{cnt}_j, i)$$$ in the formula for $$$K_i$$$? I get the $$$\operatorname{cnt}_j$$$ part, but not the reason for the $$$i$$$. It is like saying we repeat an element in one of the sets we built, right? But that should be invalid.

      • »
        »
        »
        »
2 years ago, # ^ |

Well, if you want to make each set the largest possible each time, you take one copy of each value that still has at least one occurrence, which is why $$$K_1 = \sum_{j \in [0,n)} \min(\operatorname{cnt}_j, 1)$$$. Then, you subtract one from each count and get

        $$$K_{i+1} = K_i + \sum_{j \in [0,n)} \min(\max(\operatorname{cnt}_j - i, 0), 1),$$$

because each count has already been decreased $$$i$$$ times (never going below zero).

        Then, by induction, if you suppose $$$K_i = \sum_{j \in [0,n)} \min(\operatorname{cnt}_j, i)$$$, you get

        $$$ K_{i+1} = \sum_{j \in [0,n)} \min(\operatorname{cnt}_j, i) + \sum_{j \in [0,n)} \min(\max(\operatorname{cnt}_j - i, 0), 1), $$$

        which equals $$$\sum_{j \in [0,n)} \min(\operatorname{cnt}_j, i + 1).$$$
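The induction can also be sanity-checked numerically. Here is a small sketch (function names are mine) comparing the greedy round-by-round construction of the $$$K_i$$$ against the closed form $$$K_i = \sum_j \min(\operatorname{cnt}_j, i)$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy process: in each round, take one copy of every value that still
// has a positive count; K[i] is the total taken after i rounds.
vector<long long> greedyK(vector<long long> cnt) {
    long long n = accumulate(cnt.begin(), cnt.end(), 0LL);
    vector<long long> K{0};
    long long acc = 0;
    for (long long i = 1; i <= n; ++i) {
        for (auto& c : cnt)
            if (c > 0) { --c; ++acc; }
        K.push_back(acc);
    }
    return K;
}

// Closed form from the comment above: K[i] = sum_j min(cnt_j, i).
vector<long long> closedK(const vector<long long>& cnt) {
    long long n = accumulate(cnt.begin(), cnt.end(), 0LL);
    vector<long long> K{0};
    for (long long i = 1; i <= n; ++i) {
        long long s = 0;
        for (long long c : cnt) s += min(c, i);
        K.push_back(s);
    }
    return K;
}
```

For example, with counts {3, 1, 2} both routes give K = 0, 3, 5, 6, 6, 6, 6.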

        • »
          »
          »
          »
          »
2 years ago, # ^ |

          I see, I think what I didn't get is that $$$K_i$$$ is cumulative

          Thanks

  • »
    »
2 years ago, # ^ |

    You can conclude and prove this lemma with mincut maxflow — if you try to find a maximum matching between a chosen multiset of sizes, and the frequencies of the elements, then try to find the condition by which all cuts are of size at least $$$n$$$.

    It's pretty lengthy (I can elaborate if you want), but you can arrive at this condition without magically guessing it (the magic here comes from the magic of MCMF).

»
2 years ago, # |

My logic is exactly the same as the editorial's, but it gives WA on test 2.

Solution link: https://codeforces.net/contest/1740/submission/178662191

Can someone please help me? Thanks.

  • »
    »
2 years ago, # ^ |

    $$$>$$$ or $$$\geq$$$ ?

    • »
      »
      »
2 years ago, # ^ |

I think it should be >: even if we have filled all slots, we can remove some in the next iteration; and if we don't, we will report that the answer doesn't exist.

»
2 years ago, # |

What is $$$k$$$ and $$$z$$$ in the first statement in the editorial for problem F?

»
2 years ago, # |

In problem E, can anyone explain why we would ever do this: "If card i is used in the longest non-decreasing subsequence, then the maximum answer is the maximum value of dist(j,i) for all j∈Di."?

Can someone give a small counterexample where doing just the other case ("If card i is not used in the longest non-decreasing subsequence, then the maximum answer is the sum of dp values of all of the children of i.") fails?

  • »
    »
2 years ago, # ^ |

You can consider a chain tree, e.g. 1->2->3->4. Here it always makes sense to select the root node, as we can never get a better answer by excluding the root.

    • »
      »
      »
2 years ago, # ^ |
Rev. 3

I did consider this case in my dp with the line if(adjList[u].size() == 1) dp[u] += 1; my code failed anyway. Below is the dfs:
      
      void dfs(int u, vector<vector<int>> &adjList, vector<int> &dp){
          int answer = 0;
          if(adjList[u].size() == 0){
              dp[u] = 1;
              return;
          }
      
          for(int v: adjList[u]){
              dfs(v, adjList, dp);
              answer += dp[v];
          }
          dp[u] = answer;
          if(adjList[u].size() == 1) dp[u] += 1;
      }
      

      Thanks
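For comparison, the recurrence the editorial describes can be sketched as follows. This is my reading of it (with "longest path" counted in nodes, and helper names of my own): dp[u] is the maximum of the longest u-to-leaf path length and the sum of the children's dp values.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the editorial's recurrence for E as I read it:
//   dp[u] = max(number of nodes on the longest path u -> leaf,
//               sum of dp over u's children).
// Tree given as children lists, root = 0. Names are mine.
pair<int,int> dfsE(int u, const vector<vector<int>>& adj) {
    // returns {dp[u], longest u->leaf path length in nodes}
    int sum = 0, len = 1;
    for (int v : adj[u]) {
        auto [d, l] = dfsE(v, adj);
        sum += d;
        len = max(len, l + 1);
    }
    return {max(len, sum), len};
}
```

On a chain of 4 nodes this gives 4, and on a root with two leaf children it gives 2; the code above differs from it by only counting single-child nodes instead of taking the longest-path option.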

»
2 years ago, # |

In my opinion, the editorial gives the conclusions more than proof or explanation for them, with many sentences like "We can observe/see/obtain/ that ...".

  • »
    »
2 years ago, # ^ |

    Agreed. Now, some of the editorials have been edited to give more explanations about the claims and conclusions. I hope they are more helpful now <3

»
2 years ago, # |

I think the time complexity in problem C should be O(n log n); the writer forgot that he assumed the elements are sorted. (Though n log n will do the work, but still.)

»
2 years ago, # |

Imho, the complexity of problem D is O(n); there is just an iteration over the array.

  • »
    »
2 years ago, # ^ |

    Yea, my solution also runs in O(n). However, I think the priority queue solution is more intuitive to explain and understand. Thanks for mentioning that!

»
2 years ago, # |
Rev. 4

In D, let's say we have the grid:

  • x 1 2
  • x 3 4
  • 5 6 x

where the x's are empty cells. If we can move any card to any cell as long as there is an empty cell, could someone explain how we could move 4 to the cell where 5 currently is (the cell with coordinates (3,1))?

  • »
    »
2 years ago, # ^ |

You can't, but you don't need to. Cards only move if they are the next card going to the exit, or as part of a rotation to let another card past (so you'd never need to move from one 2x2 square into the other). As long as there is an empty square, any card can reach the exit, which is all that matters.

    • »
      »
      »
9 months ago, # ^ |
Rev. 3

Can you please explain how you concluded that it is impossible to move 4 to the place of 5 following the rules? I tried a lot but was not able to move 4 and the empty space together into the lower-left 2x2 square; maybe that is the reason it's impossible? Can you please explain your conclusion?

Edit: I got it. It's because the cycle for the rotation has odd size, so we cannot shift it; had it been even, we could have.

  • »
    »
2 years ago, # ^ |

    Nice catch! We actually did not realise the possibility of that. The tutorial has been edited to have a more correct claim.

»
2 years ago, # |
Rev. 2

Very pleasant round to participate in. Liked E and F a lot; C and B were also pretty nice. Looking forward to seeing more rounds from you!

»
2 years ago, # |
Rev. 3

Did anyone solve problem I like this?

Start from the editorial's O(nm^2) knapsack.

Let's say you have dp[i] after considering $$$f_0,\dots,f_{i-1}$$$. Divide $$$f_i$$$ into $$$K$$$ line segments, and let the $$$k$$$-th segment be $$$a_kx+b_k$$$ for $$$l_k\le x\le r_k$$$.

Then $$$dp[i+1][j]=\min_{0\le k<K}(\min_{l_k\le x\le r_k}(dp[i][j-x]+a_kx+b_k))$$$ can be calculated in O(mK) using a deque or something similar to find range minimums,

so in total it will be O(mn).
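The range-minimum step for a single segment can be sketched with a monotonic deque. This is only a sketch with my own names: it handles one segment (take the elementwise minimum of the results over all segments to combine), and it assumes $$$|a \cdot t| \ll INF$$$ so the sentinel stays recognizable:

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll INF = (ll)1e18;  // "unreachable" sentinel

// One segment of the transition above: nd[j] = min over x in [l, r] of
// dp[j-x] + a*x + b.  Substituting t = j - x gives
//   nd[j] = min_{t in [j-r, j-l]} (dp[t] - a*t) + a*j + b,
// a sliding-window minimum over g(t) = dp[t] - a*t. Names are mine.
vector<ll> applySegment(const vector<ll>& dp, ll a, ll b, int l, int r) {
    int m = (int)dp.size() - 1;
    vector<ll> nd(m + 1, INF);
    deque<int> dq;  // indices t with g(t) strictly increasing
    auto g = [&](int t) { return dp[t] - a * t; };
    for (int j = 0; j <= m; ++j) {
        int tIn = j - l;  // index that becomes valid at this j
        if (tIn >= 0) {
            while (!dq.empty() && g(dq.back()) >= g(tIn)) dq.pop_back();
            dq.push_back(tIn);
        }
        while (!dq.empty() && dq.front() < j - r) dq.pop_front();  // expire
        if (!dq.empty() && g(dq.front()) < INF / 2)
            nd[j] = g(dq.front()) + a * j + b;
    }
    return nd;
}
```

Each index enters and leaves the deque once, so a segment costs O(m) and all K segments cost O(mK), as claimed above.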

»
2 years ago, # |
Rev. 2

In problem E, why is $$$a_i>\max_{j\in D_i}(a_j)$$$ true in the optimal solution?

Edit: It's clear now, thanks for editing the editorial.

»
2 years ago, # |

For question 5, I thought of a solution where the answer is (number of nodes - number of nodes with more than one child). It fails on test case 6. Can anyone please provide a counterexample? I am unable to think of one.

»
2 years ago, # |

Animations like the one in D help to understand better.

»
2 years ago, # |

In problem F, I got AC in $$$O(n^3\log n)$$$ (though it can be optimized to $$$O(n^2\log n)$$$ easily).

Too weak system tests, or too high efficiency?

Submission

»
12 months ago, # |
Rev. 4
Explanation for problem C

First approach that comes to mind is to first of all set two out of the three bags with the heaviest and the lightest bricks.
Then, put all of the remaining bricks in the third bag.
Eg 1: $$$B_1$$$ = {heaviest}, $$$B_2$$$ = {lightest}, $$$B_3$$$ = {all the rest}.
Bu Denglekk will automatically select the lightest brick of $$$B_3$$$ (which will be the second lightest globally). So $$$ans = (heaviest - lightest) + (second\_lightest - lightest)$$$.
Alternatively,
Eg 2: if we set $$$B_1$$$ = {lightest}, $$$B_2$$$ = {heaviest} and $$$B_3$$$ = {all the rest}, then Bu Denglekk will select the heaviest brick of $$$B_3$$$ (which will be the second heaviest globally). So $$$ans = (heaviest - lightest) + (heaviest - second\_heaviest)$$$.

This approach fails and the scenario where it might fail is the intuition for the correct solution.

What if, we have $$$B_1$$$ = {heaviest}, $$$B_2$$$ = {lightest, second_lightest}, $$$B_3$$$ = {all the rest}.
Again, Bu Denglekk will select the lightest brick from $$$B_3$$$ as before (which will be the third lightest globally). Also, he will pick up the second lightest from $$$B_2$$$. Now compare it to Eg 1: although the term $$$(heaviest - lightest)$$$ is certainly greater than $$$(heaviest - second\;lightest)$$$, maybe the second term of this new configuration makes up for the loss in score. That is, it might be possible that $$$(third\;lightest - second\;lightest)$$$ is much greater than the term $$$(second\;lightest - lightest)$$$ of Eg 1.
Consider the following input if not clear: $$${51, 386, 2159, 2345, 2945}$$$

Basically the catch is to notice and exploit the non-uniform differences between the consecutive elements in a sorted ordering of the weights of the bricks.

Thinking of how to handle these possibilities is all that's left now. (Left to the reader :P)
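If it helps to hunt for counterexamples to greedy configurations like Eg 1 and Eg 2, here is a tiny exhaustive brute force for the game (exponential, so only for very small n; function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Exhaustive check for small n: try every assignment of bricks to the
// three bags (each bag non-empty). Bu Denglekk then picks one brick per
// bag minimizing |w1-w2| + |w2-w3|; Pak maximizes over assignments.
long long bruteC(const vector<long long>& w) {
    int n = (int)w.size();
    long long best = -1;
    int total = 1;
    for (int i = 0; i < n; ++i) total *= 3;  // 3^n assignments
    for (int mask = 0; mask < total; ++mask) {
        int m = mask;
        array<vector<long long>, 3> bags;
        for (int i = 0; i < n; ++i) { bags[m % 3].push_back(w[i]); m /= 3; }
        if (bags[0].empty() || bags[1].empty() || bags[2].empty()) continue;
        long long mn = LLONG_MAX;  // Bu's best (minimizing) reply
        for (long long w1 : bags[0])
            for (long long w2 : bags[1])
                for (long long w3 : bags[2])
                    mn = min(mn, llabs(w1 - w2) + llabs(w2 - w3));
        best = max(best, mn);
    }
    return best;
}
```

For instance, bruteC({1, 2, 3}) == 3 and bruteC({1, 2, 100}) == 197; running it against a greedy solution on random small inputs is a quick way to expose the failure described above.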

  • »
    »
7 months ago, # ^ |

Amazing explanation, bro. I was stuck at the same approach and wondering why I was getting WA, so thanks for the clarification. I wish editorials would explain the thought process and the pitfalls too.

    • »
      »
      »
7 months ago, # ^ |

      Glad it helped you out.

      About editorials, I swear sometimes I feel like making a blog titled: "Tutorial: How to write Tutorials"

»
10 months ago, # |

I don't understand jackshit about the editorial explanation for C, so I'll outline how I thought of it here.

At first I wasted about 6 hours trying the "fill two bags with smallest and largest blah blah" approach, but after seeing someone's code I thought of this.

Think of the sorted array as points on a number line; we need to maximize the length between the smallest and largest point.

We can only "fix"/"choose" two points and the 3rd one is chosen for us.

We can now think about how the differences between different points vary. To use this, we can simply pick the first and last point and then brute-force the 2nd point; the 3rd point will be the point just before the brute-forced one.

This is a very bad explanation (so is the editorial lol) but it is what it is