1740A. Factorise N+M
Tutorial
1740B. Jumbo Extra Cheese 2
Author: Pyqe
Developer: errorgorn, Pyqe
Tutorial
1740C. Bricks and Bags
Tutorial
1740D. Knowledge Cards
Author: Nyse
Developer: steven.novaryo
Tutorial
1740E. Hanging Hearts
Tutorial
1740F. Conditional Mix
Author: Pyqe
Developer: errorgorn
Tutorial
1740G. Dangerous Laser Power
Tutorial
1740H. MEX Tree Manipulation
Author: steven.novaryo
Developer: steven.novaryo, Pyqe
Tutorial
1740I. Arranging Crystal Balls
Author: NeoZap
Developer: errorgorn, Pyqe
Tutorial
I solved E without using dp. If someone could prove why it works, that would be great, or maybe provide a counter testcase.
Approach
Interesting. You constructed the array $$$s$$$, right?
Yes
Can you explain your approach? I also tried building the array for this question just like you.
I noticed that if we take a given node $$$X$$$, then the nodes on the longest path from $$$X$$$ to a leaf in the subtree rooted at $$$X$$$ can all be made the same value.
So I came up with an approach like this: for each level, and for each node $$$Y$$$ in that level, I added up the length of the longest path from $$$Y$$$ to a leaf.
After getting an answer for all levels, I took the maximum of them, but I am missing something.
My approach is simply: instead of going into an arbitrary subtree, I choose the one containing the deepest node; once we have visited all vertices in a node's subtree, I assign it a value, push the subtree minimum into the sequence, and compute the sequence's LNDS.
your explanation is not clear. whose subtree? what do you do with the subtree you choose? what value do you assign?
Sorry if my explanation wasn't clear enough; I will explain it more thoroughly. I maintain a timer variable, initialized to 1, that assigns a value to each node. We traverse the tree in dfs order, but we assign a value to a node only once we have traversed all nodes in its subtree; the value that gets pushed into $$$s$$$ is the subtree minimum of that node, and then we increment the timer. For the traversal, say we are at a node $$$x$$$: instead of visiting its children in arbitrary order, we start with the subtree containing the deepest node (maximum height). For that, I computed the depth of each vertex, took the maximum depth (height) over each node's subtree, and sorted each adjacency list by height in non-increasing order. Finally, I compute the LNDS of the sequence $$$s$$$ we created.
178446226
It works the same as the dp solution. Let $$$a[v]$$$ be the LNDS of $$$v$$$'s subtree. Then $$$a[v]$$$ is either the concatenation of $$$a[c_i]$$$ over all children $$$c_i$$$ of $$$v$$$, or the path from $$$v$$$ down to its deepest leaf, whichever is longer. The length of $$$a[v]$$$ is equal to $$$dp[v]$$$.
Oh got it thank you.
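For reference, the recurrence described above can be sketched in a few lines of Python. This is my reading of the equivalence (function and variable names are mine, not from the editorial): each vertex takes the better of "concatenate the children's answers" and "use the single longest downward path".

```python
def max_lnds(n, parent):
    """dp[v] = max(sum of children's dp, longest downward path from v).
    parent[v] is the parent of v for v in 2..n; vertex 1 is the root.
    (A sketch of the equivalence discussed above; names are hypothetical.)"""
    children = [[] for _ in range(n + 1)]
    for v in range(2, n + 1):
        children[parent[v]].append(v)
    # iterative traversal: parents appear in `order` before their children
    order, stack = [], [1]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    dp = [0] * (n + 1)
    height = [0] * (n + 1)  # vertices on the longest path from v down to a leaf
    for v in reversed(order):  # children are processed before parents
        height[v] = 1 + max((height[c] for c in children[v]), default=0)
        dp[v] = max(sum(dp[c] for c in children[v]), height[v])
    return dp[1]
```

On a stick tree $$$1 \to 2 \to 3 \to 4$$$ this returns 4 (the whole path), and on a star of four leaves under vertex 2 under the root it returns 4 (the concatenated leaves beat the path of length 3).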
How are we deciding the permutation a?
Can you please write the full form of LNDS?
Longest non-decreasing subsequence.
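In case it helps anyone, the standard $$$O(n \log n)$$$ way to compute the length of the longest non-decreasing subsequence uses patience sorting with `bisect_right` (using `bisect_right` instead of `bisect_left` is what allows equal elements):

```python
from bisect import bisect_right

def lnds_length(seq):
    """Length of the longest non-decreasing subsequence, O(n log n).
    tails[k] is the smallest possible last element of a non-decreasing
    subsequence of length k+1 seen so far."""
    tails = []
    for x in seq:
        i = bisect_right(tails, x)  # allow equal elements (non-decreasing)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)
```

For example, `lnds_length([1, 2, 2, 1, 3])` is 4, from the subsequence `[1, 2, 2, 3]`.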
How to prove the first statement in the editorial for problem F?
Gale–Ryser theorem
Well, the first thing that helps for this problem is to note that all multisets can be assumed to have the same size, namely $$$n$$$: simply add additional zeros representing empty sets until you have $$$n$$$ numbers in the multiset.
One possible way in which we realize that such a condition is equivalent, and can prove it, is to note the following: if we are able to create a multiset $$$(M_i)_{i \in [0, n)}$$$, we can always move elements from a bigger set $$$M_i$$$ to a smaller one $$$M_j < M_i$$$ and still maintain a valid configuration. The reason is that in $$$M_i$$$ there are at most $$$M_j$$$ elements that are already in $$$M_j$$$, so the rest, $$$M_i - M_j$$$ of them, can be moved to $$$M_j$$$ if necessary.
This implies that if we consider the multisets sorted, which is what makes more sense, we can always move elements from $$$M_i$$$ to $$$M_{i+1}$$$. Thus, if a configuration $$$(M_i)_{i \in [0, n)}$$$ is valid, so is any configuration $$$(N_i)_{i \in [0, n)}$$$ with $$$\sum_{j = 0}^i N_j \le \sum_{j = 0}^i M_j$$$. And this is the key to the problem. We do not need to consider all multisets, only those that are maximal in this sense. And then we can count how many elements they have below them.
By the way, once we get that formula, it makes no sense to keep considering the multisets the way we are doing it. Instead, use the sequence of accumulated sums to represent the multiset. Then the condition becomes $$$N_i \le M_i$$$ for all $$$i$$$, which reads much better. Hence, our multisets are represented as increasing sequences $$$(M_i)_{i \in [0, n)}$$$ with decreasing $$$M_{i+1} - M_i$$$. Or even better, as increasing sequences $$$(M_i)_{i \in [0, n]}$$$ with $$$M_0 = 0$$$, $$$M_n = n$$$ and decreasing $$$M_{i+1} - M_i$$$.
The question is, which multisets are maximal? Two multisets are not comparable if $$$N_i < M_i$$$ and $$$N_j > M_j$$$ for some indices $$$i$$$ and $$$j$$$. But... well, this was of little help for me. So, I thought: at the very least I know that the lexicographically biggest is maximal. Which $$$(K_i)_{i \in [0, n]}$$$ is the lexicographically biggest? Well, if we try to fit as many numbers as possible in the first set, we will have $$$K_1 = \sum_{j = 0}^n \min(\operatorname{cnt}_j, 1)$$$. Then, we will subtract $$$1$$$ from every $$$\operatorname{cnt}$$$ and repeat. In the end, we get the multiset $$$K_i = \sum_{j = 0}^n \min(\operatorname{cnt}_j, i)$$$. (And the right-hand side is precisely the formula of the statement. This, along with our previous observation, proves that the condition is sufficient.)
However, now that I know the formula, I also know that we cannot make another maximal multiset, because for $$$M_i$$$ we can only use at most $$$\min(\operatorname{cnt}_j, i)$$$ occurrences of each number $$$j$$$. That means $$$N_i \le \sum_{j = 0}^n \min(\operatorname{cnt}_j, i) = K_i$$$ for all multisets $$$(N_i)_{i \in [0,n)}$$$. Thus, the lexicographically biggest multiset is actually the only maximal one.
Ending note: Yes, we can prove necessity from the very beginning and sufficiency if we consider the lexicographically biggest multiset and the first observation, without any notion of maximal elements. However, I am not sure one would get there magically. Instead, I believe the first observation must lead us to think of maximal elements and then we can either guess that there is just one or simply deduce it as it has been shown.
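The closed form $$$K_i = \sum_j \min(\operatorname{cnt}_j, i)$$$ is easy to sanity-check numerically. A tiny sketch (the function name is mine) that computes the prefix sums of the maximal multiset and lets you verify that the increments $$$K_{i+1} - K_i$$$ are non-increasing, matching the "increasing sequence with decreasing differences" representation above:

```python
def maximal_prefix_sums(cnt, n):
    """K_i = sum_j min(cnt_j, i): accumulated sums of the lexicographically
    biggest (and, per the argument above, the unique maximal) multiset.
    cnt lists the occurrence counts of the distinct values."""
    return [sum(min(c, i) for c in cnt) for i in range(n + 1)]
```

For example, with counts `[3, 1, 2]` (six elements total) and `n = 6` this gives `[0, 3, 5, 6, 6, 6, 6]`: differences `3, 2, 1, 0, 0, 0`, which are indeed non-increasing.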
Why do we get $$$\min(\operatorname{cnt}_j, i)$$$ in the formula for $$$K_i$$$? I get the $$$\operatorname{cnt}_j$$$ part, but not the reason for the $$$i$$$. Is it like we are saying we repeat an element in one of the sets we built? But that should be invalid.
Well, if you want to make each element the largest possible each time, you take one element from each set that still has elements, which is why $$$K_1 = \sum_{j \in [0,n)} \min(\operatorname{cnt}_j, 1)$$$. Then, you subtract one from each set and get

$$$K_{i+1} = K_i + \sum_{j \in [0,n)} \min(\max(\operatorname{cnt}_j - i, 0), 1),$$$

because you already took $$$\min(\operatorname{cnt}_j, i)$$$ elements from each set $$$j$$$.

Then, by induction, if you suppose $$$K_i = \sum_{j \in [0,n)} \min(\operatorname{cnt}_j, i)$$$, you get

$$$K_{i+1} = \sum_{j \in [0,n)} \bigl(\min(\operatorname{cnt}_j, i) + \min(\max(\operatorname{cnt}_j - i, 0), 1)\bigr),$$$

which equals $$$\sum_{j \in [0,n)} \min(\operatorname{cnt}_j, i + 1)$$$.
I see, I think what I didn't get is that $$$K_i$$$ is cumulative
Thanks
You can derive and prove this lemma with min-cut/max-flow: try to find a maximum matching between a chosen multiset of sizes and the frequencies of the elements, then find the condition under which all cuts have size at least $$$n$$$.
It's pretty lengthy (I can elaborate if you want), but you can arrive at this condition without magically guessing it (the magic here comes from the magic of MCMF).
My logic is exactly the same as the editorial's, but it gives WA on test case 2.
sol link: https://codeforces.net/contest/1740/submission/178662191
can someone please help me... thanks
$$$>$$$ or $$$\geq$$$ ?
I think it should be $$$>$$$, since even if we have filled all slots, we can remove some in the next iteration; and if we don't, we will report that the answer doesn't exist.
$$$\geq$$$ is right. try it and I'll explain it to you.
A little too late to ask, but can you please explain why $$$\geq$$$ is right?
What is $$$k$$$ and $$$z$$$ in the first statement in the editorial for problem F?
Now edited to make it clearer.
In Problem E, can anyone explain why we would ever do this: "If card $$$i$$$ is used in the longest non-decreasing subsequence, then the maximum answer is the maximum value of $$$\operatorname{dist}(j,i)$$$ for all $$$j \in D_i$$$."
Can someone give a small counterexample where doing just this fails: "If card $$$i$$$ is not used in the longest non-decreasing subsequence, then the maximum answer is the sum of dp values of all of the children of $$$i$$$."
You can consider a stick tree, where $$$1 \to 2 \to 3 \to 4$$$. Here it always makes sense to select the root node, as we can never get a better answer by excluding the root.
I did consider this case in my dp. I did this. My code failed anyways.
Thanks
Take a look at Ticket 16403 from CF Stress for a counter example.
Thanks! Got it. I made a mistake with my reasoning.
In my opinion, the editorial gives conclusions rather than proofs or explanations for them, with many sentences like "We can observe/see/obtain that ...".
Agreed. Now, some of the editorials have been edited to give more explanations about the claims and conclusions. I hope they are more helpful now <3
Thanks! Truly more helpful.
I think the time complexity of problem C should be $$$O(n \log n)$$$; the writer forgot that he assumed the elements are sorted. (Though $$$n \log n$$$ will do the job, but still.)
It is fixed now. Thanks for pointing it out! <3
Yes, it is $$$O(n \log n)$$$ for the sort plus $$$O(n)$$$ for the rest.
Imho, the complexity of problem D is $$$O(n)$$$. It is just an iteration over an array.
Yeah, my solution also runs in $$$O(n)$$$. However, I think the priority queue solution is more intuitive to explain and understand. Thanks for mentioning that!
In D, let's say we have the grid:
where x are empty cells. If we can move any card to any cell as long as there is an empty cell, could someone explain how we could move 4 to the cell where 5 currently is, the cell with coordinates $$$(3,1)$$$?
You can't, but you don't need to. Cards only move if they are the current next card going to the exit, or as part of a rotation to let another card past (so you'd never need to move from one 2x2 square into the other). As long as there is an empty square, any card can reach the exit, which is all that matters.
Can you please explain how you concluded that it is impossible to move 4 to the place of 5 following the rules? I tried a lot, but I am not able to move 4 and the empty space together into the lower-left 2x2 square; maybe that is the reason it's impossible? Can you please explain your conclusion.
Edit: I got it. It's just because the size of the cycle for the rotation is odd, so we cannot shift it; had it been even, we could have.
Nice catch! We actually did not realise the possibility of that. The tutorial has been edited to have a more correct claim.
Very pleasant round to participate in. Liked E and F a lot. C and B were also pretty nice. Looking forward to seeing more rounds from you!
Did anyone solve problem I like this?
Start from the editorial's $$$O(nm^2)$$$ knapsack.
Let's say you have $$$dp[i]$$$ after considering $$$f_0,\dots,f_{i-1}$$$. Divide $$$f_i$$$ into $$$K$$$ line segments, and let the $$$k$$$-th segment be $$$a_kx+b_k$$$ for $$$l_k\le x\le r_k$$$.
Then $$$dp[i+1][j]=\min_{0\le k<K}(\min_{l_k\le x\le r_k}(dp[i][j-x]+a_kx+b_k))$$$ can be calculated in $$$O(mK)$$$ using a deque or something similar to find range minimums,
so in total it will be $$$O(nm)$$$.
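The range-minimum step over a window of fixed width is the standard monotonic-deque trick. A sketch of just that building block (not the full knapsack; the segment slope/offset terms would be layered on top of it):

```python
from collections import deque

def sliding_min(vals, k):
    """Minimum of every length-k window of vals, O(n) total: the deque
    holds indices of candidate minima in increasing value order."""
    dq, out = deque(), []
    for i, v in enumerate(vals):
        while dq and vals[dq[-1]] >= v:
            dq.pop()              # dominated: can never be a future minimum
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()          # front index has left the window
        if i >= k - 1:
            out.append(vals[dq[0]])
    return out
```

Each index is pushed and popped at most once, which is where the amortized $$$O(m)$$$ per segment (hence $$$O(mK)$$$ per item, $$$O(nm)$$$ overall) comes from.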
Yes, it can be done in $$$O(nm)$$$.
In problem E, why is
$$$a_i > \max_{j \in D_i}(a_j)$$$
true in the optimal solution?
Edit: It's clear now, thanks for editing the editorial.
For question 5 (E) I thought of a solution where the answer is (the number of nodes minus the number of nodes with more than one child). It fails on test case 6. Can anyone please provide a counterexample? I am unable to think of one.
6 1 2 2 2 2
Animations like the one in D help understand better.
In problem F, I got AC with $$$O(n^3\log n)$$$ (though it can easily be optimized to $$$O(n^2\log n)$$$).
Are the system tests too weak, or is this efficient enough?
Submission
Explanation for problem C
First approach that comes to mind is to first of all set two out of the three bags with the heaviest and the lightest bricks.
Then, put all of the remaining bricks in the third bag.
Eg 1: $$$B_1$$$ = {heaviest}, $$$B_2$$$ = {lightest}, $$$B_3$$$ = {all the rest}.
But Bu Denglekk will automatically select the lightest brick of $$$B_3$$$ (which will be the second lightest globally). So $$$ans = (\text{heaviest} - \text{lightest}) + (\text{second lightest} - \text{lightest})$$$.
Alternatively,
Eg 2: if we set $$$B_1$$$ = {lightest}, $$$B_2$$$ = {heaviest} and $$$B_3$$$ = {all the rest}, then Bu Denglekk will select the heaviest brick of $$$B_3$$$ (which will be the second heaviest globally). So $$$ans = (\text{heaviest} - \text{lightest}) + (\text{heaviest} - \text{second heaviest})$$$.
This approach fails and the scenario where it might fail is the intuition for the correct solution.
What if, we have $$$B_1$$$ = {heaviest}, $$$B_2$$$ = {lightest, second_lightest}, $$$B_3$$$ = {all the rest}.
Again, Bu Denglekk will select the lightest brick from $$$B_3$$$ as before (which will be the third lightest globally). Also, he will pick the second lightest from $$$B_2$$$. Now compare this to Eg 1: although the term $$$(\text{heaviest} - \text{lightest})$$$ is certainly greater than $$$(\text{heaviest} - \text{second lightest})$$$, maybe the second term of this new configuration makes up for the loss in score. That is, it is possible that $$$(\text{third lightest} - \text{second lightest})$$$ is much greater than the term $$$(\text{second lightest} - \text{lightest})$$$ of Eg 1.
Consider the following input if not clear: $$${51, 386, 2159, 2345, 2945}$$$
Basically the catch is to notice and exploit the non-uniform differences between the consecutive elements in a sorted ordering of the weights of the bricks.
Thinking of how to handle these possibilities is all that's left now. (Left to the reader :P)
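If it helps, here is a sketch of this idea in Python, together with an exhaustive brute force that lets Bu Denglekk pick adversarially. The candidate formula is my reading of the prefix/suffix configurations discussed above (function names are mine, not the official editorial code):

```python
from itertools import product

def best_score(a):
    """Sort, then for each split point k take the better of the two
    configurations: heaviest brick alone vs. lightest brick alone.
    (A sketch of the approach described above.)"""
    a = sorted(a)
    n = len(a)
    ans = 0
    for k in range(1, n - 1):
        # B1 = {a[n-1]}, B2 = a[0..k-1], B3 = a[k..n-2]
        ans = max(ans, (a[n - 1] - a[k - 1]) + (a[k] - a[k - 1]))
    for k in range(2, n):
        # B1 = {a[0]}, B2 = a[k..n-1], B3 = a[1..k-1]
        ans = max(ans, (a[k] - a[0]) + (a[k] - a[k - 1]))
    return ans

def brute_force(a):
    """Check: try every assignment of bricks to the three bags; Bu
    Denglekk then picks one brick per bag minimising the score."""
    best = 0
    for assign in product(range(3), repeat=len(a)):
        bags = [[], [], []]
        for w, b in zip(a, assign):
            bags[b].append(w)
        if any(not bag for bag in bags):
            continue
        score = min(abs(w1 - w2) + abs(w2 - w3)
                    for w1 in bags[0] for w2 in bags[1] for w3 in bags[2])
        best = max(best, score)
    return best
```

On the input $$${51, 386, 2159, 2345, 2945}$$$ above, the winning configuration is $$$B_1$$$ = {2945}, $$$B_2$$$ = {51, 386}, $$$B_3$$$ = {2159, 2345}, scoring $$$(2945 - 386) + (2159 - 386) = 4332$$$, better than either of the "two extremes" configurations.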
Amazing explanation bro, I was stuck at the same approach and wondering why i am getting WA , but thanks for the clarification. I wish editorials could explain the thought process + pitfalls in the questions too.
Glad it helped you out.
About editorials, I swear sometimes I feel like making a blog titled: "Tutorial: How to write Tutorials"
I don't understand jackshit about the editorial explanation for C, so I'll outline how I thought of it here.
At first I wasted about 6 hours trying the "fill two bags with the smallest and largest blah blah" approach, but after seeing someone's code I thought of this.
Think of the sorted array as points on a number line; we need to maximize the length between the smallest and largest point.
We can only "fix"/"choose" two points, and the 3rd one is chosen for us.
We can now think about how the differences between pairs of points vary. To use this, we can simply pick the first and last point and then brute force the 2nd point; the 3rd point will be the point just before the brute-forced one.
This is a very bad explanation (so is the editorial lol) but it is what it is