I am trying to solve this problem, but I am unable to. I read the editorial but don't understand it. I feel sad to not even understand the editorial of a 1400-rated problem. https://codeforces.net/problemset/problem/388/A
Can someone please explain this to me? Thanks.
Since the value of n is small, we can do this in $$$O(n^2)$$$. The minimum number of piles required will be between 1 and n. So, if ans is the number of piles, then each pile will have at least n / ans boxes. And if n % ans != 0, then there will be some boxes left over, namely n % ans of them. So, n % ans piles will have (n / ans) + 1 boxes and ans - (n % ans) piles will have n / ans boxes. Just sort the boxes in descending order and assign box j to pile j % ans if possible. If that is not possible, then this ans cannot be the answer. My solution with comments: 118884188
I get the idea. So, I'll try to explain with an example.
Let the array be [10, 2, 2, 1, 1, 1, 0, 0, 0]. Let's say that we want to check if we can accommodate them in 3 piles while satisfying the required conditions. So, according to your solution, we will have 3 piles with 3 elements each:
- 10 -> 1 -> 0 in the first pile
- 2 -> 1 -> 0 in the second pile
- 2 -> 1 -> 0 in the third pile
And, since it satisfies the condition, it is the answer. I had a few questions.
1. Why do we divide the elements evenly among the piles? We give n / (number of piles) boxes to each pile. Why can't we give more boxes to the piles whose base has more strength? How can we prove that even distribution is optimal?
2. Why do we take jumps of 'pile size' (referring to the 'j' loop in your submission)? Why can't we just increment j by 1 and keep adding the boxes?
Sorry if I am asking too much, but honestly I have seen a lot of editorials which say 'It is easy to observe that ...'. Sorry, I can't observe. I appreciate your help.
bruh why is your pfp so demotivating??
He is just telling his competitors it's time to give up : )
yeah, I need to start seeing things the other way around too. LOL!
It's actually explained in the editorial. Suppose that we have an optimal solution, but pile heights are very different in it. Now let's look at the smallest pile and at the largest pile in this solution. We can always remove the box from the bottom of the largest pile (the pile will become smaller but remain valid) and make it a new bottom box of the smallest pile (this box was able to support a higher pile, so it's strong enough). After this operation is done, the heights of the piles become closer to each other, the solution remains valid and optimal. Repeat this operation multiple times and you will end up with all piles having heights that don't differ from each other by more than 1 box.
To sum it up. If some valid solution exists with k piles of very much diverse heights, then this solution can be always transformed into another valid solution with k piles of equalized heights. If there's no valid solution with k piles of equalized heights, then there's no valid solution with diverse heights either (if it existed, then it could be transformed). So we are focusing on constructing and checking only solutions with equalized heights.
Now let's try to construct a solution with k piles. The weakest k boxes just become the topmost boxes. Why? Let's suppose that a solution exists where the topmost box of some pile is not one of the weakest k boxes and this solution is valid. But we can always swap this box with one of the k weakest boxes that ended up somewhere at a lower height and the solution will remain valid after this transformation. If some weak box was strong enough to be in the middle of some pile, then replacing it with an even stronger box will be okay. And the box from the middle of some pile would also work fine if moved up, because it needs to support fewer boxes there (actually zero for the topmost box).
Let's look at the following sample array [10, 2, 2, 2, 1, 1, 1, 0, 0]. Piles can be arranged as [2 -> 1 -> 10], [2 -> 1 -> 0], [2 -> 1 -> 0] and this is valid (the strength of all the bottom boxes is 2). But this valid solution can be also transformed into [2 -> 10 -> 1], [2 -> 1 -> 0], [2 -> 1 -> 0] by swapping boxes with strength 1 and 10 in the first pile.
To sum it up. If some valid solution exists where not all the weakest boxes are placed at the top, then this solution can be always transformed into another valid solution where all the weakest boxes are placed at the top. So we are focusing on constructing and checking solutions with the weakest boxes at the top of each pile.
Once we are done with the topmost boxes, we move one level down. And the only difference is that the strength of the boxes there needs to be at least 1. But other than this, they are the weakest among the remaining boxes for the same reasons as explained above.
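In other words (a summary of the construction above, not the commenter's exact wording): if the strengths are sorted in ascending order $$$a_0 \le a_1 \le \ldots \le a_{n-1}$$$, then with $$$k$$$ equalized piles the box at index $$$i$$$ ends up with $$$\lfloor i/k \rfloor$$$ boxes above it, so $$$k$$$ piles are feasible exactly when $$$a_i \ge \lfloor i/k \rfloor$$$ holds for every $$$i$$$.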
Maybe we can? But that would be a different algorithm.
I tried to explain it above.
Another important thing is that any valid solution with k piles can always be transformed into a valid solution with k+1 piles (up to n). This allows the use of binary search.
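A rough sketch of how such a binary search could be wired up (not taken from any of the linked submissions; the helper names `minPiles` and `canUsePiles` are made up here, and a sketch of the check itself follows the comment about the 'pile size' jumps below):

```cpp
#include <vector>
using std::vector;

// Assumed feasibility predicate; a sketch of it follows the comment about
// the 'pile size' jumps below.
bool canUsePiles(const vector<int> &sorted_a, int k);

// Hypothetical driver: binary search for the smallest feasible pile count,
// relying on the monotonicity described above (if k piles work, so do k+1).
int minPiles(const vector<int> &sorted_a) {      // strengths sorted ascending
    int lo = 1, hi = (int)sorted_a.size();       // n piles of one box each always work
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (canUsePiles(sorted_a, mid)) hi = mid;  // feasible: try fewer piles
        else lo = mid + 1;                         // infeasible: need more piles
    }
    return lo;
}
```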
The explanations below are based on my implementation 118933477 of the editorial algorithm, which is much simpler than prashar32's, but has the same 'pile size' jumps.
This is just an extra performance optimization. If all the boxes are sorted by their strength and we placed the weakest boxes at the top, then boxes from a lower height level (from $$$a_{k}$$$ to $$$a_{k+(k-1)}$$$) need to have strength >= 1. But because the array is sorted, we only need to look at $$$a_{k}$$$ and can safely skip the others.
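A minimal sketch of such a check with the 'pile size' jumps (not the actual submission 118933477, just an illustration of the idea), assuming the strengths are already sorted in ascending order:

```cpp
#include <vector>
using std::vector;

// Sketch of the feasibility check with 'pile size' jumps: with the strengths
// sorted in ascending order and the weakest k boxes on top, the box at index
// level * k is the weakest box at that level (counting from the top), so it
// is enough to check that it can support the `level` boxes above it.
bool canUsePiles(const vector<int> &sorted_a, int k) {
    int n = (int)sorted_a.size();
    for (int level = 0, i = 0; i < n; ++level, i += k)   // jump by k boxes
        if (sorted_a[i] < level) return false;
    return true;
}
```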
We can do this too. See 118940513 for the code modifications. It is surely somewhat slower, but the time complexity of both variants is still $$$O(n\log n)$$$. I used the following Ruby script to generate a big input file for benchmarking:
The big-jumps C++ implementation takes ~2.3 seconds on my computer. And the increment-by-1 C++ implementation takes ~3.2 seconds. I also have big-jumps implementations for D 118923989 and Ruby 118926013.
Sometimes (or mostly) the comments on an editorial are much more helpful than the editorial itself.
You can refer to this; no need to force any observation, it's simple enough.
Credit: kstheking
Well, you can call it a greedy algorithm. When we try to apply greedy by placing the strongest block at the bottom, a little problem occurs because of blocks of the same strength. For example, take 10 2 2 1 1 0 0. If you try to apply greedy by choosing the strongest block first, you get
10 2 2 1 1 0 0 = 3 heaps
which is a wrong answer, because sometimes we lose the opportunity to create fewer heaps. In this case, if we use the second 2 to make another heap, we can do better:
10 2 1 0
2 1 0
= 2 heaps
So I apply greedy in the reverse direction, by picking the brick of smallest strength and placing below it the smallest remaining brick that is strong enough. This way the optimal number of heaps will be formed. In the above example: choose 0, then you need strength >= 1, so choose 1; then you need >= 2, so choose 2; then you need >= 3, so choose 10. Repeat until you have chosen all the bricks. If you don't get anything, feel free to ask :)
Again, full credit to kstheking.
I have copy-pasted the comment. It's stupid, but I don't want people to do hard work, unlike this problem. XD
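To make the greedy concrete, here is a rough C++ sketch of it (not kstheking's code or any linked submission, just an illustration of the description above): repeatedly start a new heap with the weakest remaining brick on top and keep extending it downwards with the weakest remaining brick that is strong enough to support the bricks already above it.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the reverse greedy described above: build heaps one by one,
// always taking the weakest remaining brick that is strong enough to go
// below the bricks already placed in the current heap. With n <= 100 this
// simulation is easily fast enough.
int main() {
    int n;
    cin >> n;
    multiset<int> bricks;                    // remaining strengths, kept sorted
    for (int i = 0; i < n; ++i) {
        int x;
        cin >> x;
        bricks.insert(x);
    }

    int heaps = 0;
    while (!bricks.empty()) {
        ++heaps;
        int height = 0;                      // bricks already in the current heap
        // The next brick goes below the current ones, so it needs strength
        // >= height; take the weakest such brick each time.
        auto it = bricks.lower_bound(height);
        while (it != bricks.end()) {
            bricks.erase(it);
            ++height;
            it = bricks.lower_bound(height);
        }
    }
    cout << heaps << "\n";
    return 0;
}
```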
Thanks. I had already read this comment and it made sense to me. The reason I created this post was that it kept bugging me that I was not getting the editorial of a 1400 problem. The editorial idea is completely different from the comment's, and maybe if we understand both methods, we will be able to solve similar problems later, right?
When I noticed your blog, I tried to solve this problem myself and quickly came up with a straightforward and suboptimal implementation of kstheking's solution. The constraints are a joke, so pretty much any poorly coded implementation is going to be fast enough. Unless I'm mistaken, my submission 118912513 has $$$O(n^2 \log n)$$$ time complexity because of abusing array sort too much. But it was still accepted, and that's what really matters in the end.
The difficulty rating of a problem is based on how easy or difficult it was for the contestants to solve during the contest: https://codeforces.net/blog/entry/62865
My guess is that the more sophisticated (and more difficult to comprehend) algorithm suggested by the problem author in the editorial could become necessary if the problem constraints actually allowed much larger $$$n$$$ and box strength values. In that case the difficulty rating of the problem would potentially be much higher than 1400, because fewer people would successfully solve it during its contest.