Author: Egor.Lifar
Author: KAN
Author: Jatana
Can someone share their solution?
Tutorial for this problem is going to be uploaded in about 5 minutes.
I think the test case 7 of B got most of us :).
For E, how do you prove that it's sufficient that every prefix sum of $$$\delta_i$$$ is $$$\geq 0$$$ and $$$\sum \delta_i = 0$$$? (It's not hard to prove that those conditions are necessary; the negation of the first statement implies that there exists a $$$k$$$ such that the suffix sum $$$\sum_{i = k}^{n} s_i$$$ would have to increase.)
Let's prove that it is sufficient by induction. Assume we have proved that the following condition is sufficient for all $$$m < n$$$: $$$\sum\limits_{j=1}^i d_j \geq 0$$$ for $$$1 \leq i \le m$$$ and $$$\sum\limits_{j=1}^m d_j = 0$$$.
Let's prove that the above condition is sufficient for $$$n$$$ too. Start moving an arbitrary pair of stones until we get $$$\sum\limits_{j=1}^i d_j = 0$$$ for some $$$i \le n - 1$$$. Now we can split our task into 2 subtasks: from the first stone to the $$$i$$$-th stone, and from the $$$i$$$-th stone to the $$$n$$$-th stone. By the induction assumption, the above condition is sufficient in these two smaller subtasks $$$\implies$$$ it is sufficient for our task too.
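For a quick sanity check, the two conditions can be tested directly (a minimal sketch; `feasible` is a hypothetical helper name, with $$$d_i = t_i - s_i$$$ after sorting both arrays):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Check the two conditions from the proof above: after sorting both
// arrays, every prefix sum of d_i = t_i - s_i must be >= 0 and the
// total sum must be 0.
bool feasible(vector<long long> s, vector<long long> t) {
    sort(s.begin(), s.end());
    sort(t.begin(), t.end());
    long long pref = 0;
    for (size_t i = 0; i < s.size(); i++) {
        pref += t[i] - s[i];
        if (pref < 0) return false;  // some prefix sum is negative
    }
    return pref == 0;                // total displacement must cancel
}
```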
My friend told me to read this, but I'm not sure whether it does help or not.
In 1148F — Foo Fighters: 1) If we follow the method given in this tutorial, does it happen that the new sum we get after performing the operation (as mentioned in the question) is exactly the additive inverse of the old sum of the values?
If not, can someone please explain the logic behind the solution mentioned in the editorial?
I'll try to make it more clear. When you solve this problem using the induction principle, of course you try to get the best you can. But unfortunately some bit additions change not only the objects we have already processed, but all objects given in the input. That's why we can't say that we get exactly the additive inverse of the old sum: during some steps of the algorithm we no longer work with the original values, but with ones already negated by earlier added bits.
Is there anyone else who found problem D easier than C?
To me it was also easier than B, but it's probably because I overcomplicated it.
Can anybody explain the Crazy Diamond problem? I am not able to understand the editorial of this problem.
My solution (different from editorial):
Because the array is a permutation, being sorted means that $$$a_i = i$$$ for each $$$i$$$.
Instead of sorting the entire array, we try to sort the subarray from index $$$2$$$ to $$$n - 1$$$. After that, if $$$a_1 \neq 1$$$, swap it with $$$a_n$$$.
Loop $$$i$$$ from $$$2$$$ to $$$n - 1$$$; when $$$a_i \neq i$$$, we can always move the value $$$i$$$ to position $$$i$$$ via the endpoints (indices $$$1$$$ and $$$n$$$). Let $$$p$$$ be the current position of the value $$$i$$$, $$$u$$$ the endpoint that is reachable from $$$p$$$ ($$$2 \cdot |p - u| \geq n$$$), and $$$v$$$ the other endpoint (equal to $$$1$$$ if $$$u$$$ is $$$n$$$, and equal to $$$n$$$ otherwise). There are 2 cases:
My submission (I used 0-indexing in the code).
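For reference, the routing through the endpoints can be sketched as follows (a hedged sketch, not the linked submission; positions are 1-indexed, $$$n$$$ is even as in the problem, and `swapAny`/`doSwap` are hypothetical helper names). Any two positions can be exchanged with at most 5 legal swaps:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Record a swap and check it is legal (the problem allows swapping
// positions i and j only when 2*|i - j| >= n).
void doSwap(vector<int>& a, int n, int i, int j, vector<pair<int,int>>& ops) {
    assert(2 * abs(i - j) >= n);
    swap(a[i - 1], a[j - 1]);   // positions are 1-indexed
    ops.push_back({i, j});
}

// Exchange a_i and a_j for any i < j using at most 5 legal swaps,
// routing through the endpoints 1 and n when the direct swap is illegal.
void swapAny(vector<int>& a, int n, int i, int j, vector<pair<int,int>>& ops) {
    if (2 * (j - i) >= n) { doSwap(a, n, i, j, ops); return; }
    if (j <= n / 2) {           // both in the left half: route through n
        doSwap(a, n, i, n, ops); doSwap(a, n, j, n, ops); doSwap(a, n, i, n, ops);
    } else if (i > n / 2) {     // both in the right half: route through 1
        doSwap(a, n, 1, i, ops); doSwap(a, n, 1, j, ops); doSwap(a, n, 1, i, ops);
    } else {                    // i in left half, j in right half, too close
        doSwap(a, n, i, n, ops); doSwap(a, n, 1, j, ops); doSwap(a, n, 1, n, ops);
        doSwap(a, n, i, n, ops); doSwap(a, n, 1, j, ops);
    }
}
```

Each routed swap touches only the two endpoints besides $$$i$$$ and $$$j$$$, and the endpoint values are restored afterwards.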
How is your approach different from the editorial? I think you have done the same thing as the editorial.
Oh OK, actually I wasn't sure about the editorial when I posted that. I just looked at the number of swaps needed; it is different from mine, so I thought it should be different XD. Yes, I think it uses the same idea but a different implementation.
Your solution is easy to understand. Thank you.
Can anyone explain problem F (Foo Fighters) more clearly?
Let $$$E_k$$$ denote the set of elements whose masks have only (some subset of) the first $$$k$$$ bits on. Note that $$$E_1 \subseteq E_2 \subseteq ...$$$.
We will iteratively construct $$$mask_k = $$$ any mask on $$$k$$$ bits which makes the elements of $$$E_k$$$ have a sum with the sign we want at the end. For now, zero is considered to have either sign.
$$$mask_1$$$ is easy to construct: just check the sign of the sum of $$$E_1$$$ and flip it if necessary by setting the first bit.
To construct $$$mask_{k+1}$$$ from $$$mask_{k}$$$, we check the sum of the elements of $$$E_{k+1} \setminus E_k$$$ assuming we have already applied $$$mask_k$$$ to all elements. If the sum of $$$E_{k+1} \setminus E_k$$$ has a sign which differs from our target, we can flip it by setting the $$$(k+1)^{th}$$$ bit without affecting the sum of $$$E_k$$$. The sum of two values with the correct sign still has the correct sign. So, $$$mask_{k+1}$$$ is simply $$$mask_{k}$$$ plus the choice for the $$$(k+1)^{th}$$$ bit.
It is guaranteed that the input does not have sum zero. At some point, we will reach $$$k_0$$$ such that $$$E_{k_0}$$$ has non-zero sum. After that point, our "working sum" on the elements of $$$E_{k_0}$$$ will become non-zero with the correct sign.
$$$mask_{62}$$$ is the final output.
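The construction above can be sketched as follows (a minimal sketch under the stated setup, not the editorial's code; objects are (value, mask) pairs, bits are 0-indexed so that "collection $$$k$$$" means the highest set bit of the mask is $$$k$$$, and `solve` is a hypothetical name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build s bit by bit. Applying s flips an object's sign when
// popcount(s & mask) is odd. We want the total sum's sign to flip.
long long solve(const vector<pair<long long, long long>>& a) {
    long long init = 0;
    for (auto& [v, m] : a) init += v;
    int target = init > 0 ? -1 : 1;   // desired sign of the final sum
    long long s = 0;
    for (int k = 0; k < 62; k++) {
        long long grp = 0;            // sum of E_{k+1} \ E_k with the
        for (auto& [v, m] : a) {      // bits of s chosen so far applied
            if ((m >> k) == 1) {      // highest set bit of m is k
                int flips = __builtin_popcountll(s & m);
                grp += (flips % 2 ? -v : v);
            }
        }
        // If this group's sum has the wrong (strict) sign, setting bit k
        // negates it without affecting any earlier group E_k.
        if ((target < 0 && grp > 0) || (target > 0 && grp < 0))
            s |= 1LL << k;
    }
    return s;
}
```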
How to prove the greedy solution in D? I don't know why it is correct.
If you sort the pairs of the form $$$a_i < b_i$$$ in decreasing order of $$$a_i$$$, then $$$a_i < a_{i-1} < b_{i-1}$$$ always holds, which means $$$b_{i-1} > a_i$$$, therefore $$$a_1 < b_1 > a_2 < b_2 > a_3 \dots$$$
Similarly, you can prove it for pairs of the form $$$a_i > b_i$$$.
I hope this helps :)
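That zigzag argument can be sketched in code (a hedged sketch; `pickGood` is a hypothetical name that returns the indices of the larger of the two groups, ordered so that the concatenated sequence $$$a, b, a, b, \dots$$$ zigzags as proved above):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Keep the larger of the two groups (a < b or a > b) and order it so
// that consecutive pairs satisfy the zigzag condition.
vector<int> pickGood(const vector<pair<int,int>>& p) {
    vector<int> inc, dec;  // indices with a_i < b_i resp. a_i > b_i
    for (int i = 0; i < (int)p.size(); i++)
        (p[i].first < p[i].second ? inc : dec).push_back(i);
    if (inc.size() >= dec.size()) {
        // a_i < b_i: sort by a decreasing, so b_{i-1} > a_{i-1} > a_i
        sort(inc.begin(), inc.end(),
             [&](int x, int y) { return p[x].first > p[y].first; });
        return inc;
    }
    // a_i > b_i: sort by a increasing, so b_{i-1} < a_{i-1} < a_i
    sort(dec.begin(), dec.end(),
         [&](int x, int y) { return p[x].first < p[y].first; });
    return dec;
}
```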
Got it, thank you.
Please provide a more detailed explanation of Problem F (Foo Fighters), explaining how the induction works.
For those who are having trouble understanding the editorial of F, let me explain it in a different way.
Let us partition the objects into 62 collections (collection 0, collection 1, ..., collection 61) based on their least significant set bit position (i.e., objects whose least significant set bit is $$$i$$$ belong to collection $$$i$$$).
Let's iterate over the collections from 61 down to 0 with one rule: while processing collection $$$i$$$ we change (if at all) only the $$$i$$$-th bit of our answer $$$s$$$. Doing so cannot affect our processing of collection $$$i+1$$$ or higher. When we are processing collection $$$i$$$, bits $$$i+1$$$ to $$$61$$$ of $$$s$$$ are already fixed and we only need to decide the $$$i$$$-th bit, so let's calculate the sum change for taking versus not taking the $$$i$$$-th bit (based on the parity of the number of bits set in $$$s$$$ so far; observe that we don't need the lower bits to decide the parity for collection $$$i$$$).

So taking and not taking divide the sum of collection $$$i$$$ into two parts. We take the bigger part if the initial sum was positive and the smaller part if the initial sum was negative. How? Choosing "taking" simply means setting the $$$i$$$-th bit of $$$s$$$ to 1; otherwise we do nothing. Why? Suppose the initial sum was positive and we take the bigger part: the new sum contributed by collection $$$i$$$ is the smaller part minus the bigger part (since the bigger part's sign flips), so it is non-positive. The case of a negative initial sum is symmetric.
Code : https://codeforces.net/contest/1148/submission/54949506
The solution seems correct except for one problem: suppose the initial sum is positive; we make the sum non-positive at each collection, so couldn't our final sum be zero when we wanted it to be negative? Here is where the condition that the initial sum is non-zero comes in. Let $$$j$$$ be the highest index with collection $$$j$$$ non-empty; obviously its not-taking part is 0 and its taking part is the whole collection sum. If the taking part's sum is also zero, we can remove all numbers from collection $$$j$$$: this does not change the initial sum, and we now have a lower $$$j$$$ (note this situation cannot arise when $$$j$$$ is 0, since the initial sum is non-zero). If the taking part's sum is non-zero, then we get a strictly negative sum by selecting the $$$j$$$-th bit, so we don't get exactly zero. (The initial-sum-negative case is proved the same way.)
Least significant or most?
Yes, it could be done with most significant also (we have to reverse the processing loop). But here, I had partitioned it based on the least significant bit.
Can you explain this condition?
if (init_sum > 0 && taking > not_taking) ans |= 1LL << i;
else if (init_sum < 0 && taking < not_taking) ans |= 1LL << i;
Help me solve problem 1148E.
In 1148E problem test case 7:
20 53 86 76 100 16 12 13 97 79 23 28 64 42 10 23 56 59 76 48 12
48 49 49 49 48 49 48 49 49 49 48 48 48 49 49 49 49 49 49 48
My code gives the following output for the above test case:
YES
19
14 4 39
6 4 12
6 8 25
20 8 23
20 2 13
7 2 24
7 9 11
5 9 19
5 18 13
10 18 14
10 3 12
15 3 15
15 12 11
11 12 5
11 17 10
11 16 5
13 16 2
13 1 4
19 1 1
My code link: code for 1148E in C++
Problem E:
What does it mean?
Notice that the problem does not require stone at position (i) to be moved to target position (i). It requires that the resulting stones positions fill all the target positions regardless of which stone moved to which target position.
When you sort the stones based on their original positions, and sort target positions such that stone (i) will be moved to target position (i), leftmost stone will move to the leftmost target position, and second leftmost stone will move to second leftmost target position, and so on. Thus, their original order is preserved.
Because otherwise, in applying the operations, two stones would cross each other. Suppose stone $$$i$$$ moves to $$$a$$$ and stone $$$j$$$ moves to $$$b$$$ such that $$$i < j$$$ and $$$a > b$$$. Then these stones meet at some point, and from there you can move either of the two stones.
Can anyone help me understand why this approach for problem E fails on test 14? (Or maybe I forgot some conditions.)
First, I sort both the array. Then I have two variables with l = 0 and r = n-1.
Below is sort-of pseudo C++ code of my approach:
while (l < r) {
    if (s[l] == t[l]) { l++; continue; }
    if (s[r] == t[r]) { r--; continue; }
    int d1 = t[l] - s[l], d2 = s[r] - t[r];
    if (d1 > 0 && 2 * d1 <= s[r] - s[l] && s[r] - d1 >= t[r]) {
        // here we can make s[l] = t[l] and decrease s[r] by d1
    } else if (d2 > 0 && 2 * d2 <= s[r] - s[l] && s[l] + d2 <= t[l]) {
        // here we can make s[r] = t[r] and increase s[l] by d2
    } else {
        cout << -1;
        return 0;
    }
}
I guess your code will fail for the following input:
The answer to it is:
If your approach gives "NO", you should be able to get why your logic is incorrect.
Got it, thank you!! I think I may change the comparator for sorting and sort by the difference s[l] - t[l]; will that solve the problem?
Instead of starting $$$r$$$ at the end and moving inward, maintain $$$r$$$ to be the index of the first value after index $$$l$$$ that needs to be reduced. That will take care of the problem occurring right now, because then you'll be using the least restrictive candidate each time you're performing an operation. Values that needed to be reduced that were further towards the end would then be available for increasing values of $$$l$$$ further along the array.
Please help me with C. I can't understand the tutorial.
Does someone know why this code can get the correct answer? code
I didn't expect D to be that easy
Alternate greedy solution for E:
Assume that $$$s_1, s_2, ...$$$ and $$$t_1, t_2, ...$$$ are in sorted order. Notice that if $$$t_1 < s_1$$$, solution is NO. Now, take the minimal $$$i$$$ such that $$$s_i \geq t_1$$$. If no such $$$i$$$ exists, the answer is NO. We move $$$s_1$$$ and $$$s_i$$$ inward until one of them is equal to $$$t_1$$$, and remove both that $$$s$$$ and $$$t$$$ from our set. We repeat this process $$$n-1$$$ times, and if our remaining stone is not in the correct location, the answer is NO.
86306018
Why is problem H rated 3500? Maybe it has some special insight for data structures?