All the Polygon materials (including the official implementations of all the problems) are here.
Author: TheScrasse
Preparation: TheScrasse
Can you reach the score $$$\max(a) + \lceil n/2 \rceil$$$?
Can you reach the score $$$\max(a) + \lceil n/2 \rceil - 1$$$?
The maximum red element is $$$\leq \max(a)$$$, and the maximum number of red elements is $$$\lceil n/2 \rceil$$$. Can you reach the score $$$\max(a) + \lceil n/2 \rceil$$$?
 If $$$n$$$ is even, you always can, by either choosing all the elements in even positions or all the elements in odd positions (at least one of these choices contains $$$\max(a)$$$).
 If $$$n$$$ is odd, you can if and only if there is an occurrence of $$$\max(a)$$$ in an odd position. Otherwise, you can choose the even positions and your score is $$$\max(a) + \lceil n/2 \rceil - 1$$$.
Complexity: $$$O(n)$$$
Author: TheScrasse
Preparation: TheScrasse
Can you determine fast how many intervals contain point $$$p$$$?
The intervals that contain point $$$p$$$ are the ones with $$$l \leq p$$$ and $$$r \geq p$$$.
Determine how many intervals contain:
 point $$$x_1$$$;
 points $$$x_1 + 1, \ldots, x_2 - 1$$$;
 point $$$x_2$$$;
 $$$\ldots$$$
 point $$$x_n$$$.
First, let's focus on determining how many intervals contain some point $$$x$$$. These intervals are the ones with $$$l \leq x$$$ and $$$x \leq r$$$.
So a point $$$x_i < p < x_{i+1}$$$ satisfies $$$x_1 \leq p, \ldots, x_i \leq p$$$, and $$$p \leq x_{i+1}, \ldots, p \leq x_n$$$. It means that you have found $$$x_{i+1} - x_i - 1$$$ points contained in exactly $$$i(n-i)$$$ intervals (because there are $$$i$$$ possible left endpoints and $$$n-i$$$ possible right endpoints).
Similarly, the point $$$p = x_i$$$ is contained in $$$i(n-i+1) - 1$$$ intervals (you have to remove the interval $$$[x_i, x_i]$$$, which you do not draw).
So you can use a map that stores how many points are contained in exactly $$$x$$$ intervals, and update the map in the positions $$$i(n-i)$$$ and $$$i(n-i+1) - 1$$$.
Complexity: $$$O(n \log n)$$$
Author: TheScrasse
Preparation: TheScrasse
The answer is at most $$$n$$$.
Solve the problem with $$$k = 0$$$.
When is the answer $$$n$$$?
If the answer is not $$$n$$$, how can you buy cards?
Note that there are $$$n$$$ types of cards, so the subsets have size at most $$$n$$$, and the answer is at most $$$n$$$.
If $$$k = 0$$$, you can make subsets of size $$$s$$$ if and only if the following conditions are true:
 the number of cards ($$$m$$$) is a multiple of $$$s$$$;
 the maximum number of cards of some type ($$$x$$$) is $$$\leq m/s$$$.
Proof:
 $$$m$$$ is the number of decks times $$$s$$$.
 The number of decks is $$$m/s$$$. Each deck can contain at most $$$1$$$ card of each type, so there are at most $$$m/s$$$ cards of each type in total.
 If the two conditions above hold, you can make a deck containing the $$$s$$$ types of cards with maximum frequency. You can show with some calculations that the conditions still hold after removing these cards. So you can prove by induction that the two conditions are sufficient to make decks of size $$$s$$$.
The same idea is used in problems like 1954D - Colored Balls and abc227_d - Project Planning.
For a generic $$$k$$$, the answer is $$$n$$$ if you can make the number of cards of type $$$1, \ldots, n$$$ equal. Otherwise, for any choice of number of cards to buy, you can buy them without changing $$$x$$$. It means that you need $$$x \cdot s$$$ cards in total:
 if you have less than $$$x \cdot s$$$ cards, you have to check if you can reach $$$x \cdot s$$$ cards by buying at most $$$k$$$ new cards;
 if you already have $$$x \cdot s$$$ or more cards at the beginning, you have to check if you can make $$$m$$$ a multiple of $$$s$$$.
Complexity: $$$O(n)$$$
Author: TheScrasse
Preparation: TheScrasse
When is the answer $$$0$$$?
Starting from city $$$x$$$ is equivalent to setting $$$a_x = 1$$$.
At some time $$$t$$$, consider the minimal interval $$$[l, r]$$$ that contains all the cities with $$$a_i \leq t$$$ (let's call it "the minimal interval at time $$$t$$$"). You have to visit all this interval within time $$$t$$$, otherwise there are some cities with $$$a_i \leq t$$$ which you do not visit in time. So if this interval has length $$$> t$$$, you cannot visit it all within time $$$t$$$, and the answer is $$$0$$$.
Otherwise, the answer is at least $$$1$$$. A possible construction is visiting "the minimal interval at time $$$1$$$", then "the minimal interval at time $$$2$$$", ..., then "the minimal interval at time $$$n$$$". Note that, when you visit "the minimal interval at time $$$t$$$", the actual time is equal to the length of the interval, which is $$$\leq t$$$. In this way, at time $$$t$$$ you will have conquered all the cities in the minimal interval at time $$$t$$$, and possibly other cities.
Starting from city $$$x$$$ is equivalent to setting $$$a_x = 1$$$. After this operation, you have to guarantee that, for each $$$t$$$, the minimal interval at time $$$t$$$ is short enough. If this interval is $$$[l, r]$$$ before the operation, it can become either $$$[x, r]$$$ (if $$$x < l$$$), or $$$[l, x]$$$ (if $$$x > r$$$), or stay the same. In all these cases, the resulting length must be $$$\leq t$$$. With some calculations (e.g., $$$r-x+1 \leq t$$$), you can get that $$$x$$$ must be contained in $$$[r-t+1, l+t-1]$$$. So it's enough to calculate and intersect the intervals obtained at $$$t = 1, \ldots, n$$$, and print the length of the final interval.
You can calculate the minimal intervals by iterating on the cities in increasing order of $$$a_i$$$. Again, if the old interval is $$$[l, r]$$$ and the new city has index $$$x$$$, the new possible intervals are $$$[x, r]$$$, $$$[l, r]$$$, $$$[l, x]$$$.
Another correct solution is to intersect the intervals $$$[i - a_i + 1, i + a_i - 1]$$$. The proof is contained in the editorial of 2018F3 - Speedbreaker Counting (Hard Version).
Complexity: $$$O(n)$$$
Author: wksni
Preparation: TheScrasse
Solve for a fixed final depth of the leaves.
Which nodes are "alive" if all leaves are at depth $$$d$$$ at the end?
If the final depth of the leaves is $$$d$$$, it's optimal to keep in the tree all the nodes at depth $$$d$$$ and all their ancestors. These nodes are the only ones which satisfy the following two conditions:
 their depth ($$$a_i$$$) is $$$\leq d$$$;
 the maximum depth of a node in their subtree ($$$b_i$$$) is $$$\geq d$$$.
So every node is alive in the interval of depths $$$[a_i, b_i]$$$. The optimal $$$d$$$ is the one contained in the maximum number of intervals.
2018D - Max Plus Min Plus Size
Author: TheScrasse
Preparation: TheScrasse
The optimal subsequence must contain at least one occurrence of the maximum.
Iterate over the minimum, in decreasing order.
You have some "connected components". How many elements can you pick from each component? How to make sure you have picked at least one occurrence of the maximum?
The optimal subsequence must contain at least one occurrence of the maximum ($$$r$$$) (suppose it doesn't; then you can just add one occurrence, at the cost of removing at most two elements, and this does not make your score smaller).
Now you can iterate over the minimum value ($$$l$$$), in decreasing order. At any moment, you can pick elements with values $$$[l, r]$$$. Then you have to support queries "insert pickable element" and "calculate score".
The pickable elements make some "connected components" of size $$$s$$$, and you can pick $$$\lceil s/2 \rceil$$$ elements. You can maintain the components with a DSU.
You also want to pick an element with value $$$r$$$. For each component, check whether some maximum-size pick contains an occurrence of $$$r$$$. If this does not happen for any component, your score decreases by $$$1$$$. All this information can be maintained by storing, for each component, whether it contains $$$r$$$ in an even position, and whether it contains $$$r$$$ in an odd position.
Complexity: $$$O(n \alpha(n))$$$
2018E1 - Complex Segments (Easy Version), 2018E2 - Complex Segments (Hard Version)
Authors: lorenzoferrari, TheScrasse
Full solution: Flamire
Preparation: franv, lorenzoferrari
Solve for a fixed $$$m$$$ (size of the subsets).
$$$m = 1$$$ is easy. Can you do something similar for other $$$m$$$?
Solve for a fixed $$$k$$$ (number of subsets).
If you have a $$$O(n \log n)$$$ solution for a fixed $$$m$$$, note that there exists a faster solution!
Let's write a function max_k(m), which returns the maximum $$$k$$$ such that there exists a partition of $$$k$$$ valid sets containing $$$m$$$ intervals each. max_k works in $$$O(n \log n)$$$ in the following way (using a lazy segment tree):
 (wlog) $$$r_i \leq r_{i+1}$$$;
 for each $$$i$$$ not intersecting the previous subset, add $$$1$$$ on the interval $$$[l[i], r[i]]$$$;
 as soon as a point belongs to $$$m$$$ intervals, they become a subset;
 return the number of subsets.
For a given $$$k$$$, you can binary search the maximum $$$m$$$ such that max_k(m) $$$\geq k$$$ in $$$O(n \log^2 n)$$$.
The problem asks for the maximum $$$mk$$$. Since $$$mk \leq n$$$, for any constant $$$C$$$ either $$$m \leq C$$$ or $$$k \leq n/C$$$. For $$$C = (n \log n)^{1/2}$$$, the total complexity becomes $$$O((n \log n)^{3/2})$$$, which is enough to solve 2018E1 - Complex Segments (Easy Version). You can also find max_k(m) for all $$$m$$$ with a divide and conquer approach, and the complexity becomes $$$O(n \sqrt n \log n)$$$ (see here).
Now let's go back to max_k(m). It turns out you can implement it in $$$O(n \alpha(n))$$$.
First of all, let's make all the endpoints distinct, in such a way that two intervals intersect if and only if they were intersecting before.
Let's maintain a binary string of size $$$n$$$, initially containing only ones, that can support the following queries:
 set bit in position $$$p$$$ to 0;
 find the nearest 1 to the left of position $$$p$$$.
This can be maintained with DSU, where the components are the maximal intervals of the form 100...00.
Now let's reuse the previous solution (sweeping $$$r$$$ from left to right), but instead of a segment tree we will maintain a binary string with the following information:
 the positions $$$> r$$$ store 1;
 the positions $$$\leq r$$$ store 1 if and only if the value in that position (in the previous solution) is a suffix max.
So the queries become:
 add $$$1$$$ to $$$[l, r]$$$:
  $$$r$$$ changes, so you have to set the elements in $$$[r'+1, r-1]$$$ to $$$0$$$;
  the only other element that changes is the nearest 1 to the left of position $$$l$$$, which does not represent a suffix max anymore.
 find the maximum: it's equal to the number of suffix maximums, which depends on $$$r$$$ and on the number of components.
This solution allows us to replace a $$$O(\log n)$$$ factor with a $$$O(\alpha(n))$$$ factor.
Complexity: $$$O(n \sqrt n \alpha(n))$$$
[Bonus: there exists a data structure faster than DSU to solve the subproblem above, so you can solve the problem in $$$O(n \sqrt n)$$$. See here.]
2018F1 - Speedbreaker Counting (Easy Version), 2018F2 - Speedbreaker Counting (Medium Version), 2018F3 - Speedbreaker Counting (Hard Version)
Author: TheScrasse
Full solution: Flamire
Preparation: TheScrasse
Suppose you are given a starting city and you want to win. Find several strategies to win (if possible) and try to work with the simplest ones.
The valid starting cities are either zero, or all the cities in $$$I := \cap_{i=1}^n [i - a_i + 1, i + a_i - 1] = [l, r]$$$.
Now you have some bounds on the $$$a_i$$$.
Fix the interval $$$I$$$ and try to find a (slow) DP.
Counting paths seems easier than counting arrays. Make sure that, for each array, you make exactly one path (or a number of paths which is easy to handle).
How many distinct states do you calculate in your DP?
Lemma 1
For a fixed starting city, if you can win, this strategy works:
 [Strategy 1] If there is a city on the right whose distance is $$$t$$$ and whose deadline is in $$$t$$$ turns, go to the right. Otherwise, go to the left.
Proof:
 All constraints on the right hold.
 This strategy minimizes the time to reach any city on the left. So, if any strategy works, this strategy works too.
Corollary
For a fixed starting city, if you can win, this strategy works:
 [Strategy 2] If there is a city whose distance is $$$t$$$ and whose deadline is in $$$t$$$ turns, go to that direction. Otherwise, go to any direction.
Lemma 2
The valid starting cities are either zero, or all the cities in $$$I := \cap_{i=1}^n [i - a_i + 1, i + a_i - 1] = [l, r]$$$.
Proof:
 The cities outside $$$I$$$ are losing, because there exists at least one unreachable city.
 Let's start from any city $$$x$$$ in $$$I$$$, and use Strategy 2.
 You want to show that, for any $$$x$$$ in $$$I$$$, Strategy 2 can visit all cities in $$$I$$$ first, then all the other cities. Then, you can conclude that either all the cities in $$$I$$$ are winning, or they are all losing.
 The interval $$$I$$$ gives bounds on the $$$a_i$$$: specifically, $$$a_i \geq \max(i-l+1, r-i+1)$$$. Then, you can verify that visiting the interval $$$I$$$ first does not violate Strategy 2.
Corollary
If you use Strategy 1, the first move on the right determines $$$l$$$.
$$$\LARGE O(n^4)$$$ DP
Let's iterate on the (nonempty) interval $$$I$$$. Let's calculate the bounds $$$a_i \geq \max(i-l+1, r-i+1)$$$. Note that Strategy 1 is deterministic (i.e., it gives exactly one visiting order for each fixed pair (starting city, $$$a$$$)). From now on, you will use Strategy 1.
Now you will calculate the number of pairs ($$$a$$$, visiting order) such that the cities in $$$I$$$ are valid starting cities (and there might be other valid starting cities).
Let's define dp[i][j][k] = the number of pairs ($$$a$$$, visiting order), restricted to the interval $$$[i, j]$$$, where $$$k =$$$ "are you forced to go to the right in the next move?". Here are the main ideas to find the transitions:
 If you go from $$$[i+1, j]$$$ to $$$[i, j]$$$, you must ensure that $$$a_i \geq \max(i-l+1, r-i+1, j-i+1)$$$ (because you visit it at time $$$j-i+1$$$). Also, $$$k$$$ must be $$$0$$$.
 If you go from $$$[i, j-1]$$$ to $$$[i, j]$$$, and you want to make $$$k = 0$$$, you must make $$$a_j = j-i+1$$$. It means that $$$j$$$ was the city that was forcing you to go to the right.
In my code, the result is stored in int_ans[i][j].
Now you want to calculate the number of pairs ($$$a$$$, visiting order) such that the cities in $$$I$$$ are the only valid starting cities. This is similar to 2D prefix sums, and it's enough to make int_ans[i][j] -= int_ans[i - 1][j] + int_ans[i][j + 1] - int_ans[i - 1][j + 1].
Since, for a fixed $$$a$$$, the visiting order only depends on the starting city, the number of $$$a$$$ for the interval $$$[i, j]$$$ is now int_ans[i][j] / (j - i + 1).
You have solved $$$k \geq 1$$$. The answer for $$$k = 0$$$ is just $$$n^n$$$ minus all the other answers.
$$$\LARGE O(n^3)$$$ DP
In the previous section, you are running the same DP for $$$O(n^2)$$$ different "bound arrays" on the $$$a_i$$$ (in particular, $$$O(n)$$$ arrays for each $$$k$$$). Now you want to solve a single $$$k$$$ with a single DP.
For a fixed $$$k$$$, you can notice that, if you run the DP on an array of length $$$2n$$$ instead of $$$n$$$, the bound array obtained from $$$I = [n-k+1, n]$$$ contains all the bound arrays you wanted as subarrays of length $$$n$$$. So you can run the DP and get all the results as dp[i][i + n - 1][0].
$$$\LARGE O(n^2)$$$ DP
You still have $$$O(n^3)$$$ distinct states in total. How to make "bound arrays" simpler?
It turns out that you can handle $$$l$$$ and $$$r$$$ differently! You can create bound arrays only based on $$$r$$$ (and get $$$O(n^2)$$$ distinct states), and find $$$l$$$ using the Corollary of Lemma 2. The transitions before finding $$$l$$$ are very simple (you always go to the left). So a possible way to get $$$O(n^2)$$$ complexity is processing Strategy 1 and the DP in reverse order (from time $$$n$$$ to time $$$1$$$).
Complexity: $$$O(n^2)$$$
1 minute late editorial?
I created video editorial for Div2D/Div1B: Speedbreaker.
:)))) Too fast
E was such a good problem, great problemset. im so mad i didnt get C earlier though
very interesting problems!!
The goddamn C...
so fast :O
anyway, good songs :D
Thank you so much for the great contest! I loved the problems!
Is it just me or was this Div 2, little too difficult?
i think this is one of the easier ones recently. a and b are pretty simple and c seems hard (atleast it seemed to me) but is actually fairly simple. the rest are pretty standard div2 problems imo
It's just me then I guess. Tbh, I couldn't even solve one lol. I did get the basic idea but could not solve them completely, in first and second question. Anyways thanks for the reply, mate!
i think it's a bit easier than other div2s. C is usually harder; I found it a bit easy and came up with the idea very fast, but it took me some time to fully solve it. B was easy, but I spent a lot of time understanding what they want: it took me 40 min just to understand the problem and then like 15 min to solve it.
I was unable to understand problem B during the whole contest, and I'm still wondering WTF it wants; that test case 1 [101,200] made things even harder.
To solve the third problem [2018A - Cards Partition], can we use binary search? If yes, how do we implement the checker function that tells whether a given deck size is possible or not? I tried, but mine didn't work.
I think no, because the checker is not a monotonic function.
Think of prime numbers and k=0: 6 can be divided, 7 can't, and 8 can. So the binary search can fail.
I think we can't do binary search on no.of decks
Consider how you would make the decks: you would put all the cards with the highest frequency first, and then just greedily put all the cards on top of the lowest deck that you currently own, without making any new decks. Then, in the end, if the last "row" you filled wasn't completely filled, you can try filling it with the k coins you have. You can do that if (sum)%x<=k, where x is the size you're checking. I didn't do it with bs, since I couldn't prove that if x doesn't work then x+1 surely won't work. Also, you have to check that the partition you made actually uses all cards; you can check this by seeing if the number of decks you would have is at least the frequency of the most frequent card. I hope I explained it well; if it's not clear, just ask. Also, you can check my solution for details, but it's very simple.
I tried but got WA on pretest 4, I think search space is not monotonic
A binary search on all numbers from $$$1$$$ to $$$n$$$ doesn't work, because the function isn't monotonic, so if some deck size fails, it's still possible for a bigger one to succeed. Consider a case where you have $$$n=5$$$, $$$k=0$$$, and $$$1$$$ card of each type. Clearly, the only possible deck sizes are $$$1$$$ and $$$5$$$, while $$$2$$$, $$$3$$$ and $$$4$$$ fail, but $$$5>4$$$.
It might be possible to do a binary search on all numbers that divide at least one possible number of cards you can get. I'm not sure about that. In any case, as of now, I'm not aware of any reasonably fast checker function that isn't $$$O(1)$$$ (or easily possible to turn into $$$O(1)$$$ by some precomputation), so this probably isn't a great way to think about the problem.
i tried binary search but got it wrong: 283276768
what does ans+=max(0LL,c*v[i]-sum-ans); in check mean?
if c*v[i] is more than sum, then increase all the prefixes; ans here is the amount needed from k
I made an O(1) checker and tried binary search, but as the function is not monotonic it failed, so I just removed the binary search and ran a for loop with the checker, and then it got accepted. You can check my code for the checker function.
How much practice do I need? I was stuck at B completely.
It's a complete observation problem. The second test case is enough to get the answer; try to dry-run it and you will get the answer.
Yeppp
E1: O(n * sqrt(n) * log^2(n)) got a TLE... sad
Rename account maybe
How to implement E.
here. if you have any questions, just ask
the only thing i don't understand is how maxwin affects the answer. I was able to implement everything else.
maxwin[i] is the maximum depth you can reach if you start going down from the i-th vertex. Then, when checking for a certain depth x, if maxwin[i] is smaller than x, then i must be destroyed, because the i-th vertex can never become a leaf with depth x, and neither can any node in its subtree, so the entire subtree must be destroyed. Since I counted all the vertices by themselves, I just remove the vertices one by one.
yes yes brilliant thank you. I was trying to do something like this with prefix sums as well but failed miserably.
today's contest just ruined my day (though the contest was good)... My submission on C, 283251847, is the biggest blunder I have done till now... if I had not tried binary search and had just run a loop, my code would have been accepted and I might be Cyan today...
when I first started to solve C, I went for the nlogn approach... then made the function 'f'... but I did not notice the function is literally O(1)... so I could have run an O(n) loop...
when I found this 1 minute after the contest, I realized this CP is not for me...
(apologies for my poor english)..
The intervals in the editorial of F should be $$$[i - a_i + 1, i + a_i - 1]$$$?
Yes, fixed!
did you mean like that ?
or i didn't get it :(
283258453
It only works if the answer is $$$\geq 1$$$, you need to check if the answer is $$$0$$$ separately.
thank you
It seems that you only fix it in the solution part, but in hint 2 there also exists one to be fixed
An alternative solution for Speedbreaker (Div2D):
There are 4 cases:
Submission: 283247155
I did binary search.
283252824
cool
nice one!
Can we claim that "if all the cities in the intersection are valid, then any strategy should first visit all the cities in the intersection before visiting other cities"?
UPD: No
The above observation actually makes the solution much easier,
Observation 1 : All solutions would lie in a contiguous segment
Observation 2 : If a segment [l, r] has a solution then a[l] >= r-l+1 and a[r] >= r-l+1
My proposed solution :
set l = 0, r = n-1. If exactly one of a[l] or a[r] is >= r-l+1, then reduce the segment by removing that element, i.e. the segment becomes either [l+1, r] or [l, r-1]. If both a[l], a[r] are >= r-l+1, decrement r. If both a[l] and a[r] < r-l+1, then no solution.
Claim : The above iteration ends in the leftmost solution.
We reach a solution because we could just perform the above steps in reverse order and cover the entire array. By Observation 1, since we always decrement r, we must have reached the smallest such l.
Similarly perform the above with incrementing l at each iteration, this gives the largest r such that we can start and conquer all indexes.
Code for this solution : https://codeforces.net/contest/2019/submission/283302596
nice
But why do all solutions lie in a contiguous segment?
Apart from that, very nice solution!
Adding to it: why removing a[l] >= r-l+1 or a[r] >= r-l+1 works. As the segment gets shorter, these cities can still be visited with a smaller segment length. Pausing at a[l] < r-l+1 or a[r] < r-l+1 means we have to stay here and decrease the segment from the opposite end until we get a[l] >= r-l+1 or a[r] >= r-l+1. If both a[l] < r-l+1 and a[r] < r-l+1, then no matter what, we cannot reach this city from either end, therefore no solution exists in the first place.
Hi, I don't get how to check whether a city is a valid starting city. I see some implementations doing l = max(l, i-a[i]+1), r = min(r, i+a[i]-1), but this gives a range of starting cities, and I don't get that either. What is the strategy to pick cities if you make i the starting city? I can't understand the editorial. If you don't mind, can you please explain a solution to a beginner, step by step: how you build the solution and how you proved to yourself that it works? I solved A, B, C, E but could not understand D till now. Please help, dear friend!
Do you know what a segment tree is? Otherwise I can't really explain my solution.
Yes I do
Wasn't div2 E a lot easier for its position?
C was more difficult imo
Yeah, I do agree to that, but div2 E is usually a lot harder than this one.
true
C was tooo gorgeous... Gave it the attempt of my life on Codeforces... Learnt a lot, it was like an adventure... Thank you contest×codeforces
literally had the same experience with C, spent like an hour on it and finally got the perfect solution.
also is your profile picture a reference to the AMV tan(x) by lolligerjor?
Ummm, No, didn't know that such thing existed, but my good name is Tanishq, it kinda sounds like TanX, maybe...Yes!
For div2 E I used ternary search on depth... and got wrong answer on test 3. I could not find any wrong case; can anyone help me? 283252293. Update: maybe this problem can't be solved using ternary search; I found a counterexample.
woah too fast (:
Felt like DIV2B was harder than DIV2C :)
Oh wow, my solution to F is completely different. The canonical strategy that I use for an array with a marked interval of possible starting cities is to always go to the city with the shortest deadline and break ties by going left. It seems that the resulting dp is very different (and in particular I don't need to use any division)
Obligatory "thanks mr Radewoosh" comment.
div1E is https://codeforces.net/blog/entry/61331
Why is this code incorrect?! My intuition was: first find the maximum element across the array, then if n is even add n/2, and if n is odd add (n+1)/2.
First of all, your n == 3 case should be reconsidered. On the test case
1 2 3
your code says 3, but the answer is max_ele(3) + 2 = 5: you can colour 3 and 1 red, having 2 red elements + max_ele(3), which gives the optimal answer 5. Moreover, you have to consider the max element being at an odd position (counting all the odd-indexed elements) or at an even position (counting all the even-indexed elements).
what about this solution
what do you think the answer should be?
you have to check the parity of position of max value
It's shocking that so many people in Div1 solved problem C.
i found D1C to be the easiest div1 problem today; looking at my friend list, most people had the same feeling. It was obvious to me what to do upon reading the problem. Same for D too; I took less time to mind-solve C and D combined than either of A or B.
D might be *2200, but C is surely under 1900.
And C even easier than B.
so why are you shocked?
C is easier than usual
I am interested to know your opinion on this. After reading your comment, I spent some more time on problem C; unfortunately, I still can't come up with the solution. I haven't looked at the editorial, but so far my idea is to iterate on the levels of the tree and take the minimum, where for a particular level the answer is N minus the number of distinct vertices on the paths from the root to all the vertices on the current level (I actually figured all this out in about ten minutes, but I can't think of a way to calculate it in O(n); maybe LCA / inclusion-exclusion to avoid double counting, I don't know). What do you think one should ideally do in such a case? Try for more time, or just look at the editorial and be done with it?
Also, I solved DIV1 A in about 30 minutes; I got the idea fairly quickly and spent the time only on fixing a stupid typo. However, if I hadn't already known the fact that we only need the max element and the total sum to figure out if we can arrange the cards, then I don't think I would have been able to solve the problem. So, for DIV1A I would recommend looking at the editorial quite early if you weren't able to solve it. My general question is: how long do you think one should spend trying to solve a problem before looking at the editorial, and what was your strategy when you were at the specialist/expert level?
you are close on C, you just overcomplicated the latter part. Try to think of when a vertex will be deleted instead of its opposite.
As for when I used to read editorials, it somewhat depended on my progress. If I felt I was close to the answer, I would hold off / read after a long time. Otherwise, if I did not make substantial progress, I would read after, say, 30 min to 1 hour (now it's more, of course).
Thanks for the response! With your hint for C, I managed to solve it myself. I would have solved it in 30-40 min, I guess, if I had thought of this :( Is there a way to count the answer if you look at it my initial way (counting the distinct vertices among all the paths from the root to all the vertices at the current level)? Assuming that was what you thought of first, is there any way to recognize that the other way of looking at it is easier / will lead to the solution, or once you are stuck do you simply switch to looking at when the vertex will be deleted instead of the opposite? Asking because I kind of get stuck on one approach like this and fail to solve many solvable problems, and I would love to get better.
yes, but it will end up being the same thing. We want to characterize such vertices to easily count them. It's not hard to see that you want to count vertices that satisfy dep_u <= k and max_dep_in_subtree_u >= k
So, for all k in range [dep_u, max_dep_in_subtree_u], increase the count of saved vertices by 1.
Bro, can you explain your intuition for Speedbreaker, which is Div1B? I'm unable to build the solution even after reading the editorial. How can you claim that a city i is a valid starting city? What is the strategy to pick cities after you fix a starting city, and how can you say that the range from l to r contains exactly the valid cities? I'm unable to understand, @Dominater069
can someone please tell me what's wrong with this soln for Div2B? it gave WA on pretest 9
maybe integer overflow in the line cnt += (n-1+(i*(n-1-i))); You have taken n and i as int.
yeah, I just typecasted in 2 places and it got accepted. I thought since cnt is long long, it would get handled. Can you please suggest some reference where I can clear up this concept?
maybe overflow... use long long in your ans
My O(n^5) solution (with a small constant) passed F1. However, it's hard to optimize to O(n^4) or better (because I solved d1B with a suboptimal solution, which misled my thinking on F1).
EDIT: you can actually do this in $$$O(n)$$$: 283287968. Same idea, but using path counting instead of the first PIE step.
For F you don't really have to think about paths at all, inclusionexclusion on the arrays is enough (submission 283272424, ignore the unused variable).
Idea one: it's always sufficient to take the "working" interval first.
Idea two: to count the number of arrays of length $$$k$$$ with elements in $$$[1,n]$$$ and every index works, one only needs to make sure the endpoints work. You can just lower bound the elements for a count of
Now iterate over $$$k$$$. Count the above and consider how to count extensions to arrays of length $$$n$$$ that do not break any of the elements that were supposed to work. You can do this with PIE; to get extensions of length $$$i$$$ get extensions of length $$$i1$$$ and extend them on either side, then subtract out extensions of length $$$i2$$$ where you can take the next two elements in either order. (see my solution for the push dp)
Now you have
Idea three. the PIE to finish is by subtracting out
for each $$$a$$$ with a working interval larger than $$$k$$$. If you know the size of the working interval, you can count this.
That's clever! Looks like I still have ways to go :)
too fast!!
D is a tremendous problem
Good contest, I explained the entire process of how to do D (div 2) and GPTo1mini still couldn't solve it.
Use this technique: for each number from 1 to n, maintain two arrays start and end, signifying the index of the first and last occurrence of the number; if there is no occurrence, initiate it as -1.
Fact: there must exist a subarray of size j such that all occurrences of numbers up to j occur inside that subarray, for all j from 1 to n.
Now the given array is valid iff it passes the following procedure. First we start with 1; if there is no occurrence of 1, we ignore it. In general, say the first and last occurrence of 1 are at l and r; then r - l + 1 <= 1. Next we go to the occurrences of 2: if there is no occurrence, then same as before; otherwise r - l + 1 <= 2. Note that this new r and l is obtained by extending the previous l and r by the first and last occurrences of the number 2; basically, within this new range l..r all occurrences of 1 and 2 must be present. We keep doing this for all the elements, and if we can reach the end without any violation, then we can in fact start from somewhere.
On the procedure of finding where we can start, an example: 6 3 3 5 4 5
1: no occurrence, so our starting range is 1 to n.
2: no occurrence, so our starting range is the range for 1 intersected with the range for 2 (1 to n again), which is also 1 to n.
3: l = 2, r = 3, and this range of length two should be contained inside a subarray of length 3; there are two such subarrays, and in fact we can start from anywhere between 1 and 4 and still be able to cover this range, so our current starting range is the subarray range 1 to 4 intersected with 1 to n, which is just 1 to 4.
4: the new range of all occurrences up to 4 is 2 to 5, and only one subarray satisfies it, with range 2 to 5, so our valid range is 1 to 4 intersected with 2 to 5, which is 2 to 4.
5: and so on; our final valid range is 2 to 4, so there are 3 valid starting points.
convert this logic into code
In case anyone is wondering, I have written a code with the exact same logic and it is AC. You can check it out on my submissions.
Good news is, despite several attempts demanding a direct conversion of the logic into code without adding its own logic, it could not produce code that solved even test case 1.
This suggests GPT o1 really isn't something we should worry about for now, for Div 2 and above. Not only can it not think critically, it cannot even follow basic logical instructions.
Can someone tell me why I got WA on Test Case 9?
283223987
Can anyone tell me which test case breaks this solution of Problem C 283238912? Can anyone give me a test case where the binary search predicate isn't monotonic?
consider
5 0 1 1 1 1 1
Your code says 1, but the answer is actually 5. Just because 3 doesn't work doesn't mean that 5 won't.
ohh got it thanks
I have a quite different solution for Div2 E. I think it's quite enriching and gives some insight into an alternate way of thinking, so I will try to present it here. Let's try to calculate the cost for some fixed final depth (say d) of all the leaves. What do we need to do to make the depth of all the leaves equal to d? Try to think about it before looking at the spoiler below.
There are two cases for all the leaves that have to be removed.
Case 1: The depth of some leaf is greater than d: we need to remove such leaves until the point where the remaining tree has all leaves with depth less than or equal to d
Case 2: The depth of some leaf is less than d: we need to remove such leaves until the remaining tree has all leaves with depth greater than or equal to d
Do the two cases hint at something? How are they related? Do they seem quite similar?
The two cases mentioned above are independent and can be solved separately.
We define two arrays:
1) less[d]: the cost of case 2 for depth d. More formally, for some depth d, it is the cost of removing leaves with depth less than d until the remaining tree has no leaf with depth less than d.
2) more[d]: the cost of case 1 for chosen depth d (defined formally in a similar way).
How do we calculate these for all possible d (0 <= d <= n)?
It's simple dynamic programming + simulation. I will describe the dynamics for the less case here: we process d in increasing order. Suppose we have found less[i] for all i < d; how do we find less[d]?
We simulate the process: when we are at d, we have already simulated the process for all depths < d, hence at this point no leaf has depth < d - 1. We want no leaf with depth less than d, so we now remove all leaves with depth = d - 1; if such a removal creates a new leaf, it surely has depth < d, hence we insert it too into our data structure (from which we pick leaves to remove: we can keep a set of pair<int,int>, the first value being the depth and the second the node). Note that every node can be removed at most once across all iterations on d, giving an asymptotic of O(n log n) for this part.
We can find more[d] in the same way.
Now the cost for a final depth of d is less[d] + more[d]. Hence take the minimum across all values of d.
You can view my solution for implementation details here : 283261930
Solved it after the contest with the same logic; it is more intuitive. I calculated less and more using another approach; here it is: 283290386
+1: 283291023
Div2E can be solved using BFS.
Iterate level by level; at each level we can compute the number of nodes not removed.
When a node becomes a leaf node, we need to remove it and walk up to its parent recursively.
What is the "x" in the tutorial of problem C?
Here's a "troll" way to solve F in linear time.
As with some other approaches to this problem, we want to answer the following question: given $$$n$$$ and a width $$$w$$$, compute the sum over all $$$n-w+1$$$ intervals of width $$$w$$$ of the number of ways to pick numbers outside that interval so that all cities in that interval are good, given that the numbers inside that interval are already chosen to make it possible. Now observe, either through a bijective argument or by printing the results from a slower solution, that this quantity only depends on $$$n-w$$$, i.e. it is the sequence $$$1$$$, $$$2$$$, $$$7$$$, $$$34$$$, $$$209$$$, $$$\dots$$$ found in column 1 of the samples. Finally, observe that this is OEIS A002720, which provides a recurrence that computes these numbers in linear time. The rest is straightforward.
The combinatorial interpretation of the sequence is really nice, though. You don't really need to biject with anything new. Basically you have
(the subscript is falling factorial).
Here $$$f(n)$$$ is the number of ways of expanding by $$$n$$$. Consider this to be picking the values enumerated by the path as in the editorial. Pick the first fixed point in the sequence to be index $$$i$$$, and choose how much of the suffix is on the right/tight side (the options are $$$0$$$ through $$$i$$$). Pick the non-fixed points and recurse.
$$$O(n)$$$ Algorithm for D1F: https://codeforces.net/contest/2018/submission/283298613.
Thank you for sharing the Polygon materials! For future setters, can we normalize this?
Very nice problems!
Could someone explain problem div2D to me? I couldn't understand the editorial.
Maintain lo and hi pointers, initially set to 0 and n-1. Check whether a[lo] or a[hi] is >= the size of the remaining subarray; then increment lo or decrement hi, depending on which one it was. Repeat until lo > hi.
If this process fails at some point, then the answer is 0 starting locations.
Otherwise, maintain minStart and maxStart positions, set to 0 and n-1.
For each item: minStart = max(i - a[i] + 1, minStart), maxStart = min(i + a[i] - 1, maxStart).
ans = maxStart - minStart + 1.
For problem D (div2), can someone explain this line from the editorial: "At some time t, consider the minimal interval [l, r] that contains all the cities with ai ≤ t (let's call it "the minimal interval at time t"). If this interval has length > t, the answer is 0."
you got something??
I still didn't get the editorial's solution. I solved it in an alternative way.
An alternative solution for Speedbreaker (Div2D): Solution Courtesy: asdasdqwer
There are 4 cases:
1. a[0] < n and a[n-1] < n: no solution is possible.
2. a[0] >= n and a[n-1] < n: this means city 0 must be the last city conquered. We then check how many solutions exist in the interval [1, n-1].
3. a[0] < n and a[n-1] >= n: similar to the second case.
4. a[0] >= n and a[n-1] >= n: we now want to check whether 0 and n-1 themselves are valid starting cities. City 0 is valid only if we can conquer every city going from left to right. To check this, create a segment tree over b with b[i] = a[i] - i and query the minimum on the range [0, n-1]: city 0 is a valid starting city if and only if the result of the query is >= 1. To check whether n-1 is a valid starting city, do something similar. After that, count the solutions in the range [1, n-2].
Can anyone explain C?
It's my birthday and I get -180 as a gift :p sadge
Could the editorial writers consider providing some of the thinking process that leads to the solution, instead of a formal proof of correctness, which I don't think is very useful for most readers of the editorials?
At least for Div2 A B C D, make it simple and intelligible and actually useful for PROBLEM SOLVING.
For example, I find the editorial for problem C very challenging to follow and honestly not very useful. I solved this problem before reading the editorial with a very different line of thinking.
I suggest that the editorial writer for such problems not be a red coder, but maybe expert level, or a red coder who has some teaching background.
Great contest! I was able to solve ABC and got the idea of E during the contest, but didn't know how to find the LCA of two nodes in O(log n) time (though I knew binary lifting is used); after the contest I learned LCA with binary lifting. Basically, I calculated the distance of every node from the root. Say we fix the level of all the leaf nodes of the tree; the total number of edges you have to remove is ((n-1) - the total number of unique edges that the nodes on that particular level pass through). To calculate the total number of unique edges on a particular level, I iterated through all the nodes of that level using a queue, and added (distance of that node from the root - distance of the LCA of the previous node and the current node from the root). The answer is the smallest number of operations over all levels.
nice, I almost did the same thing.
how 2F?
can anyone help me understand D?
:O I see a lot of geometry dash songs in the problem statements
In problem E, can we solve it with ternary search? If not, why not?
can you please include the actual solutions themselves
Can anyone provide the DP solution for the first problem (2019A - Max Plus Size)? I did it using greedy, but I'm not getting the right answer using DP.
For 2018B - Speedbreaker, I have another solution which seems simpler. Consider [1, n] first: we observe that if max(a[1], a[n]) < n, then there is no valid starting point, because city 1 or city n must be the last city visited. If a[1] == n, we know that if we start in [2, n], then city 1 is the last city, and we only need to consider the cities [2, n]. This implies we can first check whether 1 is a valid starting point, and then decrease the problem size by one. When considering [l, r] with a[l] >= r - l + 1 (we want to set l to l + 1), l is a valid starting point if and only if a[l] >= 1, a[l+1] >= 2, a[l+2] >= 3, ..., a[r] >= r - l + 1, which is equivalent to a[x] - x >= 1 - l for every x in [l, r], and this can be checked with a sparse table. There is a similar condition for r when we want to set r to r - 1. This leads to an O(n log n) solution.
Could anyone please explain how to prove this part of the C solution: "the maximum number of cards of some type (x) is ≤ m/s"?
m = the number of total cards we have after we buy some
s = the number of elements in each deck
m/s= the number of decks we have
So no value can appear more than m/s times, because you only have m/s decks (and no deck can contain two cards of the same type).
I saw a lot of solutions for problem Div1D. Max Plus Min Plus Size that use DP + Segment tree. Can someone explain those please?
submission
I might be missing something, but once you get the arrays a and b for Div2E/Div1C as mentioned in the editorial, how do you calculate the count of intervals overlapping each 1 <= i <= n, at least without using a max segment tree with range updates or something equally heavy? I saw a lot of answers using prefix sums, and Shayan's video editorial also mentioned prefix sums, but I'm absolutely not able to understand what we are accomplishing with the prefix sum.
Problem : find the intersection of the segments.
We can say that for each $$$1 \leq i \leq n$$$, we want to find the number of segments which contain it and then see how many $$$i$$$ are contained in $$$n$$$ segments.
So, for each segment, we can add $$$1$$$ in the range from $$$l$$$ to $$$r$$$ (since this segment contains every element inside this range).
Our new problem is to add $$$x = 1$$$ in the range from $$$l$$$ to $$$r$$$, and after doing all such operations, find how many elements are $$$=n$$$.
Think, how can we use the fact that we have to perform all the operations before seeing the array.
Instead of just looping over $$$l$$$ to $$$r$$$, we instead want to pass this information and get the real value later.
We can accomplish this with a difference array (an application of prefix sums). We do:
p[l]++;
indicating that a segment starts here, and
p[r + 1]--;
indicating that a segment has finished here. After doing all the operations, we run a for loop and do:
p[i] += p[i - 1];
(make p its own prefix sum); then we have in effect added $$$1$$$ in the range from $$$l$$$ to $$$r$$$ (first convince yourself for a single operation, and then see that it works for any number of operations).

Thank you! Now I got AC :D
I have a strict $$$\mathcal{O}(n)$$$ approach to the Div1C problem, which uses long-chain partitioning to optimize dynamic programming on trees.
283907902
The editorial is also O(n)????
This solution is different from the tutorial, and it is more extensible :D
Can someone explain the div2D "Speedbreaker" problem? I am not able to understand its solution.
Hello, I am getting wrong answer on test 45. I tried my best but was unable to find why it is happening in this Solution (Problem D, "Mouse Hunt", Educational Round 49). Could anyone please help me?
What is x in the solution of problem C?
Hi, this is my second account. In this round I was testing my solution to problem A before submitting it from my official account, and I had no idea that my solution would be skipped for doing this. Can you please accept my solution? As a beginner I was trying hard to solve it. Thank you.
Can anyone elaborate on Div1.D? I cannot understand the editorial. Thanks in advance!
Hey, did you get an explanation?
Sadly, no :(
Then let's race: whoever gets it first will post their solution.
Problem B is too difficult to understand
For problem Div2E, I understand that for all leaves to have depth d, the nodes that remain alive need to satisfy the properties below:
Their depth (ai) <= d
Maximum depth of child in the subtree(bi) >= d
But how does it translate to this line in the tutorial: "So every node is alive in the interval of depths [ai, bi]"? How do the above properties enable us to form an interval of depths [ai, bi]?
Sorry, for D1B, you say one possible solution is to intersect all $$$[i-a_i+1, i+a_i-1]$$$, but the second sample case fails?
$$$[3, 5]$$$ $$$[3, 7]$$$ $$$[0, 6]$$$ $$$[4,4]$$$ $$$[2,8]$$$ $$$[2,10]$$$
The intersection is $$$[4,4]$$$ right?
Oh, I think I got it. According to the proof in the editorial for the last problem, either all of the cities in the intersection are valid, or none are.
Now I understand why problem D was named speedbreaker.
See number of submissions
Real, in fact E was easier than C I guess
I have trouble with problem E: why is the complexity O(n √n α(n))? I think it is O(n √(n log n) α(n)).
It's explained here.
thank you
How can the following check (from the editorial of problem 2018A - Cards Partition) be performed in O(1) time?
'' if you already have x⋅s or more cards at the beginning, you have to check if you can make m a multiple of s. ''