Sorry for the weak samples and the late editorial. It was our first round, so it was very stressful for us, but we hope you liked the problems despite this!
Try to look at the parity of $$$n, m$$$.
The winner does not depend on either player's strategy.
The winner depends only on the parity of $$$n, m$$$.
Note that the game will end only when the chip is in the upper right corner (otherwise you can move it $$$1$$$ square to the right or up). Over all moves, the chip will move $$$n - 1$$$ squares to the right and $$$m - 1$$$ up, which means that the total length of all moves is $$$n + m - 2$$$ (the length of a move is how far the chip moved in that turn). Since the length of any move is odd, after any move of Burenka the sum of the lengths is odd, and after any move of Tonya it is even. So we can find out who made the last move in the game by looking at $$$(n + m - 2) \operatorname{mod} 2 = (n + m) \operatorname{mod} 2$$$. If $$$(n + m) \operatorname{mod} 2 = 0$$$, Tonya wins, otherwise Burenka.
The complexity of the solution is $$$O(1)$$$.
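The parity check above can be sketched in a few lines (the function name is my own):

```python
# Minimal sketch of the O(1) solution: Burenka moves first,
# and she wins exactly when the total move length n + m - 2 is odd,
# i.e. when n + m is odd.
def winner(n: int, m: int) -> str:
    return "Tonya" if (n + m) % 2 == 0 else "Burenka"

print(winner(1, 1))  # chip already in the corner, Burenka cannot move -> Tonya
```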
Instead of $$$k$$$, we can consider the remainder of dividing $$$k$$$ by 4.
There is no answer for the remainder 0.
For all other remainders, there is an answer.
Note that from the number $$$k$$$ we only need the remainder modulo $$$4$$$, so we take $$$k$$$ modulo $$$4$$$ and assume that $$$0 \leq k \leq 3$$$.
If $$$k = 0$$$, then there is no answer: the product of the numbers in each pair must be divisible by $$$4 = 2^2$$$, so the product of all numbers from $$$1$$$ to $$$n$$$ must be divisible by $$$2^{\frac{n}{2} \cdot 2} = 2^n$$$, but the exponent of $$$2$$$ in this product is $$$\lfloor\frac{n}{2}\rfloor + \lfloor\frac{n}{4}\rfloor + \lfloor\frac{n}{8}\rfloor + \ldots$$$, which is less than $$$n$$$.
If $$$k = 1$$$ or $$$k = 3$$$, then we will make all pairs of the form $$$(i, i + 1)$$$, where $$$i$$$ is odd. Then $$$a + k$$$ will be even in each pair, since $$$a$$$ and $$$k$$$ are odd, and since $$$b$$$ is also even, the product will be divisible by $$$4$$$. If $$$k = 2$$$, then we will do the same splitting into pairs, but in all pairs where $$$i + 1$$$ is not divisible by $$$4$$$, we will swap the numbers (that is, we will make $$$a = i + 1$$$ and $$$b = i$$$). Then in pairs where $$$(i + 1) \operatorname{mod} 4 = 0$$$, $$$b$$$ is divisible by $$$4$$$ (therefore the product is too), and in the rest $$$a + k$$$ is divisible by $$$4$$$ (since $$$a$$$ and $$$k$$$ both have remainder $$$2$$$ modulo $$$4$$$).
The complexity of the solution is $$$O(n)$$$.
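The construction above can be sketched as follows (function name and output format are my own):

```python
def build_pairs(n: int, k: int):
    """Pair up 1..n (n even) into pairs (a, b) with (a + k) * b divisible
    by 4, following the case analysis above; returns None if impossible."""
    k %= 4
    if k == 0:
        return None                    # no answer for k divisible by 4
    pairs = []
    for i in range(1, n, 2):           # i is odd
        a, b = i, i + 1
        if k == 2 and (i + 1) % 4 != 0:
            a, b = b, a                # now a + k is divisible by 4
        pairs.append((a, b))
    return pairs
```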
Who would win duels if the strongest athlete was the first in queue?
After what number of rounds is the strongest athlete guaranteed to be at the front of the queue?
Simulate the first $$$n$$$ rounds and write down their winners so that for each person you can quickly find out the number of wins in rounds with numbers no more than a given number.
Note that after the strongest athlete reaches the front of the queue, only he will win. The strongest athlete will be at the front of the queue after at most $$$n$$$ rounds. Let's simulate the first $$$n$$$ rounds; if the $$$j$$$-th athlete won in the $$$i$$$-th round, we record round number $$$i$$$ for him. Now, to answer the query $$$(i, k)$$$, it is enough to count the wins of the $$$i$$$-th athlete in rounds with numbers $$$j \leq k$$$, and if $$$k > n$$$ and the strength of the $$$i$$$-th athlete equals $$$n$$$, then add another $$$k - n$$$ to the answer.
The complexity of the solution is $$$O(n + q \operatorname{log}(n))$$$.
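A sketch of this simulation (my own code; I simulate $$$n-1$$$ rounds, which already suffices to bring the strongest athlete to the front, so the tail correction uses $$$k - (n - 1)$$$):

```python
from bisect import bisect_right
from collections import deque

def preprocess(a):
    """Simulate the first n-1 rounds of the queue; wins[i] is the sorted
    list of round numbers won by athlete i (1-based positions)."""
    n = len(a)
    q = deque(range(1, n + 1))
    wins = [[] for _ in range(n + 1)]
    for r in range(1, n):              # after n-1 rounds the strongest leads
        x, y = q.popleft(), q.popleft()
        if a[x - 1] < a[y - 1]:
            x, y = y, x
        wins[x].append(r)              # x wins round r
        q.appendleft(x)
        q.append(y)
    return wins

def answer(a, wins, i, k):
    n = len(a)
    res = bisect_right(wins[i], min(k, n - 1))
    if k > n - 1 and a[i - 1] == n:    # the strongest wins every later round
        res += k - (n - 1)
    return res
```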
1718A2 - Burenka and Traditions (hard version)
Segments of length greater than 2 are not needed.
For A1 you can use dynamic programming.
Dynamic programming where $$$dp[i][v]$$$ means the minimum time to make $$$a_j = 0$$$ for all $$$j < i$$$ while $$$a_i = v$$$. (Notice that $$$v$$$ can be any number from $$$0$$$ to $$$8191$$$.)
There is an answer in which segments of length $$$2$$$ do not intersect with segments of length $$$1$$$.
If $$$a_l\oplus a_{l + 1} \oplus \ldots \oplus a_{r} = 0$$$ holds for $$$l, r$$$, then we can fill this segment with zeros in $$$r-l$$$ seconds using only segments of length 2.
There is an answer where the time spent is minimal and the lengths of all segments used are 1 and 2, because a segment $$$[l, r, x]$$$ can be replaced by $$$\lceil\frac{r-l+1}{2}\rceil$$$ segments of lengths 2 and 1, namely $$$[l, l + 1, x], [l+ 2, l + 3, x], \ldots, [r, r, x]$$$ (or $$$[r-1, r, x]$$$ if $$$(r - l + 1)$$$ is even).
Note that if $$$a_l\oplus a_{l + 1} \oplus \ldots \oplus a_{r} = 0$$$ holds for $$$l, r$$$, then we can fill the segment $$$[l, r]$$$ with zeros in $$$r-l$$$ seconds with the operations $$$[l, l+1, a_l], [l+1, l+2, a_l \oplus a_{l+1}], \ldots, [r-1, r, a_l\oplus a_{l+1} \oplus \ldots \oplus a_{r-1}]$$$.
Note that if a segment of length 2 intersects with a segment of length 1, they can be changed to 2 segments of length 1.
It follows from all this that there is an answer consisting of segments of length 1 and a cover by segments of length 2. Then it is easy to see that the answer is $$$n$$$ minus (the maximum number of disjoint subsegments with a XOR of 0), because on every subsegment with a XOR of 0 we can spend 1 second less, as shown above. This amount can be calculated by dynamic programming or greedily. Our solution goes greedily with a set: if it finds two equal prefix XORs ($$$prefix_l = prefix_r$$$ means that $$$a_{l + 1} \oplus a_{l+2} \oplus \ldots \oplus a_{r} = 0$$$), it increments the answer and clears the set. 168724728
The complexity of the solution is $$$O(n \operatorname{log}(n))$$$.
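The greedy with a set of prefix XORs can be sketched like this (my own code, not the linked submission):

```python
def min_seconds(a):
    """n minus the maximum number of disjoint zero-XOR subsegments,
    found greedily with a set of prefix XORs (sketch of the idea above)."""
    seen = {0}
    pref = 0
    saved = 0
    for x in a:
        pref ^= x
        if pref in seen:      # some subsegment ending here has XOR 0
            saved += 1
            seen = {0}        # reset: chosen subsegments must be disjoint
            pref = 0
        else:
            seen.add(pref)
    return len(a) - saved
```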
Try to express $$$n$$$ (the sum of all $$$c_i$$$) as $$$F_1 + F_2 + \ldots + F_m = n$$$.
Let $$$n = F_1 + F_2 + \ldots + F_m$$$; then try to find the minimum $$$x$$$ such that the presence of some $$$c_i \geq x$$$ makes the answer NO.
$$$x$$$ is (the maximum sum of the Fibonacci numbers, among which there are no neighbors) + 1.
$$$x$$$ is $$$F_1 + F_3 + F_5 + \ldots + F_{m} + 1$$$ for odd $$$m$$$ and $$$F_2 + F_4 + F_6 + \ldots + F_{m} + 1$$$ for even.
$$$x$$$ is $$$F_{m+1} + 1$$$ for odd $$$m$$$ and $$$F_{m+1}$$$ for even.
Try to understand if it can happen that there are two different answers in which the letter forming the block of length $$$F_{m}$$$ differs.
The correct answer is: yes, if there are $$$i \ne j$$$ with $$$c_i = F_m$$$ and $$$c_j = F_m$$$, and this is the only such case.
Now, using everything you have understood, write a greedy solution.
Try to write a greedy solution.
The programmer does not need a second hint, he wrote a greedy solution after the first one.
At the beginning, let's check that the number $$$n$$$ (the sum of all $$$c_i$$$) is representable as the sum of some prefix of Fibonacci numbers, otherwise we will output the answer NO.
Let's try to build the answer greedily, going from large Fibonacci numbers to smaller ones. For the next Fibonacci number, take the letter with the largest number of remaining occurrences among those not equal to the previously taken letter (to avoid two adjacent blocks of the same letter, which is forbidden). If this letter has fewer occurrences than the current Fibonacci number, the answer is NO. Otherwise, assign the letter to this Fibonacci number and subtract the number from its count of occurrences. If we were able to place all the Fibonacci numbers, the answer is YES.
Why does the greedy solution work? Suppose at the current step we need to place $$$F_i$$$ (I will say "take a number from $$$c_t$$$", meaning taking $$$F_i$$$ letters $$$t$$$ from the string). Look at the maximum $$$c_x$$$ now: if it cannot be represented as a sum of Fibonacci numbers up to $$$F_i$$$ with no two neighbors among them, then the answer is NO. It can be proved that if $$$c_x > F_{i+1}$$$, then $$$c_x$$$ cannot be represented this way.
If there is exactly one number greater than or equal to $$$F_i$$$ at the step, then there is only one option to take a number, so the greedy solution works.
If there are two such numbers and they are equal, then there is exactly one option up to permutation of the letters, so the greedy solution works again.
If there are $$$c_j \geq F_i, c_x \geq F_i, j \ne x$$$, then note that the larger of the numbers $$$c_j, c_x$$$ is greater than $$$F_i$$$. If we don't take it, then at the next step this number will be greater than $$$F_{i+1}$$$ (since $$$i$$$ decreases by 1), and by what was proven above there will be no answer; so taking the larger of the numbers is always optimal.
The complexity of the solution is $$$O(k \operatorname{log}(n)\operatorname{log}(k))$$$.
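The greedy above can be sketched as a feasibility check (my own function; letters are indices into $$$c$$$, and I use a heap instead of the editorial's sorted structure):

```python
import heapq

def fib_string_possible(c):
    """Can counts c be split into blocks of sizes F_1, ..., F_m
    (F_1 = F_2 = 1) with no two adjacent blocks of the same letter?
    Greedy from the largest Fibonacci number down, always taking the
    letter with the most remaining occurrences that differs from the
    previously placed one."""
    n = sum(c)
    fib = [1]
    while sum(fib) < n:
        fib.append(fib[-1] + (fib[-2] if len(fib) > 1 else 0))
    if sum(fib) != n:          # n must be a prefix sum of Fibonacci numbers
        return False
    heap = [(-x, i) for i, x in enumerate(c) if x > 0]
    heapq.heapify(heap)
    prev = -1
    for f in reversed(fib):
        if not heap:
            return False
        cnt, i = heapq.heappop(heap)
        if i == prev:          # cannot repeat the previous letter
            if not heap:
                return False
            cnt2, i2 = heapq.heappop(heap)
            heapq.heappush(heap, (cnt, i))
            cnt, i = cnt2, i2
        if -cnt < f:
            return False
        if -cnt > f:
            heapq.heappush(heap, (cnt + f, i))
        prev = i
    return True
```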
The answer for $$$k=x$$$ and $$$k=\gcd(x,n)$$$ is the same.
In this problem, a general statement is useful: for any array $$$c$$$ of length $$$m$$$, $$$m\cdot\operatorname{max}(c_1, c_2, \ldots, c_m) \geq c_1 +c_2 +\ldots+c_m$$$ is true.
Try applying the second hint for $$$k_1$$$ and $$$k_2$$$ divisible by $$$k_1$$$, where $$$k_1, k_2$$$ are divisors of $$$n$$$.
We can consider only $$$k$$$ equal to $$$\frac{n}{p}$$$, where $$$p$$$ is a prime divisor of $$$n$$$.
Let's note that the answer for $$$k=x$$$ and $$$k=\gcd(x,n)$$$ is the same.
Indeed, for the number $$$k$$$ we will visit the numbers with indices $$$s + ik \mod n$$$ for $$$i$$$ from $$$0$$$ to $$$n-1$$$ inclusive. From this we can see that the index of the $$$i$$$-th number coincides with the index of the $$$(i + \frac{n}{\gcd(k,n)})$$$-th, and if we look at two indices whose difference $$$l$$$ satisfies $$$l<\frac{n}{\gcd(k,n)}$$$, then they are different, since $$$k\cdot l \mod n \neq 0$$$. Therefore, the answer is (the sum of the numbers with indices $$$s + ik \mod n$$$ for $$$i$$$ from $$$0$$$ to $$$\frac{n}{\gcd(k,n)}-1$$$) $$$\cdot\gcd(k,n)$$$.
Now let's prove that the first $$$\gcd(k,n)$$$ numbers are the same for $$$(s,x)$$$ and $$$(s,\gcd(x,n))$$$, note that only those indices that have the same remainder as $$$s$$$ when divided by $$$\gcd(x,n)$$$ are suitable, but there are only $$$\frac{n}{\gcd(k,n)}$$$ of such indices, and we proved that we should have $$$\frac{n}{\gcd(k,n)}$$$ of different indices, so they are all represented once, therefore the answer for $$$k=x$$$ and $$$k=\gcd(x, n)$$$ is the same, because the sum consists of the same numbers.
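The claim can be sanity-checked with a tiny brute force (illustrative only; 0-based indices, function name is mine):

```python
from math import gcd

def cycle_sum(a, s, k):
    # Sum of a[(s + i*k) % n] over i = 0..n-1, written via the cycle
    # structure: gcd(k, n) copies of one cycle of length n / gcd(k, n).
    n = len(a)
    g = gcd(k, n)
    return g * sum(a[(s + i * k) % n] for i in range(n // g))

a = [3, 1, 4, 1, 5, 9]
for s in range(len(a)):
    for k in range(1, 12):
        # answer for k equals answer for gcd(k, n)
        assert cycle_sum(a, s, k) == cycle_sum(a, s, gcd(k, len(a)))
```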
So, we only need to consider $$$k$$$ that are divisors of $$$n$$$. This already passes the tests if we write, for example, a segment tree, but we don't want to write a segment tree, so we go further and prove that for $$$k_1 = x$$$ the answer is less than or equal to the answer for $$$k_2 = x \cdot y$$$, if $$$k_1$$$ and $$$k_2$$$ are divisors of $$$n$$$. Why is this so?
Note that for the number $$$k$$$, the positions split into $$$\gcd(k,n)$$$ groups of exactly $$$\frac{n}{\gcd(k,n)}$$$ numbers each, with each number in exactly one group, and for any $$$s$$$ the answer is (the sum of the group that $$$s$$$ belongs to) $$$\cdot\gcd(k,n)$$$.
Let's look at the answer for the optimal $$$s$$$ for $$$k_1$$$, and call the group on which it is reached $$$t$$$. Note that for $$$k_2$$$, different values of $$$s$$$ give $$$y$$$ disjoint groups that partition $$$t$$$. Let $$$m_i$$$ be the sum of the $$$i$$$-th group. Now we need $$$\operatorname{max}(m_1, m_2, \ldots, m_y) \cdot y \geq m_1 + m_2 + \ldots + m_y$$$, which is true for any $$$m_i$$$ (this is exactly the second hint).
So we only need to iterate over the divisors equal to $$$\frac{n}{p}$$$, where $$$p$$$ is a prime divisor of $$$n$$$; now the solution passes with a set. Hurray!
For each divisor $$$d$$$, we store the answers for all pairs $$$(s, d)$$$ with $$$s\le\frac{n}{d}$$$, together with their maximum. They can be computed initially in $$$O(n \log n)$$$ for one $$$d$$$; each update changes one of the values, which can be done in $$$O(\log n)$$$.
The complexity of the author's solution is $$$O(6\cdot n\operatorname{log}(n))$$$, where 6 is the maximum number of distinct prime divisors of $$$n$$$.
1718D - Permutation for Burenka
Try to reduce the task to a single array and tree.
Try to reduce the tree to a matching in a bipartite graph.
The first part: For each $$$a_i=0$$$, a segment of suitable values.
The second part: The set $$$S$$$ and the number $$$d$$$.
There exist $$$L$$$ and $$$R$$$ such that the answer to the query is ''YES'' if and only if $$$L \le d \le R$$$.
Let's build a tree recursively starting from the segment $$$[1, n]$$$, at each step we will choose the root of the subtree $$$v = \operatorname{argmax}([p_l, p_{l+1}\ldots, p_r]) +l - 1$$$, then recursively construct trees for subtrees $$$[l, v-1], [v+1, r]$$$ if they are not empty, then create edges from the root to the roots of these subtrees. What we got is called a Cartesian tree by array. This Cartesian tree will be mentioned further in the analysis.
It is easy to see that Cartesian trees by arrays coincide if and only if these arrays are similar. Then the necessary and sufficient condition for the array $$$a$$$ to be similar to $$$p$$$ is that for any pair $$$u, v$$$ such that $$$u$$$ is in the subtree $$$v$$$, $$$a_v > a_u$$$ is satisfied, in other words $$$a_v$$$ is the maximum among the numbers in the subtree of the vertex $$$v$$$.
Let's call the position $$$i$$$ initially empty if initially $$$a_i = 0$$$.
Let's call an array $$$a$$$ almost similar to $$$p$$$ if for any pair $$$u, v$$$ such that $$$u$$$ is in the subtree of $$$v$$$, either $$$a_v > a_u$$$ holds, or both positions $$$u$$$, $$$v$$$ are initially empty.
Let's prove that if there is a way to fill in the gaps in $$$a$$$ to get an array almost similar to $$$p$$$, then there is also a way to fill in the gaps to get a similar to $$$p$$$ array $$$a$$$.
Indeed, consider an array $$$a$$$ almost similar to $$$p$$$. Let's walk through the tree recursively starting from the root. At the step for a vertex $$$v$$$, first recurse into all children of $$$v$$$; after that, each child holds the maximum of its subtree. Let $$$u$$$ be the child with the maximum value. If $$$a_v > a_u$$$, everything is fine. Otherwise $$$a_v < a_u$$$, and note that $$$v$$$ is an initially empty position, because every initially non-empty position is the maximum of its subtree (this is easy to see from the definition), while $$$v$$$ is not. Note that $$$u$$$ is also an initially empty position: otherwise we have never changed $$$a_v$$$ or $$$a_u$$$, so in the original array almost similar to $$$p$$$ we had $$$a_v > a_u$$$, which is a contradiction. So $$$v, u$$$ are both initially empty, we can perform $$$\operatorname{swap}(a_v, a_u)$$$, and the condition holds.
After executing this algorithm, we changed only the initially empty elements and got $$$a$$$ similar to $$$p$$$. Q.E.D.
How to check the existence of an array almost similar to $$$p$$$? Define $$$r_i$$$ as the minimum of $$$a_v$$$ over the initially non-empty positions $$$v$$$ such that $$$i$$$ is in the subtree of $$$v$$$, and $$$l_i$$$ as the maximum of $$$a_v$$$ over the initially non-empty positions $$$v$$$ in the subtree of $$$i$$$. It is easy to see that $$$a$$$ is almost similar to $$$p$$$ if and only if $$$l_i < a_i < r_i$$$ holds for all $$$i$$$.
That sounds incredibly good. So all we need is to find a matching between the set $$$S$$$ together with the number $$$d$$$ and the segments $$$[l_i, r_i]$$$ for the initially empty positions $$$i$$$. Now it is easy to prove that the suitable values of $$$d$$$ form a contiguous segment (say the answer is yes for $$$d_1$$$ and we don't know the answer for $$$d_2$$$: look at the alternating path from $$$d_1$$$ to $$$d_2$$$ in a valid matching for $$$d_1$$$; the answer for $$$d_2$$$ is yes if and only if such a path exists, and from the structure of a matching of points with segments one can see that an alternating path exists exactly for a contiguous segment of values $$$d_2$$$). The boundaries can be found by binary search or by greedy matching twice.
Final complexity is $$$O(n \operatorname{log} n)$$$ or $$$O(n \operatorname{log}^2 n)$$$
You can notice that the operation sequences $$$1\, i\, j$$$, $$$2\, k\, t$$$ and $$$2\, k\, t$$$, $$$1\, i\, j$$$ lead to the same result. What does this mean?
This means that operations can be represented as two permutations describing which swaps will occur with rows and which swaps will occur with columns.
Try to reduce this problem to a graph isomorphism problem.
Let's have two permutations: $$$p$$$ of length $$$n$$$ and $$$q$$$ of length $$$m$$$. Initially, both permutations are identity. For operation $$$1\, i\, j$$$ we execute $$$\operatorname{swap}(p_i, p_j)$$$; for operation $$$2\, i\, j$$$ we execute $$$\operatorname{swap}(q_i, q_j)$$$. It is easy to see that after performing all operations, position $$$i, j$$$ contains $$$a_{p_i,q_j}$$$.
Now we need to find permutations of $$$p, q$$$ so that $$$a_{p_i,q_j} = b_{i, j}$$$.
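A quick way to convince yourself of the observation (a toy check, not the author's code; 0-based indices):

```python
def apply_ops(a, ops):
    """Compose swap operations into row/column permutations p, q and
    read the resulting matrix as a[p[i]][q[j]] (sketch of the claim)."""
    n, m = len(a), len(a[0])
    p, q = list(range(n)), list(range(m))
    for t, i, j in ops:                # t = 1: swap rows, t = 2: swap columns
        if t == 1:
            p[i], p[j] = p[j], p[i]
        else:
            q[i], q[j] = q[j], q[i]
    return [[a[p[i]][q[j]] for j in range(m)] for i in range(n)]

a = [[1, 2], [3, 4]]
# Interleaving row and column swaps in either order gives the same matrix.
assert apply_ops(a, [(1, 0, 1), (2, 0, 1)]) == apply_ops(a, [(2, 0, 1), (1, 0, 1)])
```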
Let's look at a bipartite graph where the first part is the rows and the second is the columns. If $$$a_{i,j}\ne 0$$$, add an undirected edge between $$$i$$$ and $$$j$$$ of color $$$a_{i, j}$$$. If we construct the same graph for $$$b$$$, then we need to check two edge-colored bipartite graphs for isomorphism, and as we remember, by the problem statement no vertex can be incident to two edges of the same color. Let's call them graph $$$a$$$ and graph $$$b$$$.
This problem is largely about implementation, so from this point on we describe the steps you can observe in the author's solution 168728958.
Let $$$n < m$$$; otherwise swap $$$n$$$ and $$$m$$$. Let's try to match the vertices of graph $$$a$$$ to the vertices of graph $$$b$$$. Go through the indices from $$$1$$$ to $$$n$$$ and choose which vertex of the first part of graph $$$b$$$ to match to the $$$i$$$-th vertex of the first part of graph $$$a$$$. If a pair has already been found earlier, skip the step; otherwise try every vertex of the first part of $$$b$$$ that does not yet have a pair (there are at most $$$n$$$ of them). Once two vertices are matched, the edges are matched uniquely (because no vertex has two edges of the same color), so we propagate the matching recursively along edges (if we have mapped vertex $$$v$$$ to vertex $$$u$$$, and $$$v$$$ has an edge of color $$$x$$$ to $$$i$$$, and $$$u$$$ has an edge of color $$$x$$$ to $$$j$$$, then it is easy to prove that in the answer $$$i$$$ must be mapped to $$$j$$$), thereby restoring the entire component. If the sum of the numbers of vertices and edges in the component is $$$k$$$, then we perform at most $$$n \cdot k$$$ actions to restore the component.
Obtaining the sequence of swaps from the permutations is a matter of technique; I will not explain it here, but it is a separate function in the author's solution, and you will understand it.
In total, our solution works in $$$O(n(n\cdot m + n + m))$$$, or rather $$$O(\min(n, m)(n\cdot m + n + m))$$$, where $$$n\cdot m + n + m$$$ is the total number of vertices and edges in all components.
Feel free to ask any questions!
1718F - Burenka, an Array and Queries
The only hint for this problem: don't try to solve it, it's not worth it.
In order to count the numbers from $$$1$$$ to $$$C$$$ that are coprime with $$$x$$$, we write out its distinct prime divisors. Let these be the primes $$$a_{1}, a_{2}, \ldots, a_{k}$$$. Then you can find the answer in $$$O(2^{k})$$$ using the inclusion-exclusion formula.
Because $$$\left\lfloor\frac{a}{b\cdot c}\right\rfloor = \left\lfloor\frac{\left\lfloor\frac{a}{b}\right\rfloor}{c}\right\rfloor$$$, a similar statement is made for $$$k$$$ numbers (you can divide completely in any order).
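These two hints can be illustrated with a small sketch (my own code; factorization is done naively, which is fine for illustration):

```python
def coprime_count(C, x):
    """Count integers in [1, C] coprime with x by inclusion-exclusion
    over the distinct prime divisors of x."""
    primes = []
    d, t = 2, x
    while d * d <= t:                  # naive factorization of x
        if t % d == 0:
            primes.append(d)
            while t % d == 0:
                t //= d
        d += 1
    if t > 1:
        primes.append(t)
    total = 0
    for mask in range(1 << len(primes)):
        prod, bits = 1, 0
        for i, p in enumerate(primes):
            if mask >> i & 1:
                prod *= p
                bits += 1
        total += (-1) ** bits * (C // prod)   # inclusion-exclusion term
    return total
```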
Let's split the primes up to $$$2\times 10^4$$$ into 2 classes — small ($$$\leq 42$$$) and large (the rest). There are 13 small numbers, and 2249 large ones. Separately, we will write down pairs of large numbers that in the product give a number not exceeding $$$10 ^ 5$$$. There will be 4904 such pairs.
Let's match each set of small primes with a bitmask. Define $$$dp[mask]$$$ — the alternating (inclusion-exclusion) sum over the submasks of $$$mask$$$ of the numbers $$$\left\lfloor\frac{C}{a_{1}\cdot a_{2}\cdot\ldots \cdot a_{k}} \right\rfloor$$$, where $$$a_{1}, \ldots, a_{k}$$$ are the primes of the submask. Similarly, define $$$dp[mask][big]$$$ (in addition to the mask, there is a large prime $$$big$$$) and $$$dp[mask][case]$$$ (in addition to the mask, there is a pair of large primes $$$case$$$). Each $$$dp$$$ can be computed in $$$O(states\times bits)$$$, where $$$states$$$ is the number of states and $$$bits$$$ is the mask size (number of bits).
If we write out all the large primes on the segment, and $$$mask$$$ is the mask of small primes on it, then the answer for this segment is $$$dp[mask] + \sum dp[mask][big_{i}] + \sum dp[mask][case_{i}]$$$ (over the $$$big$$$ and $$$case$$$ occurring on the segment). Thus, each query can be answered in about 7000 calls to $$$dp$$$.
In order to maintain the set of primes on a segment, you can use Mo's algorithm.
Final complexity is $$$O(n\sqrt{n} + q\times(\pi(m)+casesCount(C)))$$$
It is worth noting that (with a very strong desire) it is possible to optimize the solution further using AVX: speed up the sums over submasks in the dp by 16 times and the computation of $$$dp[mask][big]$$$ by 8 times, which allows answering a query in ~5000 (instead of 7000) calls to $$$dp$$$, with precalculation in $$$4\cdot 10^7$$$ operations instead of $$$7\cdot 10^8$$$ (in fact, the precalculation works much faster due to compiler optimizations).
Please rate the problems, it will help us make the problems better next time!
You can choose one best problem, or not choose if you think there is no such task.
In Div1D, how is it easy to see that the suitable $$$d$$$ will form a contiguous segment? Can somebody please elaborate?
I think it's by far the hardest part of the entire problem, so the editorial's claim that it's easy is quite baffling.
But here's how someone explained it to me: by Hall's theorem, for there to be a matching between $$$x$$$ and the intervals, you need that for every set of intervals $$$I$$$, the number of elements of $$$S$$$ inside these intervals is at least the number of intervals (let's say $$$x(I) \ge N(I)$$$). In other words, your added value $$$d$$$ is valid if and only if it is inside the union of intervals of every set $$$I$$$ with $$$x(I) < N(I)$$$.
Now the trick is to realize that sets of intervals whose union is not a contiguous segment don't matter -- if a set whose union is two or more disjoint parts has $$$x(I) < N(I)$$$, then definitely one of the parts violates the condition as well, and that part alone would be a more restrictive condition on the added value $$$d$$$. Therefore the valid values of $$$d$$$ are in the intersection of some contiguous segments, which is a contiguous segment.
Wow!! Really nice proof. My approach didn't utilize this observation; it might be interesting to you.
Approach: the problem is equivalent to the following: we have $$$k$$$ intervals, $$$k-1$$$ fixed points and one query point, and we have to check whether we can perfectly match points to intervals.
The idea is to find those intervals whose removal still allows a perfect matching of the given $$$k-1$$$ points with the remaining $$$k-1$$$ intervals.
Let's match the intervals greedily:
Sort the intervals according to left endpoints
Sort the points in increasing order
Move from left to right; for the current point, say $$$x$$$, insert into a set the right endpoints of all intervals whose left endpoint is $$$\le x$$$ (can be done with a pointer).
Match the point with the segment having the smallest $$$r$$$ value.
Also for each point, we can calculate what it will be matched to if the segment in the previous step was removed (-1 for those points that don't have such an option).
In the end, the set should have exactly one spare segment.
Let $$$seg_i$$$ be the segment allotted to the $$$i^{th}$$$ point $$$(1 \le i \le k - 1)$$$ and $$$seg_k$$$ be the only spare segment.
We can be sure that we can match $$$seg_k$$$ to $$$d$$$ ensuring that the remaining segments still have a perfect match.
Let's make a DP where $$$dp_i$$$ denotes whether we can remove $$$seg_i$$$ such that the remaining segments still perfectly match the given $$$k-1$$$ points. Initially, $$$dp_k = true$$$.
Finally, we have all the segments in which $$$d$$$ can lie. Since we are not assuming these segments form a contiguous section, we can answer queries offline (using sweepline) or online (using a segment tree).
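The greedy matching step above (sort by left endpoint, always take the smallest available right endpoint) can be sketched as a feasibility check; names are mine, and this is only the core of the commenter's solution, not the DP part:

```python
import heapq

def can_match(points, intervals):
    """Can each point be matched to a distinct interval [l, r] containing
    it?  Classic heap-by-right-endpoint greedy; extra intervals may be
    left unmatched (spare)."""
    points = sorted(points)
    intervals = sorted(intervals)             # by left endpoint
    heap, j = [], 0
    for x in points:
        while j < len(intervals) and intervals[j][0] <= x:
            heapq.heappush(heap, intervals[j][1])
            j += 1
        # match x with the smallest right endpoint that still covers it
        if not heap or heapq.heappop(heap) < x:
            return False
    return True
```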
Code
"Transposed" version of your solution can be used to prove that "suitable $$$d$$$ will form a contiguous segment" as well (and as a bonus, with a few standard tricks it can be implemented in $$$O(n \alpha(n))$$$, faster not only in theory but in practice).
Can we also just go in order of decreasing right endpoint and match every segment to its rightmost available value? I feel like this strategy works and is simpler.
Neat! Eventually it needs to be looked at in a way similar to this because we still need a way to construct the interval of valid $$$d$$$, even if we know it's an interval.
And it's easy to show it's an interval from your solution: $$$dp_i = dp_{nxt_i}$$$, and obviously $$$i$$$ and $$$nxt_i$$$ have at least one common point by the definition of $$$nxt_i$$$.
1718A2: I think the observations (segments of length 1 and 2) are pretty simple and obvious. The more interesting part is how to implement a solution from them. I worked on it for about an hour and did not even solve A1.
The editorial answers this with "this amount can be calculated by dynamic programming or greedily." Thanks a lot!
Yeah, the editorial should definitely elaborate on that. And some of these observations aren't even needed for D1.
My DP approach: the optimal solution for an array of $$$n$$$ elements can be constructed by taking the optimal solution for the first $$$k$$$ elements (for some $$$k < n$$$) and then applying the XOR chain (overlapping segments of size 2) to the rest of the elements. We need to try each value of $$$k$$$, however, which leads to an $$$O(n^2)$$$ solution, maintaining both an array of optimal times and an array of accumulated XOR chain results for each possible starting index. This doesn't work in D1.
Greedy approach: maintain a running XOR while also storing each intermediate result in a set. If a result appears twice, i.e., $$$r \oplus a_i \oplus a_{i + 1} \oplus \cdots \oplus a_{i + k} = r$$$, then the subarray $$$[a_i, \ldots, a_{i + k}]$$$ has a XOR sum of 0. We don't need to know where the subarray starts ($$$i$$$); we just need to detect when it ends (when the running XOR is not a new value), so we can update the savings counter and reset the running XOR chain. You can check out just how simple it is over here: https://codeforces.net/contest/1719/submission/168640804
That being said, the observation that a running XOR will encounter an old element if a subarray has a XOR sum of 0 should definitely be included in the editorial, in my opinion, since that's essential to the greedy approach (unless the authors had a different greedy approach in mind).
Hi, I found a nice dp solution for div1 A1 :- https://codeforces.net/contest/1718/submission/168713136
Can you please help me understand the transitions in the following solution?
It's literally according to the editorial.
How do you decide N=8191 ? It won't work with any other value.
It is just the maximum possible XOR of two numbers between $$$1$$$ and $$$5000$$$ ($$$8191 = 2^{13} - 1$$$).
Why don't we need to know where the subarray starts?
Say I have a XOR equal to 0 for the subarray from index 4 to 20, and inside it a subarray from index 9 to 16 which has XOR = 0 as well. Then shouldn't ignoring where the subarray starts be wrong in this case? I would have 2 saves here, but if I clear the set when reaching index 16, how would I know about the save for 4 to 20?
Just curious about this and couldn't actually understand how this works.
Any help would be really appreciated :)
We only want to count the number of non-overlapping subarrays with XOR of 0. If a 0-XOR subarray is inside a larger 0-XOR subarray, we will not benefit from exploiting the larger subarray, since we don't want to waste time in revisiting the smaller subarray.
For your example, if we find that 9-16 has a XOR of 0, then we can save 1 second from the 9-16 subarray. The "saving" means that we can perform the operations on [9, 10], [10, 11], ..., [15, 16]. We always choose $$$x$$$ as the current value of the first index of the pair (so the first index turns into 0). Since indices 9-16 have a XOR value of 0, the last operation [15, 16] will also turn index 16 into 0, allowing us to make the 9-16 subarray (8 elements) all 0s in 7 seconds.
The fact that the subarray from 4-20 also has a XOR of 0 is not something we can exploit anymore. It's true that, if we then perform the operation on pairs like [4, 5], [5, 6], ..., [18, 19], and [19, 20], then we will get them all to 0, but this requires revisiting indices 9 to 16, which are already 0! The operations from [8, 9] to [16, 17] take 8 seconds to perform, when it would have been much faster to just apply the operation on [8] alone, and then move on to [17, 18] and so on.
So it makes no sense to try to take advantage of BOTH the 9-16 subarray and the 4-20 subarray, because the moment you decide to exploit the 9-16 subarray, you turn this range into 0 (saving 1 second) and you don't want to touch this 9-16 subarray ever again. Now, it's possible that we could instead decide to exploit the 4-20 subarray directly without exploiting the 9-16 array; however, it's always better to exploit the subarray that ends the earliest first, since then you have more indices available for future exploits (e.g., if there is a subarray from say, 18-25 that has a XOR of 0, then we can save 1 second from 9-16 and another 1 second from 18-25; whereas if we instead decided to save 1 second from 4-20, then we are no longer able to save another second through 18-25 since 18-20 have been zero'd out).
Let me know if you're still confused about anything here.
I am quite the opposite: I couldn't do anything during the contest for either version, and only when I was told the hint about segments of length 1 and 2 did I solve the hard version instantly.
Well, the above code is fairly simple, but I still do not really understand why it works.
What is the key idea we can see here instantly?
The key idea is to notice that the extra XOR that arrives at $$$a_i$$$ before we turn $$$a_i$$$ to zero with one operation can be represented as the XOR of a subarray ending at index $$$i-1$$$ (possibly empty), where this subarray is covered by consecutive operations of size 2 before reaching index $$$i$$$. So to have $$$a_i$$$ already 0 before reaching it, we have to find a subarray ending at index $$$i$$$ with XOR sum equal to 0 (i.e., find the maximum $$$j$$$ such that $$$pref_i \oplus pref_j = 0$$$).
As far as I understood:
You can always use $$$n$$$ seconds to turn all elements to zero by just taking all indices consecutively, and you want to improve this result. How can you do it?
You notice that there is no sense in using a range of size greater than 2, given the formula for the cost in seconds. What can you do with this?
If you take two numbers a[i] and a[i+1] and apply xor with a[i] to them you can get two results:
1) [0,0] it means a[i]==a[i+1] => you saved 1 second here!
2) [0,X]: now you can apply xor with X to the next pair [X, a[i+2]] and so on, until you either get a [0,0] pair (and you have saved 1 second!), or reach the end of the array and turn the last element to zero with a range of size 1.
So what do these both situations have in common? xor of all elements in covered ranges equal to 0.
So now you can convert the problem into finding the maximum number of disjoint subarrays with xor = 0 (every such subarray saves 1 second), and the answer will be n - (count of such subarrays).
"Unable to parse markup [type=CF_MATHJAX]" in problem D.
Can anyone share their div 1c solution using segment tree which passes for all factors of $$$n$$$ as claimed in the editorial? I used segment tree but got TLE.
Here is mine: 168744644.
I believe it is $$$O(n\sqrt{n} + q\sqrt{n}\log{n})$$$ or something like that.
Another approach for Div2 C, based on Editorial one.
Let's calculate the next greater element and the previous greater element for each position. Then, for each $$$a_i$$$, we can calculate how many rounds the athlete will win for infinite $$$k$$$: if a previous greater element exists, he won't win a single round; otherwise, the answer equals the distance between the athlete and his next greater element.
When answering queries, one should notice that athlete $$$i$$$ starts fighting after $$$i-1$$$ rounds — this is how long he has to wait to reach the second position and start fighting.
Time complexity: $$$O({n})$$$
https://codeforces.net/contest/1719/submission/168605787
Or we could do my weird solution:
submission: 168590217
instead of the previous greater element max of segment[0, i] should also work, right?
Yes, it's the same
I thought quite similarly, and wrote this code: here. Can anyone detect the error here?
I did exactly what you said, i feel it's more intuitive.
Oh, wow, am I the only one who actually started filling up Sprague-Grundy values for Div2A (Chip Game) to realize and prove that they form a binary chessboard? I felt that this was more straightforward (albeit requiring knowledge on the Sprague-Grundy Theorem) than the number-theoretic observations.
Is it planned for solutions to be posted later? I'm curious to see whether the authors' solution for Div2E/Div1B actually passes some of the really nasty but valid input test cases.
I solved problem C (div2) in $$$O(n)$$$ time complexity:
submission : 168668282
Me too, my submission: 168676244
There are $$$2$$$ cases in this problem.
I use $$$tmp$$$ to denote the number of rounds needed to bring the biggest value to the front of the line.
And the array $$$cnt_i$$$ is used to count the number of victories of the $$$i$$$-th player during those first $$$tmp$$$ rounds.
If $$$k \ge tmp$$$
There are two subcases in this condition:
If $$$val_{id}$$$ is the biggest number ($$$val_{id} == n$$$), then the answer should be
cnt[id] + k - tmp
, because this player wins every round after round $$$tmp$$$. Otherwise, the answer is $$$cnt_{id}$$$.
Otherwise ($$$k < tmp$$$)
The two subcases here ($$$id = 1$$$ or $$$id \ne 1$$$) are handled by the first term of the formula below:
I use an array $$$r$$$ to mark, for each element, the first element to its right that has a bigger value (you can compute $$$r$$$ with a stack in $$$O(n)$$$ at the beginning).
The answer in this condition is
(id != 1) + min(r[id] - id - 1, k - id + 1)
In this way, I can get each answer in $$$O(1)$$$, using $$$O(n)$$$ time to precompute $$$tmp$$$, $$$cnt$$$, and $$$r$$$.
Is there an English editorial for this contest?
Edit: The issue has been fixed. Thanks.
sorry, I accidentally removed it, but now it is returned
Any proof for Div 1B/2E why greedily choosing the letter with maximum frequency as the current last block will work?
You could maybe search more on Zeckendorf Representation, that's a formal name for the fibonacci-sum representation of a positive integer.
Assume the answer is YES, with $$$n+1$$$ blocks total. Then we can prove that the count of the letter with the third-highest frequency can't be greater than $$$F_n$$$. If the last block used the letter with the second-highest frequency, then we can prove (because $$$F_n = F_{n-1} + F_{n-2}$$$) that the next two blocks would both have to come from the letter with maximum frequency, which leads to a contradiction. Therefore the last block has to come from the letter of maximum frequency.
The parts I didn't prove can be done by messy but routine bounding.
I'm sorry, but I really do not understand the editorial of Div2D, especially the "n − (the maximum number of disjoint sub-segments with a xor of 0)" part.
tried to make it clearer
Thank you. Got it!!
Another way to understand the solution to div1B is Zeckendorf's theorem, which states that each positive integer can be uniquely represented as the sum of a set of distinct Fibonacci numbers, no two of which are adjacent.
We can apply this representation directly to the frequency of each letter and solve the problem.
And due to this uniqueness of the representation, the greedy solution will also always work.
DIV2 (ABCDEF)video Editorial for Chinese :
Bilibili
I remember now that there was a problem similar to Div2 D on USACO, except that it asked to count the number of subsegments whose sum is divisible by some integer $$$k$$$, if I remember correctly. Similar observations to that problem, but not quite the same.
Does anyone have the DP approach to Div2 D1? Also, is there a mistake in the Hint for A1 2? Is it a_i = v?
Hello there! A question to the authors of the round. I see that there are a lot of problems about Buryatia in this round. Is one of you from it, or is it just a meme? Thanks.
During the contest my solution to problem B was accepted, but after the contest it is showing that the time limit was exceeded on test case 5. Was that a bug?
Nope, not a bug. Looks like you failed system tests.
So, to put it shortly, the test cases that are run on your code during the contest are called pretests. These are normally not too big but are certainly meant to be strong. After the round is over, there are system tests, which are basically extra cases added on to the pretests to make sure that your solution is indeed efficient/correct.
I have a solution with bitset in $$$O(\frac{nC\log n}{p(k)}+\frac{nC\log n}{w}+\frac{qC}{w}+\frac{2^kC}{w})$$$.
We just calculate the bitset for every subset of the first $$$k$$$ primes. For larger primes, we can use the trick called "猫树分治" (I don't know how to translate it into English; maybe it's called "cat tree divide and conquer").
Since std::bitset has an extremely small constant, we can solve this problem by this weird method.
Edit:
$$$p(k)$$$ is the $$$k$$$-th prime. I can't prove its exact bound, but it has a small constant.
Edit2:
I think $$$k=14$$$ or $$$k=15$$$ would be the fastest, but that takes too much memory. It runs in 1668 ms with $$$k=13$$$.
Hello SomethingNew, can you please explain, in problem 1718A2 (hard version): what do disjoint sub-segments with XOR 0 actually mean?
I know it is a very noob question and you could say I should google it — I have searched it and sort of understood it, but when I applied that limited knowledge to this problem, I failed on my own test cases/examples.
For example, in case you want to know, here is the array I am confused about:
I am not clearly able to find the "disjoint sub-segments with XOR 0" here.
If prefix_xor[i] = prefix_xor[j], then a[i+1] ^ a[i+2] ^ ... ^ a[j] = 0. "Disjoint sub-segments" just means disjoint (non-overlapping) subarrays.
Div.1 A~D solutions for Chinese readers. https://www.cnblogs.com/jklover/p/16595927.html
A2. Burenka and Traditions (hard version) https://codeforces.net/contest/1718/submission/169707649 — I just can't understand why this is giving TLE, although it's O(N).
https://codeforces.net/blog/entry/62393
read this
Thanks a lot, bro. That really was quite helpful.
I don't think the editorial of 1E is understandable, but my poor English doesn't allow me to write a whole editorial.
But I can give a proof for the time complexity. Consider divide the connected components of the two graphs into groups, such that each component in a group has the same number of left vertices. Then we can consider only matchings between vertices that their connected components are in the same group.
Now we only consider components in the $$$i$$$-th group. Let $$$x_i$$$ be the number of left vertices in each component, $$$y_i$$$ be the number of components in the graph $$$a$$$, and $$$z_i$$$ be the number of edges in the graph $$$b$$$. If $$$x_i=0$$$, each of the components consists of only right vertices, and it is easy to deal with this situation.
Otherwise $$$x_i\ne0$$$. Consider a component in $$$a$$$ and a vertex $$$u$$$ in it, we are looking for a vertex to match with $$$u$$$. For an edge in $$$b$$$, suppose the edge belongs to the component $$$S$$$, then the edge will contribute to the time only when we try to match $$$u$$$ with some left vertex in $$$S$$$. So the time complexity of dealing with this group is $$$O(x_iy_iz_i)$$$, and the total time is $$$\sum x_iy_iz_i\le(\sum x_iy_i)(\sum z_i)=O(n^2m)$$$. Now we can solve the whole problem in $$$O(nm\sqrt{nm})$$$ time.
Coincidentally, I noticed that Div2C this time is very similar to 569 Div2C. XD
Stuck on Fibonacci Strings: 1718B (link: https://codeforces.net/problemset/problem/1718/B) for the past few days; I can't figure out why it is going wrong, any help would be really nice.
Submission: https://codeforces.net/problemset/submission/1718/170409697 Test 9: WA on case 136 — found YES instead of NO.
Take a look at Ticket 16135 from CF Stress for a counter example.
Thanks a lot!
Can anyone find the error in this code? It gets WA on test 2.560 :(
1718A1 — Burenka and Traditions (easy version)
"Note that if a segment of length 2 intersects with a segment of length 1, they can be changed to 2 segments of length 1."
Can someone help me understand this. If a segment of length 1 intersects a segment of length 2, it lies completely inside it, right? I do not understand the purpose of this statement. I know that I'm missing something very silly.
I just did the basic 2-length segment thing, which fails: 183824117. I cannot find a case where it fails, though. Please, if somebody could help me out. Thanks!
Also, if possible, it would be very helpful if the problem setters could create small test cases (preferably corner cases) that fit the status screen, say for the first five test cases. It would help a lot while upsolving/practising. I know WA is a mistake on our part, but doing this would help us a lot. Thanks!
Take a look at Ticket 16540 from CF Stress for a counter example.
Thanks! I handled this case but I still don't understand the editorial :(
Funny, but there is a brute-force solution to problem Div1/B: 184648373
Fibonacci Strings : 1718B submission: https://codeforces.net/contest/1718/submission/194429652
Why am I getting TLE??
Got it — the fix was changing int to long long!!
It's been a while, but for problem C we can use a monotonic stack. Good luck everyone!!!
Another stupid solution to div1F:
First, do divide and conquer on the queries to remap each of them to "answer this query given the set of primes in my range" (storing the set in a bitset of size O(number of primes under M)).
Then, sort the queries in lexicographical order of their bitsets (where primes[0] = 2, primes[1] = 3, etc.). Process them left to right, working through the differences between adjacent bitsets: whenever we add/remove a bit, naively do O(C / primes[i]) work on all multiples of that prime to maintain how many times each multiple is hit. We want to maintain how many indices are non-zero.
To really squeeze and cache-optimize this, make everything chars, since the max number of distinct prime factors under 2e4 is 6.
You can show that the sum of the O(C / primes[i]) operations takes about 2e4 * MAXC operations in the worst case [the sum 2·(1/2) + 4·(1/3) + 8·(1/5) + ... until we exhaust (max # of distinct prime factors) · N terms, times 2 · MAXC], but I guess 2e9 simple operations are fast enough.