A – Repeat ACL
Easy programming-language-knowledge check typical of the first task of AtCoder beginner contests. The following is a passing Kotlin submission:
fun main() {
    val k = readLine()!!.toInt()
    val ans = "ACL".repeat(k)
    println(ans)
}
B – Integer Preference
Assume there is an integer in both ranges and call it $$$x$$$. Note that both $$$x \ge \max(A, C)$$$ and $$$x \le \min(B, D)$$$ must therefore be true.
Thus $$$x$$$ exists if and only if $$$\max(A, C) \le \min(B, D)$$$ is true. This is the standard formula for finding the overlap of two ranges.
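For completeness, a minimal Kotlin sketch of this check (assuming the input is the four integers $$$A\ B\ C\ D$$$ on one line and that the expected output is "Yes"/"No"):
fun main() {
    val (a, b, c, d) = readLine()!!.trim().split(" ").map { it.toLong() }
    // An integer exists in both [a, b] and [c, d] iff max(a, c) <= min(b, d).
    println(if (maxOf(a, c) <= minOf(b, d)) "Yes" else "No")
}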
C – Connect Cities
Note that the initial network of cities can be divided into one or more components, where any city of a component can reach any other city in that component following roads, but there are no roads connecting components together.
If there are $$$c$$$ components, Snuke can connect two of them by choosing one city from each component and building a road between them, which combines them into a single component; so the answer is $$$c - 1$$$.
Components can be found using breadth-first/depth-first searches; however, there's an easier way if you already have the data structure known as DSU (disjoint-set union), and an implementation is available right in the dsu module of the AtCoder Library. Start with $$$c := N$$$; for each road given in the input, use same to check whether the two cities are already connected; if they aren't, merge them and decrement $$$c$$$.
Note that though the input is $$$1$$$-indexed, the AtCoder DSU implementation is $$$0$$$-indexed.
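Since this editorial's snippets are in Kotlin (where the C++ AtCoder Library isn't available), here is a minimal hand-rolled DSU sketch following the same same/merge idea, assuming the input is $$$N\ M$$$ followed by $$$M$$$ roads $$$a\ b$$$ ($$$1$$$-indexed):
class Dsu(n: Int) {
    private val parent = IntArray(n) { it }
    private val size = IntArray(n) { 1 }
    fun find(x: Int): Int {
        if (parent[x] != x) parent[x] = find(parent[x])  // path compression
        return parent[x]
    }
    fun same(a: Int, b: Int) = find(a) == find(b)
    fun merge(a: Int, b: Int) {
        var ra = find(a); var rb = find(b)
        if (ra == rb) return
        if (size[ra] < size[rb]) { val t = ra; ra = rb; rb = t }  // union by size
        parent[rb] = ra
        size[ra] += size[rb]
    }
}

fun main() {
    val (n, m) = readLine()!!.trim().split(" ").map { it.toInt() }
    val dsu = Dsu(n)
    var c = n
    repeat(m) {
        val (a, b) = readLine()!!.trim().split(" ").map { it.toInt() }
        if (!dsu.same(a - 1, b - 1)) {  // convert to 0-indexed
            dsu.merge(a - 1, b - 1)
            c--
        }
    }
    println(c - 1)
}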
D – TODO
Can be done with basic dynamic programming. Let $$$dp_i$$$ represent the number of sequences whose sum is $$$i$$$. $$$dp_0 = 1$$$, the empty sequence. Then $$$\displaystyle dp_i = \sum_{j=0}^{i-3} dp_j$$$, i.e. the sum of $$$dp_j$$$ for all integers $$$j$$$ between $$$0$$$ and $$$i-3$$$ inclusive, representing the addition of a number $$$\ge 3$$$ to each of the previous sequences. The final answer, naturally, is $$$dp_s$$$.
This can be easily calculated in $$$O(s^2)$$$, which is good enough to pass; however, it can be improved to $$$O(s)$$$ by maintaining a prefix sum of the $$$dp$$$ values.
An $$$O(\log s)$$$ solution is possible via matrix exponentiation.
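A minimal Kotlin sketch of the $$$O(s)$$$ prefix-sum version (assuming, as is common, that the count is requested modulo $$$10^9+7$$$):
fun main() {
    val mod = 1_000_000_007L  // assumed modulus
    val s = readLine()!!.trim().toInt()
    val dp = LongArray(s + 1)
    dp[0] = 1  // the empty sequence
    var prefix = 0L  // running value of dp[0] + ... + dp[i-3]
    for (i in 1..s) {
        if (i >= 3) prefix = (prefix + dp[i - 3]) % mod
        dp[i] = prefix
    }
    println(dp[s])
}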
E – TODO
We can take the following approach — for each $$$i$$$ from $$$2$$$ to $$$n$$$, try to match $$$(x_i, y_i)$$$ with the "best" previous point $$$(x_j, y_j), 1 \le j < i$$$, then update the answer.
Testing all previous points is too slow ($$$O(n^2)$$$). However, it is sufficient to keep only up to $$$4$$$ previous points to test against: $$$2$$$ of them are those that maximize and minimize the "score" $$$x_j + y_j$$$, while the other $$$2$$$ maximize and minimize $$$x_j - y_j$$$.
Why? Note that, taking any point as the center, the set of points at Manhattan distance exactly some constant $$$d$$$ from it forms a "diamond" shape, i.e. a square rotated 45 degrees. Thus, when $$$d$$$ is maximized, the furthest point must lie on at least one of the four sides of this square. As the sides lie on lines with equations of the form $$$x+y = C$$$ or $$$x-y = C$$$, the maximums and minimums of these two values are sufficient to find the furthest point in each of the 4 possible directions.
Thus we have a solution in $$$O(n)$$$.
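In fact the same observation can be applied globally instead of incrementally: the answer is simply the larger of the spreads of $$$x+y$$$ and $$$x-y$$$ over all points. A minimal Kotlin sketch (assuming the input is $$$n$$$ followed by $$$n$$$ lines of $$$x\ y$$$):
fun main() {
    val n = readLine()!!.trim().toInt()
    var maxSum = Long.MIN_VALUE; var minSum = Long.MAX_VALUE  // extremes of x + y
    var maxDif = Long.MIN_VALUE; var minDif = Long.MAX_VALUE  // extremes of x - y
    repeat(n) {
        val (x, y) = readLine()!!.trim().split(" ").map { it.toLong() }
        maxSum = maxOf(maxSum, x + y); minSum = minOf(minSum, x + y)
        maxDif = maxOf(maxDif, x - y); minDif = minOf(minDif, x - y)
    }
    // The furthest pair lies along one of the two diagonal directions.
    println(maxOf(maxSum - minSum, maxDif - minDif))
}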
F – Heights and Pairs
Define a bad pair as a pair of people whose heights are equal.
Let's define a useful function $$$\displaystyle pairs(n) := \frac{(2n)!}{n!\cdot 2^n}$$$, the number of ways to make $$$n$$$ pairs (good or bad) out of $$$2n$$$ people. Explanation: Start with any of $$$(2n)!$$$ permutations. When you pair them up, you don't care about the order within the pair, so you divide by $$$2$$$ a total of $$$n$$$ times. Then you also don't care about the order between the pairs, so you divide by $$$n!$$$.
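A sketch of computing $$$pairs(n)$$$ under modular arithmetic (the modulus $$$998244353$$$ here is an assumption; in a real submission you would precompute factorials once rather than per call):
const val MOD = 998_244_353L  // assumed modulus

// Fast modular exponentiation.
fun power(base: Long, exp: Long): Long {
    var result = 1L; var b = base % MOD; var e = exp
    while (e > 0) {
        if (e and 1L == 1L) result = result * b % MOD
        b = b * b % MOD
        e = e shr 1
    }
    return result
}

// pairs(n) = (2n)! / (n! * 2^n), computed modulo MOD via Fermat's little theorem.
fun pairs(n: Int): Long {
    var fact2n = 1L
    for (i in 1..2 * n) fact2n = fact2n * i % MOD
    var factN = 1L
    for (i in 1..n) factN = factN * i % MOD
    val denom = factN * power(2L, n.toLong()) % MOD
    return fact2n * power(denom, MOD - 2) % MOD
}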
The main idea involves the inclusion-exclusion principle. Let's assume we have a function $$$f(k)$$$ denoting the number of ways to fix exactly $$$k$$$ disjoint bad pairs of people. $$$f(0) = 1$$$, because there is exactly one set consisting of $$$0$$$ bad pairs: the empty set.
We want to find the number of matchings with at least one bad pair; then we can subtract it from $$$pairs(N)$$$ for the final answer.
Say we had a list of bad pairs $$$F_1$$$; note that $$$|F_1| = f(1)$$$. Each bad pair can be the "seed" of a set $$$BAD_{(x, y)}$$$, denoting the set of all bad matchings that contain the bad pair $$$(x, y)$$$. We want to find the size of the union of all $$$BAD$$$ sets, i.e. $$$|BAD_{F_1[1]} \cup BAD_{F_1[2]} \cup \dots \cup BAD_{F_1[f(1)]}|$$$. However, we only have the sizes of the individual $$$BAD$$$ sets, which are all $$$pairs(N-1)$$$, because we construct $$$N-1$$$ more pairs after the fixed bad pair. Thus the sum of their sizes is $$$f(1) \cdot pairs(N-1)$$$.
This, however, overcounts the size of the union, as bad matchings with more than one bad pair are counted in several $$$BAD$$$ sets. We can correct for this using the sum of the sizes of the pairwise intersections $$$BAD_X \cap BAD_Y$$$: fixing two bad pairs using the list $$$F_2$$$, the sum of the constructed matchings is $$$f(2) \cdot pairs(N-2)$$$.
Likewise we can obtain the sums of the sizes of the triple-wise intersections, and so on; applying the inclusion-exclusion principle then gives the formula
$$$\displaystyle |BAD_{F_1[1]} \cup \dots \cup BAD_{F_1[f(1)]}| = \sum_{k=1}^{N} (-1)^{k-1} f(k) \cdot pairs(N-k).$$$
And note that our final answer is therefore
$$$\displaystyle pairs(N) - \sum_{k=1}^{N} (-1)^{k-1} f(k) \cdot pairs(N-k) = \sum_{k=0}^{N} (-1)^{k} f(k) \cdot pairs(N-k).$$$
Now let's figure out how to find $$$f(k)$$$. First note that the exact heights don't matter, only whether they are equal. We can divide the people into components consisting of people of the same height. Collect all component sizes into an array $$$a$$$. For example, if $$$h = [1, 1, 1, 2, 2, 4]$$$, then $$$a = [3, 2, 1]$$$. This can be easily done using sorting and run-length compression.
Then from each component $$$a_i$$$, construct array $$$b_i$$$, indexed over $$$[0, \lfloor a_i/2 \rfloor]$$$, where $$$b_i[j]$$$ denotes how many ways are there to fix $$$j$$$ bad pairs from this component. $$$\displaystyle b_i[j] = \binom {a_i}{2j} \cdot pairs(j)$$$, since we take $$$2j$$$ people from the component and make $$$j$$$ pairs out of them.
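A sketch of building $$$b_i$$$ for one component, reusing the $$$pairs$$$ function from the sketch above and assuming factorial/inverse-factorial arrays fact / invFact have been precomputed up to the total number of people:
// b[j] = C(sz, 2j) * pairs(j): choose 2j people from the component, then pair them up.
fun buildComponentPoly(sz: Int, fact: LongArray, invFact: LongArray): LongArray {
    val b = LongArray(sz / 2 + 1)
    for (j in 0..sz / 2) {
        val choose = fact[sz] * invFact[2 * j] % MOD * invFact[sz - 2 * j] % MOD
        b[j] = choose * pairs(j) % MOD
    }
    return b
}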
In the case of a single component, $$$f(j) = b_1[j]$$$, but how do we combine the results of several components? Let's assume we have only two components. Then:
$$$\displaystyle f(j) = b_1[0] \cdot b_2[j] + b_1[1]\cdot b_2[j-1] + b_1[2]\cdot b_2[j-2] + ... + b_1[j]\cdot b_2[0] = \sum_{i=0}^j b_1[i] \cdot b_2[j-i] $$$.
This is called a convolution, and it is the same formula that appears when we multiply two polynomials of the form $$$p(x) = B_0x^0 + B_1x^1 + B_2x^2 + ... + B_nx^n$$$. The case with more components is similar, except that we have to find the product of several polynomials of different lengths.
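Written out as code, the formula above is simply (coefficients reduced modulo the assumed MOD from the earlier sketches; both arrays are assumed non-empty):
// Naive convolution of two coefficient arrays, directly from the formula above.
fun convolution(p: LongArray, q: LongArray): LongArray {
    val result = LongArray(p.size + q.size - 1)
    for (i in p.indices)
        for (j in q.indices)
            result[i + j] = (result[i + j] + p[i] * q[j]) % MOD
    return result
}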
However, computing the convolution of two polynomials of sizes $$$n$$$ and $$$m$$$ naively is $$$O(nm)$$$, which is too slow. There is just the right tool in the AtCoder Library for that: the function convolution. This function uses the Fast Fourier Transform method to convolve two polynomials in $$$O((n+m) \log (n+m))$$$ (this is a fascinating topic in its own right; feel free to "roll your own" or use other implementations). However, note that it only accepts two arguments, and convolving multiple polynomials could be inefficient if done in the wrong order: imagine you had about $$$N/2$$$ polynomials, all of them short except for one. If you convolved a short one into the longest one over and over, your submission would suddenly take $$$O(N^2 \log N)$$$ and time out.
Fortunately there is a simple fix: just put all the polynomials into a heap / priority queue that sorts them from shortest to longest. While there is more than one polynomial in that heap, withdraw two, convolve them, then put the result back in the heap.
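A sketch of that merge order, using whatever convolution function is available (the naive one above works for illustration; a fast NTT-based one is needed for the full constraints):
import java.util.PriorityQueue

// Convolve all component polynomials, always combining the two shortest first.
fun mergeAll(polys: List<LongArray>): LongArray {
    val heap = PriorityQueue<LongArray>(compareBy<LongArray> { it.size })
    heap.addAll(polys)
    while (heap.size > 1) {
        val p = heap.poll()
        val q = heap.poll()
        heap.add(convolution(p, q))
    }
    return heap.poll()  // assumes at least one polynomial was supplied
}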
And thus finally, you are able to obtain $$$f(k)$$$ by reading the coefficients of the final polynomial, and can apply the inclusion-exclusion formula above to obtain the answer.
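Putting the pieces together, a sketch of that last step, where $$$f$$$ holds the coefficients of the merged polynomial and $$$n$$$ is the number of pairs $$$N$$$:
// Inclusion-exclusion: answer = sum over k of (-1)^k * f(k) * pairs(n - k).
fun finalAnswer(n: Int, f: LongArray): Long {
    var answer = 0L
    for (k in 0 until minOf(f.size, n + 1)) {
        val term = f[k] * pairs(n - k) % MOD
        answer = if (k % 2 == 0) (answer + term) % MOD else (answer - term + MOD) % MOD
    }
    return answer
}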