
[Tutorial] Generating Functions in Competitive Programming (Part 2)


Welcome to Part 2 of my tutorial on generating functions. The first part focused on introducing generating functions to those without any background in generating functions. In this post, I will demonstrate a few applications of generating functions in CP problems. Let us start with some relatively straightforward examples.

Note: Unless stated otherwise, all computations are done modulo a convenient prime (usually $$$998244353$$$). Also, $$$[n]$$$ denotes the set $$$\{1,2,...,n\}$$$.

Blatant Applications in Counting Problems

Problem. AGC 005 Problem F. You have a tree $$$T$$$ with $$$n$$$ vertices. For a subset $$$S$$$ of vertices, let $$$f(S)$$$ denote the minimum number of vertices in a subtree of $$$T$$$ which contains all vertices in $$$S$$$. For all $$$1 \le k \le n$$$, find the sum of $$$f(S)$$$ over all subsets $$$S$$$ with $$$|S| = k$$$.

Constraints: $$$n \le 2 \cdot 10^{5}$$$.

First, we need to do some basic counting. For a set of vertices $$$S$$$, let $$$t(S)$$$ denote the minimal subtree that contains all vertices of $$$S$$$.

Fix $$$k$$$. It is clear that we have to somehow double count the sum of $$$f(S)$$$. If we look at a vertex $$$v$$$, it is not that easy to count the number of sets $$$S$$$ of size $$$k$$$ such that $$$t(S)$$$ contains $$$v$$$. However, if we look at an edge $$$e$$$, then it is easy to see that $$$t(S)$$$ contains $$$e$$$ if and only if the elements of $$$S$$$ do not all lie in the same connected component formed by deleting the edge $$$e$$$ from the tree. In other words, if the edge $$$e$$$ splits the tree into two components of sizes (numbers of vertices) $$$a$$$ and $$$n-a$$$ respectively, then the number of $$$S$$$ with $$$|S|=k$$$ and $$$e \in t(S)$$$ is exactly $$$\binom{n}{k} - \binom{a}{k} - \binom{n-a}{k}$$$. Thus, since the number of vertices of a subtree is its number of edges plus one, the sum of $$$f(S)$$$ over all subsets $$$S$$$ of size $$$k$$$ is just $$$\binom{n}{k}$$$ plus the sum of $$$\binom{n}{k} - \binom{a}{k} - \binom{n-a}{k}$$$ over all edges $$$e$$$.

We can find the value of $$$a$$$ for each edge $$$e$$$ by a simple DFS. Suppose we have computed the value of $$$a$$$ for each edge $$$e$$$, say $$$a_{1},a_{2},...,a_{n-1}$$$. If we can compute $$$\displaystyle\sum_{i=1}^{n-1}\binom{a_{i}}{k}$$$ fast for each $$$1 \le k \le n$$$, then we are done, so we will focus on this task.

Let $$$c_{i}$$$ denote the number of times $$$i$$$ appears in the array $$$a_{1},a_{2},...,a_{n-1}$$$. Hence, we need to find $$$ans_{k} = \displaystyle\sum_{i=0}^{n}c_{i}\binom{i}{k}$$$.

It is almost customary to try and write binomial coefficients in terms of factorials and look for simplifications.

$$$ans_{k} = \displaystyle\sum_{i=0}^{n}c_{i}\binom{i}{k} = \displaystyle\sum_{i=0}^{n}c_{i}\frac{i!}{k!(i-k)!} = \frac{1}{k!}\displaystyle\sum_{i=0}^{n}c_{i}(i!) \cdot \frac{1}{(i-k)!}$$$

Obviously, we only need to know how to compute the sum $$$\displaystyle\sum_{i=0}^{n}c_{i}(i!) \cdot \frac{1}{(i-k)!}$$$ fast for all $$$k$$$. If you have studied generating functions, you will see that this sum looks very much like the convolution of two sequences. If we can write our summand in the form $$$f(i)g(k-i)$$$, then we can define $$$F(x) = \displaystyle\sum_{i \ge 0}f(i)x^{i}$$$ and $$$G(x) = \displaystyle\sum_{i \ge 0}g(i)x^{i}$$$ and read off $$$\displaystyle\sum_{i \ge 0}f(i)g(k-i)$$$ as the coefficient of $$$x^{k}$$$ in $$$F(x)G(x)$$$.

Clearly, we can set $$$f(i)=c_{i}(i!)$$$. We want to set $$$g(i)=\frac{1}{(-i)!}$$$. However, you might worry that $$$(-i)!$$$ is not defined for $$$i > 0$$$. It's ok, since we can always shift our sequence and set $$$g(M+i)=\frac{1}{(-i)!}$$$ instead for some large integer $$$M$$$. Then, we have $$$g(i)=\frac{1}{(M-i)!}$$$, which we can now safely compute. Instead of reading off the coefficient of $$$x^{k}$$$ from $$$F(x)G(x)$$$, we read off the coefficient of $$$x^{k+M}$$$.
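
Here is a minimal sketch of this step in C++. The convolve below is a naive $$$O(n^2)$$$ product kept short for clarity; for $$$n \le 2 \cdot 10^{5}$$$ you would swap in an NTT-based convolution modulo $$$998244353$$$ from your own template. The factorial tables are assumed to be precomputed.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;

// Naive O(n^2) product for illustration; replace with NTT for large n.
vector<ll> convolve(const vector<ll>& a, const vector<ll>& b) {
    vector<ll> c(a.size() + b.size() - 1, 0);
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = 0; j < b.size(); j++)
            c[i + j] = (c[i + j] + a[i] * b[j]) % MOD;
    return c;
}

// c[i] (size n+1) = number of edges whose component size equals i;
// fact / invfact = precomputed factorials and inverse factorials.
// Returns ans[k] = sum_i c[i] * C(i, k) for all 0 <= k <= n.
vector<ll> binomial_sums(const vector<ll>& c, const vector<ll>& fact,
                         const vector<ll>& invfact, int n) {
    int M = n;                                       // shift: g(i) = 1/(M - i)!
    vector<ll> F(n + 1), G(M + 1);
    for (int i = 0; i <= n; i++) F[i] = c[i] * fact[i] % MOD;  // f(i) = c_i * i!
    for (int i = 0; i <= M; i++) G[i] = invfact[M - i];
    vector<ll> H = convolve(F, G);
    vector<ll> ans(n + 1);
    for (int k = 0; k <= n; k++)                     // read off [x^{k+M}], divide by k!
        ans[k] = H[k + M] * invfact[k] % MOD;
    return ans;
}
```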

Convolutions of this type appear in countless FFT problems, and usually the hard part is to reduce your sums into a form that can be seen as the convolution of two polynomials. However, generating functions are much more powerful than this, as we shall see in the next examples.

Problem. The Child and Binary Tree. You have a set of positive integers $$$C = \{c_1, c_2, ..., c_{n}\}$$$. A vertex-weighted rooted binary tree is called good if all the vertex weights are in $$$C$$$. The weight of such a tree is the sum of weights of all vertices. Count the number of distinct good vertex-weighted rooted binary trees with weight $$$s$$$ for all $$$1 \le s \le m$$$. Note that the left child is distinguished from the right child in this problem (see the samples for more info).

Constraints: $$$n, m \le 10^{5}$$$

This problem was created $$$6$$$ years ago and has a 3100 rating on CF, but if you know about generating functions in 2020 (in the era where polynomial operations template is available) it is almost trivial.

Firstly, the obvious step is to let $$$f_{s}$$$ denote the answer for $$$s$$$ and let $$$F(x)$$$ be the OGF of $$$f$$$. Let us derive a recurrence for $$$f_{s}$$$.

Clearly, $$$f_{0} = 1$$$ (the empty tree). For $$$s>0$$$, we iterate over all possible weights for the root, then iterate over all possible weights of the left subtree, giving the recurrence $$$f_{s} = \displaystyle\sum_{c \in C}\displaystyle\sum_{i \ge 0}f_{i}f_{s-c-i}$$$ (for convenience, $$$f_{i}=0$$$ for $$$i<0$$$).

Speaking in generating functions, this is just the equation

$$$\displaystyle\sum_{n \ge 1}f_{n}x^{n} = \displaystyle\sum_{n \ge 1}\displaystyle\sum_{c \in C}x^{c}\displaystyle\sum_{i \ge 0}f_{i}x^{i} \cdot f_{n-c-i}x^{n-c-i}$$$.

This motivates us to define $$$F(x) = \displaystyle\sum_{n \ge 0}f_{n}x^{n}$$$ and $$$C(x) = \displaystyle\sum_{c \in C}x^{c}$$$. Thus, $$$F(x) - 1 = C(x)F(x)^2$$$ (you should be able to deduce this directly from the equation above).

Our goal is to find $$$F(x)$$$, so we solve for $$$F(x)$$$ using the quadratic formula (remember what we did to obtain the OGF of Catalan numbers?) to obtain $$$F(x) = \frac{1 \pm \sqrt{1-4C(x)}}{2C(x)}$$$.

Similar to the case with Catalan numbers, we choose the negative sign since otherwise as $$$x \rightarrow 0$$$, the numerator goes to $$$2$$$ while the denominator goes to $$$0$$$, but we should converge to a finite limit since $$$F(0)=1$$$. Thus, $$$F(x) = \frac{1 - \sqrt{1-4C(x)}}{2C(x)}$$$.

In the language of generating functions, we are pretty much done, but there is one minor implementation detail here. $$$C(x)$$$ has constant term $$$0$$$, and thus we cannot take the reciprocal directly. However, we just need to "rationalize" the numerator and rewrite our function as

$$$F(x) = \frac{1 - \sqrt{1-4C(x)}}{2C(x)} = \frac{(1 - \sqrt{1-4C(x)})(1 + \sqrt{1-4C(x)})}{2C(x)(1 + \sqrt{1-4C(x)})} = \frac{4C(x)}{2C(x)(1 + \sqrt{1-4C(x)})} = \frac{2}{1 + \sqrt{1-4C(x)}}$$$. You can verify that the constant term of the denominator is nonzero, so we can calculate the first $$$m$$$ terms of the series with usual polynomial operations and solve the problem in $$$O(m\log m)$$$.
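
As a sketch (with ll and MOD as in the earlier snippet), assuming hypothetical helpers poly_sqrt and poly_inv that return the first $$$m$$$ coefficients of the square root and reciprocal of a power series, as found in most polynomial templates:

```cpp
// Assumed template helpers (not defined here): first m coefficients of
// sqrt(P) (needs P[0] = 1) and of 1/P (needs P[0] != 0), mod 998244353.
vector<ll> poly_sqrt(vector<ll> P, int m);
vector<ll> poly_inv(vector<ll> P, int m);

// Returns F[0..m], where F[s] = number of good trees with weight s.
vector<ll> child_and_binary_tree(const vector<int>& C, int m) {
    vector<ll> P(m + 1, 0);                 // P(x) = 1 - 4*C(x)
    P[0] = 1;
    for (int c : C)
        if (c <= m) P[c] = (P[c] + MOD - 4) % MOD;
    vector<ll> S = poly_sqrt(P, m + 1);     // sqrt(1 - 4C)
    S[0] = (S[0] + 1) % MOD;                // 1 + sqrt(1 - 4C)
    vector<ll> F = poly_inv(S, m + 1);
    for (ll& v : F) v = v * 2 % MOD;        // F = 2 / (1 + sqrt(1 - 4C))
    return F;
}
```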

To be honest, it is unfair to say that the problem is easy, because the main difficulty was to compute the square root and reciprocal of a power series. However, nowadays these algorithms can easily be found online, and in online rounds you can just use templates to perform these operations, so I would consider this problem straightforward.

Problem. Descents of Permutations (from Enumerative Combinatorics 1). Call a permutation $$$p_1, p_2, ..., p_n$$$ of $$$[n]$$$ $$$k$$$-good if $$$p_{i}>p_{i+1}$$$ iff $$$k \mid i$$$. Find the number of $$$k$$$-good permutations of length $$$n$$$.

Constraints: $$$n, k \le 5 \cdot 10^{5}, n \ge 3, k \ge 2$$$

There is a simple $$$O\left(\frac{n^2}{k}\right)$$$ dp solution (try to find it), but that won't be sufficient for this problem. We need to use generating functions. Let $$$f(n)$$$ denote the answer ($$$k$$$ is fixed).

For a permutation $$$p = p_1, p_2, ..., p_n$$$, define the descent set as the set of integers $$$1 \le i \le n-1$$$ such that $$$p_{i}>p_{i+1}$$$. We are looking for the number of length $$$n$$$ permutations with descent set equal to $$$S = \{k,2k,3k,4k,...\}$$$.

The idea is that counting "exact" conditions is hard, but counting "at least" conditions is easier. Let $$$f(S)$$$ denote the number of permutations with descent set exactly equal to $$$S$$$ and $$$g(S)$$$ denote the number of permutations with descent set that is a subset of $$$S$$$. Clearly, we have $$$g(S) = \displaystyle\sum_{T \subseteq S}f(T)$$$. By the Principle of Inclusion-Exclusion, we have $$$f(S) = \displaystyle\sum_{T \subseteq S}(-1)^{|S| - |T|}g(T)$$$ (see here, this is a commonly used form of PIE).

$$$g(S)$$$ is much easier to count. Suppose $$$S = \{s_1,s_2,...,s_m\}$$$ where $$$s_{i}<s_{i+1}$$$ for all $$$1 \le i \le m-1$$$. This means that $$$p_{j}<p_{j+1}$$$ must hold for all $$$j$$$ not in $$$S$$$. A better way to visualize this is that we divide our permutation into several increasing blocks: the first block has size $$$s_{1}$$$, the second block has size $$$s_{2}-s_{1}$$$, and so on. The last block has size $$$n-s_{m}$$$. The only restriction is that the elements in each block must be in increasing order. Hence, it is clear that the probability that a random permutation satisfies this condition is just $$$\frac{1}{s_{1}!(s_{2}-s_{1})!(s_{3}-s_{2})!...(n-s_{m})!}$$$ (multiply the probabilities that each block is ordered correctly). Hence, $$$g(S) = \frac{n!}{s_{1}!(s_{2}-s_{1})!(s_{3}-s_{2})!...(n-s_{m})!}$$$.

Let's substitute this back to our equation. For simplicity, let $$$D = \{k,2k,3k,...\} \cap [n-1]$$$. Our problem reduces to finding $$$f(D)$$$.

Any subset $$$T = \{s_1,s_2,...,s_{m-1}\}$$$ (elements sorted in increasing order) of $$$D$$$ can be described by a sequence of positive integers $$$b_{1},b_{2},...,b_{m}$$$ where $$$b_{i} = s_{i}-s_{i-1}$$$ ($$$s_{0}=0$$$, $$$s_{m}=n$$$ for simplicity) denote the gap between consecutive elements of $$$T$$$. For example, when $$$T = \{3,9,12\}$$$ and $$$n=13$$$, we can describe it with the sequence $$$3,6,3,1$$$. Note that $$$k$$$ divides $$$b_{1},b_{2},...,b_{m-1}$$$ and $$$b_{m}$$$ has the same remainder as $$$n$$$ mod $$$k$$$ (call such sequences good).

This allows us to simplify the formula of $$$g(T)$$$ to $$$\frac{n!}{b_{1}!b_{2}!...b_{m}!}$$$.

Hence,

$$$f(D) = \displaystyle\sum_{T \subseteq D}(-1)^{|D| - |T|}g(T)$$$

$$$= \displaystyle\sum_{\sum b_{i} = n, b_{i} \text{ good}}(-1)^{|D|-(m-1)}\frac{n!}{b_{1}!b_{2}!...b_{m}!}$$$

For simplicity, let $$$n = kq+r$$$ where $$$1 \le r \le k$$$. Since we only care about lengths that are congruent to $$$r$$$ modulo $$$k$$$, we look at the EGF supported on these terms only.

We have

$$$F(x) = \displaystyle\sum_{q \ge 0}\frac{f(kq+r)}{(kq+r)!}x^{kq+r}$$$

$$$= \displaystyle\sum_{q \ge 0}\displaystyle\sum_{m \ge 1}\displaystyle\sum_{\sum b_{i} = kq+r, b_{i} \text{ good}}(-1)^{q-m+1}\frac{(kq+r)!}{b_{1}!b_{2}!...b_{m}!} \cdot \frac{x^{kq+r}}{(kq+r)!}$$$

$$$= \displaystyle\sum_{m \ge 1}\displaystyle\sum_{q \ge 0}\displaystyle\sum_{\sum b_{i} = kq+r, b_{i} \text{ good}}(-1)^{q-m+1}\frac{x^{b_{1}} \cdot x^{b_{2}} \cdot ... \cdot x^{b_{m}}}{b_{1}!b_{2}!...b_{m}!}$$$

$$$= \displaystyle\sum_{m \ge 1}\displaystyle\left(\displaystyle\sum_{i \ge 1}\frac{(-1)^{i-1} \cdot x^{ki}}{(ki)!}\right)^{m-1} \cdot \left(\displaystyle\sum_{i \ge 0}\frac{(-1)^{i} \cdot x^{ki+r}}{(ki+r)!}\right)$$$

Take a moment to digest the last identity (try expanding the brackets). The idea is that we are able to make $$$b_{1},b_{2},...,b_{m}$$$ independent of each other and simplify our sum into the convolution (or power) of several polynomials.

Continuing our simplifications, we have

$$$= \left(\displaystyle\sum_{i \ge 0}\frac{(-1)^{i} \cdot x^{ki+r}}{(ki+r)!}\right) \cdot \displaystyle\sum_{m \ge 1}\displaystyle\left(\displaystyle\sum_{i \ge 1}\frac{(-1)^{i-1} \cdot x^{ki}}{(ki)!}\right)^{m-1}$$$

$$$ = \left(\displaystyle\sum_{i \ge 0}\frac{(-1)^{i} \cdot x^{ki+r}}{(ki+r)!}\right) \cdot \frac{1}{1 - \left(\displaystyle\sum_{i \ge 1}\dfrac{(-1)^{i-1} \cdot x^{ki}}{(ki)!}\right)}$$$

$$$ = \frac{\displaystyle\sum_{i \ge 0}\frac{(-1)^{i} \cdot x^{ki+r}}{(ki+r)!}}{\displaystyle\sum_{i \ge 0}\dfrac{(-1)^{i} \cdot x^{ki}}{(ki)!}}$$$

We can compute the first $$$n$$$ terms of the last expression in $$$O(n\log n)$$$ time, and we're done.
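
Concretely, under the same assumptions as before (an NTT-based convolve, a template poly_inv, and precomputed fact/invfact), a sketch of the whole solution is just:

```cpp
// Number of k-good permutations of length n, mod 998244353.
ll count_k_good(ll n, ll k) {
    ll r = n % k;
    if (r == 0) r = k;                       // n = kq + r with 1 <= r <= k
    vector<ll> num(n + 1, 0), den(n + 1, 0);
    ll sgn = 1;                              // (-1)^i stored as 1 or MOD-1
    for (ll i = 0; k * i + r <= n; i++, sgn = MOD - sgn)
        num[k * i + r] = sgn * invfact[k * i + r] % MOD;
    sgn = 1;
    for (ll i = 0; k * i <= n; i++, sgn = MOD - sgn)
        den[k * i] = sgn * invfact[k * i] % MOD;
    vector<ll> q = convolve(num, poly_inv(den, n + 1));
    return fact[n] * q[n] % MOD;             // f(n) = n! * [x^n] (num / den)
}
```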

I think exponential generating functions are especially good at handling sums involving multinomial coefficients as you can separate the factorials into different polynomials and reduce it to convolution of EGFs.

Expected Value of Stopping Time

I think this is also a beautiful application of generating functions. The recent problem Slime and Biscuits can be solved using the trick I will demonstrate here (there is a short editorial using this method here). Let's look at a different example.

Problem. Switches. There are $$$n$$$ switches, each of which can be on or off initially. Every second, there is a probability of $$$\frac{p_{i}}{S}$$$ that you will flip the state of the $$$i$$$-th switch. The game ends when all switches are off. What is the expected number of seconds the game will last?

Constraints: $$$n \le 100$$$, $$$\sum p_{i} = S$$$, $$$p_{i}>0$$$, $$$S \le 50000$$$

It is hard to compute when a game ends, and it is also hard to compute the probability that the game ends in exactly $$$k$$$ moves. However, it is relatively easy to compute the probability that all switches are off after exactly $$$k$$$ moves (not necessarily for the first time). Let $$$a(k)$$$ denote this probability, and let $$$A(x)$$$ be the EGF (we'll see why we choose an EGF soon) of $$$a$$$. How do we find $$$A(x)$$$?

Suppose we fix a sequence $$$a_{1}, a_{2}, ..., a_{n}$$$ such that $$$\sum a_{i} = k$$$, where $$$a_{i}$$$ denotes the number of flips of the $$$i$$$-th switch. The probability of achieving this sequence is $$$\frac{k!}{a_{1}!a_{2}!...a_{n}!}\left(\frac{p_1}{S}\right)^{a_{1}}\left(\frac{p_2}{S}\right)^{a_{2}}...\left(\frac{p_n}{S}\right)^{a_{n}}$$$ (the product of the number of sequences of switch flips such that switch $$$i$$$ is flipped exactly $$$a_{i}$$$ times and the probability that such a sequence of switch flips occurs). Let $$$q_{i} = \frac{p_i}{S}$$$ for convenience. Hence, each such sequence contributes $$$\frac{q_{1}^{a_{1}}}{a_{1}!} \cdot \frac{q_{2}^{a_{2}}}{a_{2}!} \cdot ... \cdot \frac{q_{n}^{a_{n}}}{a_{n}!}$$$ to $$$\frac{a(k)}{k!}$$$.

Let's assume that all switches are off at the initial state for now. Then, the EGF is $$$A(x) = \displaystyle\prod_{i=1}^{n}\left(\frac{(q_{i}x)^{0}}{0!} + \frac{(q_{i}x)^{2}}{2!} + ...\right)$$$, since we require each switch to be flipped an even number of times. In the general case, our EGF is similar, but some switches are required to be flipped an odd number of times instead. Motivated by this special case, we let $$$E_{i}(x) = \displaystyle\sum_{j \text{ even}}\frac{(q_{i}x)^{j}}{j!}$$$ and $$$O_{i}(x) = \displaystyle\sum_{j \text{ odd}}\frac{(q_{i}x)^{j}}{j!}$$$. Then, if $$$s_{i}=1$$$ (the switch is initially on), we choose $$$O_{i}(x)$$$ as the $$$i$$$-th term of our product (in the formula for $$$A(x)$$$), while if $$$s_{i}=0$$$, we choose $$$E_{i}(x)$$$ as the $$$i$$$-th term of our product.

There's an even more compact way to write this "observation". Recall that $$$\frac{e^{x}+e^{-x}}{2} = \cosh x = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + ...$$$. We can use a similar idea here. To express $$$E_{i}(x)$$$, we can try to add or subtract $$$\exp(q_{i}x)$$$ with $$$\exp(-q_{i}x)$$$ to "filter out" the even or odd power terms (we will see a generalization of this trick called the roots of unity filter later in the post). Verify that $$$E_{i}(x) = \frac{\exp(q_{i}x) + \exp(-q_{i}x)}{2}$$$ and $$$O_{i}(x) = \frac{\exp(q_{i}x) - \exp(-q_{i}x)}{2}$$$. Thus, we can even write this as $$$\frac{\exp(q_{i}x) + (-1)^{s_{i}}\exp(-q_{i}x)}{2}$$$.

To summarize, $$$A(x) = \displaystyle\prod_{i=1}^{n}\frac{[\exp(q_{i}x) + (-1)^{s_{i}}\exp(-q_{i}x)]}{2}$$$.

Ok, so we can find $$$a(k)$$$. Let $$$c(k)$$$ denote the probability that the game ends (all switches are off for the first time) after exactly $$$k$$$ moves. How can we relate $$$c(k)$$$ and $$$a(k)$$$? Here is the trick. Consider any sequence of $$$k$$$ moves resulting in all switches being off. After $$$i$$$ moves (possibly $$$i=k$$$), the switches are all off for the first time and the game ends. For the remaining $$$k-i$$$ moves, we need to flip each switch an even number of times. Thus, if we let $$$b(k)$$$ denote the probability that we flip each switch an even number of times after exactly $$$k$$$ moves, then $$$a(k) = \displaystyle\sum_{i=0}^{k}c(i)b(k-i)$$$. Does this look familiar? Yes, it is just normal convolution (but on OGFs instead of EGFs).

Firstly, let's find the EGF of $$$b(k)$$$. This is just a special case of $$$a(k)$$$ when $$$s_{i}=0$$$, so we have $$$B(x) = \displaystyle\prod_{i=1}^{n}\frac{[\exp(q_{i}x) + \exp(-q_{i}x)]}{2}$$$.

To relate $$$c(k)$$$ with $$$a(k), b(k)$$$, we need to look at the OGFs of $$$a$$$ and $$$b$$$ (call them $$$A_{o}(x)$$$ and $$$B_{o}(x)$$$), since the recurrence we found is related to the convolution of OGFs. Thus, defining $$$C(x)$$$ and $$$C_{o}(x)$$$ analogously, we have $$$A_{o}(x) = C_{o}(x)B_{o}(x)$$$, so $$$C_{o}(x) = \frac{A_{o}(x)}{B_{o}(x)}$$$.

Our answer is $$$\displaystyle\sum_{k=0}^{\infty}kc(k)$$$ (recall the definition of expected value and $$$c(k)$$$). This is just $$$C_{o}'(1)$$$. By Quotient Rule, this is equivalent to finding $$$\frac{A_{o}'(1)B_{o}(1) - A_{o}(1)B_{o}'(1)}{B_{o}(1)^2}$$$.

Let's see if we can find $$$A_{o}(x)$$$ from $$$A(x)$$$. It is hard to extract the coefficients of $$$A(x)$$$ if we are looking at a product of sums like $$$\displaystyle\prod_{i=1}^{n}\frac{[\exp(q_{i}x) + (-1)^{s_{i}}\exp(-q_{i}x)]}{2}$$$. To turn this into a sum, let's expand the brackets! Note that since $$$q_{i} = \frac{p_{i}}{S}$$$, if we expand the whole thing, we will get a sum where each term is of the form $$$c_{i}\exp(\frac{i}{S}x)$$$ for some $$$-S \le i \le S$$$ (since we are multiplying terms of the form $$$\exp(\frac{j}{S}x)$$$ and the sum of $$$p_{i}$$$ is $$$S$$$). In other words, $$$A(x) = \displaystyle\sum_{i=-S}^{S}a_{i}\exp\left(\frac{i}{S}x\right)$$$ for some constants $$$a_{i}$$$.

How do we expand the terms quickly? We can use dp! Go through the brackets one by one, and maintain $$$dp[i][j]$$$ which is the coefficient of $$$\exp\left(\frac{j}{S}x\right)$$$ after expanding $$$i$$$ brackets. You can do dp transitions in $$$O(1)$$$ to get the final answer in $$$O(nS)$$$ time.
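
A sketch of this dp (the only assumption is an odd prime modulus MOD; the exponent index $$$j$$$ is stored with an offset of $$$S$$$ so it stays nonnegative, using $$$\sum p_{i} = S$$$):

```cpp
// Expands prod_i (exp(q_i x) + (-1)^{s_i} exp(-q_i x)) / 2 into
// sum_j a_j exp((j/S) x); returns dp with dp[S + j] = a_j.
vector<ll> expand_brackets(const vector<int>& p, const vector<int>& s, int S) {
    const ll inv2 = (MOD + 1) / 2;              // 1/2 modulo an odd prime
    vector<ll> dp(2 * S + 1, 0);
    dp[S] = 1;                                  // empty product = exp(0 * x)
    for (size_t i = 0; i < p.size(); i++) {
        vector<ll> ndp(2 * S + 1, 0);
        ll sign = s[i] ? MOD - 1 : 1;           // (-1)^{s_i}
        for (int j = 0; j <= 2 * S; j++)
            if (dp[j]) {                        // exp(q_i x): shift index by +p_i
                ndp[j + p[i]] = (ndp[j + p[i]] + dp[j] * inv2) % MOD;
                                                // (-1)^{s_i} exp(-q_i x): shift by -p_i
                ndp[j - p[i]] = (ndp[j - p[i]] + dp[j] * sign % MOD * inv2) % MOD;
            }
        dp.swap(ndp);
    }
    return dp;
}
```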

From $$$A(x) = \displaystyle\sum_{i=-S}^{S}a_{i}\exp\left(\frac{i}{S}x\right)$$$, we can find a formula for $$$A_{o}(x)$$$. Indeed, $$$\exp\left(\frac{i}{S}x\right) = \sum_{j \ge 0}\left(\frac{(\frac{i}{S}x)^j}{j!}\right)$$$, so by removing the $$$j!$$$ terms we get a formula for $$$A_{o}(x)$$$. Specifically,

$$$A_{o}(x) = \displaystyle\sum_{i=-S}^{S}a_{i}\sum_{j \ge 0}\left(\frac{i}{S}x\right)^{j} = \displaystyle\sum_{i=-S}^{S}\frac{a_{i}}{1 - \frac{i}{S}x}$$$.

Similarly, we can derive $$$B_{o}(x) = \displaystyle\sum_{i=-S}^{S}\frac{b_{i}}{1 - \frac{i}{S}x}$$$ for some constants $$$b_{i}$$$.

We want to compute $$$A_{o}(1)$$$, $$$A_{o}'(1)$$$ (and similarly for $$$B$$$). However, we run into a slight problem of dividing by zero if we try to do it directly, since $$$1 - \frac{S}{S}(1) = 0$$$. However, since $$$C_{o}(x) = \frac{A_{o}(x)}{B_{o}(x)}$$$, we can multiply both $$$A_{o}(x)$$$ and $$$B_{o}(x)$$$ by the troublesome $$$1-x$$$ term. Formally, let $$$E(x) = (1-x)A_{o}(x)$$$ and $$$F(x) = (1-x)B_{o}(x)$$$. Then, we only need to compute $$$E(1), E'(1), F(1)$$$ and $$$F'(1)$$$ (and as we shall see, they are easy to compute).

$$$E(1)$$$ is trivial, since $$$(1-x)A_{o}(x) = \displaystyle\sum_{i=-S}^{S}\frac{a_{i}(1-x)}{1 - \frac{i}{S}x} = \displaystyle\sum_{i=-S}^{S-1}\frac{a_{i}(1-x)}{1 - \frac{i}{S}x} + a_{S}$$$. Since all the terms except $$$a_{S}$$$ vanish when we substitute $$$x=1$$$, we get $$$E(1) = a_{S}$$$.

$$$E'(1)$$$ is not hard either. By Quotient Rule,

$$$E'(x) = \left[\displaystyle\sum_{i=-S}^{S-1}\frac{a_{i}(1-x)}{1 - \frac{i}{S}x}\right]' = \displaystyle\sum_{i=-S}^{S-1}\frac{-a_{i}\left(1 - \frac{i}{S}x\right) - a_{i}(1-x)\left(-\frac{i}{S}\right)}{\left(1 - \frac{i}{S}x\right)^2}$$$. Substituting $$$x=1$$$ gives us $$$\displaystyle\sum_{i=-S}^{S-1}\frac{-a_{i}}{1 - \frac{i}{S}}$$$, which is easy to compute in $$$O(S)$$$ time.

Thus, we have an easy-to-code $$$O(Sn)$$$ time solution.

In general, the trick of introducing $$$A$$$ and $$$B$$$ can be used to solve other problems that ask for the first stopping time of some system, provided that you have multiple possible ending states, the time taken to reach one ending state from another is the same, and the probability of reaching an ending state in a fixed number of moves is easy to compute.

Roots of Unity Filter

Next, we introduce a trick that is used more rarely in competitive programming, but is nevertheless interesting to learn. The motivation is the following classic problem.

Problem. Roots of Unity Filter. Find the sum $$$\displaystyle\sum_{i \equiv r \pmod{m}}\binom{n}{i}$$$ modulo an arbitrary $$$MOD$$$.

Constraints: $$$1 \le n \le 10^{18}, 2 \le m \le 2000, 0 \le r \le n - 1, 10^{8} \le MOD \le 10^{9}$$$

This is a very standard problem in math olympiads, but let's see how to solve it in CP. Firstly, we need to know the trick behind the problem. Let's look at the case $$$m=2$$$, $$$r=0$$$, i.e. find $$$\displaystyle\sum_{i \equiv 0 \pmod{2}}\binom{n}{i}$$$.

It is well-known that this is just $$$2^{n-1}$$$, but where does it come from? Let us look at the generating function $$$\displaystyle\sum_{i \ge 0}\binom{n}{i}x^{i}$$$. If we plug in $$$x=1$$$, we get the sum of all binomial coefficients. However, we want to "filter" out the terms with even (or odd) power. What values can we easily substitute? Another easy candidate to try is $$$x=-1$$$, which gives us $$$\displaystyle\sum_{i \ge 0}\binom{n}{i}(-1)^{i}$$$. If we write out the terms, we get the equations:

$$$(1+1)^{n} = \binom{n}{0} + \binom{n}{1} + \binom{n}{2} + \binom{n}{3} + ...$$$

$$$(1-1)^{n} = \binom{n}{0} - \binom{n}{1} + \binom{n}{2} - \binom{n}{3} + ...$$$

Notice that the odd-numbered terms are gone when we add up the equations! Thus, adding up the equations and dividing by $$$2$$$, we obtain $$$\binom{n}{0}+\binom{n}{2}+... = 2^{n-1}$$$.

How to generalize the above method? Let's say we want to find $$$\binom{n}{0} + \binom{n}{3} + \binom{n}{6} + ...$$$.

We can split our sum into three parts:

$$$\binom{n}{0}x^{0} + \binom{n}{3}x^{3} + \binom{n}{6}x^{6} + ...$$$

$$$\binom{n}{1}x^{1} + \binom{n}{4}x^{4} + \binom{n}{7}x^{7} + ...$$$

$$$\binom{n}{2}x^{2} + \binom{n}{5}x^{5} + \binom{n}{8}x^{8} + ...$$$

To leave only the sum of binomial coefficients in each group, we need to substitute $$$x$$$ so that $$$x^{3}=1$$$.

Clearly, $$$x=1$$$ works. What other values of $$$x$$$ work?

The values of $$$x$$$ such that $$$x^{3} = 1$$$ are also called the $$$3$$$rd roots of unity. In this case, $$$x = e^{\frac{2\pi i}{3}}, e^{\frac{4\pi i}{3}}$$$ are the other two roots of unity.

Let $$$S(n,i)$$$ denote the sum of $$$\binom{n}{k}$$$ for all $$$k \equiv i \pmod{3}$$$. (so we want to find $$$S(n,0), S(n,1), S(n,2)$$$).

Let $$$\omega = e^{\frac{2\pi i}{3}}$$$, then $$$1, \omega, \omega^{2}$$$ are the 3rd roots of unity. Note that $$$\omega^{3} = 1$$$ by definition. We substitute $$$x = \omega^{i}$$$ for $$$0 \le i \le 2$$$, to get the following system of equations:

$$$S(n,0)+S(n,1)+S(n,2) = (1+1)^{n}$$$

$$$S(n,0)+\omega S(n,1) + \omega^{2}S(n,2) = (1 + \omega)^{n}$$$

$$$S(n,0) + \omega^{2}S(n,1) + \omega^{4}S(n,2) = (1 + \omega^{2})^{n}$$$

How to solve this system of equations? We need an important identity of the roots of unity, which is $$$1 + \omega^{k} + \omega^{2k} + ... + \omega^{(m-1)k} = 0$$$ whenever $$$k$$$ is not divisible by $$$m$$$ (if $$$\omega^{m}=1$$$). This is because by the geometric series formula, we have $$$1 + \omega^{k} + \omega^{2k} + ... + \omega^{(m-1)k} = \frac{1 - \omega^{mk}}{1 - \omega^{k}} = 0$$$ if $$$\omega^{k} \neq 1$$$, since $$$\omega^{mk} = 1$$$.

Hence, in this specific case, $$$1 + \omega + \omega^{2} = 0$$$ and $$$1 + \omega^{2} + \omega^{4} = 0$$$.

Summing all three equations gives us:

$$$3S(n,0) = (1+1)^{n} + (1 + \omega)^{n} + (1 + \omega^{2})^{n}$$$

How to obtain $$$S(n,1)$$$ and $$$S(n,2)$$$ easily? Here is a simple trick. Instead of looking at $$$(x+1)^{n}$$$ only, we look at $$$x(x+1)^{n}$$$ and $$$x^{2}(x+1)^{n}$$$ as well and repeat the same process. Now, all coefficients are shifted properly and thus we can take the sum and divide by $$$3$$$ to find $$$S(n,1)$$$ and $$$S(n,2)$$$ as in the previous case.

In summary, you should get something like:

$$$3S(n,0) = (1+1)^{n} + (1 + \omega)^{n} + (1 + \omega^{2})^{n}$$$

$$$3S(n,2) = (1+1)^{n} + \omega(1 + \omega)^{n} + \omega^{2}(1 + \omega^{2})^{n}$$$

$$$3S(n,1) = (1+1)^{n} + \omega^{2}(1 + \omega)^{n} + \omega(1 + \omega^{2})^{n}$$$ (note that $$$\omega^{4} = \omega$$$)

The remaining problem is how to evaluate the values from this formula. We have a problem because $$$\omega$$$ seems to represent a complex number here ($$$\omega = e^{\frac{2\pi i}{3}}$$$).

The idea is that we do not necessarily need to work in the world of complex numbers. What we require of $$$\omega$$$ is for it to satisfy $$$\omega^{3}=1$$$ and $$$\omega^{k} \neq 1$$$ for $$$1 \le k \le 2$$$. Let us compute our answer as a polynomial in $$$\omega$$$, but modulo $$$\omega^{3}-1$$$ (which means that we will have $$$\omega^{3}=1$$$, $$$\omega^{4}=\omega$$$, etc...). Also, obviously the coefficients will be computed modulo $$$MOD$$$.

For example, $$$(1+2\omega+\omega^{2})(1+\omega+3\omega^{2}) = 3\omega^{4} + 7\omega^{3} + 6\omega^{2} + 3\omega + 1 = 3\omega + 7 + 6\omega^{2} + 3\omega + 1 = 6\omega^{2} + 6\omega + 8$$$.

Hence, at any point of time we will always have a polynomial in $$$\omega$$$ with degree $$$\le 2$$$.

To compute something like $$$(1 + \omega)^{n}$$$, we can use the usual divide-and-conquer method for computing large powers. Multiplication of polynomials can be implemented naively.

We can generalize this method for any $$$m$$$. Let $$$\omega$$$ be a primitive $$$m$$$-th root of unity, so $$$\omega^{m}=1$$$ and $$$\omega^{k} \neq 1$$$ for $$$1 \le k \le m-1$$$. Let $$$S(n,r)$$$ denote the sum of $$$\binom{n}{k}$$$ over all $$$k \equiv r \pmod{m}$$$.

$$$(1+w)^{n} = \displaystyle\sum_{i \ge 0}\binom{n}{i}w^{i}$$$ for any number $$$w$$$. We want the coefficients of the form $$$\binom{n}{jm+r}$$$ to match the powers $$$w^{0}, w^{m}, w^{2m}, ...$$$ because these will help us obtain the sum $$$S(n,r)$$$. So, it is more helpful to consider the polynomial $$$w^{m-r}(1+w)^{n} = \displaystyle\sum_{i \ge 0}\binom{n}{i}w^{i+m-r}$$$.

Substituting $$$w = 1, \omega, \omega^{2}, ..., \omega^{m-1}$$$ and summing up, we get

$$$\displaystyle\sum_{j=0}^{m-1}\omega^{j(m-r)}(1+\omega^{j})^{n} = \displaystyle\sum_{j=0}^{m-1}\displaystyle\sum_{i \ge 0}\binom{n}{i}\omega^{j(i+m-r)} = \displaystyle\sum_{i \ge 0}\binom{n}{i}\displaystyle\sum_{j=0}^{m-1}(\omega^{i+m-r})^{j}$$$.

Recall that $$$1 + w + w^{2} + ... + w^{m-1} = 0$$$ whenever $$$w = \omega, \omega^{2}, ..., \omega^{m-1}$$$ (by the geometric series formula) and $$$= m$$$ whenever $$$w = 1$$$. Note that $$$\omega^{i+m-r} = 1$$$ if and only if $$$i \equiv r \pmod{m}$$$. Hence,

$$$\displaystyle\sum_{i \ge 0}\binom{n}{i}\displaystyle\sum_{j=0}^{m-1}(\omega^{i+m-r})^{j} = m \cdot \displaystyle\sum_{i \equiv r \pmod{m}}\binom{n}{i} = mS(n,r)$$$ (note the multiple of $$$m$$$).

It remains to compute $$$\frac{1}{m}\displaystyle\sum_{j=0}^{m-1}\omega^{j(m-r)}(1+\omega^{j})^{n}$$$ modulo $$$MOD$$$. The easiest way to do this is to first calculate the polynomial $$$F(x) = x^{m-r}(1+x)^{n}$$$ modulo $$$x^{m}-1$$$. We can do this via binary exponentiation and multiplying polynomials naively in $$$O(m^{2}\log n)$$$ time (remembering to set $$$x^{m} = 1, x^{m+1} = x, ...$$$ after each multiplication).

After we obtained the polynomial, we are actually done. The sum we want to find is $$$\frac{1}{m}[F(1)+F(\omega)+F(\omega^{2})+...+F(\omega^{m-1})]$$$. Letting $$$F(x) = \displaystyle\sum_{i=0}^{m-1}a_{i}x^{i}$$$, we realize that the desired sum is

$$$\frac{1}{m}\displaystyle\sum_{i=0}^{m-1}a_{i}\displaystyle\sum_{j=0}^{m-1}(\omega^{j})^i = a_{0}$$$ since all the terms with $$$i \ge 1$$$ sum up to $$$0$$$. Hence, we just need to output the constant term of $$$F(x)$$$.
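
Putting it together, here is a self-contained sketch; note that no division by $$$m$$$ is ever performed, which is exactly why an arbitrary $$$MOD$$$ causes no trouble.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef vector<long long> Poly;

// Product of a and b modulo x^m - 1 (exponents wrap around) and modulo MOD.
Poly mulmod(const Poly& a, const Poly& b, int m, long long MOD) {
    Poly c(m, 0);
    for (int i = 0; i < m; i++)
        if (a[i])
            for (int j = 0; j < m; j++)
                c[(i + j) % m] = (c[(i + j) % m] + a[i] * b[j]) % MOD;
    return c;
}

// S(n, r) = sum of C(n, i) over i = r (mod m), for arbitrary MOD.
long long binom_sum(long long n, long long r, int m, long long MOD) {
    Poly res(m, 0), base(m, 0);
    res[int((m - r % m) % m)] = 1;              // start from x^{m-r}
    base[0] = base[1] = 1;                      // 1 + x (here m >= 2)
    for (long long e = n; e > 0; e >>= 1) {     // multiply by (1+x)^n
        if (e & 1) res = mulmod(res, base, m, MOD);
        base = mulmod(base, base, m, MOD);
    }
    return res[0];                              // constant term = S(n, r)
}
```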

Note: This problem is also solvable in $$$O(m\log m\log n)$$$ if $$$MOD$$$ is an FFT-friendly modulus, by using FFT to multiply the polynomials during exponentiation.

Next, we look at a nice harder example.

Problem. Rhyme. Compute the sum of $$$\frac{n!}{x_{1}!x_{2}!...x_{k}!}$$$ over all tuples of positive integers $$$(x_{1},x_{2},...,x_{k})$$$ such that $$$d \mid x_{i}$$$ for all $$$i$$$ and $$$\sum x_{i} = n$$$, modulo $$$19491001$$$ (a prime).

Constraints: $$$n \le 10^{9}, k \le 2000, d \in \{4, 6\}$$$.

This problem is mentioned in jqdai0815's blog, but I will explain it in detail here. A funny story is that I came up with this problem and was trying to solve it one day, but then the next day I saw this problem on his post :) Anyway, I think this is a nice application of generating functions which deserves to be mentioned here.

By now, it should be clear what the first step should be. We want to separate the terms $$$\frac{1}{x_{i}!}$$$ into different polynomials and convolve them, and the most obvious way to do it is to consider the following product:

$$$\left(1 + \frac{x^d}{d!} + \frac{x^{2d}}{(2d)!} + ...\right)\left(1 + \frac{x^d}{d!} + \frac{x^{2d}}{(2d)!} + ...\right)...\left(1 + \frac{x^d}{d!} + \frac{x^{2d}}{(2d)!} + ...\right)$$$ ($$$k$$$ times).

It is easy to see that the coefficient of $$$x^{n}$$$ is precisely the sum of $$$\frac{1}{x_{1}!x_{2}!...x_{k}!}$$$ over all valid tuples of $$$x_{i}$$$.

Let $$$F(x) = 1 + \frac{x^d}{d!} + \frac{x^{2d}}{(2d)!} + ...$$$. How do we write $$$F(x)$$$ in a simpler way? For the case $$$d=2$$$, we know that this is just $$$\cosh x = \frac{e^{x} + e^{-x}}{2}$$$. The idea is the same as the previous problem: we want to substitute different values of $$$x$$$ into our "original function" (in this case it is $$$e^{x} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + ...$$$) to "filter out" the powers that are not divisible by $$$d$$$.

Let's try to play with roots of unity again. Let $$$\omega$$$ be a primitive $$$d$$$-th root of unity (so $$$\omega^{d}=1$$$ and $$$1 + \omega + \omega^{2} + ... + \omega^{d-1} = 0$$$). We substitute $$$x$$$ as $$$\omega^{i}x$$$ for $$$0 \le i \le d-1$$$ into $$$e^{x}$$$ and see what happens (note that this is where $$$e^{-x}$$$ comes from for $$$d=2$$$).

$$$e^{x} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + ... + \frac{x^{d}}{d!} + ...$$$

$$$e^{\omega x} = 1 + \frac{\omega x}{1!} + \frac{\omega^{2} x^2}{2!} + ... + \frac{\omega^{d} x^{d}}{d!} + ...$$$

...

$$$e^{\omega^{d-1} x} = 1 + \frac{\omega^{d-1} x}{1!} + \frac{\omega^{2(d-1)} x^2}{2!} + ... + \frac{\omega^{d(d-1)} x^{d}}{d!} + ...$$$

If we sum all these equations, by the identity $$$1 + (\omega^{i}) + (\omega^{i})^2 + ... + (\omega^{i})^{d-1} = 0$$$ for $$$i$$$ not divisible by $$$d$$$, we obtain

$$$e^{x} + e^{\omega x} + e^{\omega^{2} x} + ... + e^{\omega^{d-1} x} = d\left(1 + \frac{x^d}{d!} + \frac{x^{2d}}{(2d)!} + ...\right)$$$

Back to our problem, our goal is to find the coefficient of $$$x^n$$$ in the following product:

$$$\frac{1}{d^{k}}\left(e^{x} + e^{\omega x} + e^{\omega^{2} x} + ... + e^{\omega^{d-1} x}\right)^{k}$$$.

The next step requires some knowledge on roots of unity. The good thing about $$$d=4,6$$$ is that $$$\phi(4) = \phi(6) = 2$$$, so the $$$d$$$-th cyclotomic polynomial has degree $$$2$$$ (it is the minimal polynomial of the primitive $$$d$$$-th roots of unity). It can be obtained via the expansion of $$$\displaystyle\prod_{\gcd(i,d)=1}(x-\omega^{i})$$$ (you can find more details on the Wikipedia page).

For example, for $$$d=4$$$, we have $$$\omega^{2} + 1 = 0$$$ and for $$$d=6$$$, we have $$$\omega^{2} - \omega + 1 = 0$$$. In both cases, we can always represent $$$\omega^{i}$$$ in the form of $$$a\omega + b$$$ by repeatedly reducing the maximal power using this equation.

Thus, if we let $$$\omega^{i} = a_{i} + b_{i}\omega$$$, our goal is to find $$$[x^{n}]\frac{1}{d^{k}}\left(\displaystyle\sum_{i=0}^{d-1} e^{(a_i+b_i\omega)x}\right)^{k}$$$.

Notice that the terms of the form $$$e^{ax}$$$ and $$$e^{b\omega x}$$$ are in some sense separated. We make the substitution $$$u = e^{x}$$$ and $$$v = e^{\omega x}$$$ to make things simpler:

$$$[x^{n}]\frac{1}{d^{k}}\left(\displaystyle\sum_{i=0}^{d-1} u^{a_i}v^{b_i}\right)^{k}$$$.

Notice that $$$-1 \le a_{i}, b_{i} \le 1$$$ (you can explicitly write out these values from the definition). If we expand this bivariate polynomial in $$$u$$$ and $$$v$$$, we'll get a sum where all the terms are of the form $$$u^{i}v^{j}$$$ with $$$-k \le i,j \le k$$$ (consider the way to choose the terms in the expansion).

To avoid dealing with negative powers, let us multiply by $$$(uv)^{k}$$$. Let $$$F(u,v) = \displaystyle\sum_{i=0}^{d-1}u^{a_{i}+1}v^{b_{i}+1}$$$. We want to find $$$G(u,v) = F(u,v)^{k}$$$. Note that $$$G(u,v) = \displaystyle\sum_{0 \le i,j \le 2k}g_{i,j}u^{i}v^{j}$$$ for some constants $$$g_{i,j}$$$ (define $$$f_{i,j}$$$ similarly).

While we can do something like 2D FFT, it is probably not going to pass the time limit. The next step is a magical trick mentioned in jqdai0815's article. The idea is that we want to find a recurrence on the coefficients $$$g_{i,j}$$$ so that we can use dp to compute them in $$$O(k^2)$$$ time. Let's differentiate both sides of $$$G(u,v) = F(u,v)^{k}$$$ with respect to $$$u$$$ (or $$$v$$$, it doesn't matter). Let $$$g(u,v)$$$ and $$$f(u,v)$$$ denote the partial derivative of $$$G(u,v)$$$ and $$$F(u,v)$$$ with respect to $$$u$$$. Then, by the Chain Rule,

$$$g(u,v) = kF(u,v)^{k-1}f(u,v)$$$

Noting that $$$F(u,v)^{k-1} = \frac{G(u,v)}{F(u,v)}$$$, we have

$$$F(u,v)g(u,v) = kG(u,v)f(u,v)$$$

Comparing the coefficients of $$$u^{i}v^{j}$$$ on both sides, and noting that the coefficient of $$$u^{i}v^{j}$$$ of $$$f(u,v)$$$ is $$$(i+1)f_{i+1,j}$$$, we obtain the recurrence

$$$\displaystyle\sum_{0 \le i_1 \le i, 0 \le j_1 \le j} (i+1-i_{1})g_{i+1-i_{1},j-j_{1}}f_{i_{1},j_{1}} = k\displaystyle\sum_{0 \le i_1 \le i, 0 \le j_1 \le j} (i_{1}+1)f_{i_{1}+1,j_{1}}g_{i-i_{1},j-j_{1}}$$$

The good thing about this equation is that if we find the values of $$$g_{i,j}$$$ in increasing order of $$$i$$$ followed by increasing order of $$$j$$$, the value of $$$g_{i,j}$$$ is only dependent on previous values! A subtle note is that $$$f_{0,0} = 0$$$ but $$$f_{0,1} \neq 0$$$. Thus, if you only leave the summand with $$$i_{1}=0$$$ and $$$j_{1}=1$$$ on the LHS and throw everything to the RHS, you can compute $$$g_{i,j}$$$ with dp. Note that there are only $$$O(1)$$$ nonzero values of $$$f_{i,j}$$$, so you can actually just iterate over $$$i_{1}, j_{1} \le 2$$$ to compute the dp transitions in $$$O(1)$$$.

The base case is $$$g_{i,j}$$$ with $$$i$$$ or $$$j$$$ equal to $$$0$$$, which can be found by binomial theorem after substituting $$$v=0$$$ (I leave this as an exercise).

To summarize, you can find the expansion of $$$\frac{1}{d^{k}}\left(\displaystyle\sum_{i=0}^{d-1} u^{a_i}v^{b_i}\right)^{k}$$$ in $$$O(k^2)$$$ time.

How do we compute the answer after reducing it to $$$[x^{n}]\frac{1}{d^{k}}\displaystyle\sum_{-k \le i,j \le k}g_{i+k,j+k}u^{i}v^{j}$$$? Let's look at each individual term $$$[x^{n}]u^{i}v^{j}$$$. It is exactly equal to $$$[x^n]\exp(x(i + j\omega)) = \frac{1}{n!}(i+j\omega)^{n}$$$. The $$$\frac{1}{n!}$$$ cancels with the numerator of $$$\frac{n!}{x_{1}!x_{2}!...x_{k}!}$$$. Hence, it is sufficient to compute $$$(i+j\omega)^n$$$ for $$$-k \le i,j \le k$$$.

We can try to use the same trick as the previous problem: Maintaining a polynomial of the form $$$a+b\omega$$$ while doing binary exponentiation. However, it turns out to be a bit too slow (at least my implementation of it). Here, we can exploit the fact that we are computing the answer modulo $$$p = 19491001$$$.

You can find that $$$7$$$ is a primitive root of $$$19491001$$$, and $$$p-1$$$ is divisible by both $$$4$$$ and $$$6$$$. Thus, if we take $$$\omega = 7^{\frac{p-1}{d}}$$$, we will preserve the property that $$$\omega^{d} = 1$$$ and $$$\omega^{i} \neq 1$$$ for $$$1 \le i \le d-1$$$ (equivalently, $$$7^{\frac{p-1}{d}}$$$ is a primitive $$$d$$$-th root of unity in $$$\mathbb{Z}_{p}$$$). Thus, the computations can be done in integers which is faster.

In any case, this gives an $$$O(k^2\log n)$$$ solution with relatively small constants.

Generating Functions that you can't compute "naively"

In some problems, the constraints might be too large to even compute the generating functions you found with FFT or algorithms involving polynomial operations. In this case, you usually need to analyze some special properties of the generating function you are dealing with (and thus it is helpful to recognize some common generating functions).

We start with a relatively easy example.

Problem. Perfect Permutations. Find the number of permutations of length $$$n$$$ with exactly $$$k$$$ inversions, modulo $$$10^{9}+7$$$.

Constraints: $$$1 \le n \le 10^{9}$$$, $$$0 \le k \le 10^{5}$$$, $$$n \ge k$$$. There are $$$100$$$ testcases per input file.

Recall that for a permutation $$$p_1, p_2, ..., p_n$$$, an inversion is a pair of indices $$$i<j$$$ such that $$$p_i > p_j$$$.

In my post $$$4$$$ years ago, I mentioned how to solve an easier variant of this problem using a doubling trick to perform dp. Now, let's describe a much simpler and faster solution using generating functions.

Firstly, let us rephrase the problem. Suppose we start with the sequence $$$1$$$ and add the elements $$$2, 3, ..., n$$$ to the permutation one by one. Note that by adding $$$2$$$, we can increase the number of inversions by $$$0$$$ or $$$1$$$, in exactly one way each. Similarly, among the $$$i$$$ possible ways to insert $$$i$$$ into the sequence, there is exactly one way to increase the number of inversions by each value from $$$0$$$ to $$$i-1$$$. Thus, our problem is equivalent to finding the number of sequences $$$a_{0},a_{1},...,a_{n-1}$$$ such that $$$0 \le a_{i} \le i$$$ for all $$$i$$$ and $$$\sum a_{i} = k$$$.

Let us rephrase this in the language of generating functions. The idea is that we can "pick" any element from $$$0$$$ to $$$i-1$$$ in the $$$i$$$-th "bracket", so it is natural to consider the function

$$$F(x) = (1)(1+x)(1+x+x^2)(1+x+x^2+x^3)...(1+x+x^2+...+x^{n-1})$$$

The coefficient of $$$x^k$$$ in $$$F(x)$$$ gives us the answer.

Unfortunately, this polynomial is too large to compute directly. Let us rewrite it using the geometric series formula.

$$$F(x) = \frac{1-x}{1-x} \cdot \frac{1-x^2}{1-x} \cdot \frac{1-x^3}{1-x} \cdot ... \cdot \frac{1-x^n}{1-x}$$$

$$$= \frac{(1-x)(1-x^2)...(1-x^n)}{(1-x)^n}$$$

$$$= \prod_{i=1}^{n}(1-x^i) \cdot (1-x)^{-n}$$$

We know how to find the coefficient of $$$x^i$$$ in $$$(1-x)^{-n}$$$ for $$$0 \le i \le k$$$ (recall that $$$[x^i](1-x)^{-n} = \binom{n+i-1}{i}$$$). Hence, it is sufficient to find $$$[x^j]\prod_{i=1}^{n}(1-x^i)$$$ for all $$$0 \le j \le k$$$.

As $$$n \ge k$$$, we actually only need to compute $$$(1-x)(1-x^2)...(1-x^k)$$$ (as the larger factors don't contribute to the coefficients of lower powers of $$$x$$$). You can do it using a divide-and-conquer approach with FFT in $$$O(k\log^{2} k)$$$, but the MOD is not FFT-friendly so it might be a bit annoying. Instead, we will present an $$$O(k)$$$ solution which can solve the problem even when $$$k \le 10^6$$$.

The trick is that since the larger factors $$$1-x^{k+1}$$$, $$$1-x^{k+2}$$$, etc. don't matter in our product, why not just add all of them back and consider the infinite product $$$\displaystyle\prod_{n \ge 1}(1 - x^{n})$$$? It turns out that this is a well-known generating function, whose series expansion has a simple form given by the Pentagonal number theorem.

The theorem states that $$$\displaystyle\prod_{n \ge 1}(1 - x^{n}) = 1 + \displaystyle\sum_{i \ge 1}(-1)^{i}\left(x^{\frac{i(3i+1)}{2}} + x^{\frac{i(3i-1)}{2}}\right)$$$. There is a nice bijective proof of this (which you can find on Wikipedia or Enumerative Combinatorics Volume 1), but we only need to use the formula here.

From the series expansion, it is obvious how we can extract the coefficients in $$$O(k)$$$ time (actually even in $$$O(\sqrt{k})$$$ time).

Note that since $$$n$$$ is large, to compute $$$\binom{n+i-1}{i}$$$ for all $$$i$$$, you need to write it in the form $$$\frac{(n+i-1)(n+i-2)...(n)}{i!}$$$ and calculate it in increasing order of $$$i$$$ by multiplying a suitable factor each time $$$i$$$ increases.
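
A sketch of the whole $$$O(k)$$$ assembly, with $$$MOD = 10^{9}+7$$$ and a hypothetical modinv helper (to stay strictly linear you would precompute the inverses of $$$1, 2, ..., k$$$):

```cpp
// Number of permutations of length n with exactly k inversions.
ll perfect_permutations(ll n, ll k) {
    // pent[j] = [x^j] prod_{i>=1} (1 - x^i); by the pentagonal number theorem
    // it equals (-1)^i at j = i(3i-1)/2 and j = i(3i+1)/2, and 0 elsewhere.
    vector<ll> pent(k + 1, 0);
    pent[0] = 1;
    for (ll i = 1; i * (3 * i - 1) / 2 <= k; i++) {
        ll sgn = (i & 1) ? MOD - 1 : 1;
        pent[i * (3 * i - 1) / 2] = sgn;
        if (i * (3 * i + 1) / 2 <= k) pent[i * (3 * i + 1) / 2] = sgn;
    }
    // neg[i] = [x^i] (1-x)^{-n} = C(n+i-1, i), built incrementally (n is huge).
    vector<ll> neg(k + 1);
    neg[0] = 1;
    for (ll i = 1; i <= k; i++)
        neg[i] = neg[i - 1] * ((n + i - 1) % MOD) % MOD * modinv(i) % MOD;
    ll ans = 0;
    for (ll j = 0; j <= k; j++)                 // [x^k] of the product
        ans = (ans + pent[j] * neg[k - j]) % MOD;
    return ans;
}
```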

Overall, we get an $$$O(k)$$$ time solution, which is more than enough for this problem.

Next, we look at a problem that has a natural statement in generating functions, but it turns out that the computation in generating functions is quite tricky. The official editorial has a nice and simpler solution using a magical observation, but to demonstrate the power of generating functions I will show an alternative method (which seems more straightforward and generalizable).

Problem. Sum of Fibonacci Sequence. Let $$$d_{n,k}$$$ be defined by the recurrence $$$d_{1,1}=d_{1,2}=1$$$, $$$d_{1,k}=d_{1,k-1}+d_{1,k-2}$$$ for $$$k \ge 3$$$, and $$$d_{n,k}=\displaystyle\sum_{i=1}^{k}d_{n-1,i}$$$ for $$$n \ge 2$$$, $$$k \ge 1$$$.

Compute $$$d_{n,m}$$$ modulo $$$998244353$$$.

Constraints: $$$1 \le n \le 200000$$$, $$$1 \le m \le 10^{18}$$$

Let's get straight to the generating functions. Define $$$P_{n}(x)$$$ as the OGF for $$$d_{n,k}$$$. $$$d_{1,k}$$$ is just the Fibonacci sequence, so we know that $$$P_{1}(x) = \frac{x}{1-x-x^2}$$$.

How to obtain $$$P_{n}(x)$$$? Thanks to our wonderful "Prefix Sum Trick", we know that multiplying $$$P_{1}(x)$$$ by $$$\frac{1}{1-x}$$$ gives us $$$P_{2}(x)$$$ because $$$d_{2,k}$$$ is just the prefix sum of $$$d_{1,k}$$$. Similarly, we have $$$P_{n}(x) = \frac{1}{(1-x)^{n-1}}P_{1}(x) = \frac{1}{(1-x)^{n-1}} \cdot \frac{x}{1-x-x^2}$$$.

However, now we have some problems, because we need to calculate the coefficient of $$$x^m$$$ in this function with $$$m$$$ up to $$$10^{18}$$$. There is no way we can expand this naively and thus we need to do something clever.

The main annoyance is that we are dealing with a product in the denominator. We know how to compute $$$[x^m]\frac{1}{(1-x)^{n-1}}$$$ and $$$[x^m]\frac{x}{1-x-x^2}$$$ fast (the former is just a binomial coefficient while the latter is just a Fibonacci number). However, we don't know how to compute their convolution fast. The trick here is to forcefully separate these two functions by partial fractions. The theory of partial fractions tells us that

$$$\frac{x}{(1-x)^{n-1} \cdot (1-x-x^2)} = \frac{A(x)}{(1-x)^{n-1}} + \frac{B(x)}{1 - x - x^2}$$$

where $$$A(x)$$$ is a polynomial of degree $$$\le n-2$$$ and $$$B(x)$$$ is a polynomial of degree $$$\le 1$$$. If we can find $$$A(x)$$$ and $$$B(x)$$$, then our problem will be much easier to solve. However, $$$A(x)$$$ and $$$B(x)$$$ are hard to find explicitly on paper (though you can guess $$$B(x)$$$ from a pattern in small cases). How do we proceed?

Here is a painless way to do it. Firstly, we clear the denominators to obtain the identity

$$$A(x)(1-x-x^2) + B(x)(1-x)^{n-1} = x$$$.

Since this is an identity, it remains true for any value of $$$x$$$ we substitute! What convenient values of $$$x$$$ can we substitute? We want to make either $$$1-x-x^2$$$ or $$$1-x$$$ equal to $$$0$$$ to leave us with only one polynomial to deal with. Substituting $$$x=1$$$ doesn't tell us too much since $$$A(x)$$$ has degree $$$\le n-2$$$ and we can't determine it with only $$$1$$$ point of information. However, what if we let $$$1-x-x^2=0$$$? We can solve the quadratic equation to get two roots $$$a+b\sqrt{5}$$$ and $$$a-b\sqrt{5}$$$ for some constants $$$a,b$$$. Substituting $$$x=a \pm b\sqrt{5}$$$, we have the nice pair of equations

$$$B(a \pm b\sqrt{5})(1 - (a \pm b\sqrt{5}))^{n-1} = a \pm b\sqrt{5}$$$.

Since $$$B(x)$$$ is linear, if we let $$$B(x) = \alpha x+\beta$$$ we can solve for the coefficients of $$$B$$$ using these $$$2$$$ simultaneous equations! An implementation detail is that since we are dealing with $$$\sqrt{5}$$$ here, it is helpful to store the numbers as a pair $$$(a,b)$$$ which denotes $$$a+b\sqrt{5}$$$ and do arithmetic on these pairs. The value $$$(1 - (a \pm b\sqrt{5}))^{n-1}$$$ can be found via binary exponentiation in $$$O(\log n)$$$ time (or even naive multiplication works here).
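
A sketch of this pair arithmetic (ll and MOD as before); for instance, the two roots of $$$1-x-x^2$$$ are $$$\frac{-1 \pm \sqrt{5}}{2}$$$, i.e. the pairs $$$\left(-\frac{1}{2}, \frac{1}{2}\right)$$$ and $$$\left(-\frac{1}{2}, -\frac{1}{2}\right)$$$ in this representation.

```cpp
// (a, b) represents a + b * sqrt(5), with both coordinates mod 998244353.
struct Ext {
    ll a, b;
};
Ext mul(const Ext& x, const Ext& y) {           // uses (sqrt(5))^2 = 5
    return { (x.a * y.a + x.b * y.b % MOD * 5) % MOD,
             (x.a * y.b + x.b * y.a) % MOD };
}
Ext power(Ext x, ll e) {                        // binary exponentiation
    Ext r = {1, 0};
    for (; e > 0; e >>= 1, x = mul(x, x))
        if (e & 1) r = mul(r, x);
    return r;
}
```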

In any case, after some work you can find $$$B(x)$$$. How do we find $$$A(x)$$$? Just refer to the identity to obtain $$$A(x) = \frac{x - B(x)(1-x)^{n-1}}{1-x-x^2}$$$, which we can compute in $$$O(n\log n)$$$ time with one FFT (note that you don't even need to compute the reciprocal of a polynomial, as you can just divide using long division since $$$A(x)$$$ is a polynomial).

It remains to compute $$$[x^m]\left[\frac{A(x)}{(1-x)^{n-1}} + \frac{B(x)}{1 - x - x^2}\right]$$$, which is a significantly easier task. For the second term, we can split it into $$$\frac{\alpha x}{1-x-x^2}$$$ and $$$\frac{\beta}{1-x-x^2}$$$ and note that both are generating functions of (shifted) Fibonacci numbers, and thus we can just compute the coefficient of $$$x^m$$$ as a Fibonacci number in $$$O(\log m)$$$ time (using any divide-and-conquer method you like).

For the first term, we have $$$A(x)(1-x)^{-(n-1)}$$$ and we know how to compute both $$$[x^i]A(x)$$$ and $$$[x^j]$$$ $$$(1-x)^{-(n-1)}$$$ (it is a binomial coefficient) for all $$$i,j$$$. Since $$$A(x)$$$ is of degree $$$\le n-2$$$, we can iterate through all $$$i$$$ and compute the sum of $$$[x^i]A(x) \cdot [x^{m-i}]$$$ $$$(1-x)^{-(n-1)}$$$ (the latter requires large binomial coefficients which should be computed in a similar manner as the previous problem).

Thus, we obtain an $$$O(n\log n + \log m)$$$ solution.

I believe you can generalize this solution to solve other recurrences of a similar form.

To end this section, we conclude with a problem that heavily relies on linear recurrences. Actually, it might be a stretch to call this a generating function problem but I just want to demonstrate the trick of using generating functions to compute linear recurrences which is basically the same as the one shown here.

Problem. Sum Modulo. You have a number $$$x$$$ which is initially $$$K$$$. Every second, for $$$1 \le i \le n$$$, there is a probability $$$p_{i}$$$ that you will replace $$$x$$$ with $$$(x - i) \bmod M$$$. Find the expected number of seconds before $$$x$$$ reaches $$$0$$$. $$$p_{i}$$$ are given as $$$\frac{A_{i}}{\sum A_{i}}$$$ for some positive integers $$$A_{1}, A_{2}, ..., A_{n}$$$ and your output should be modulo $$$998244353$$$ (and it is guaranteed that you don't have to worry about division by $$$0$$$).

Constraints: $$$1 \le n \le \min(500,M-1)$$$, $$$2 \le M \le 10^{18}$$$, $$$1 \le A_{i} \le 100$$$

Let us derive a simple recurrence first. Let $$$E(i)$$$ denote the expected number of moves to reach $$$0$$$ from $$$i$$$. Clearly, $$$E(0) = 0$$$, and we have $$$E(i) = p_{1}E(i-1) + p_{2}E(i-2) + ... + p_{n}E(i-n) + 1$$$. We use the convention that $$$E(-r) = E(M-r)$$$.

First, we look at the high-level idea of the solution. The idea is that for $$$i \ge n$$$, we can always write $$$E(i)$$$ in terms of $$$c_{0}E(0) + c_{1}E(1) + c_{2}E(2) + ... + c_{n-1}E(n-1) + C$$$ for some constants $$$c_{i}$$$ and $$$C$$$ by repeatedly using the recurrence relation. In particular, $$$E(M+1) = E(1)$$$, $$$E(M+2) = E(2)$$$, ..., $$$E(M+n-1)=E(n-1)$$$ can all be represented in this form in a non-trivial manner using the recurrence (note that the recurrence doesn't hold for multiples of $$$M$$$, but $$$E(M+i)$$$ can still be represented non-trivially in this form for $$$1 \le i \le n-1$$$).

Hence, moving everything unknown to one side and the constants to the other, we have $$$n-1$$$ nontrivial equations of the form $$$c_{i,1}E(1) + c_{i,2}E(2) + ... + c_{i,n-1}E(n-1) = C_{i}$$$ for $$$1 \le i \le n-1$$$. We can solve this system of equations using Gaussian Elimination in $$$O(n^3)$$$ time.

Once we have obtained the values of $$$E(1), E(2), ..., E(n-1)$$$, we just need to represent $$$E(K)$$$ in terms of $$$E(0), E(1), ..., E(n-1)$$$ and a constant, and we are done.

The remaining problem here is how to get the representation of $$$E(m)$$$ in terms of $$$E(0), E(1), ..., E(n-1)$$$ and a constant $$$C$$$ for any $$$m$$$ in an efficient manner.

Let's assume for the time being that $$$E(M), E(2M), E(3M), ...$$$ also satisfy the linear recurrence $$$E(i) = p_{1}E(i-1) + p_{2}E(i-2) + ... + p_{n}E(i-n) + 1$$$. Thus, the values of $$$E(M)$$$, $$$E(M+1)$$$, ... might not be what we want now but we will deal with this issue later.

Now, we use the same trick as in this blog. For a polynomial $$$f(x) = a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^3+...+a_{k}x^{k}$$$, define its valuation $$$val(f)$$$ as $$$a_{0}+a_{1}E(1)+...+a_{k}E(k)$$$. Since $$$E(i) - p_{1}E(i-1) - p_{2}E(i-2) - ... - p_{n}E(i-n) = 1$$$ for all $$$i \ge n$$$, we have $$$val(x^{i} - p_{1}x^{i-1} - p_{2}x^{i-2} - ... - p_{n}x^{i-n}) = 1$$$ for all $$$i \ge n$$$. Let $$$P(x) = x^{n} - p_{1}x^{n-1} - p_{2}x^{n-2} - ... - p_{n}$$$ for convenience. Then, $$$val(x^{k}P(x)) = 1$$$ for all $$$k \ge 0$$$. Since $$$val$$$ is additive, for any polynomial $$$Q(x)$$$, we have $$$val(Q(x)P(x)) = Q(1)$$$ (the sum of the coefficients of $$$Q$$$, since each $$$x^{k}P(x)$$$ contributes $$$1$$$; if this is unclear, try writing $$$Q(x)$$$ as $$$q_{0}x^{0} + q_{1}x^{1} + ...$$$).

Our goal is to find $$$val(x^{m})$$$ for some integer $$$m$$$. By the division algorithm, we can write $$$x^{m} = P(x)Q(x)+R(x)$$$ where $$$R(x)$$$ is a polynomial of degree $$$\le n-1$$$. Hence, $$$val(x^{m}) = val(P(x)Q(x)+R(x)) = val(P(x)Q(x))+val(R(x)) = Q(1)+val(R(x))$$$. Notice that $$$val(R(x))$$$ is already a linear combination of $$$E(1), E(2), ..., E(n-1)$$$, while $$$Q(1)$$$ is a constant. Thus, if we can somehow find $$$Q(x)$$$ and $$$R(x)$$$, we can represent $$$E(m)$$$ as a linear combination of $$$E(1)$$$, $$$E(2)$$$, ..., $$$E(n-1)$$$ and a constant.

To find $$$Q(1)$$$ and $$$R(x)$$$, we can use a divide-and-conquer algorithm similar to binary exponentiation. Consider the function $$$solve(m)$$$ that returns a pair denoting $$$R(x)$$$ and $$$Q(1)$$$ (a polynomial and a constant). If $$$m$$$ is even, let $$$R_{1}(x)$$$, $$$Q_{1}(1)$$$ be the return value of $$$solve\left(\frac{m}{2}\right)$$$. Let $$$x^{\frac{m}{2}} = P(x)Q_{1}(x) + R_{1}(x)$$$. We have

$$$x^{m} = (P(x)Q_{1}(x) + R_{1}(x))(P(x)Q_{1}(x) + R_{1}(x)) = P(x)^{2}Q_{1}(x)^{2} + P(x)[2R_{1}(x)Q_{1}(x)] + R_{1}(x)^2 = P(x)[P(x)Q_{1}(x)^{2} + 2R_{1}(x)Q_{1}(x)] + R_{1}(x)^2$$$.

Let $$$R_{1}(x)^{2} = Q_{2}(x)P(x) + R_{2}(x)$$$ (just do long division). It follows from the equation above that we can take $$$R(x) = R_{2}(x)$$$ and $$$Q(1) = P(1)Q_{1}(1)^{2} + 2R_{1}(1)Q_{1}(1) + Q_{2}(1)$$$. The case where $$$m$$$ is odd is similar. Thus, we can compute $$$val(x^{m})$$$ in $$$O(n^{2}\log m)$$$ time.
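
Here is a sketch of $$$solve(m)$$$ with naive $$$O(n^2)$$$ polynomial arithmetic, so $$$O(n^{2}\log m)$$$ in total (ll and MOD as before). $$$P$$$ is stored as a monic coefficient vector: $$$P[n] = 1$$$ and $$$P[n-j] = -p_{j}$$$. From the returned pair, $$$val(x^{m}) = Q(1) + val(R(x))$$$ as described above.

```cpp
typedef vector<ll> Poly;

// Long division of A by the monic polynomial P: returns (A mod P, Q(1))
// where Q is the quotient.
pair<Poly, ll> divmod1(Poly A, const Poly& P) {
    int n = (int)P.size() - 1;
    ll q1 = 0;
    for (int i = (int)A.size() - 1; i >= n; i--) {
        ll c = A[i];
        if (!c) continue;
        q1 = (q1 + c) % MOD;                    // quotient gains c * x^{i-n}
        for (int j = 0; j <= n; j++)            // subtract c * x^{i-n} * P(x)
            A[i - n + j] = (A[i - n + j] + MOD - c * P[j] % MOD) % MOD;
    }
    A.resize(n);
    return {A, q1};
}

// Returns (R, Q(1)) with x^m = P(x) Q(x) + R(x) and deg R < n.
pair<Poly, ll> solve(ll m, const Poly& P) {
    int n = (int)P.size() - 1;
    if (m < n) { Poly R(n, 0); R[m] = 1; return {R, 0}; }
    auto [R1, q1] = solve(m / 2, P);
    Poly A(2 * n, 0);                           // A = x^{m mod 2} * R1(x)^2
    int shift = int(m & 1);
    for (int i = 0; i < n; i++)
        if (R1[i])
            for (int j = 0; j < n; j++)
                A[i + j + shift] = (A[i + j + shift] + R1[i] * R1[j]) % MOD;
    auto [R, q2] = divmod1(A, P);
    ll P1 = 0, r1 = 0;                          // P(1) and R1(1)
    for (ll v : P) P1 = (P1 + v) % MOD;
    for (ll v : R1) r1 = (r1 + v) % MOD;
    // x^m = P * [x^{m mod 2}(P Q1^2 + 2 R1 Q1)] + x^{m mod 2} R1^2,
    // hence Q(1) = P(1) q1^2 + 2 R1(1) q1 + q2.
    ll q = (P1 * q1 % MOD * q1 + 2 * r1 % MOD * q1 + q2) % MOD;
    return {R, q};
}
```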

Also, note that if we have computed $$$val(x^{m})$$$, we can compute $$$val(x^{m+1})$$$ in $$$O(n^{2})$$$ time using the same trick. Thus, we can use this method to compute a representation of $$$E(M-n+1)$$$, $$$E(M-n+2)$$$, ..., $$$E(M-1)$$$ in terms of $$$E(1),E(2),...,E(n-1)$$$ and a constant in $$$O(n^{2}\log m + n^{3})$$$ time. The reason we cannot compute $$$E(M+1)$$$, $$$E(M+2)$$$, ..., $$$E(M+n-1)$$$ directly using $$$val$$$ is because the recurrence doesn't hold for $$$E(M)$$$ as stated before. However, once we have the representation for $$$E(M-n+1)$$$, $$$E(M-n+2)$$$, ..., $$$E(M-1)$$$, we can now plug those into the original recurrence $$$E(i) = p_{1}E(i-1) + p_{2}E(i-2) + ... + p_{n}E(i-n) + 1$$$ and obtain the representations for $$$E(M+1)$$$, $$$E(M+2)$$$, ..., $$$E(M+n-1)$$$ in $$$O(n^{2})$$$ time each.

This gives us an $$$O(n^{2}(n + \log M))$$$ solution.

Lagrange Inversion Formula

Finally, inspired by this Div. 1 F problem, I looked up some applications of the Lagrange Inversion Formula. I would like to end this article by demonstrating a few of them.

Some examples here are from Enumerative Combinatorics Volume 2 and some are from jcvb's Chinese paper on generating functions.

The idea of the Lagrange Inversion Formula is that sometimes we want to find the compositional inverse of a function but it is difficult to find. However, the coefficients of this inverse function might have a simpler formula, which we can obtain from Lagrange Inversion Formula.

There are many variants of stating the Lagrange Inversion Formula, so I will show what I think is the most helpful version of it (also given in this comment).

Theorem. Let $$$F(x), G(x)$$$ be formal power series which are compositional inverses (i.e. $$$F(G(x)) = x$$$). Suppose $$$F(0)=G(0)=0$$$, $$$[x^{1}]F(x) \neq 0$$$, $$$[x^{1}]G(x) \neq 0$$$, then

$$$[x^{n}]G(x) = \frac{1}{n}[x^{-1}]\frac{1}{F(x)^{n}}$$$

Also, for any power (or Laurent) series $$$H(x)$$$, we have

$$$[x^{n}]H(G(x)) = \frac{1}{n}[x^{-1}]H'(x)\frac{1}{F(x)^{n}}$$$

Note: Laurent Series can be intuitively seen as the generalization of power series where the powers can go negative.

Intuitively, if you "know" how to compute $$$F(x)$$$, then you can also get the coefficients of the compositional inverse of $$$F(x)$$$. Let's go through a few examples.

Tree Enumeration

Problem. Count the number of labelled trees on $$$n$$$ vertices (number of trees where vertices are labelled).

If you have heard of Cayley's Formula, you know that the answer is $$$n^{n-2}$$$.

Let us count the number of rooted labelled trees on $$$n$$$ vertices. Call this number $$$t(n)$$$. If we remove the root from a rooted tree, we get a collection of rooted subtrees. This allows us to get the recurrence

$$$t(n+1) = (n+1)\displaystyle\sum_{k \ge 0}\sum_{i_{1}+i_{2}+...+i_{k}=n, i_{j} \ge 1}\frac{n!}{i_{1}!i_{2}!...i_{k}!}t(i_{1})t(i_{2})...t(i_{k}) \cdot \frac{1}{k!}$$$, as we have $$$n+1$$$ ways to choose the root, $$$\frac{n!}{i_{1}!i_{2}!...i_{k}!}$$$ ways to assign the non-root nodes to a subtree (we divide by $$$k!$$$ because each set of subtrees is counted $$$k!$$$ times for each permutation), and $$$t(i_{1})t(i_{2})...t(i_{k})$$$ is the number of ways to form each rooted subtree.

Dividing by $$$(n+1)!$$$ and multiplying by $$$x^{n+1}$$$, we obtain

$$$\frac{t(n+1)}{(n+1)!}x^{n+1} = x\displaystyle\sum_{k \ge 0}\frac{1}{k!} \cdot \sum_{i_{1}+i_{2}+...+i_{k}=n, i_{j} \ge 1}\frac{t(i_{1})x^{i_1}}{i_{1}!} \cdot \frac{t(i_{2})x^{i_2}}{i_{2}!} \cdot ... \cdot \frac{t(i_{k})x^{i_k}}{i_{k}!}$$$.

Hence, letting $$$T(x)$$$ be the EGF of $$$t(n)$$$ (and defining $$$t(0)=0$$$ for simplicity), we have

$$$T(x) = \displaystyle\sum_{n \ge 0}\frac{t(n+1)}{(n+1)!}x^{n+1} = x\displaystyle\sum_{k \ge 0}\frac{1}{k!} \cdot \displaystyle\sum_{n \ge 0}\sum_{i_{1}+i_{2}+...+i_{k}=n, i_{j} \ge 1}\frac{t(i_{1})x^{i_1}}{i_{1}!} \cdot \frac{t(i_{2})x^{i_2}}{i_{2}!} \cdot ... \cdot \frac{t(i_{k})x^{i_k}}{i_{k}!}$$$

$$$= x\displaystyle\sum_{k \ge 0} \frac{T(x)^{k}}{k!}$$$ (verify that we get the previous line by expanding this)

$$$= xe^{T(x)}$$$

Hence, we have the functional equation $$$T(x) = xe^{T(x)}$$$. It is not easy to solve this equation directly. However, we can see that we have a function of $$$T(x)$$$ which is equal to $$$x$$$, which motivates us to write

$$$T(x)e^{-T(x)} = x$$$

Letting $$$F(x) = xe^{-x}$$$ and $$$G(x) = T(x)$$$ in the Lagrange Inversion Formula, we obtain

$$$[x^{n}]T(x) = \frac{1}{n}[x^{-1}]\frac{1}{(xe^{-x})^n} = \frac{1}{n}[x^{-1}]x^{-n}e^{nx} = \frac{1}{n}[x^{n-1}]e^{nx} = \frac{1}{n} \cdot \frac{n^{n-1}}{(n-1)!} = \frac{n^{n-1}}{n!}$$$.

Thus, $$$t(n) = n^{n-1}$$$.
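
As a quick sanity check: for $$$n=3$$$, there are $$$3$$$ labelled trees (each path on $$$3$$$ vertices is determined by its middle vertex), and each can be rooted at any of its $$$3$$$ vertices, giving $$$t(3) = 9 = 3^{2}$$$, which agrees with the formula.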

Finally, to count the number of unrooted labelled trees, simply divide $$$t(n)$$$ by $$$n$$$ as each unrooted tree is counted $$$n$$$ times in $$$t(n)$$$ by the $$$n$$$ choices of root. Hence, the answer is $$$n^{n-2}$$$.

Number of $$$2$$$-edge connected graphs

Problem. Find the number of labelled $$$2$$$-edge connected graphs on $$$n$$$ vertices. A graph is $$$2$$$-edge connected if it has no bridges, i.e. removing any edge does not disconnect the graph.

Constraints: $$$n \le 3 \cdot 10^{5}$$$

Let's warm up with an easier problem. Suppose we want to count the number of labelled connected graphs on $$$n$$$ vertices. There are different ways to compute this, but let's show a method using EGFs as it will be useful later. Let $$$C(x)$$$ be the EGF of the number of connected labelled graphs.

Connected graphs on $$$n$$$ vertices are hard to count, but labelled graphs on $$$n$$$ vertices are trivial: there are exactly $$$2^{\binom{n}{2}}$$$ labelled graphs on $$$n$$$ vertices since we can either choose each edge or not. Let $$$G(x)$$$ denote the EGF of the number of labelled graphs. The nice thing is that labelled graphs are made up of several connected components, so we can use a similar argument as above (I will skip this step) to obtain $$$G(x) = \exp(C(x))$$$, which gives $$$C(x) = \ln(G(x))$$$. Since we know how to find $$$G(x)$$$, we can find $$$C(x)$$$ in $$$O(n\log n)$$$ time using polynomial operations.
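
If you want to verify this without implementing polynomial $$$\ln$$$, here is a minimal $$$O(n^2)$$$ sketch of the same count, using the equivalent recurrence obtained by fixing the connected component containing vertex $$$1$$$ (the toy bound $$$n = 10$$$ is arbitrary); it should print $$$1, 1, 4, 38, 728, 26704, \dots$$$

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;

ll power(ll b, ll e) { ll r = 1; b %= MOD; while (e) { if (e & 1) r = r * b % MOD; b = b * b % MOD; e >>= 1; } return r; }

int main() {
    const int n = 10;  // toy bound
    vector<vector<ll>> binom(n + 1, vector<ll>(n + 1, 0));
    for (int i = 0; i <= n; i++) {
        binom[i][0] = 1;
        for (int j = 1; j <= i; j++) binom[i][j] = (binom[i-1][j-1] + binom[i-1][j]) % MOD;
    }
    // g(i) = 2^binom(i,2) = number of labelled graphs on i vertices
    vector<ll> g(n + 1), c(n + 1, 0);
    for (int i = 0; i <= n; i++) g[i] = power(2, (ll)i * (i - 1) / 2);
    // every graph splits as (component of vertex 1, size k) + (rest):
    // g(i) = sum_k binom(i-1,k-1) * c(k) * g(i-k); solve for c(i)
    for (int i = 1; i <= n; i++) {
        c[i] = g[i];
        for (int k = 1; k < i; k++)
            c[i] = (c[i] - binom[i-1][k-1] * c[k] % MOD * g[i-k]) % MOD;
        c[i] = (c[i] + MOD) % MOD;
        printf("c(%d) = %lld\n", i, c[i]);  // 1, 1, 4, 38, 728, 26704, ...
    }
}
```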

Ok, now let's return to our problem. Let $$$b(n)$$$ denote the number of $$$2$$$-edge connected graphs on $$$n$$$ vertices and $$$B(x)$$$ be its EGF. Our goal is to find $$$B(x)$$$.

The idea is to relate $$$b(n)$$$ with $$$c(n)$$$. Suppose we have a labelled connected graph on $$$n$$$ vertices, say $$$G$$$. Any connected graph $$$G$$$ can be decomposed into a bridge tree, where each vertex is a $$$2$$$-edge connected component and the edges of the tree are the bridges of the graph. Let $$$s$$$ be the size of the $$$2$$$-edge connected component containing vertex $$$1$$$, and fix the $$$2$$$-edge connected component containing vertex $$$1$$$ as the root of the bridge tree. There are $$$\binom{n-1}{s-1}$$$ ways to choose the other elements in the component and $$$b(s)$$$ ways to connect edges within the component. Now, in the bridge tree, let $$$a_{1}, a_{2}, ..., a_{k}$$$ be the total weights of the subtrees of the children of the root (we define the weight of a vertex in the bridge tree as the size of the $$$2$$$-edge connected component represented by it, and the weight of a subtree as the sum of weights of all vertices in the subtree). Then, there are $$$\frac{(n-s)!}{a_{1}!a_{2}!...a_{k}!}$$$ ways to assign the remaining $$$n-s$$$ vertices to each subtree. Each subtree represents a general connected graph on $$$a_{i}$$$ vertices, so there are $$$c(a_{i})$$$ ways to connect edges in the $$$i$$$-th subtree. Finally, there are $$$sa_{i}$$$ ways to choose the "bridge" between subtree $$$i$$$ and the root, because we need to pick exactly one endpoint from the subtree ($$$a_{i}$$$ choices) and one from the root component ($$$s$$$ choices) to connect.

Summing over all tuples $$$(a_1,a_2,...,a_k)$$$ with sum $$$n-s$$$, and dividing by $$$k!$$$ to account for the fact that each set of subtrees is counted $$$k!$$$ times, we obtain the recurrence

$$$c(n) = \displaystyle\sum_{s=1}^{n} b(s) \cdot \binom{n-1}{s-1} \cdot (n-s)! \cdot \displaystyle\sum_{k \ge 0} \frac{1}{k!} \displaystyle\sum_{a_{1}+a_{2}+...+a_{k}=n-s} \displaystyle\prod_{j=1}^{k}\frac{c(a_{j}) \cdot a_{j} \cdot s}{a_{j}!}$$$

$$$= \displaystyle\sum_{s=1}^{n} b(s) \cdot \frac{(n-1)!}{(s-1)!} \cdot \displaystyle\sum_{k \ge 0} \frac{s^{k}}{k!} \displaystyle\sum_{a_{1}+a_{2}+...+a_{k}=n-s} \displaystyle\prod_{j=1}^{k}\frac{c(a_{j}) \cdot a_{j}}{a_{j}!}$$$

Hence,

$$$\frac{nc(n)}{n!} = \displaystyle\sum_{s=1}^{n} \frac{b(s)}{(s-1)!} \cdot \displaystyle\sum_{k \ge 0} \frac{s^{k}}{k!} \displaystyle\sum_{a_{1}+a_{2}+...+a_{k}=n-s} \displaystyle\prod_{j=1}^{k}\frac{c(a_{j}) \cdot a_{j}}{a_{j}!}$$$

Note that we have $$$nc(n)$$$ and $$$a_{j} \cdot c(a_{j})$$$ appearing in the summand. It seems easier to consider the EGF of the sequence $$$nc(n)$$$, say $$$C_{1}(x)$$$. Note that $$$C_{1}(x) = xC'(x)$$$ by the "multiplication by $$$n$$$" rule. In any case, we work in generating functions to obtain

$$$C_{1}(x) = \displaystyle\sum_{n \ge 0}\frac{nc(n)}{n!}x^{n} = \displaystyle\sum_{n \ge 0}x^{n}\displaystyle\sum_{s=1}^{n} \frac{b(s)}{(s-1)!} \cdot \displaystyle\sum_{k \ge 0} \frac{s^{k}}{k!} \displaystyle\sum_{a_{1}+a_{2}+...+a_{k}=n-s} \displaystyle\prod_{j=1}^{k}\frac{c(a_{j}) \cdot a_{j}}{a_{j}!}$$$

$$$= \displaystyle\sum_{n \ge 0}\displaystyle\sum_{s=1}^{n} \frac{b(s)x^{s}}{(s-1)!} \cdot \displaystyle\sum_{k \ge 0} \frac{s^{k}}{k!} \displaystyle\sum_{a_{1}+a_{2}+...+a_{k}=n-s} \displaystyle\prod_{j=1}^{k}\frac{c(a_{j}) \cdot a_{j} \cdot x^{a_{j}}}{a_{j}!}$$$

Simplifying the interior sum and product by noting its relevance to the coefficients of $$$C_{1}(x)$$$, we obtain

$$$= \displaystyle\sum_{n \ge 0}\displaystyle\sum_{s=1}^{n} \frac{b(s)x^{s}}{(s-1)!} \cdot \displaystyle\sum_{k \ge 0} \frac{s^{k}x^{n-s}}{k!} [x^{n-s}]C_{1}(x)^k$$$

$$$= \displaystyle\sum_{n \ge 0}\displaystyle\sum_{s=1}^{n} \frac{b(s)x^{s}}{(s-1)!} \cdot x^{n-s}[x^{n-s}]\displaystyle\sum_{k \ge 0} \frac{s^{k}}{k!}C_{1}(x)^k$$$

$$$= \displaystyle\sum_{n \ge 0}\displaystyle\sum_{s=1}^{n} \frac{b(s)x^{s}}{(s-1)!} \cdot x^{n-s}[x^{n-s}]\exp(sC_{1}(x))$$$

Swapping the order of summation, we get

$$$= \displaystyle\sum_{s \ge 1}\frac{b(s)x^{s}}{(s-1)!}\displaystyle\sum_{n \ge s}x^{n-s}[x^{n-s}]\exp(sC_{1}(x))$$$

$$$= \displaystyle\sum_{s \ge 1}\frac{b(s)x^{s}}{(s-1)!}\displaystyle\sum_{n \ge 0}x^{n}[x^{n}]\exp(sC_{1}(x))$$$

$$$= \displaystyle\sum_{s \ge 1}\frac{b(s)x^{s}}{(s-1)!}\exp(sC_{1}(x))$$$

$$$= \displaystyle\sum_{s \ge 1}\frac{sb(s)\left(x\exp(C_{1}(x))\right)^{s}}{s!}$$$.

Let $$$B_{1}(x) = xB'(x)$$$ be the EGF of $$$sb(s)$$$. Thus, we obtain $$$C_{1}(x) = B_{1}(x\exp(C_{1}(x)))$$$.

We know how to find $$$C_{1}$$$ and our aim now is to find the coefficients of $$$B_{1}$$$.

Let $$$P(x) = C_{1}(x)$$$ and $$$Q(x) = x\exp(C_{1}(x))$$$. We have the relation $$$P(x) = B_{1}(Q(x))$$$ and want to find $$$[x^n]B_{1}(x)$$$. To make $$$B_{1}(x)$$$ appear, substitute $$$Q^{-1}(x)$$$ for $$$x$$$ (the compositional inverse exists because $$$Q(x)$$$ is a power series with a nonzero $$$x$$$ term and no constant term). Thus, $$$B_{1}(x) = P(Q^{-1}(x))$$$. This looks very similar to Lagrange Inversion Formula!

Indeed, we let $$$P = H$$$ and $$$Q^{-1} = G$$$. Then, $$$Q = F$$$, so we have

$$$[x^{n}]B_{1}(x) = [x^{n}]P(Q^{-1}(x)) = \frac{1}{n}[x^{-1}]P'(x)\frac{1}{Q(x)^{n}} = \frac{1}{n}[x^{-1}]C_{1}'(x)\frac{1}{x^{n}\exp(C_{1}(x))^{n}} = \frac{1}{n}[x^{n-1}]\frac{C_{1}'(x)}{\exp(C_{1}(x))^{n}}$$$.

This can be computed via standard polynomial operations in $$$O(n\log n)$$$ time.
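
To make the whole chain concrete, here is a sanity-check sketch that evaluates $$$b(n) = \frac{(n-1)!}{n}[x^{n-1}]\frac{C_{1}'(x)}{\exp(C_{1}(x))^{n}}$$$ using naive $$$O(n^3)$$$ series arithmetic in place of fast polynomial operations, so it is only a correctness reference, not a solution for $$$n \le 3 \cdot 10^{5}$$$. Here $$$\exp(-nC_{1})$$$ is computed via the ODE $$$W' = -nC_{1}'W$$$, and the toy bound $$$N = 8$$$ is my own choice.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;

ll power(ll b, ll e) { ll r = 1; b %= MOD; while (e) { if (e & 1) r = r * b % MOD; b = b * b % MOD; e >>= 1; } return r; }
ll inv(ll a) { return power(a, MOD - 2); }

int main() {
    const int N = 8;  // toy bound; the real problem needs n up to 3*10^5
    // Step 1: c(i) = number of connected labelled graphs (warmup recurrence).
    vector<vector<ll>> binom(N + 1, vector<ll>(N + 1, 0));
    for (int i = 0; i <= N; i++) {
        binom[i][0] = 1;
        for (int j = 1; j <= i; j++) binom[i][j] = (binom[i-1][j-1] + binom[i-1][j]) % MOD;
    }
    vector<ll> fact(N + 1, 1), g(N + 1), c(N + 1, 0);
    for (int i = 1; i <= N; i++) fact[i] = fact[i-1] * i % MOD;
    for (int i = 0; i <= N; i++) g[i] = power(2, (ll)i * (i - 1) / 2);
    for (int i = 1; i <= N; i++) {
        c[i] = g[i];
        for (int k = 1; k < i; k++)
            c[i] = (c[i] - binom[i-1][k-1] * c[k] % MOD * g[i-k]) % MOD;
        c[i] = (c[i] + MOD) % MOD;
    }
    // Step 2: coefficients of C1(x) = sum n*c(n)/n! x^n and of C1'(x).
    vector<ll> c1(N + 1, 0), d(N + 1, 0);
    for (int i = 1; i <= N; i++) c1[i] = (ll)i * c[i] % MOD * inv(fact[i]) % MOD;
    for (int i = 0; i < N; i++) d[i] = (ll)(i + 1) * c1[i + 1] % MOD;
    // Step 3: b(n) = (n-1)!/n * [x^{n-1}] C1'(x) * exp(-n*C1(x)).
    for (int n = 1; n <= N; n++) {
        vector<ll> w(n, 0);  // w = exp(-n*C1) mod x^n, via w' = -n*C1'*w
        w[0] = 1;
        for (int m = 1; m < n; m++) {
            ll s = 0;
            for (int j = 0; j < m; j++) s = (s + d[j] * w[m-1-j]) % MOD;
            w[m] = (MOD - n) * s % MOD * inv(m) % MOD;
        }
        ll S = 0;  // S = [x^{n-1}] C1'(x) * w(x)
        for (int j = 0; j < n; j++) S = (S + d[j] * w[n-1-j]) % MOD;
        ll b = fact[n-1] * S % MOD * inv(n) % MOD;
        printf("b(%d) = %lld\n", n, b);  // expect 1, 0, 1, 10, ...
    }
}
```

The first few values ($$$b(3) = 1$$$ for the triangle, $$$b(4) = 10$$$ for $$$C_4$$$, $$$K_4$$$ minus an edge, and $$$K_4$$$) are easy to confirm by hand.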

Coefficient of fixed $$$x^{k}$$$ in $$$f(x)^{i}$$$

This is more of a trick than a specific problem. Let $$$f(x)$$$ be a power series with a compositional inverse ($$$[x^{0}]f(x) = 0$$$, $$$[x^{1}]f(x) \neq 0$$$). We can find the coefficient of $$$x^{k}$$$ (assume $$$k \ge 1$$$) in $$$f(x)^{i}$$$ for all $$$1 \le i \le n$$$ in $$$O(n\log n)$$$ time (assume $$$k = O(n)$$$).

Let $$$ans(i)$$$ denote the answer for fixed $$$i$$$. Instead of computing each $$$ans(i)$$$ separately, let's introduce a new variable $$$u$$$ and consider the OGF

$$$A(u) = ans(0) + ans(1)u + ans(2)u^{2} + ... = \displaystyle\sum_{n \ge 0}[x^{k}]f(x)^{n}u^{n} = [x^{k}]\displaystyle\sum_{n \ge 0}(f(x)u)^{n} = [x^{k}]\frac{1}{1 - uf(x)}$$$.

Since $$$f(x)$$$ has a compositional inverse (say $$$g(f(x)) = x$$$), by Lagrange Inversion formula (with $$$H(x) = \frac{1}{1 - ux}$$$), we obtain

$$$[x^{k}]\frac{1}{1-uf(x)} = \frac{1}{k}[x^{-1}]\left(\frac{1}{1-ux}\right)'\left(\frac{1}{g(x)^{k}}\right) = \frac{1}{k}[x^{k-1}]\left(\frac{1}{1-ux}\right)'\left(\frac{1}{\left(\frac{g(x)}{x}\right)^{k}}\right)$$$.

Note that by Quotient Rule, $$$\left(\frac{1}{1-ux}\right)' = \frac{u}{(1-ux)^{2}}$$$.

Our goal is to rewrite our sum in a clear manner so that we can "read off" the coefficients of $$$u^{i}$$$. We try to change the problem of finding the coefficients of $$$u^{i}$$$ into a problem about purely finding the coefficients of $$$x^{j}$$$ of some function.

The idea is to expand the series

$$$\frac{u}{(1-ux)^{2}} = u(1-ux)^{-2} = u\displaystyle\sum_{i \ge 0}\binom{i+1}{1}(ux)^{i}$$$ (recall how to expand $$$(1-x)^{-2}$$$)

$$$= \displaystyle\sum_{i \ge 0}(i+1)x^{i}u^{i+1}$$$, thus

$$$[x^{k}]\frac{1}{1-uf(x)} = \frac{1}{k}[x^{k-1}]\displaystyle\sum_{i \ge 0}(i+1)x^{i}u^{i+1} \left(\frac{1}{\left(\frac{g(x)}{x}\right)^{k}}\right)$$$

Let's look at $$$ans(i+1)$$$, the coefficient of $$$u^{i+1}$$$. We have

$$$ans(i+1) = [u^{i+1}]\frac{1}{k}[x^{k-1}]\displaystyle\sum_{i \ge 0}(i+1)x^{i}u^{i+1} \left(\frac{1}{\left(\frac{g(x)}{x}\right)^{k}}\right) = \frac{i+1}{k}[x^{k-i-1}]\frac{1}{\left(\frac{g(x)}{x}\right)^{k}}$$$.

Now, our problem reduces to computing the coefficients of one fixed function $$$P(x) = \frac{1}{\left(\frac{g(x)}{x}\right)^{k}}$$$, which we can compute the first $$$n$$$ terms of using the usual polynomial operations! Thus, we can compute $$$ans(i)$$$ for all $$$i$$$ in $$$O(n\log n)$$$ time!
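
Here is a small sketch that checks the trick numerically: it computes $$$[x^{k}]f(x)^{i}$$$ both directly and via $$$ans(i) = \frac{i}{k}[x^{k-i}]\left(\frac{x}{g(x)}\right)^{k}$$$, where $$$g$$$ is the compositional inverse of $$$f$$$. Everything is naive $$$O(k^2)$$$/$$$O(k^3)$$$ series arithmetic, and the example $$$f(x) = x + x^{2} + 2x^{3}$$$ with bounds $$$k=6$$$, $$$n=10$$$ is arbitrary; a real implementation would use $$$O(n\log n)$$$ polynomial algorithms for the inverse and the power.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef vector<ll> poly;
const ll MOD = 998244353;

ll power(ll b, ll e) { ll r = 1; b %= MOD; while (e) { if (e & 1) r = r * b % MOD; b = b * b % MOD; e >>= 1; } return r; }

// product truncated to degree K
poly mul(const poly& a, const poly& b, int K) {
    poly c(K + 1, 0);
    for (int i = 0; i <= K && i < (int)a.size(); i++)
        for (int j = 0; i + j <= K && j < (int)b.size(); j++)
            c[i + j] = (c[i + j] + a[i] * b[j]) % MOD;
    return c;
}

// f(g(x)) truncated to degree K, assuming f has no constant term (Horner)
poly compose(const poly& f, const poly& g, int K) {
    poly res(K + 1, 0);
    for (int i = (int)f.size() - 1; i >= 1; i--) {
        res[0] = (res[0] + f[i]) % MOD;
        if (i > 1) res = mul(res, g, K);
    }
    return mul(res, g, K);
}

int main() {
    const int k = 6, n = 10;
    poly f = {0, 1, 1, 2};  // example: f(x) = x + x^2 + 2x^3

    // direct computation of [x^k] f(x)^i
    poly fi = {1};
    vector<ll> direct(n + 1, 0);
    for (int i = 1; i <= n; i++) { fi = mul(fi, f, k); direct[i] = fi[k]; }

    // compositional inverse g of f, one coefficient at a time:
    // [x^m] f(g(x)) = f_1 * g_m + (terms using only g_1..g_{m-1})
    poly g(k + 1, 0);
    g[1] = power(f[1], MOD - 2);
    for (int m = 2; m <= k; m++) {
        poly h = compose(f, g, m);        // here g[m] is still 0
        g[m] = (MOD - h[m]) * g[1] % MOD; // force [x^m] f(g(x)) = 0
    }

    // P(x) = (x/g(x))^k: reciprocal of g(x)/x, raised to the k-th power
    poly a(k, 0), r(k, 0);
    for (int i = 0; i < k; i++) a[i] = g[i + 1];  // a = g(x)/x
    ll ia0 = power(a[0], MOD - 2);
    r[0] = ia0;
    for (int m = 1; m < k; m++) {
        ll s = 0;
        for (int j = 1; j <= m; j++) s = (s + a[j] * r[m - j]) % MOD;
        r[m] = (MOD - s) * ia0 % MOD;
    }
    poly P = {1};
    for (int t = 0; t < k; t++) P = mul(P, r, k - 1);

    // trick: ans(i) = (i/k) * [x^{k-i}] P(x) for 1 <= i <= k, and 0 for i > k
    ll invk = power(k, MOD - 2);
    for (int i = 1; i <= n; i++) {
        ll trick = (i <= k) ? (ll)i * invk % MOD * P[k - i] % MOD : 0;
        printf("i=%d: direct=%lld trick=%lld\n", i, direct[i], trick);
    }
}
```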

If $$$f(x)$$$ does not have a compositional inverse, it is possible to "adjust" our function $$$f$$$ (create a new function related to $$$f$$$) so that it has a compositional inverse. I leave this as an exercise.

Final Boss: Div 1 F — Slime and Sequences

As the grand finale of this 2-part article, I would like to discuss the recent, very difficult Div. 1 F problem, which was the inspiration for the entire blog in the first place.

Problem. Slime and Sequences A sequence of positive integers $$$s$$$ is called good if for each $$$k>1$$$ that is present in $$$s$$$, the first occurrence of $$$k-1$$$ (which must exist) must occur before the last occurrence of $$$k$$$. Count the number of times each integer $$$1 \le i \le n$$$ appears over all good sequences of length $$$n$$$.

Constraints: $$$1 \le n \le 100000$$$

The official editorial has a solution using PIE which ends up with an application of Lagrange Inversion Formula very similar to the example just shown. You can try to see the connection between the previous example and the trick used in the official editorial (for the Lagrange Inversion part). Here, I will demonstrate Elegia's solution described here, which to me is a more intuitive way to approach the problem (thanks to Elegia for helping me with some parts I didn't understand and for showing that this approach can lead to a full solution!).

The first step is to reduce the problem into something simpler. If we look at the samples, we see that curiously the sum of all the answers in the output is $$$n \cdot n!$$$, which suggests that there are exactly $$$n!$$$ good sequences. This cannot be a coincidence, and a bijection between permutations and good sequences should exist.

In fact, this is IMO 2002 Shortlist C3. Since the last occurrence of $$$k$$$ must occur after the first occurrence of $$$k-1$$$, we can consider iterating through the good sequence from right to left several times. In the first run, we record the positions of the number $$$1$$$s, from right to left. On the second run, we record the positions of the number $$$2$$$s, from right to left and so on. At the end of the process, $$$n$$$ distinct numbers from $$$1$$$ to $$$n$$$ are recorded. This will be our permutation.

We claim that this is the bijection we seek. The idea is that if we read the values in our final permutation $$$p$$$ from left to right, every time we meet an ascent ($$$p_{i} < p_{i+1}$$$), it indicates that we have finished recording all occurrences of the current number and are now going from right to left again. The condition of good sequences guarantees that every time we record the occurrences of a new number, there will be a new ascent in our permutation. Thus, we can easily recover the good sequence by marking the ascents of the permutation.

This also gives us a nice way to formulate the problem. Let $$$ans(i)$$$ denote the $$$i$$$-th answer for our problem. For any permutation $$$p$$$, divide it into a minimum number of decreasing blocks. Let's say the block sizes are $$$b_{1},b_{2},...,b_{k}$$$. Then, this permutation contributes $$$b_{1}$$$ to $$$ans(1)$$$, $$$b_{2}$$$ to $$$ans(2)$$$ and so on. For example, if $$$p = (5, 3, 2, 7, 1, 4, 6)$$$, then we divide it into blocks $$$[5,3,2]$$$, $$$[7,1]$$$, $$$[4]$$$, $$$[6]$$$, so $$$p$$$ increases $$$ans(1)$$$, $$$ans(2)$$$, $$$ans(3)$$$, $$$ans(4)$$$ by $$$3$$$, $$$2$$$, $$$1$$$, $$$1$$$ respectively.

Now, let's do some double counting. Fix a position $$$1 \le j \le n$$$. Can we calculate how many times this position contributes to $$$ans(k)$$$ over all $$$n!$$$ permutations (for a fixed $$$k$$$)?

Note that for position $$$j$$$ to contribute to the answer, the prefix $$$p_{1}$$$, $$$p_{2}$$$, ..., $$$p_{j}$$$ must contain exactly $$$k-1$$$ ascents. Additionally, the last $$$n-j$$$ elements can be permuted in any manner. Finally, there are $$$\binom{n}{j}$$$ ways to choose which elements occur in the prefix. Thus, if we let $$$A(n,k)$$$ denote the number of permutations of length $$$n$$$ with exactly $$$k-1$$$ ascents, position $$$j$$$ contributes $$$(n-j)!\binom{n}{j}A(j,k) = \frac{n!}{j!}A(j,k)$$$ to the answer.

Summing over all $$$j$$$, we get the nice-looking formula $$$ans(k) = n!\displaystyle\sum_{j=1}^{n}\frac{A(j,k)}{j!}$$$. From here, you can derive a simple $$$O(n^2)$$$ solution, since there is a simple recurrence for $$$A(n,k)$$$. However, we are looking for more here.
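
Here is a minimal sketch of that $$$O(n^2)$$$ baseline, using the (shifted) Eulerian recurrence $$$A(j,k) = k \cdot A(j-1,k) + (j-k+1) \cdot A(j-1,k-1)$$$; the tiny $$$n = 5$$$ is only for illustration, and the $$$n!$$$ factor from the formula above is included. For $$$n = 2$$$ it prints $$$3, 1$$$, which you can confirm by listing the good sequences $$$[1,1]$$$ and $$$[1,2]$$$ by hand.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;

ll power(ll b, ll e) { ll r = 1; b %= MOD; while (e) { if (e & 1) r = r * b % MOD; b = b * b % MOD; e >>= 1; } return r; }

int main() {
    const int n = 5;  // toy bound; the real limit is 10^5
    // A[j][k] = #permutations of length j with exactly k-1 ascents
    vector<vector<ll>> A(n + 1, vector<ll>(n + 2, 0));
    A[1][1] = 1;
    for (int j = 2; j <= n; j++)
        for (int k = 1; k <= j; k++)
            A[j][k] = ((ll)k * A[j-1][k] + (ll)(j - k + 1) * A[j-1][k-1]) % MOD;

    vector<ll> fact(n + 1, 1), ifact(n + 1, 1);
    for (int i = 1; i <= n; i++) fact[i] = fact[i-1] * i % MOD;
    ifact[n] = power(fact[n], MOD - 2);
    for (int i = n; i >= 1; i--) ifact[i-1] = ifact[i] * i % MOD;

    // ans(k) = n! * sum_{j=1..n} A(j,k) / j!
    for (int k = 1; k <= n; k++) {
        ll ans = 0;
        for (int j = 1; j <= n; j++) ans = (ans + A[j][k] * ifact[j]) % MOD;
        printf("ans(%d) = %lld\n", k, ans * fact[n] % MOD);
    }
}
```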

For convenience, now we ignore the $$$n!$$$ term and just let $$$ans(k)$$$ be $$$\displaystyle\sum_{j=1}^{n}\frac{A(j,k)}{j!}$$$. We can multiply each answer by $$$n!$$$ at the end.

What's nicer is that the $$$A(n,k)$$$ are actually a well-known sequence: the Eulerian numbers! If you use Wikipedia or Enumerative Combinatorics 1, you can find the formulas related to generating functions involving $$$A(n,k)$$$.

Here, I use the notation $$$A(n,k)$$$ from Enumerative Combinatorics 1, which differs a bit from Wikipedia: $$$k$$$ is shifted and $$$A(0,0)=1$$$. You can either derive or google the following formula (it is the first thing that comes up when you look for generating functions of Eulerian numbers, though it may look slightly different due to variable shifting):

$$$F(x,y) = \displaystyle\sum_{k \ge 0}\displaystyle\sum_{n \ge 0}A(n,k)\frac{x^{n}}{n!}y^{k} = \frac{1-y}{1 - ye^{(1-y)x}}$$$

What we want to find is $$$\displaystyle\sum_{j=1}^{n}\frac{A(j,k)}{j!}$$$, so let us rewrite

$$$F(x,y) = \displaystyle\sum_{k \ge 0}y^{k}\displaystyle\sum_{n \ge 0}A(n,k)\frac{x^{n}}{n!}$$$ and focus on $$$G_{k}(x) = \displaystyle\sum_{n \ge 0}A(n,k)\frac{x^{n}}{n!}$$$ for a fixed $$$k$$$.

Observe that we want the prefix sum of the coefficients of $$$G_{k}(x)$$$. By the prefix sum trick, we get $$$ans(k) = [x^{n}]\frac{1}{1-x} \cdot G_{k}(x)$$$.

Thus, $$$ans(k) = [x^{n}y^{k}]\frac{1}{1-x}F(x,y) = [x^{n}y^{k}]\frac{1-y}{(1-x)(1 - ye^{(1-y)x})}$$$, which is exactly the same expression quoted in Elegia's post.

The hard part is finding this fast. I got to this point on my own but didn't really know how to proceed (I wasn't even sure if it was doable) before reading Elegia's post so huge thanks to him!

Let us focus on the expression $$$(1-y)[x^n]\frac{1}{(1-x)(1 - ye^{(1-y)x})}$$$ and keep in mind that we want to find the answer as a polynomial in $$$y$$$, which we can read the answer from.

The first major concern is that we have $$$e^{(1-y)x}$$$, which is an exponential of a multivariable function. This seems painfully awful to deal with, especially if we need to expand it in power series form in the end. A nice trick here is to eliminate this nonsense by making the substitution $$$z = (1-y)x$$$. Then, $$$x = \frac{z}{1-y}$$$ and our expression becomes:

$$$(1-y)[x^n]\frac{1}{(1-x)(1 - ye^{(1-y)x})} = (1-y)[x^n]\frac{1}{\left(1-\frac{z}{1-y}\right)(1 - ye^{z})} = (1-y)^{n+1}[z^n]\frac{1}{\left(1-\frac{z}{1-y}\right)(1 - ye^{z})}$$$

Let us clear the nested denominator in $$$1 - \frac{z}{1-y}$$$ by multiplying the numerator and denominator by $$$1-y$$$; we now need to find

$$$(1-y)^{n+2}[z^n]\frac{1}{(1-y-z)(1 - ye^{z})}$$$

Ok, this looks simpler, though we still have products of multivariable functions. Life would be simpler if we could separate the two factors in the denominator. As a matter of fact, we can! Let's write the expression in partial fractions. Let

$$$\frac{1}{(1-y-z)(1 - ye^{z})} = \frac{A}{1 - y - z} + \frac{B}{1 - ye^{z}}$$$. We want to find some simple functions $$$A$$$, $$$B$$$ (hopefully in one variable) so that $$$A(1 - ye^{z}) + B(1 - y - z) = 1$$$.

Note that the LHS is linear in $$$y$$$ if $$$z$$$ is treated as a constant. Thus, it is natural to let $$$A$$$ and $$$B$$$ be functions of $$$z$$$ alone. Treating $$$z$$$ as a constant and comparing the coefficients of $$$y$$$ and the constant terms, we obtain $$$A(z) = \frac{1}{1 - e^{z}(1-z)}$$$, $$$B(z) = \frac{-e^{z}}{1 - e^{z}(1-z)}$$$.

Substituting back, we need to find

$$$(1-y)^{n+2}[z^n]\frac{1}{(1-y-z)(1 - ye^{z})} = (1-y)^{n+2}[z^{n}]\left(\frac{1}{(1 - e^{z}(1-z))(1 - y - z)} + \frac{-e^{z}}{(1 - e^{z}(1-z))(1 - ye^{z})}\right)$$$

It remains to compute $$$[z^{n}]$$$ of each fraction fast. Our main goal is to isolate the variable $$$y$$$ as much as possible, treating $$$z$$$ pretty much like a constant.

For example, let's look at the second fraction. We want to find $$$[z^{n}]\frac{-e^{z}}{(1 - e^{z}(1-z))(1 - ye^{z})}$$$. Let $$$f(z) = \frac{-e^{z}}{1 - e^{z}(1-z)}$$$. Thus, we need to compute $$$[z^{n}]\frac{f(z)}{1 - ye^{z}}$$$.

Similarly, the first fraction can be rewritten as

$$$[z^{n}]\frac{1}{(1-e^{z}(1-z))(1-z-y)} = [z^{n}]\frac{\frac{1}{1-z}}{(1-e^{z}(1-z))\left(1 - \frac{y}{1-z}\right)}$$$ (note that we divide by $$$1-z$$$ to make something of the form $$$\frac{1}{1-h(z)y}$$$ to make it similar to our second fraction).

Letting $$$g(z) = \frac{1}{(1-z)(1 - e^{z}(1-z))}$$$, the first fraction reduces to $$$[z^{n}]\frac{g(z)}{1 - \frac{y}{1-z}}$$$.

In both cases, we have to compute something of the form $$$[z^{n}]\frac{F(z)}{1 - G(z)y}$$$ for some functions $$$F$$$ and $$$G$$$. You can see that this is similar to $$$[x^{k}]\frac{1}{1-uf(x)}$$$ in our previous example.

Here, we have $$$G(z) = e^{z}$$$ and $$$\frac{1}{1-z}$$$ respectively. There is a subtle detail here, which is that $$$[z^{0}]G(z) = 1 \neq 0$$$ in both cases, so $$$G$$$ does not have a compositional inverse and we can't apply what we did just now directly. However, $$$G(z)-1$$$ does have a compositional inverse in both cases, so let $$$A(z) = e^{z}-1$$$ (with compositional inverse $$$\ln(1+z)$$$) and $$$B(z) = \frac{1}{1-z} - 1 = \frac{z}{1-z}$$$ (with compositional inverse $$$\frac{z}{1+z}$$$). Thus, we need to compute $$$[z^{n}]\frac{f(z)}{1 - (A(z)+1)y}$$$ and $$$[z^{n}]\frac{g(z)}{1 - (B(z)+1)y}$$$.

Now we use the same approach as the previous example. Let $$$A^{-1}(z)$$$ denote the compositional inverse of $$$A(z)$$$ (computed explicitly above). We let $$$H(z) = \frac{f(A^{-1}(z))}{1 - (z+1)y}$$$, $$$G(z) = A(z)$$$ and $$$F(z) = A^{-1}(z)$$$ in Lagrange Inversion Formula. Then,

$$$[z^{n}]\frac{f(z)}{1 - (A(z)+1)y} = [z^{n}]H(G(z)) = \frac{1}{n}[x^{-1}]H'(x)\frac{1}{F(x)^{n}} = \frac{1}{n}[x^{n-1}]\left(\frac{f(A^{-1}(x))}{1 - (x+1)y}\right)'\frac{1}{\left(\frac{A^{-1}(x)}{x}\right)^{n}}$$$. Let $$$C(x) = f(A^{-1}(x))$$$ and $$$D(x) = \frac{1}{\left(\frac{A^{-1}(x)}{x}\right)^{n}}$$$. Take a moment to check that we can still compute the first $$$n$$$ terms of $$$C(x)$$$ and $$$D(x)$$$ using polynomial operations in $$$O(n\log n)$$$ (find $$$A^{-1}(x)$$$ and substitute back into their definitions).

Now, our problem reduces to finding $$$\frac{1}{n}[x^{n-1}]\left(\frac{C(x)}{1-(x+1)y}\right)'D(x)$$$. Using quotient rule, we have (differentiating with respect to $$$x$$$),

$$$\left(\frac{C(x)}{1-(x+1)y}\right)' = \frac{C'(x)[1-(x+1)y] - C(x)(-y)}{(1 - (x+1)y)^2} = \frac{C'(x)}{1 - (x+1)y} + \frac{C(x)y}{(1 - (x+1)y)^2}$$$.

Thus,

$$$\frac{1}{n}[x^{n-1}]\left(\frac{C(x)}{1-(x+1)y}\right)'D(x) = \frac{1}{n}[x^{n-1}]\frac{C'(x)D(x)}{1 - (x+1)y} + \frac{1}{n}[x^{n-1}]\frac{C(x)D(x)y}{(1 - (x+1)y)^2}$$$.

We have two subproblems, finding $$$[x^{n}]\frac{P(x)}{1 - (x+1)y}$$$ and $$$[x^{n}]\frac{P(x)y}{(1-(x+1)y)^2}$$$ for some computable functions $$$P$$$. Both can be dealt with using the geometric series formula. We have

$$$[x^{n}]\frac{P(x)}{1 - (x+1)y} = [x^{n}]P(x)\displaystyle\sum_{i \ge 0}(x+1)^{i}y^{i}$$$

Let $$$P(x) = p_{0} + p_{1}x + p_{2}x^{2} + ...$$$ be the series expansion, then

$$$[x^{n}]P(x)\displaystyle\sum_{i \ge 0}(x+1)^{i}y^{i} = \displaystyle\sum_{i \ge 0}y^{i}[x^{n}]\left(P(x)(x+1)^{i}\right) = \displaystyle\sum_{i \ge 0}y^{i}\displaystyle\sum_{j \ge 0}p_{n-j}\binom{i}{j}$$$.

Hence, we need to compute $$$ans(i) = \displaystyle\sum_{j \ge 0}p_{n-j}\binom{i}{j} = i!\displaystyle\sum_{j \ge 0}\frac{p_{n-j}}{j!(i-j)!}$$$ fast. But this is almost the same thing as what we did in the Atcoder problem mentioned at the very beginning of this article! Define $$$E(x) = \displaystyle\sum_{j \ge 0}\frac{p_{n-j}}{j!}x^{j}$$$ (note the reversed coefficients of $$$P$$$) and $$$F(x) = \displaystyle\sum_{j=0}^{M}\frac{1}{j!}x^{j}$$$ for some large enough $$$M$$$ (taking $$$M = n$$$ suffices). Then, our answer is $$$i!$$$ times the coefficient of $$$x^{i}$$$ in $$$E(x)F(x)$$$. This part can be solved in $$$O(n\log n)$$$ time.
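
As a small illustration of this step (with a naive $$$O(n^2)$$$ convolution standing in for FFT, and arbitrary example coefficients $$$p_{i}$$$), the following sketch computes $$$ans(i) = [x^{n}]\left(P(x)(x+1)^{i}\right)$$$ for all $$$0 \le i \le n$$$. The second subproblem below is handled the same way, with an extra $$$(i+1)$$$ weight and a shift in $$$y$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 998244353;

ll power(ll b, ll e) { ll r = 1; b %= MOD; while (e) { if (e & 1) r = r * b % MOD; b = b * b % MOD; e >>= 1; } return r; }

int main() {
    const int n = 6;
    vector<ll> p(n + 1);
    for (int i = 0; i <= n; i++) p[i] = i + 1;  // arbitrary example coefficients of P(x)

    vector<ll> fact(n + 1, 1), ifact(n + 1, 1);
    for (int i = 1; i <= n; i++) fact[i] = fact[i-1] * i % MOD;
    ifact[n] = power(fact[n], MOD - 2);
    for (int i = n; i >= 1; i--) ifact[i-1] = ifact[i] * i % MOD;

    // ans(i) = i! * [x^i] E(x)F(x), with E(x) = sum_j p_{n-j}/j! x^j (reversed
    // coefficients of P) and F(x) = sum_j x^j/j!; shown here as a naive convolution.
    for (int i = 0; i <= n; i++) {
        ll conv = 0;
        for (int j = 0; j <= i; j++)
            conv = (conv + p[n - j] * ifact[j] % MOD * ifact[i - j]) % MOD;
        printf("ans(%d) = %lld\n", i, conv * fact[i] % MOD);
    }
}
```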

Finally, we need to compute $$$[x^{n}]\frac{P(x)y}{(1 - (x+1)y)^2}$$$. With a similar expansion,

$$$[x^{n}]\frac{P(x)y}{(1 - (x+1)y)^2} = [x^{n}]P(x)y\displaystyle\sum_{i \ge 0}(i+1)(x+1)^{i}y^{i}$$$

$$$= \displaystyle\sum_{i \ge 0}(i+1)y^{i+1}[x^{n}]\left(P(x)(x+1)^{i}\right)$$$

$$$= \displaystyle\sum_{i \ge 0}(i+1)y^{i+1}\displaystyle\sum_{j \ge 0}p_{n-j}\binom{i}{j}$$$.

Thus, we can compute this in the same way as before in $$$O(n\log n)$$$ using FFT.

Putting it all together, we obtain an (albeit complicated) solution in $$$O(n\log n)$$$ time!

I hope this explanation makes Elegia's solution more intuitive to understand. :) Thanks to Elegia for the wonderful solution.

If you have any questions or spotted any errors, please tell me in the comments. There are probably more cool applications of generating functions in CP that I am not aware of, so feel free to share them in the comments too. :)

