A – Vanishing Pitch
The ball is invisible when it's between $$$V \cdot T$$$ and $$$V \cdot S$$$ meters away, inclusive. Thus the answer is Yes if $$$D < V \cdot T$$$ or $$$D > V \cdot S$$$, and No otherwise.
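A minimal sketch of this check (assuming the input arrives in the order $$$V\ T\ S\ D$$$; adjust to the actual statement):

```cpp
#include <iostream>

int main() {
    long long v, t, s, d;
    std::cin >> v >> t >> s >> d;  // input order is an assumption
    // The ball is invisible while its distance is in [V*T, V*S], inclusive.
    std::cout << (d < v * t || d > v * s ? "Yes" : "No") << '\n';
}
```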
B – Remove It
Iterate through every row and column. If the current square is `.`, look at the neighbor to its right; if it's also `.`, increment the answer. Do likewise with the neighbor downward. There is no need to look leftward or upward, as those pairs would already have been counted.
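A sketch of the scan, assuming the grid arrives as $$$H$$$ rows of $$$W$$$ characters with `.` denoting an empty square:

```cpp
#include <iostream>
#include <string>
#include <vector>

int main() {
    int h, w;
    std::cin >> h >> w;
    std::vector<std::string> g(h);
    for (auto& row : g) std::cin >> row;

    long long ans = 0;
    for (int i = 0; i < h; i++)
        for (int j = 0; j < w; j++) {
            if (g[i][j] != '.') continue;
            // Only look right and down; left/up pairs were counted earlier.
            if (j + 1 < w && g[i][j + 1] == '.') ans++;
            if (i + 1 < h && g[i + 1][j] == '.') ans++;
        }
    std::cout << ans << '\n';
}
```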
C – Neq Min
Initialize an array of booleans indexed over $$$[0, 1 + \max p]$$$, and maintain a pointer $$$j$$$ starting at index $$$0$$$.
Iterate through $$$p$$$. In each iteration, mark the corresponding index as "true", then advance $$$j$$$ to the first index that remains "false" and output it. Since marks are never removed, $$$j$$$ only ever moves forward.
This quantity is more commonly known as the mex of the set/multiset represented by each prefix of the sequence.
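A sketch, assuming the sequence arrives as $$$N$$$ non-negative integers and one answer is printed per prefix:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    int n;
    std::cin >> n;
    std::vector<int> p(n);
    for (auto& x : p) std::cin >> x;

    // seen[v] = whether v has appeared in the prefix read so far
    std::vector<bool> seen(*std::max_element(p.begin(), p.end()) + 2, false);
    int j = 0;  // candidate mex; never moves backward
    for (int i = 0; i < n; i++) {
        seen[p[i]] = true;
        while (seen[j]) j++;  // skip values already present
        std::cout << j << '\n';
    }
}
```

Since $$$j$$$ only advances, the total work is $$$O(N + \max p)$$$.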
D – Squares
Special case: if $$$A + B > N$$$, the two squares can never avoid overlapping, so the answer is always $$$0$$$.
Let $$$\alpha := N - A + 1$$$ and $$$\beta := N - B + 1$$$. It can be noted that, disregarding overlaps, there are $$$\alpha^2$$$ choices for the position of the blue square, likewise $$$\beta^2$$$ for the red square. Let's try to count the number of overlapping configurations and subtract them from $$$\alpha^2\beta^2$$$.
It can be noted that if the two squares overlap, the segments they occupy along the $$$x$$$ axis must overlap, and likewise along the $$$y$$$ axis. Thus we can reduce the problem to counting $$$v$$$, the number of ways an $$$A$$$-length segment can overlap with a $$$B$$$-length segment within an $$$N$$$-length interval, then squaring it to get the number of overlapping configurations.
To count that, we can invert the problem yet again, subtracting from $$$\alpha \beta$$$ the number of ways the segments can avoid overlapping. Assuming the $$$A$$$-length segment goes to the left of the $$$B$$$-length segment, let $$$k$$$ be the (0-indexed) left end of the $$$B$$$-length segment. Then it can be noted that:
- If $$$k = A$$$, there is only one position the $$$A$$$-length segment can go in;
- If $$$k = A+1$$$, there are two positions the $$$A$$$-length segment can go in;
- If $$$k = A+2$$$, there are three positions the $$$A$$$-length segment can go in, and so on.
Thus the total number of non-overlapping configurations (assuming $$$A$$$ goes before $$$B$$$) is the $$$\gamma$$$-th triangular number, i.e. $$$\dfrac {\gamma (\gamma + 1)} 2$$$, where $$$\gamma := N-A-B+1$$$ is the number of possible positions of $$$k$$$. The same count applies to the case where $$$B$$$ goes before $$$A$$$ (by reflection), so the doubling cancels the $$$2$$$ in the denominator and we get $$$v := \alpha \beta - \gamma (\gamma + 1)$$$.
Then obtain the final answer as $$$\alpha^2\beta^2 - v^2$$$. Note that the intermediate products may overflow even a 64-bit integer, so it's important to reduce them modulo the given modulus along the way.
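As a sanity check: for $$$N = 4$$$, $$$A = B = 2$$$ we get $$$\alpha = \beta = 3$$$, $$$\gamma = 1$$$, $$$v = 9 - 2 = 7$$$, and the answer $$$81 - 49 = 32$$$. A sketch, assuming $$$10^9+7$$$ as the modulus (substitute whatever the statement specifies):

```cpp
#include <iostream>

int main() {
    const long long MOD = 1'000'000'007;  // assumed; use the statement's modulus
    long long n, a, b;
    std::cin >> n >> a >> b;

    if (a + b > n) { std::cout << 0 << '\n'; return 0; }  // squares always overlap

    long long alpha = (n - a + 1) % MOD;
    long long beta  = (n - b + 1) % MOD;
    long long gamma = (n - a - b + 1) % MOD;

    // v = alpha*beta - gamma*(gamma+1): overlapping placements along one axis
    long long v = (alpha * beta % MOD - gamma * ((gamma + 1) % MOD) % MOD + MOD) % MOD;

    long long ab  = alpha * beta % MOD;
    long long ans = (ab * ab % MOD - v * v % MOD + MOD) % MOD;  // alpha^2 beta^2 - v^2
    std::cout << ans << '\n';
}
```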
E – Lamps
Reframe the problem as: for each tidy square, how many configurations will light it? Summed over all tidy squares, this equals exactly the sum the problem asks for.
Let $$$p_{i, j}$$$ denote the number of positions a lamp can be placed to light square $$$(i, j)$$$. This can be precalculated in $$$O(HW)$$$ with a two-pointer scan: whenever you encounter a wall or the end of a line, add the run length to every tidy square of that run. Do this both horizontally and vertically, then subtract $$$1$$$, as the square $$$(i, j)$$$ itself is counted twice.
The number of configurations that light a tidy square $$$(i, j)$$$ is equal to the total number of configurations minus the configurations that don't light the square (i.e. all the places that could light this square are left empty), thus the contribution of that square is $$$2^K - 2^{K - p_{i, j}}$$$.
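A sketch, assuming `.` marks a tidy square, `#` a wall, and $$$10^9+7$$$ as the modulus (all assumptions; adjust to the statement). Precomputing the powers of $$$2$$$ would remove the $$$\log$$$ factor, but plain exponentiation keeps the sketch short:

```cpp
#include <iostream>
#include <string>
#include <vector>

const long long MOD = 1'000'000'007;  // assumed

long long power(long long b, long long e) {  // fast modular exponentiation
    long long r = 1;
    for (b %= MOD; e > 0; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

int main() {
    int h, w;
    std::cin >> h >> w;
    std::vector<std::string> g(h);
    for (auto& row : g) std::cin >> row;

    std::vector<std::vector<int>> p(h, std::vector<int>(w, 0));

    // Horizontal runs: every square of a tidy run of length len gets +len.
    for (int i = 0; i < h; i++)
        for (int j = 0; j < w; ) {
            if (g[i][j] != '.') { j++; continue; }
            int r = j;
            while (r < w && g[i][r] == '.') r++;
            for (int t = j; t < r; t++) p[i][t] += r - j;
            j = r;
        }
    // Vertical runs, adding len - 1 so square (i, j) isn't counted twice.
    for (int j = 0; j < w; j++)
        for (int i = 0; i < h; ) {
            if (g[i][j] != '.') { i++; continue; }
            int r = i;
            while (r < h && g[r][j] == '.') r++;
            for (int t = i; t < r; t++) p[t][j] += r - i - 1;
            i = r;
        }

    long long k = 0;  // K = total number of tidy squares (lamp positions)
    for (int i = 0; i < h; i++)
        for (int j = 0; j < w; j++)
            if (g[i][j] == '.') k++;

    long long ans = 0;
    for (int i = 0; i < h; i++)
        for (int j = 0; j < w; j++)
            if (g[i][j] == '.')
                ans = (ans + power(2, k) - power(2, k - p[i][j]) + MOD) % MOD;
    std::cout << ans << '\n';
}
```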
F – Random Max
Thanks to Everule for help with this problem.
Let's denote the random variates with $$$X_i$$$ (capital X) to avoid confusion with the parametric variable $$$x$$$ in the following notations.
Let $$$F(x) := \Pr[\max X \leq x]$$$, the cumulative distribution function (CDF) of the maximum variate. There is a formula for deriving the expected value of a random variate from its CDF:

$$$$$$\mathbb{E}[\max X] = \int_0^\infty \big(1 - F(x)\big) \, \mathrm{d}x - \int_{-\infty}^0 F(x) \, \mathrm{d}x$$$$$$
The latter integral can be ignored as $$$F(x) = 0$$$ for all $$$x \leq 0$$$. We can also adjust the upper bound of the left integral to $$$\max R$$$ as $$$F(x) = 1$$$ when $$$x \geq \max R$$$. Thus we may simplify:

$$$$$$\mathbb{E}[\max X] = \int_0^{\max R} \big(1 - F(x)\big) \, \mathrm{d}x = \max R - \int_0^{\max R} F(x) \, \mathrm{d}x$$$$$$
Now let's figure out the behavior of $$$F(x)$$$. Since the variates are independent, it's the product of the CDFs of all the individual random variates:

$$$$$$F(x) = \prod_{i=1}^{N} \Pr[X_i \leq x]$$$$$$
Let $$$f_i(x) := \Pr[X_i \leq x]$$$. Let's decompose $$$f_i(x)$$$, the CDF for a single variate:

$$$$$$f_i(x) = \begin{cases} 0 & x < L_i \\ \dfrac{x - L_i}{R_i - L_i} & L_i \leq x \leq R_i \\ 1 & x > R_i \end{cases}$$$$$$
Note that the middle expression is a degree-$$$1$$$ polynomial. To analyze the behavior of $$$F(x)$$$ we can imagine a sweep line on decreasing $$$x$$$. When $$$x$$$ touches the $$$R_i$$$ of one of the given intervals, its corresponding $$$f_i(x)$$$ "switches on" and contributes its middle expression to the product. Then as soon as $$$x$$$ touches $$$\max L$$$ one of the CDFs goes to $$$0$$$, which sets the product to $$$0$$$ for good. (This matches intuition, as the maximum variate could never be less than $$$\max L$$$). From this we can conclude that $$$F(x)$$$ is a piecewise function, with each piece being describable by a polynomial. We can integrate each piece individually, and their sum would be the integral of $$$F(x)$$$.
What does this mean in terms of implementation? First, set a variable $$$E := \max R$$$ and sort all intervals in non-increasing (descending) order of $$$R_i$$$ (henceforth we assume $$$R_i \geq R_{i+1}$$$ for all $$$1 \leq i \leq N-1$$$). Let $$$m$$$ denote the number of intervals with $$$R_i > \max L$$$. Let $$$S_{1..m} := R_{1..m}$$$ and $$$S_{m+1} := \max L$$$, which together denote the change-points of $$$F(x)$$$.
Now let $$$p_0(x)$$$ be the constant polynomial $$$1$$$ (storing all polynomials as arrays of coefficients). As you iterate $$$i$$$ over $$$[1, m]$$$, construct the new polynomial:

$$$$$$p_i(x) := p_{i-1}(x) \cdot f_i(x) = p_{i-1}(x) \cdot \frac{x - L_i}{R_i - L_i}$$$$$$
Then integrate this polynomial over $$$[S_{i+1}, S_{i}]$$$ (the formula being derivable from the power rule for integrating polynomials):

$$$$$$\int_{S_{i+1}}^{S_i} p_i(x) \, \mathrm{d}x = \sum_{k=0}^{\deg p_i} \frac{c_k}{k+1} \left( S_i^{k+1} - S_{i+1}^{k+1} \right),$$$$$$

where $$$c_k$$$ denotes the coefficient of $$$x^k$$$ in $$$p_i(x)$$$.
Subtract each integral from $$$E$$$ and we have our desired expected value. Don't forget to multiply by the required scaling factor $$$(N+1)! \cdot \prod_{i=1}^N (R_i - L_i)$$$ before outputting.
This works in $$$\tilde O(N^2)$$$, as the polynomial grows in length by $$$1$$$ each iteration, taking about $$$\tilde O(N)$$$ to multiply and to integrate. The tildes (~) over the O's indicate hidden $$$\log$$$ factors from modular inverses and exponentiation. It can also be optimized to a strict $$$O(N^2)$$$ using the multiple-inverse trick and by accumulating the powers instead of exponentiating each time.
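Putting it together: a sketch following the outline above, assuming $$$998244353$$$ as the modulus and $$$L_i < R_i$$$ for all $$$i$$$ (both assumptions; adjust I/O and modulus to the statement). Modular inverses are taken inside the loops, matching the $$$\tilde O(N^2)$$$ bound:

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

const long long MOD = 998'244'353;  // assumed; use the statement's modulus

long long power(long long b, long long e) {
    long long r = 1;
    for (b %= MOD; e > 0; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}
long long inv(long long a) { return power(a, MOD - 2); }

int main() {
    int n;
    std::cin >> n;
    std::vector<long long> L(n), R(n);
    for (int i = 0; i < n; i++) std::cin >> L[i] >> R[i];

    long long maxL = *std::max_element(L.begin(), L.end());
    std::vector<int> ord(n);
    for (int i = 0; i < n; i++) ord[i] = i;
    // Non-increasing order of R_i.
    std::sort(ord.begin(), ord.end(), [&](int a, int b) { return R[a] > R[b]; });

    // Change-points S_1 >= ... >= S_m > S_{m+1} = max L of F(x).
    std::vector<long long> S;
    for (int i : ord)
        if (R[i] > maxL) S.push_back(R[i]);
    int m = S.size();
    S.push_back(maxL);

    long long E = R[ord[0]] % MOD;   // E starts at max R
    std::vector<long long> p = {1};  // p_0(x) = 1, stored as coefficients
    for (int i = 0; i < m; i++) {
        int id = ord[i];
        // p_i(x) = p_{i-1}(x) * (x - L_id) / (R_id - L_id)
        std::vector<long long> q(p.size() + 1, 0);
        long long c = inv((R[id] - L[id]) % MOD), l = L[id] % MOD;
        for (size_t k = 0; k < p.size(); k++) {
            q[k + 1] = (q[k + 1] + p[k]) % MOD;
            q[k] = ((q[k] - l * p[k]) % MOD + MOD) % MOD;
        }
        for (auto& x : q) x = x * c % MOD;
        p = q;
        // Integrate p_i over [S_{i+1}, S_i] and subtract from E.
        long long hi = S[i] % MOD, lo = S[i + 1] % MOD;
        for (size_t k = 0; k < p.size(); k++) {
            long long diff = (power(hi, k + 1) - power(lo, k + 1) + MOD) % MOD;
            E = (E - p[k] * inv(k + 1) % MOD * diff % MOD + MOD) % MOD;
        }
    }

    // Scale by (N+1)! * prod (R_i - L_i) as the statement requires.
    long long scale = 1;
    for (long long f = 2; f <= n + 1; f++) scale = scale * f % MOD;
    for (int i = 0; i < n; i++) scale = scale * ((R[i] - L[i]) % MOD) % MOD;
    std::cout << E * scale % MOD << '\n';
}
```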