Problem A: GCD Sum↵
==================↵
Author and Problemsetting: [user:ninja_28,2021-03-29] ↵
Editorialist: [user:sigma_g,2021-03-29]↵
↵
<spoiler summary="Hint">↵
Can you think of the simplest properties that relate a number and its sum of digits?↵
</spoiler>↵
↵
<spoiler summary="Hint 2">↵
Note that if $X$ is a multiple of 3, then **both** $X$ as well as the sum of digits of $X$ are a multiple of 3! Can you put this property to use here?↵
</spoiler>↵
↵
<spoiler summary="Hint 3">↵
If $X$ is a multiple of 3, then $\texttt{gcd-sum}(X) \ge 3$. Therefore, we are guaranteed that at least every third number will satisfy the constraints required by our problem $(\texttt{gcd-sum}(X) > 1)$.↵
</spoiler>↵
↵
<spoiler summary="Solution">↵
Therefore, for the input $n$, we can simply check which one of $n$, $n+1$, and $n+2$ has its gcd-sum $> 1$, and print the lowest of them.↵
</spoiler>↵
↵
<spoiler summary="Corner cases">↵
Note that you must use `long long`, as the input integers exceed the range of `int`.
↵
Moreover, simply outputting $\lceil n/3 \rceil \cdot 3$ is incorrect, as some non-multiples of three may also have gcd-sum $> 1$, for example, 26.
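The counterexample can be checked with a few lines of Python (a quick sanity check, not part of the intended solution; `gcd_sum` is our own helper name):

```python
from math import gcd

def gcd_sum(x):
    # gcd of x and the sum of its digits
    return gcd(x, sum(int(c) for c in str(x)))

print(26 % 3, gcd_sum(26))  # 2 2 -> 26 is not a multiple of 3, yet gcd-sum(26) = 2 > 1
```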
</spoiler>↵
↵
<spoiler summary="C++ solution">↵
~~~~~↵
#include <bits/stdc++.h>↵
using namespace std;↵
↵
long long int gcd_sum(long long int num) {
    // returns the gcd of num and its digit sum
    long long int tmp = num, digitsum = 0;

    while (tmp > 0) {
        digitsum += tmp % 10;
        tmp /= 10;
    }

    long long int gcd = __gcd(num, digitsum);
    return gcd;
}
↵
int main(void) {
    int t;
    cin >> t;

    while (t--) {
        long long int n;
        cin >> n;
        if (gcd_sum(n) != 1) {
            cout << n << endl;
        } else if (gcd_sum(n + 1) != 1) {
            cout << n + 1 << endl;
        } else if (gcd_sum(n + 2) != 1) {
            cout << n + 2 << endl;
        }
    }
    return 0;
}
↵
~~~~~
</spoiler>↵
↵
↵
<spoiler summary="Python solution">↵
~~~~~↵
from math import gcd

def valid(x):
    return gcd(x, sum(ord(c) - ord('0') for c in str(x))) != 1

t = int(input())

while t > 0:
    t -= 1

    n = int(input())
    while not valid(n):
        n += 1

    print(n)
~~~~~
</spoiler>↵
↵
↵
↵
↵
Problem B: Box Fitting↵
==================↵
Author and Editorialist: [user:sigma_g,2021-03-29] ↵
Problemsetting: [user:ninja_28,2021-03-29]↵
↵
<spoiler summary="Hint">↵
There can exist multiple optimal packings for a given set of rectangles. However, all of them can always be rearranged to follow a specific pattern, based on the rectangles' sizes.↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 2">↵
Can you show that it is always possible to replace a set of consecutive small blocks with a single large block of the same or larger size?
</spoiler>↵
↵
↵
<spoiler summary="Solution summary">↵
We follow a greedy strategy:↵
↵
0. Initialize height of box as 1.↵
1. Initialize current layer size as $W$.↵
2. Pick the largest available rectangle that can fit into the current layer, and place it there. Repeat until this layer cannot fit any more rectangles.↵
3. If more rectangles remain, increment the height by 1 and repeat from step 1. Otherwise, output the current value of height.
</spoiler>↵
↵
↵
↵
<spoiler summary="Solution implementation">↵
First, counting-sort the given rectangles by width. Since all widths are powers of two, there can only be 20 distinct rectangle widths in the range $[1, 10^6]$, so the following works:
↵
~~~~~
counts = [0 for w in range(0, 20)]
for w in widths:
    counts[int(log2(w))] += 1
~~~~~
↵
The solution can be implemented by iterating $N$ times.↵
↵
At each iteration, step through the `counts` array and take the largest available rectangle that can fit into the current space. If no rectangle was able to fit, increment height by 1 and reset box width to $W$.↵
↵
This has complexity $\mathcal{O}(n\log(w_\text{max}))$.↵
</spoiler>↵
↵
<spoiler summary="Another implementation">↵
It is also possible to implement the solution with a `multiset` and `upper_bound`, for a complexity of $\mathcal{O}(n\log(n))$.↵
↵
Store all rectangle sizes in a multiset. At each iteration, find using `upper_bound` the largest rectangle that can fit into the current space we have, and fit it in. If no rectangle can fit in this space, reset the space to maximum, increment height, and repeat.↵
</spoiler>↵
↵
<spoiler summary="Proof of correctness - brief">↵
It is always possible to replace several smaller blocks with a single larger block if one is available. Because all blocks are powers of two, any run of smaller blocks either sums to less than the larger block, or has a prefix that sums to it exactly. Therefore, we can always place a larger block first whenever it fits.
</spoiler>↵
↵
<spoiler summary="Proof of correctness - elaborate">↵
This elaborate proof isn't actually required for solving the problem; the intuition in the brief proof is sufficient. It is included for completeness only.
↵
Let's first note a property: if $a_1 + \ldots + a_n > a_0$, then there exists some $i$ such that $a_1 + \ldots + a_i = a_0$, when all $a_i$ are powers of 2 AND $a_1, \ldots, a_n$ is a non-increasing sequence (AND $a_1 \le a_0$, of course). Why is this so? You can take an example and observe this intuitively; it is a common property of powers of two. For example, $4+2+2+1+1>8$, but the prefix $4+2+2=8$. Formally: if $a_1 = a_0$, the property is trivially true. If $a_1 < a_0$, we can split $a_0 = 2^k a_1$ for some $k$ into $2^k$ parts and, by strong induction, claim the property holds.
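This prefix property can be brute-force verified for small cases (a sanity check only; the bounds and helper names are our own, not part of the solution):

```python
from itertools import product

def prefix_property_holds(a0, seq):
    # seq: non-increasing powers of two with seq[0] <= a0 and sum(seq) > a0.
    # Claim: some prefix of seq sums exactly to a0.
    total = 0
    for v in seq:
        total += v
        if total == a0:
            return True
    return False

# check every non-increasing sequence of small powers of two that overshoots a0
powers = [1, 2, 4, 8]
ok = True
for a0 in [2, 4, 8]:
    for seq in product(powers, repeat=5):
        seq = sorted(seq, reverse=True)
        if seq[0] <= a0 and sum(seq) > a0:
            ok = ok and prefix_property_holds(a0, seq)
print(ok)  # True
```

The key observation behind the check: every partial sum is a multiple of the next element, so the running total can never jump over $a_0$.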
↵
Let us now compare some optimal solution and our greedy solution. Before comparing, we first sort all blocks in each layer of the optimal solution in decreasing order. This does not affect the final answer but helps in our comparison. This comparison goes from bottom to top, left to right. Let us look at the first point of difference: when the block placed by us ($B_G$) is different from the block placed by the optimal solution ($B_O$). There are three cases.↵
↵
If $B_O > B_G$: this is impossible, as in our greedy solution, we are always placing the largest possible block. We wouldn't place $B_G$ in there if $B_O$ was also possible. If $B_O == B_G$: we have nothing to worry about (this isn't a point of difference)↵
↵
If $B_O < B_G$: let us assume that the optimal solution places several consecutive small blocks, and not just one $B_O$. Since the blocks are in decreasing order, none of them would be bigger than $B_G$. Note that either all of these blocks will sum to less than $B_G$ or a prefix of them will be exactly equal to $B_G$. In either case, we can swap them with one single large block $B_G$ (swapping with a $B_G$ which was placed in some higher layer in the optimal solution)↵
↵
Hence, in the last case, we have shown that any optimal solution (an arrangement of blocks) can be rearranged such that each layer fits the largest possible block first. This is also what is done in our greedy strategy. Therefore, with this proof by rearrangement, we conclude that our greedy strategy gives the same minimum height as that of the optimal answer.↵
</spoiler>↵
↵
<spoiler summary="Does this solution work when block widths are not a power of two?">↵
As we understood in the proof, this solution only works when it's guaranteed that smaller blocks will always exactly sum to any larger block. Therefore, if the blocks are not powers of two, this guarantee does not hold.↵
↵
The following counterexample suffices:↵
↵
~~~~~
6 13↵
6 6 4 4 3 3↵
~~~~~
↵
As you can see here the smaller blocks are not guaranteed to sum to the larger block (no prefix of $4,3,3$ sums to $6$). Our greedy solution gives minimum height 3, but best possible minimum height is $2$ (first layer: $6,4,3$, second layer: $6,4,3$)↵
</spoiler>↵
↵
<spoiler summary="C++ solution">↵
~~~~~↵
#include <cmath>↵
#include <iostream>↵
#include <vector>↵
using namespace std;↵
↵
int main() {
    int t;
    cin >> t;

    while (t--) {
        int n, box_width, w;
        cin >> n >> box_width;

        vector<int> counts(20);
        for (int i = 0; i < n; i++) {
            cin >> w;
            counts[(int)log2(w)]++;
        }

        int height = 1, space_left = box_width;

        for (int iter = 0; iter < n; iter++) {
            int largest = -1;

            for (int size = 19; size >= 0; size--) {
                if (counts[size] and (1 << size) <= space_left) {
                    largest = size;
                    break;
                }
            }

            if (largest == -1) {
                space_left = box_width;
                height++;
                for (int size = 19; size >= 0; size--) {
                    if (counts[size] and (1 << size) <= space_left) {
                        largest = size;
                        break;
                    }
                }
            }

            counts[largest] -= 1;
            space_left -= 1 << largest;
        }

        cout << height << endl;
    }
}

~~~~~
</spoiler>↵
↵
<spoiler summary="Python solution">↵
~~~~~
from math import log2

def solve_tc():
    n, w = list(map(int, input().split()))
    widths = list(map(int, input().split()))

    counts = [0 for _ in range(20)]

    for width in widths:
        counts[int(log2(width))] += 1

    space = w
    height = 1

    for it in range(n):
        largest = -1

        for size, count in list(enumerate(counts))[::-1]:
            if count > 0 and (2 ** size) <= space:
                largest = size
                break

        if largest == -1:
            space = w
            height += 1

            for size, count in list(enumerate(counts))[::-1]:
                if count > 0 and (2 ** size) <= space:
                    largest = size
                    break

        assert largest != -1
        counts[largest] -= 1
        space -= 2 ** largest

    print(height)

t = int(input())

while t > 0:
    t -= 1
    solve_tc()

~~~~~
</spoiler>↵
↵
<spoiler summary="C++ solution - multiset">↵
~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;

    while (t--) {
        int n, w;
        int box_width;
        cin >> n >> box_width;

        multiset<int> st;
        for (int i = 0; i < n; i++) {
            cin >> w;
            st.insert(w);
        }

        int height = 1, space_left = box_width;

        while (!st.empty()) {
            auto it = st.upper_bound(space_left);
            if (it != st.begin()) {
                it--;
                space_left -= *it;
                st.erase(it);
            } else {
                space_left = box_width;
                height++;
            }
        }

        cout << height << endl;
    }
    return 0;
}

~~~~~
</spoiler>↵
↵
Problem C: Planar Reflections↵
==================↵
↵
Author, Problemsetter and Editorialist: [user:sigma_g,2021-03-29]↵
↵
<spoiler summary="Hint 1">↵
We can use dynamic programming to store the result of the simulation from a given state, and reuse these dp states to simulate the entire process.
</spoiler>↵
↵
<spoiler summary="Solution idea">↵
I will describe the most intuitive solution. Naturally, looking at the constraints as well as at the output that is required, we can store a 3-state dp: `dp[n][k][2]`. Here, `dp[n][k][d]` indicates the total number of particles (at the end of the simulation) when one particle of decay age $k$ hits the $n$-th plane in the direction $d$. ($d$ is either $0$ or $1$, and indicates the direction (left/right) in which the particle is going)↵
↵
There can be two directions, $N$ planes, and a maximum decay age of $K$. So, this dp requires $2\times1000\times1000\times4$ bytes $\approx 8$ MB, which is well within our memory limits.
</spoiler>↵
↵
<spoiler summary="Solution details">↵
How to update the DP states? If $k = 1$, the value of `dp[n][1][d]` for any $n$ or $d$ is obviously $1$ (as no copies are produced).↵
↵
So, let us consider a particle $P$ with $k>1$. This particle $P$ produces a copy $P'$ going in the opposite direction. We can count the number of particles produced by $P'$ as `dp[n - 1][k - 1][1 - d]`, as it hits the previous plane with a smaller decay age and in the opposite direction. Moreover, the particle $P$ itself hits the next plane and continues to produce more stuff. We can calculate its number of particles produced as `dp[n + 1][k][d]`, as it hits the next plane with the same decay age and the same direction!↵
↵
Finally, we can combine these two values to get the value of `dp[n][k][d]`.↵
</spoiler>↵
↵
<spoiler summary="Implementation details">↵
We can implement this easily as a recursive dp with memoization. Alternatively, we can iterate the states in the correct order (in one case from right to left, and in the other from left to right) and update them as required. Look at the implementation for more details. The total complexity of this algorithm is $\mathcal{O}(nk)$.
</spoiler>↵
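The recurrence above can also be filled iteratively, rolling over the decay age. Here is a Python sketch (the function name `count_particles` is our own; this mirrors the C++ recursion below, it is not the reference implementation):

```python
MOD = 10**9 + 7

def count_particles(n, k):
    """Particles at the end when one particle of decay age k hits plane 1 going right."""
    # prev[d][curr] = dp value for decay age (age - 1); age 1 always yields 1 particle
    prev = [[1] * (n + 2) for _ in range(2)]
    for age in range(2, k + 1):
        cur = [[0] * (n + 2) for _ in range(2)]
        # direction 0 (left) depends on cur[0][curr - 1]: scan left to right
        for curr in range(1, n + 1):
            ans = 2  # the particle and its copy
            if curr > 1:
                ans += cur[0][curr - 1] - 1
            if curr < n:
                ans += prev[1][curr + 1] - 1
            cur[0][curr] = ans % MOD
        # direction 1 (right) depends on cur[1][curr + 1]: scan right to left
        for curr in range(n, 0, -1):
            ans = 2
            if curr < n:
                ans += cur[1][curr + 1] - 1
            if curr > 1:
                ans += prev[0][curr - 1] - 1
            cur[1][curr] = ans % MOD
        prev = cur
    return prev[1][1] % MOD

print(count_particles(2, 3))  # 4
```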
↵
<spoiler summary="Note">↵
Obviously, there exist much better solutions which do not use a third state and are much sleeker. However, I wanted to describe the simplest idea required to solve the problem.
</spoiler>↵
↵
<spoiler summary="C++ solution">↵
~~~~~↵
#include <cstring>↵
#include <iostream>↵
#include <vector>↵
↵
using namespace std;↵
↵
const int N = 1001;↵
const int K = 1001;↵
↵
int n, k;↵
const int mod = 1e9 + 7;↵
↵
int dp[N][K][2];

int solve(int curr, int k, int dir) {
    if (k == 1) {
        return 1;
    }

    if (dp[curr][k][dir] != -1) {
        return dp[curr][k][dir];
    }

    int ans = 2; // this particle and its copy

    if (dir == 1) {
        if (curr < n)
            ans += solve(curr + 1, k, dir) - 1;
        ans %= mod;

        if (curr > 1)
            ans += solve(curr - 1, k - 1, 1 - dir) - 1;
        ans %= mod;
    } else {
        if (curr > 1)
            ans += solve(curr - 1, k, dir) - 1;
        ans %= mod;

        if (curr < n)
            ans += solve(curr + 1, k - 1, 1 - dir) - 1;
        ans %= mod;
    }

    dp[curr][k][dir] = ans;
    return ans;
}

int main() {
    int t;
    cin >> t;

    while (t--) {
        cin >> n >> k;
        memset(dp, -1, sizeof(dp));
        cout << solve(1, k, 1) << endl;
    }
}

~~~~~
</spoiler>↵
↵
<spoiler summary="Python solution">↵
~~~~~↵
~~~~~↵
</spoiler>↵
↵
↵
↵
Problem D: Bananas in a Microwave↵
==================↵
Author: [user:shash42,2021-03-29]↵
Problemsetting and Editorialist: [user:sigma_g,2021-03-29]↵
↵
<spoiler summary="Brute force solution">↵
We have a brute force $\mathcal{O}(N\cdot M^2)$ solution.↵
↵
At every timestep $t$, for each banana $b_i$ that has already been reached previously, apply this timestep's operation $y_t$ times on $b_i$. For all the $y_t$ bananas reachable from $b_i$, update their minimum reachability time if they hadn't been reached previously.↵
↵
Why is this correct? Simply because we are simulating each possible step of the algorithm exactly as it is described. Therefore, the answer we get is exactly that of the optimal solution.
</spoiler>↵
↵
<spoiler summary="Optimizing the brute force: hint">↵
Consider whether we can reduce the search space when we reach nodes that have already been visited.
</spoiler>↵
↵
<spoiler summary="Optimizing the brute force: hint 2">↵
Let us take an example. We have some timestep $t,x,y = 1,5,10$. If we start visiting from $k=10$, we will visit $k=15,20,\ldots,55,60$. Let us say that $k=40$ was an already visited state. Do we now need to continue visiting $k=45,\ldots,60$ or can we stop our search here? ↵
</spoiler>↵
↵
<spoiler summary="Optimizing the brute force: details">↵
We can actually stop our search as soon as we reach a previously visited node! Why is this so? This is because, within the same iteration, that already-visited node will once again start its own search, and will hit all the points that we were going to hit, and some more!
↵
For example, let us say we would reach points $a, a\cdot x, a\cdot x^2, \ldots, a\cdot x^{y-1}$. Now, assume that $a\cdot x^5$ had been previously reached, then it is better to stop at $a\cdot x^5$, as this node itself will reach $a\cdot x^5, a\cdot x^6, \ldots, a\cdot x^{y-2}, a\cdot x^{y-1},a\cdot x^{y}, \ldots, a\cdot x^{5+y-1}$. Note the latter range includes as a prefix the entire remaining suffix of the former range! Therefore, in this approach, nodes that would _have_ been visited, will eventually be visited, and get assigned the minimum timestamp.↵
</spoiler>↵
↵
<spoiler summary="Optimized solution implementation">↵
We can implement the optimized solution by simply adding an extra `if already_visited[k]: break` to our inner loop. Yup, really, that's all it takes to solve this question!↵
↵
Complexity analysis: We can show that each node is visited at most twice per iteration: an unvisited node is reached at most once, whereas a visited node is reached at most twice (once during the main loop and once when searching from the immediately preceding visited node). There are $N$ iterations, and in each iteration each of the $M$ nodes is visited at most twice. Therefore, the total complexity is $\mathcal{O}(2NM)$.
</spoiler>↵
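The optimized loop can be sketched in Python as follows (a sketch mirroring the C++ solution below; `min_timesteps` and `apply_op` are our own names, and `ops` is a list of `(t, x, y)` tuples with `x` premultiplied by $10^5$ as in the input format):

```python
DIV = 10 ** 5  # x values arrive premultiplied by 1e5 (fixed-point input)

def ceil_div(a, b):
    return (a + b - 1) // b

def min_timesteps(M, ops):
    """Returns the earliest timestep for each k = 1..M, or -1 if unreachable."""
    is_seen = [False] * (M + 1)
    is_seen[0] = True
    answer = [-1] * (M + 1)

    for timestep, (t, x, y) in enumerate(ops, start=1):
        new_is_seen = is_seen[:]

        def apply_op(curr):
            if t == 1:
                return curr + ceil_div(x, DIV)
            return ceil_div(curr * x, DIV)

        for k in range(M + 1):
            if not is_seen[k]:
                continue
            curr = apply_op(k)
            for _ in range(y):
                if curr > M or is_seen[curr]:
                    break  # stopping at an already-visited node is the key optimization
                new_is_seen[curr] = True
                answer[curr] = timestep
                curr = apply_op(curr)

        is_seen = new_is_seen

    return answer[1:]
```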
↵
<spoiler summary="Common mistakes">↵
1. Integer overflows: $x'$ does not fit in integer range↵
2. Out of bounds access: simulating the $y_t$ steps of the algorithm even when $k$ exceeds $M$ prematurely↵
</spoiler>↵
↵
<spoiler summary="C++ solution">↵
~~~~~↵
#include <iostream>↵
#include <vector>↵
↵
using namespace std;↵
using LL = long long;↵
↵
const int DIV = 1e5;↵
↵
inline LL ceil(LL x, LL y) {
    return (x + y - 1) / y;
}

int main() {
    int T, M;
    cin >> T >> M;

    vector<bool> is_seen(M + 1, false);
    is_seen[0] = true;
    vector<int> answer(M + 1, -1);
    answer[0] = 0;

    for (int timestep = 1; timestep <= T; timestep++) {
        auto new_is_seen = is_seen;

        int t, y;
        LL x;
        cin >> t >> x >> y;

        auto operation = [&](long long &curr) {
            if (t == 1) {
                curr = curr + ceil(x, DIV);
            } else {
                curr = ceil(curr * x, DIV);
            }
        };

        for (int k = 0; k <= M; k++) {
            if (is_seen[k]) {
                long long curr = k;
                operation(curr);

                for (int a = 1; a <= y;) {
                    if (curr > M) break;
                    if (is_seen[curr]) break;

                    new_is_seen[curr] = true;
                    answer[curr] = timestep;

                    a++;
                    operation(curr);
                }
            }
        }

        is_seen = new_is_seen;
    }

    for (int i = 1; i <= M; i++)
        cout << answer[i] << " ";

    cout << endl;
}

~~~~~
</spoiler>
↵
<spoiler summary="Python solution">↵
~~~~~

~~~~~
</spoiler>
↵
Problem E: Two Houses↵
==================↵
Author: [user:dixitgarg,2021-03-29]↵
Problemsetting and Editorialist: [user:dixitgarg,2021-03-29]↵
↵
In this problem, we have to output two nodes $a$ and $b$ such that there is a path from $a$ to $b$ and from $b$ to $a$, and the absolute difference of their indegrees $(|k_a - k_b|)$ is maximum. First of all, let us think about bireachability only, i.e., how to find two nodes $a$ and $b$ that are both reachable from each other. How can we verify this through the judge? If we ask $"? \ a \ b"$, i.e., whether there is a path from $a$ to $b$, and the judge answers "Yes", we cannot ask any further queries. So we should only ask queries for pairs $(a,b)$ for which we are already sure that there is a path from $b$ to $a$. How do we ensure that there is a path from $b$ to $a$?
↵
The answer lies in the fact that the given graph is not an ordinary directed graph: for every pair of vertices, there is a directed edge between them. This type of graph has some interesting properties, which we are going to see now.
↵
Imagine how the compressed SCC graph will look. For every pair of nodes of the compressed SCC graph, there will be an edge, so it will have exactly one source, one sink, and only one possible topological sorting.
↵
<spoiler summary="Proof of unique topological sorting">↵
↵
Since there is an edge between every pair of nodes in the original graph, there is also an edge between every pair of nodes of the compressed SCC graph. And we know that if there is an edge from node $a$ to node $b$, then $a$ comes before $b$ in the topological sort. So for every pair of nodes of the compressed SCC graph, we know which node comes first in the topological sorting, which forces a unique topological sorting.
</spoiler>↵
↵
Now we'll see one interesting and important property of this graph.↵
↵
Property: Consider two consecutive strongly connected components in the topological sorting. All nodes present in the left component have indegree strictly less than all nodes present in the right component. Here, "left" denotes the component with the lower position in the topological sorting.
↵
<spoiler summary="Proof">↵
Consider two nodes $u$ and $v$ from the left and right components respectively. The contribution to the indegrees from nodes that lie in neither of these two components is the same for both $u$ and $v$ (because $u$ and $v$ lie in adjacent components), so we can ignore it when comparing their values. All edges between the two components are directed from the left component to the right component. So node $v$ has indegree at least $s$, where $s$ is the size of the left component. Node $u$ has indegree at most $s - 1$: no edge is directed from the right component to $u$, and in the worst case all other nodes of the left component have an edge directed towards $u$. So the indegree of $u$ is strictly less than the indegree of $v$.
Since $u$ and $v$ are arbitrary nodes, this holds for all such pairs.
</spoiler>↵
↵
Using the above property, we can argue that if the indegree of node $a$ is greater than the indegree of node $b$, then $a$ is reachable from $b$: node $a$ must lie either in the same SCC as $b$, or in an SCC that comes later in the topological sorting. In both cases, $a$ is reachable from $b$.
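This claim can be brute-force checked on all small tournaments (a sanity check with our own helper names, not part of the solution):

```python
from itertools import combinations, product

def reachable(adj, n):
    # Floyd-Warshall style transitive closure on a small graph
    reach = [[i == j or adj[i][j] for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

def check_property(n):
    # enumerate all graphs on n nodes with one directed edge per pair
    pairs = list(combinations(range(n), 2))
    for mask in range(1 << len(pairs)):
        adj = [[False] * n for _ in range(n)]
        for idx, (i, j) in enumerate(pairs):
            if mask >> idx & 1:
                adj[i][j] = True
            else:
                adj[j][i] = True
        indeg = [sum(adj[i][j] for i in range(n)) for j in range(n)]
        reach = reachable(adj, n)
        for a, b in product(range(n), repeat=2):
            # claim: higher indegree implies reachable from the lower-indegree node
            if indeg[a] > indeg[b] and not reach[b][a]:
                return False
    return True

print(check_property(3), check_property(4))  # True True
```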
↵
So we can store all pairs of nodes $(a,b)$, $1 \leq a < b \leq n$, in an array and sort it in decreasing order of the absolute difference of indegrees, i.e., $|k_a - k_b|$. If we pick a pair $(a,b)$ with $indegree[b] > indegree[a]$, then we are sure that $b$ is reachable from $a$, so we only need to check whether $a$ is reachable from $b$: we ask $"? \ b \ a"$, and if the judge responds "Yes", then both $a$ and $b$ are reachable from each other. Since we iterate in decreasing order of $|k_a - k_b|$, the first such pair is optimal. If the judge never outputs "Yes" in the whole process, then there is no pair of nodes reachable from each other, so we output $"! \ 0 \ 0"$.
↵
Overall Complexity : $O(n^2 \log_2 n)$↵
↵
<spoiler summary="C++ solution">↵
~~~~~↵
#include <bits/stdc++.h>↵
using namespace std;↵
↵
int main() {
    int n;
    cin >> n;

    vector<int> indegree(n);

    for (int i = 0; i < n; i++) {
        cin >> indegree[i];
    }

    vector<pair<int, pair<int, int>>> vec;

    for (int i = 0; i < n; ++i) {
        for (int j = i + 1; j < n; ++j) {
            if (indegree[i] > indegree[j])
                vec.push_back({indegree[i] - indegree[j], {i, j}});
            else
                vec.push_back({indegree[j] - indegree[i], {j, i}});
        }
    }

    sort(vec.rbegin(), vec.rend());

    for (auto it : vec) {
        if (it.first < 0)
            break;

        int a = it.second.first, b = it.second.second;
        cout << "? " << a + 1 << " " << b + 1 << endl;
        cout.flush();

        string str;
        cin >> str;
        if (str == "Yes") {
            cout << "! " << a + 1 << " " << b + 1 << endl;
            cout.flush();
            return 0;
        }
    }

    cout << "! 0 0" << endl;
    cout.flush();

    return 0;
}
↵
~~~~~
</spoiler>
↵
<spoiler summary="Python solution">↵
~~~~~
import sys
input = lambda: sys.stdin.readline().rstrip()

n = int(input())

indegs = list(map(int, input().split()))

pairs = []
for i in range(n):
    for j in range(i + 1, n):
        if indegs[i] > indegs[j]:
            pairs.append((indegs[i] - indegs[j], (i, j)))
        else:
            pairs.append((indegs[j] - indegs[i], (j, i)))

pairs = sorted(pairs, reverse=True)

for _, nodes in pairs:
    ni, nj = nodes

    sys.stdout.write(f"? {ni + 1} {nj + 1}\n")
    sys.stdout.flush()

    s = input()
    if s == "Yes":
        sys.stdout.write(f"! {ni + 1} {nj + 1}\n")
        exit(0)

sys.stdout.write("! 0 0\n")
~~~~~
</spoiler>
↵
Problem F: Christmas Game↵
==================↵
Author: [user:nikhil_c,2021-03-29]↵
Problemsetting and editorialist: [user:sigma_g,2021-03-29]↵
↵
<spoiler summary="How do we solve a standard Nim game on arrays?">↵
By the Sprague-Grundy theorem, we know that the current player has a winning strategy if $a_1 \oplus a_2 \oplus \ldots \oplus a_n$ (xorsum of sizes of the existing piles) is non-zero. For a proof, read details on [CP-algorithms](https://cp-algorithms.com/game_theory/sprague-grundy-nim.html).↵
</spoiler>↵
↵
↵
<spoiler summary="How to solve tree nim game for one rooting if K = 1: classifying odd/even steps">↵
Let us classify nodes as odd or even depending on their depth relative to the root. Note that even steps do not affect the game state. Let us see why:
↵
Let us say it's Alice's turn. If Alice moves some coins from an even step to an odd step, then Bob can move _exactly those same coins_ from that odd step back to an even step. After this transition, once again it's Alice's move. In fact, we realize that Bob can "revert" every even->odd move by Alice.↵
↵
Therefore, if Alice wants to win, she has to play at least one odd->even move. Moves that go from even->odd do not affect the game state at all, as the other player can always play another move that reverts them. Therefore, we can say that any coins present on even steps will not change the game state.↵
</spoiler>↵
↵
↵
<spoiler summary="How to solve tree nim game for one rooting if K = 1: how bad are even steps">↵
Let us now analyze what happens if some coins move from the odd steps to even steps. We know that any coins on even nodes will not contribute to the game state. In fact, we realize that it does not matter whether these coins were present on the even nodes before the game started or whether they came by on the even nodes during the game. Once they are on an even step, they no longer contribute to the game state.↵
↵
Hence, we can conclude that moving a coin from odd step to an even step is as good as taking a coin from the odd step and throwing it away.↵
</spoiler>↵
↵
↵
<spoiler summary="Reducing tree nim game to linear array: the stair case nim">↵
As we can see, we have effectively reduced the nim game on a tree to a linear nim game in which only the odd-positioned nodes determine the winning strategy.
This is known as staircase nim. The result is the following: the current player has a winning strategy iff the xorsum of all values at the odd steps is non-zero.
</spoiler>↵
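The staircase nim result can be sanity-checked against an exact game search for tiny positions (a sketch with our own helper names; the brute force assumes normal play, i.e., a player who cannot move loses):

```python
from functools import lru_cache, reduce
from operator import xor

def staircase_first_wins(piles):
    # staircase nim: coins on step i may move to step i-1; step 0 is terminal.
    # First player wins iff the xor of the odd-indexed steps is non-zero.
    return reduce(xor, piles[1::2], 0) != 0

def brute_force_wins(piles):
    # exact game search for tiny positions, to verify the claim
    @lru_cache(maxsize=None)
    def win(state):
        for i in range(1, len(state)):
            for take in range(1, state[i] + 1):
                nxt = list(state)
                nxt[i] -= take
                nxt[i - 1] += take
                if not win(tuple(nxt)):
                    return True
        return False  # no moves left: current player loses
    return win(tuple(piles))

for piles in [(0, 1), (0, 0, 1), (1, 2, 0), (0, 2, 1, 1), (3, 1, 2)]:
    assert staircase_first_wins(list(piles)) == brute_force_wins(piles)
print("ok")
```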
↵
<spoiler summary="Extending to general K">↵
For general $K$, we can extend this idea using the parity of $d' = \lfloor \frac{d}{K} \rfloor$, where $d$ is the depth of the node (zero-indexed). All nodes for which $d'$ is odd contribute to the final xorsum. Take a moment to digest this.
↵
How do we calculate these xors? At each node $x$, we store a vector of size $2K$, whose $i$-th entry is the xorsum of the values of all nodes in the subtree of $x$ whose depth relative to $x$ is congruent to $i$ modulo $2K$.
</spoiler>↵
↵
<spoiler summary="Calculating the answer for all roots">↵
How do we propagate these values in a DFS? A node at depth $i$ relative to me is at depth $i + 1$ relative to my child (modulo $2K$), so we can simply rotate the vector by one position and update the values as required. Check the implementation for details.
</spoiler>↵
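The rotation described above amounts to a one-line shift (a sketch; `shift_to_child` is our own name):

```python
def shift_to_child(xors):
    # depth i relative to a node is depth i+1 relative to its child (mod 2K),
    # so rerooting into a child rotates the vector by one position
    return [xors[-1]] + xors[:-1]

print(shift_to_child([1, 2, 3, 4]))  # [4, 1, 2, 3]
```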
↵
<spoiler summary="C++ solution">↵
~~~~~↵
↵
#include <iostream>↵
#include <vector>↵
↵
using namespace std;↵
↵
const int N = 1e5 + 1;↵
const int K = 20 + 1;↵
↵
using VI = vector<unsigned int>;↵
using VVI = vector<vector<unsigned int>>;↵
↵
VVI dp(N, VI(2 * K));
VI a(N);
vector<bool> win(N);
int n, k, k2;

void dfs(VVI &adj, int node, int prev) {
    dp[node][0] = a[node];

    for (auto neigh : adj[node]) {
        if (neigh == prev) continue;
        dfs(adj, neigh, node);

        for (int rem = 1; rem < k2; rem++)
            dp[node][rem] ^= dp[neigh][rem - 1];

        dp[node][0] ^= dp[neigh][k2 - 1];
    }
}

void dfs2(const VVI &adj, const int node, const int prev, const vector<unsigned int> &my_xors) {
    vector<unsigned int> final_xors(k2);
    for (int i = 0; i < k2; i++)
        final_xors[i] = my_xors[i] ^ dp[node][i];

    unsigned int odd_layer_xor = 0;
    for (int i = k; i < k2; i++)
        odd_layer_xor ^= final_xors[i];
    win[node] = odd_layer_xor > 0;

    for (auto neigh : adj[node]) {
        if (neigh == prev) continue;

        auto xor_send = final_xors;

        // remove all contribution of this subtree
        for (int i = 1; i < k2; i++)
            xor_send[i] ^= dp[neigh][i - 1];

        xor_send[0] ^= dp[neigh][k2 - 1];

        // whatever was depth i for me is depth i+1 for my child node
        unsigned int last = xor_send.back();
        for (int i = k2 - 1; i > 0; i--)
            xor_send[i] = xor_send[i - 1];

        xor_send[0] = last;

        dfs2(adj, neigh, node, xor_send);
    }
}

int main() {
    cin >> n >> k;
    k2 = 2 * k;

    VVI adj(n + 1);

    for (int i = 0; i < n - 1; i++) {
        int x, y; cin >> x >> y;
        adj[x].push_back(y);
        adj[y].push_back(x);
    }

    for (int i = 1; i <= n; i++) cin >> a[i];

    dfs(adj, 1, 0);
    dfs2(adj, 1, 0, vector<unsigned int>(k2));

    for (int i = 1; i <= n; i++) cout << (win[i] ? 1 : 0) << " ";

    return 0;
}

~~~~~
</spoiler>↵
↵
<spoiler summary="PyPy3 solution">↵
~~~~~
n, k = list(map(int, input().split()))
k2 = 2 * k

dp = [[0 for _ in range(k2)] for _ in range(n + 1)]
adj = [[] for _ in range(n + 1)]

for _ in range(n - 1):
    x, y = list(map(int, input().split()))
    adj[x].append(y)
    adj[y].append(x)

a = [0] + list(map(int, input().split()))

win = [0 for _ in range(n + 1)]
parent = [0 for _ in range(n + 1)]

def dfs(node):
    # iterative post-order DFS: children are processed before their parent
    stack = [node]
    pass_num = [0 for _ in range(n + 1)]

    while stack:
        node = stack[-1]
        pass_num[node] += 1

        if pass_num[node] == 1:
            for neigh in adj[node]:
                if neigh == parent[node]:
                    continue
                parent[neigh] = node
                stack.append(neigh)
            continue

        dp[node][0] = a[node]
        stack.pop()

        for neigh in adj[node]:
            if neigh == parent[node]:
                continue
            for rem in range(1, k2):
                dp[node][rem] ^= dp[neigh][rem - 1]
            dp[node][0] ^= dp[neigh][k2 - 1]

def dfs2(node):
    stack = [[node, [0 for __ in range(k2)]]]

    while stack:
        node, prev_xors = stack.pop()

        final_xors = [x ^ y for x, y in zip(prev_xors, dp[node])]

        for neigh in adj[node]:
            if neigh == parent[node]:
                continue

            send_xors = [x for x in final_xors]
            for i in range(1, k2):
                send_xors[i] ^= dp[neigh][i - 1]
            send_xors[0] ^= dp[neigh][k2 - 1]

            send_xors = [send_xors[-1]] + send_xors[:-1]
            stack.append([neigh, send_xors])

        odd_xor = 0
        for i in range(k, k2):
            odd_xor ^= final_xors[i]

        win[node] = 1 if odd_xor > 0 else 0

dfs(1)
dfs2(1)
print(" ".join(list(map(str, win[1:]))))

~~~~~
</spoiler>↵
↵
Problem B: Box Fitting↵
==================↵
Author and Editorialist: [user:sigma_g,2021-03-29] ↵
Problemsetting: [user:ninja_28,2021-03-29]↵
↵
<spoiler summary="Hint">↵
There can exist multiple optimal packings for a given set of rectangles. However, all of them can always be rearranged to follow a specific pattern, based on the rectangles' sizes.↵
</spoiler>↵
↵
↵
<spoiler summary="Hint 2">↵
Can you show that a set of consecutive smaller blocks can always be replaced by a single larger block of the same or larger size?
</spoiler>↵
↵
↵
<spoiler summary="Solution summary">↵
We follow a greedy strategy:↵
↵
0. Initialize the height of the box as 1.
1. Initialize the current layer's free space as $W$.
2. Pick the largest available rectangle that fits into the current layer and place it there. Repeat until the layer cannot fit any more rectangles.
3. If rectangles remain, increment the height by 1 and repeat from step 1. Otherwise, output the current height.
</spoiler>↵
↵
↵
↵
<spoiler summary="Solution implementation">↵
First count-sort the given rectangles by their widths. Since every width is a power of two, there can only be 20 distinct rectangle widths in the range $[1, 10^6]$, so the following works:

~~~~~
counts = [0 for w in range(0, 20)]
for w in widths:
    counts[int(log2(w))] += 1
~~~~~
↵
The solution can be implemented by iterating $N$ times.↵
↵
At each iteration, step through the `counts` array and take the largest available rectangle that can fit into the current space. If no rectangle was able to fit, increment height by 1 and reset box width to $W$.↵
↵
This has complexity $\mathcal{O}(n\log(w_\text{max}))$.↵
</spoiler>↵
↵
<spoiler summary="Another implementation">↵
It is also possible to implement the solution with a `multiset` and `upper_bound`, for a complexity of $\mathcal{O}(n\log(n))$.↵
↵
Store all rectangle sizes in a multiset. At each iteration, find using `upper_bound` the largest rectangle that can fit into the current space we have, and fit it in. If no rectangle can fit in this space, reset the space to maximum, increment height, and repeat.↵
</spoiler>↵
↵
<spoiler summary="Proof of correctness - brief">↵
It is always possible to replace several smaller blocks with a single larger block if one is available: because all widths are powers of two, a run of smaller powers of two that overshoots a larger power must first sum to it exactly. Therefore, we can always place a larger block first if it fits.
</spoiler>↵
↵
<spoiler summary="Proof of correctness - elaborate">↵
This elaborate proof isn't required for solving the problem; the intuition in the brief proof is sufficient. It is included for completeness only.
↵
Let's first note a property: if $a_1 + \ldots + a_n > a_0$, then there exists some $i$ such that $a_1 + \ldots + a_i = a_0$, provided all $a_j$ are powers of 2, the sequence $a_1, \ldots, a_n$ is non-increasing, and $a_1 \le a_0$. Why is this so? You can take an example and observe it intuitively; it is a common property of powers of two. For example, $4+2+2+1+1>8$, but the prefix $4+2+2=8$. Formally: if $a_1 = a_0$, the property is trivially true. If $a_1 < a_0$, we can split $a_0 = 2^k a_1$ for some $k$ into $2^k$ parts of size $a_1$ and, by strong induction, claim the property holds for each part.
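This property can also be checked exhaustively for small cases. The sketch below is an editor's illustration (not part of the solution); it tries every non-increasing sequence of powers of two against $a_0 = 8$:

```python
from itertools import combinations_with_replacement

def some_prefix_hits(a0, blocks):
    # blocks is non-increasing; does some prefix sum to exactly a0?
    s = 0
    for b in blocks:
        s += b
        if s == a0:
            return True
        if s > a0:
            return False
    return False

a0 = 8
ok = True
# tuples drawn from a descending list are non-increasing sequences
for r in range(1, 7):
    for blocks in combinations_with_replacement([8, 4, 2, 1], r):
        if sum(blocks) > a0:  # a_1 <= a_0 holds since the largest power used is 8
            ok = ok and some_prefix_hits(a0, blocks)
print(ok)  # True
```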
↵
Let us now compare some optimal solution and our greedy solution. Before comparing, we first sort all blocks in each layer of the optimal solution in decreasing order. This does not affect the final answer but helps in our comparison. This comparison goes from bottom to top, left to right. Let us look at the first point of difference: when the block placed by us ($B_G$) is different from the block placed by the optimal solution ($B_O$). There are three cases.↵
↵
If $B_O > B_G$: this is impossible, as our greedy always places the largest possible block; we would not have placed $B_G$ there if $B_O$ also fit. If $B_O = B_G$: we have nothing to worry about (this isn't a point of difference).
↵
If $B_O < B_G$: let us assume that the optimal solution places several consecutive small blocks, and not just one $B_O$. Since the blocks are in decreasing order, none of them would be bigger than $B_G$. Note that either all of these blocks will sum to less than $B_G$ or a prefix of them will be exactly equal to $B_G$. In either case, we can swap them with one single large block $B_G$ (swapping with a $B_G$ which was placed in some higher layer in the optimal solution)↵
↵
Hence, in the last case, we have shown that any optimal solution (an arrangement of blocks) can be rearranged such that each layer fits the largest possible block first. This is also what is done in our greedy strategy. Therefore, with this proof by rearrangement, we conclude that our greedy strategy gives the same minimum height as that of the optimal answer.↵
</spoiler>↵
↵
<spoiler summary="Does this solution work when block widths are not a power of two?">↵
As we understood in the proof, this solution only works when it's guaranteed that smaller blocks will always exactly sum to any larger block. Therefore, if the blocks are not powers of two, this guarantee does not hold.↵
↵
The following counterexample suffices:↵
↵
~~~~~
6 13
6 6 4 4 3 3
~~~~~
↵
As this shows, the smaller blocks are not guaranteed to sum exactly to the larger block (no prefix of $4,3,3$ sums to $6$). Our greedy solution gives height $3$, but the best possible height is $2$ (first layer: $6,4,3$; second layer: $6,4,3$).
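To see the greedy fail concretely, here is a small simulation of the layer-filling strategy on this instance. This is an editor's sketch (`greedy_height` is a hypothetical helper, not contest code), and it assumes every width fits into an empty layer:

```python
def greedy_height(box_width, widths):
    # repeatedly place the largest block that fits into the current layer
    remaining = sorted(widths, reverse=True)
    height, space = 1, box_width
    while remaining:
        for i, w in enumerate(remaining):
            if w <= space:  # largest block that still fits
                space -= w
                remaining.pop(i)
                break
        else:  # nothing fits: start a new layer
            height += 1
            space = box_width
    return height

print(greedy_height(13, [6, 6, 4, 4, 3, 3]))  # 3, although height 2 is achievable
```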
</spoiler>↵
↵
<spoiler summary="C++ solution">↵
~~~~~↵
#include <cmath>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    int t;
    cin >> t;

    while (t--) {
        int n, box_width, w;
        cin >> n >> box_width;

        vector<int> counts(20);
        for (int i = 0; i < n; i++) {
            cin >> w;
            counts[log2(w)]++;
        }

        int height = 1, space_left = box_width;

        for (int iter = 0; iter < n; iter++) {
            int largest = -1;

            for (int size = 19; size >= 0; size--) {
                if (counts[size] and (1 << size) <= space_left) {
                    largest = size;
                    break;
                }
            }

            if (largest == -1) {
                space_left = box_width;
                height++;
                for (int size = 19; size >= 0; size--) {
                    if (counts[size] and (1 << size) <= space_left) {
                        largest = size;
                        break;
                    }
                }
            }

            counts[largest] -= 1;
            space_left -= 1 << largest;
        }

        cout << height << endl;
    }
}
~~~~~
</spoiler>↵
↵
<spoiler summary="Python solution">↵
↵
~~~~~
from math import log2

def solve_tc():
    n, w = list(map(int, input().split()))
    widths = list(map(int, input().split()))

    counts = [0 for _ in range(20)]

    for width in widths:
        counts[int(log2(width))] += 1

    space = w
    height = 1

    for it in range(n):
        largest = -1

        for size, count_width in list(enumerate(counts))[::-1]:
            if counts[size] > 0 and (2 ** size) <= space:
                largest = size
                break

        if largest == -1:
            space = w
            height += 1

            for size, count_width in list(enumerate(counts))[::-1]:
                if counts[size] > 0 and (2 ** size) <= space:
                    largest = size
                    break

        assert largest != -1
        counts[largest] -= 1
        space -= 2 ** largest

    print(height)

t = int(input())

while t > 0:
    t -= 1
    solve_tc()
~~~~~
↵
</spoiler>↵
↵
<spoiler summary="C++ solution - multiset">↵
↵
~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;

    while (t--) {
        int n, w;
        int box_width;
        cin >> n >> box_width;

        multiset<int> st;
        for (int i = 0; i < n; i++) {
            cin >> w;
            st.insert(w);
        }

        int height = 1, space_left = box_width;

        while (!st.empty()) {
            auto it = st.upper_bound(space_left);
            if (it != st.begin()) {
                it--;
                int val = *it;
                space_left -= val;
                st.erase(it);
            } else {
                space_left = box_width;
                height++;
            }
        }

        cout << height << endl;
    }
    return 0;
}
~~~~~
↵
</spoiler>↵
↵
Problem C: Planar Reflections↵
==================↵
↵
Author, Problemsetter and Editorialist: [user:sigma_g,2021-03-29]↵
↵
<spoiler summary="Hint 1">↵
We can use dynamic programming to store the state of the simulation at a given time. Therefore, we can simulate the entire situation by reusing the dp states.↵
</spoiler>↵
↵
<spoiler summary="Solution idea">↵
I will describe the most intuitive solution. Naturally, looking at the constraints as well as at the output that is required, we can store a 3-state dp: `dp[n][k][2]`. Here, `dp[n][k][d]` indicates the total number of particles (at the end of the simulation) when one particle of decay age $k$ hits the $n$-th plane in the direction $d$. ($d$ is either $0$ or $1$, and indicates the direction (left/right) in which the particle is going)↵
↵
There can be two directions, $N$ planes, and a maximum decay age of $K$. So this dp requires $2\times1000\times1000\times4$ bytes $\approx 8$ MB, which is well within our memory constraints.
</spoiler>↵
↵
<spoiler summary="Solution details">↵
How to update the DP states? If $k = 1$, the value of `dp[n][1][d]` for any $n$ or $d$ is obviously $1$ (as no copies are produced).↵
↵
So, let us consider a particle $P$ with $k>1$. This particle $P$ produces a copy $P'$ going in the opposite direction. We can count the number of particles produced by $P'$ as `dp[n - 1][k - 1][1 - d]`, as it hits the previous plane with a smaller decay age and in the opposite direction. Moreover, the particle $P$ itself hits the next plane and continues to produce more stuff. We can calculate its number of particles produced as `dp[n + 1][k][d]`, as it hits the next plane with the same decay age and the same direction!↵
↵
Finally, we can combine these two values to get the value of `dp[n][k][d]`.↵
</spoiler>↵
↵
<spoiler summary="Implementation details">↵
We can implement this easily as a recursive dp with memoization. At each state, we only need to iterate in the correct order (in one case, from right to left, and in the other, from left to right), and update states as required. Look at the implementation for more details. The total complexity of this algorithm is $\mathcal{O}(nk)$.
</spoiler>↵
↵
<spoiler summary="Note">↵
Obviously, there exist much better solutions which do not use a third state and are much sleeker. However, I wanted to describe the simplest idea required to solve the problem.
</spoiler>↵
↵
<spoiler summary="C++ solution">↵
~~~~~↵
#include <cstring>
#include <iostream>
#include <vector>

using namespace std;

const int N = 1001;
const int K = 1001;

int n, k;
const int mod = 1e9 + 7;

// note: the dimension must use the constant K, not the runtime variable k
int dp[N][K][2];

int solve(int curr, int k, int dir) {
    if (k == 1) {
        return 1;
    }

    if (dp[curr][k][dir] != -1) {
        return dp[curr][k][dir];
    }

    int ans = 2; // me and my copy

    if (dir == 1) {
        if (curr < n)
            ans += solve(curr + 1, k, dir) - 1;

        ans %= mod;

        if (curr > 1)
            ans += solve(curr - 1, k - 1, 1 - dir) - 1;

        ans %= mod;
        dp[curr][k][dir] = ans;
    } else {
        if (curr > 1)
            ans += solve(curr - 1, k, dir) - 1;

        ans %= mod;

        if (curr < n)
            ans += solve(curr + 1, k - 1, 1 - dir) - 1;

        ans %= mod;
        dp[curr][k][dir] = ans;
    }

    return ans;
}

int main() {
    int t;
    cin >> t;

    while (t--) {
        cin >> n >> k;
        memset(dp, -1, sizeof(dp));
        cout << solve(1, k, 1) << endl;
    }
}
~~~~~
↵
</spoiler>↵
↵
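<spoiler summary="Python solution">
A possible Python implementation of the same three-state DP (an editor's sketch, not the author's original code). It iterates over the decay age from small to large, so no recursion is needed; `right_cur[c]` / `left_cur[c]` play the role of `dp[c][age][1]` / `dp[c][age][0]`:

```python
MOD = 10 ** 9 + 7

def solve(n, k):
    # counts for a particle of the previous decay age hitting plane c
    # while moving right / left (planes are 1-indexed)
    right_prev = [1] * (n + 2)
    left_prev = [1] * (n + 2)

    for age in range(2, k + 1):
        right_cur = [1] * (n + 2)
        left_cur = [1] * (n + 2)

        # a rightward particle at plane c depends on plane c + 1 (same age)
        for c in range(n, 0, -1):
            right_cur[c] = ((right_cur[c + 1] if c < n else 1)
                            + (left_prev[c - 1] if c > 1 else 1)) % MOD

        # a leftward particle at plane c depends on plane c - 1 (same age)
        for c in range(1, n + 1):
            left_cur[c] = ((left_cur[c - 1] if c > 1 else 1)
                           + (right_prev[c + 1] if c < n else 1)) % MOD

        right_prev, left_prev = right_cur, left_cur

    # the initial particle hits plane 1 moving right with decay age k
    return right_prev[1]

# as a full program, read t test cases and print solve(n, k) for each:
# t = int(input())
# for _ in range(t):
#     n, k = map(int, input().split())
#     print(solve(n, k))

print(solve(2, 3))  # 4
```
</spoiler>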
↵
↵
↵
Problem D: Bananas in a Microwave↵
==================↵
Author: [user:shash42,2021-03-29]↵
Problemsetting and Editorialist: [user:sigma_g,2021-03-29]↵
↵
<spoiler summary="Brute force solution">↵
We have a brute force $\mathcal{O}(N\cdot M^2)$ solution.↵
↵
At every timestep $t$, for each banana $b_i$ that has already been reached previously, apply this timestep's operation $y_t$ times on $b_i$. For all the $y_t$ bananas reachable from $b_i$, update their minimum reachability time if they hadn't been reached previously.↵
↵
Why is this correct? Simply because we are simulating each possible step of the algorithm exactly as it is described. Therefore, we cannot get an answer that's better or worse than that of the optimal solution.↵
</spoiler>↵
↵
<spoiler summary="Optimizing the brute force: hint">↵
Consider whether we can reduce our search space when we reach nodes that have already been visited.
</spoiler>↵
↵
<spoiler summary="Optimizing the brute force: hint 2">↵
Let us take an example. We have some timestep $t,x,y = 1,5,10$. If we start visiting from $k=10$, we will visit $k=15,20,\ldots,55,60$. Let us say that $k=40$ was an already visited state. Do we now need to continue visiting $k=45,\ldots,60$ or can we stop our search here? ↵
</spoiler>↵
↵
<spoiler summary="Optimizing the brute force: details">↵
We can actually stop our search as soon as we reach a previously visited node! Why is this so? Because, within the same iteration, that already-visited node will once again start its own search, and will hit all the points that we were going to hit, and some more!
↵
For example, let us say we would reach points $a, a\cdot x, a\cdot x^2, \ldots, a\cdot x^{y-1}$. Now, assume that $a\cdot x^5$ had been previously reached, then it is better to stop at $a\cdot x^5$, as this node itself will reach $a\cdot x^5, a\cdot x^6, \ldots, a\cdot x^{y-2}, a\cdot x^{y-1},a\cdot x^{y}, \ldots, a\cdot x^{5+y-1}$. Note the latter range includes as a prefix the entire remaining suffix of the former range! Therefore, in this approach, nodes that would _have_ been visited, will eventually be visited, and get assigned the minimum timestamp.↵
</spoiler>↵
↵
<spoiler summary="Optimized solution implementation">↵
We can implement the optimized solution by simply adding an extra `if already_visited[k]: break` to our inner loop. Yup, really, that's all it takes to solve this question!↵
↵
Complexity analysis: We can show that each node is visited at most twice: an unvisited node is reached at most once, whereas a visited node is reached at most twice (once during the main loop and once when searching from the immediately preceding visited node). There are $N$ iterations, and in each iteration, each of the $M$ nodes is visited at most twice. Therefore, the total complexity is $\mathcal{O}(2NM)$.
</spoiler>↵
↵
<spoiler summary="Common mistakes">↵
1. Integer overflows: $x'$ does not fit in integer range↵
2. Out of bounds access: simulating the $y_t$ steps of the algorithm even when $k$ exceeds $M$ prematurely↵
</spoiler>↵
↵
<spoiler summary="C++ solution">
~~~~~
#include <iostream>
#include <vector>

using namespace std;
using LL = long long;

const int DIV = 1e5;

inline LL ceil(LL x, LL y) {
    return (x + y - 1) / y;
}

int main() {
    int T, M;
    cin >> T >> M;

    vector<bool> is_seen(M + 1, false);
    is_seen[0] = true;
    vector<int> answer(M + 1, -1);
    answer[0] = 0;

    for (int timestep = 1; timestep <= T; timestep++) {
        auto new_is_seen = is_seen;

        int t, y;
        LL x;
        cin >> t >> x >> y;

        auto operation = [&](long long &curr) {
            if (t == 1) {
                curr = curr + ceil(x, DIV);
            } else {
                curr = ceil(curr * x, DIV);
            }
        };

        for (int k = 0; k <= M; k++) {
            if (is_seen[k]) {
                long long curr = k;
                operation(curr);

                for (int a = 1; a <= y;) {
                    if (curr > M) break;
                    if (is_seen[curr]) break;

                    new_is_seen[curr] = true;
                    answer[curr] = timestep;

                    a++;
                    operation(curr);
                }
            }
        }

        is_seen = new_is_seen;
    }

    for (int i = 1; i <= M; i++)
        cout << answer[i] << " ";

    cout << endl;
}
~~~~~
</spoiler>
↵
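<spoiler summary="Python solution">
A possible Python translation of the same optimized brute force (an editor's sketch, not the author's original code; `solve` takes the parsed timesteps as a list so the early-exit logic is easy to test in isolation):

```python
DIV = 10 ** 5

def ceil_div(x, y):
    # integer ceiling division
    return (x + y - 1) // y

def solve(M, timesteps):
    # timesteps is a list of (t, x, y) operations as in the statement
    is_seen = [False] * (M + 1)
    is_seen[0] = True
    answer = [-1] * (M + 1)
    answer[0] = 0

    for step, (t, x, y) in enumerate(timesteps, start=1):
        new_is_seen = is_seen[:]

        def apply(c):
            return c + ceil_div(x, DIV) if t == 1 else ceil_div(c * x, DIV)

        for start in range(M + 1):
            if not is_seen[start]:
                continue
            curr = apply(start)
            used = 1
            # stop early past the bound M or at an already-visited value
            while used <= y and curr <= M and not is_seen[curr]:
                new_is_seen[curr] = True
                answer[curr] = step
                used += 1
                curr = apply(curr)

        is_seen = new_is_seen

    return answer[1:]
```

For example, a single additive timestep with $t=1$, $x = 5\cdot10^5$ (i.e. adding $5$) and $y=2$ reaches the values 5 and 10: `solve(10, [(1, 500000, 2)])`.
</spoiler>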
↵
Problem E: Two Houses
==================
Author: [user:dixitgarg,2021-03-29]↵
Problemsetting and Editorialist: [user:dixitgarg,2021-03-29]↵
↵
In this problem we have to output two nodes $a$ and $b$ such that there is a path from $a$ to $b$ and from $b$ to $a$, and the absolute difference of the indegrees $|k_a - k_b|$ is maximum. First of all, let us think about bi-reachability only, i.e., how to find two nodes $a$ and $b$ that are both reachable from each other. How can we verify this through the judge? If we ask $"? \ a \ b"$, i.e., whether there is a path from $a$ to $b$, and the judge answers "Yes", we can't ask any further queries. So we should only ask queries for pairs $(a,b)$ for which we are already sure that there is a path from $b$ to $a$. So how do we ensure there is a path from $b$ to $a$?
↵
The answer lies in the fact that the given graph is not an ordinary graph, it is a special one. For every pair of vertices in this graph, there is a directed edge. So this type of graph has some interesting properties which we are going to see now.↵
↵
Imagine how the compressed SCC graph will look. For every pair of nodes of the compressed SCC there is an edge, so it has exactly one source, one sink, and only one possible topological sorting.
↵
<spoiler summary="Proof of unique topological sorting">↵
↵
Since there is an edge between every pair of nodes of the original graph, there is also an edge between every pair of nodes of the compressed SCC. And we know that if there is an edge from node $a$ to node $b$, then node $a$ comes before node $b$ in the topological sort. So for every pair of nodes of the compressed SCC we know which node comes first in the topological sorting, and hence the topological sorting is unique.
</spoiler>↵
↵
Now we'll see one interesting and important property of this graph.↵
↵
Property: Consider two consecutive strongly connected components in the topological sorting. All nodes in the left component have indegree strictly less than all nodes in the right component. Here "left" denotes the lower position in the topological sorting.
↵
<spoiler summary="Proof">↵
Consider two nodes $u$ and $v$ from the left and right component respectively, and let $S$ be the size of the left component. The contribution to the indegrees from nodes lying in neither of these two components is the same for $u$ and $v$ (because $u$ and $v$ lie in adjacent components), so we ignore it when comparing. All edges between the two components are directed from the left component to the right component. So node $v$ has indegree at least $S$. Node $u$ has indegree at most $S - 1$: there is no edge directed from the right component to $u$, and the most $u$ can receive is an edge from every other node of the left component. So the indegree of $u$ is strictly less than the indegree of $v$.
Since $u$ and $v$ are arbitrary, this holds for all such pairs of nodes.
</spoiler>↵
↵
Using the above property we can argue that if indegree of node $a$ is greater than the indegree of node $b$, then node $a$ is reachable from node $b$. Because either node $a$ lies in the same SCC or to the SCC of higher enumeration in topological sorting. In both cases $a$ is reachable from $b$.↵
↵
So we can store all pairs of nodes $(a,b)$, $1 \leq a \leq n$, $1 \leq b \leq n$, $a < b$, in an array and sort it in decreasing order of the absolute difference of indegrees, $|k_a - k_b|$. If we pick a pair $(a,b)$ with $indegree[b] > indegree[a]$, then we are sure that $b$ is reachable from $a$, so we only need to check whether $a$ is reachable from $b$: we ask $"? \ b \ a"$, and if the judge responds "Yes", then $a$ and $b$ are both reachable from each other. Since we iterate in decreasing order of $|k_a - k_b|$, the first such pair is optimal. If the judge never outputs "Yes" in the whole process, then no pair of nodes is bi-reachable, and we output $"! \ 0 \ 0"$.
↵
Overall Complexity : $O(n^2 \log_2 n)$↵
↵
<spoiler summary="C++ solution">
~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;

    vector<int> indegree(n);

    for (int i = 0; i < n; i++) {
        cin >> indegree[i];
    }

    vector<pair<int, pair<int, int>>> vec;

    for (int i = 0; i < n; ++i) {
        for (int j = i + 1; j < n; ++j) {
            if (indegree[i] > indegree[j])
                vec.push_back({indegree[i] - indegree[j], {i, j}});
            else
                vec.push_back({indegree[j] - indegree[i], {j, i}});
        }
    }

    sort(vec.rbegin(), vec.rend());

    for (auto it : vec) {
        int a = it.second.first, b = it.second.second;
        cout << "? " << a + 1 << " " << b + 1 << endl;
        cout.flush();

        string str;
        cin >> str;
        if (str == "Yes") {
            cout << "! " << a + 1 << " " << b + 1 << endl;
            cout.flush();
            return 0;
        }
    }

    cout << "! 0 0" << endl;
    cout.flush();

    return 0;
}
~~~~~
</spoiler>
↵
<spoiler summary="Python solution">↵
~~~~~
import sys
input = lambda: sys.stdin.readline().rstrip()

n = int(input())

indegs = list(map(int, input().split()))

pairs = []
for i in range(n):
    for j in range(i + 1, n):
        if indegs[i] > indegs[j]:
            pairs.append((indegs[i] - indegs[j], (i, j)))
        else:
            pairs.append((indegs[j] - indegs[i], (j, i)))

pairs = sorted(pairs, reverse=True)

for _, nodes in pairs:
    ni, nj = nodes

    sys.stdout.write(f"? {ni + 1} {nj + 1}\n")
    sys.stdout.flush()

    s = input()
    if s == "Yes":
        sys.stdout.write(f"! {ni + 1} {nj + 1}\n")
        exit(0)

sys.stdout.write("! 0 0\n")
~~~~~
</spoiler>
↵
Problem F: Christmas Game↵
==================↵
Author: [user:nikhil_c,2021-03-29]↵
Problemsetting and editorialist: [user:sigma_g,2021-03-29]↵
↵
<spoiler summary="How do we solve a standard Nim game on arrays?">↵
By the Sprague-Grundy theorem, we know that the current player has a winning strategy if $a_1 \oplus a_2 \oplus \ldots \oplus a_n$ (xorsum of sizes of the existing piles) is non-zero. For a proof, read details on [CP-algorithms](https://cp-algorithms.com/game_theory/sprague-grundy-nim.html).↵
</spoiler>
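As a quick illustration of the theorem (an editor's sketch, not part of the solution), the winner of a plain Nim position can be read off the xorsum:

```python
from functools import reduce
from operator import xor

def first_player_wins(piles):
    # Sprague-Grundy for plain Nim: the player to move wins iff the xorsum is non-zero
    return reduce(xor, piles, 0) != 0

print(first_player_wins([1, 2, 3]))     # False: 1 ^ 2 ^ 3 == 0, a losing position
print(first_player_wins([1, 2, 3, 4]))  # True: the xorsum is 4
```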
↵
↵
<spoiler summary="How to solve tree nim game for one rooting if K = 1: classifying odd/even steps">↵
Let us classify nodes as odd or even depending on their depth relative to the root. Note that moves from even steps do not affect the game state. Let us see why:
↵
Let us say it's Alice's turn. If Alice moves some coins from an even step to an odd step, then Bob can move _exactly those same coins_ from that odd step back to an even step. After this transition, once again it's Alice's move. In fact, we realize that Bob can "revert" every even->odd move by Alice.↵
↵
Therefore, if Alice wants to win, she has to play at least one odd->even move. Moves that go from even->odd do not affect the game state at all, as the other player can always play another move that reverts them. Therefore, we can say that any coins present on even steps will not change the game state.↵
</spoiler>↵
↵
↵
<spoiler summary="How to solve tree nim game for one rooting if K = 1: how bad are even steps">↵
Let us now analyze what happens when some coins move from an odd step to an even step. We know that coins on even nodes do not contribute to the game state. In fact, it does not matter whether these coins were on the even nodes before the game started or arrived there during the game: once on an even step, they no longer contribute to the game state.
↵
Hence, we can conclude that moving a coin from odd step to an even step is as good as taking a coin from the odd step and throwing it away.↵
</spoiler>↵
↵
↵
<spoiler summary="Reducing tree nim game to linear array: the stair case nim">↵
As we can see, we have effectively reduced the Nim game on a tree to a linear nim game in which only the odd-positioned nodes determine the winning strategy.
This is known as staircase nim. The result is the following: the current player has a winning strategy if the xorsum of all values at the odd steps is non-zero.
</spoiler>↵
↵
<spoiler summary="Extending to general K">↵
For general $K$, we can extend this idea: let $d' = \lfloor \frac d K \rfloor$, where $d$ is the depth of the node (zero-indexed). All nodes for which $d'$ is odd contribute to the final xorsum. Take a moment to digest this.
↵
How to calculate these xors? At each node $x$, we store a vector $D(x)$ of size $2K$, where $D(x)_i$ is the xorsum of the values of all nodes in the subtree of $x$ whose depth relative to $x$ is congruent to $i$ modulo $2K$.
</spoiler>↵
↵
<spoiler summary="Calculating the answer for all roots">↵
How to propagate these values in a DFS? We know that a node at depth $i$ relative to me is at depth $i + 1$ relative to my child nodes, so we can simply cycle the vector through and update the values. Check the implementation for details.
</spoiler>↵
↵
~~~~~↵
↵
#include <iostream>
#include <vector>

using namespace std;

const int N = 1e5 + 1;
const int K = 20 + 1;

using VI = vector<unsigned int>;
using VVI = vector<vector<unsigned int>>;

VVI dp(N, VI(2 * K));
VI a(N);
vector<bool> win(N);
int n, k, k2;

void dfs(VVI &adj, int node, int prev) {
    dp[node][0] = a[node];

    for (auto neigh : adj[node]) {
        if (neigh == prev) continue;
        dfs(adj, neigh, node);

        for (int rem = 1; rem < k2; rem++)
            dp[node][rem] ^= dp[neigh][rem - 1];

        dp[node][0] ^= dp[neigh][k2 - 1];
    }
}

void dfs2(const VVI &adj, const int node, const int prev, const vector<unsigned int> &my_xors) {
    vector<unsigned int> final_xors(k2);
    for (int i = 0; i < k2; i++)
        final_xors[i] = my_xors[i] ^ dp[node][i];

    unsigned int odd_layer_xor = 0;
    for (int i = k; i < k2; i++)
        odd_layer_xor ^= final_xors[i];
    win[node] = odd_layer_xor > 0;

    for (auto neigh : adj[node]) {
        if (neigh == prev) continue;

        auto xor_send = final_xors;

        // remove all contribution of this subtree
        for (int i = 1; i < k2; i++)
            xor_send[i] ^= dp[neigh][i - 1];

        xor_send[0] ^= dp[neigh][k2 - 1];

        // whatever was depth k for me is depth k+1 for my child node
        unsigned int last = xor_send.back();
        for (int i = k2 - 1; i > 0; i--)
            xor_send[i] = xor_send[i - 1];

        xor_send[0] = last;

        dfs2(adj, neigh, node, xor_send);
    }
}

int main() {
    cin >> n >> k;
    k2 = 2 * k;

    VVI adj(n + 1);

    for (int i = 0; i < n - 1; i++) {
        int x, y; cin >> x >> y;
        adj[x].push_back(y);
        adj[y].push_back(x);
    }

    for (int i = 1; i <= n; i++) cin >> a[i];

    dfs(adj, 1, 0);
    dfs2(adj, 1, 0, vector<unsigned int>(k2));

    for (int i = 1; i <= n; i++) cout << (win[i] ? 1 : 0) << " ";

    return 0;
}
~~~~~
</spoiler>↵
↵
<spoiler summary="PyPy3 solution">↵
~~~~~
n, k = list(map(int, input().split()))
k2 = 2 * k

dp = [[0 for _ in range(k2)] for _ in range(n + 1)]
adj = [[] for _ in range(n + 1)]

for _ in range(n - 1):
    _x, _y = list(map(int, input().split()))
    adj[_x].append(_y)
    adj[_y].append(_x)

a = [0] + list(map(int, input().split()))

win = [0 for _ in range(n + 1)]
parent = [0 for _ in range(n + 1)]

def dfs(node):
    stack = [node]

    # pass_num[v] == 1: first visit, push children; == 2: children done, aggregate
    pass_num = [0 for _ in range(n + 1)]

    while stack:
        node = stack[-1]
        pass_num[node] += 1

        if pass_num[node] == 1:
            for neigh in adj[node]:
                if neigh == parent[node]:
                    continue
                parent[neigh] = node
                stack.append(neigh)
            continue

        dp[node][0] = a[node]
        stack.pop()

        for neigh in adj[node]:
            if neigh == parent[node]:
                continue
            for rem in range(1, k2):
                dp[node][rem] ^= dp[neigh][rem - 1]
            dp[node][0] ^= dp[neigh][k2 - 1]

def dfs2(node):
    stack = [[node, [0 for __ in range(k2)]]]

    while stack:
        node, prev_xors = stack.pop()

        final_xors = [x ^ y for x, y in zip(prev_xors, dp[node])]

        for neigh in adj[node]:
            if neigh == parent[node]:
                continue

            send_xors = [x for x in final_xors]
            for i in range(1, k2):
                send_xors[i] ^= dp[neigh][i - 1]
            send_xors[0] ^= dp[neigh][k2 - 1]

            send_xors = [send_xors[-1]] + send_xors[:-1]
            stack.append([neigh, send_xors])

        odd_xor = 0
        for i in range(k, k2):
            odd_xor ^= final_xors[i]

        win[node] = 1 if odd_xor > 0 else 0

dfs(1)
dfs2(1)
print(" ".join(map(str, win[1:])))
~~~~~
</spoiler>↵