This is the editorial for the recent Teamscode contest. The problems are open for upsolving on this gym. Problem credits are in the statements themselves.
A. What do you do when the contest starts? Are you busy? Will you solve Bingo?
- WorldEnd/SukaSuka
- Bocchi the Rock
- Thomas
- Lycoris Recoil
- Bokuben
- A Certain Scientific Railgun
- Quintessential Quintuplets
- Tensura
- Re:Zero
- Hayate the Combat Butler
- WorldEnd/SukaSuka
- Hunter x Hunter
- Onimai
- Smartphone Isekai
- Tonikawa
No code is provided. There are $$$49$$$ accepted answers to this problem. Try to find them all!
B. Mountain Climbing Easy
Iterate through the mountain from left to right and maintain a counter for the number of times the altitude increases consecutively. Make sure to reset this counter whenever the altitude does not increase. Also, ensure that each sequence of consecutive altitude increases is counted only once to avoid overcounting mountains.
#include <iostream>
using namespace std;

int main() {
    int N;
    cin >> N;
    int cnt = 0;   // length of the current run of consecutive increases
    int prev = -1;
    int ans = 0;
    for (int i = 0; i < N; i++) {
        int a;
        cin >> a;
        if (a > prev && i) cnt++;
        else if (a <= prev) cnt = 0;
        if (cnt == 2) ans++;  // fires exactly once per run, avoiding overcounting
        prev = a;
    }
    cout << ans << "\n";
    return 0;
}
C. No Sweep
There is only $$$1$$$ scenario in any game where Thomas sweeps: he wins every round. So we do complementary counting: count the total number of ways to assign winners and subtract $$$1$$$. In each round there are $$$(k + 1)$$$ possible winners: the $$$k$$$ problemsetters and Thomas. As there are $$$n$$$ rounds, there are $$$(k + 1)^n - 1$$$ outcomes in which Thomas does not sweep.
n,k=input().split()
print((int(k) + 1) ** int(n) - 1)
D. Multiplication Table
We can guess that the answer is the smallest prime $$$> n$$$. Our intuition is right, but the proof is a little harder than that.
Bertrand's postulate states that there is at least one prime $$$p$$$ with $$$n < p < 2n$$$. Let $$$p$$$ be the smallest such prime, so every integer in $$$[n+1, p)$$$ is composite. Suppose for the sake of contradiction that some $$$x$$$ with $$$n < x < p$$$ is not present on the multiplication table. We know $$$x$$$ is composite, so its lowest prime factor $$$l$$$ satisfies $$$2 \leq l \leq \sqrt{x}$$$. As $$$x < 2n$$$, we have $$$\max(\frac{x}{l}, l) \leq n$$$, so $$$x$$$ is written in cell $$$(\frac{x}{l}, l)$$$, a contradiction. Hence the answer is the smallest prime greater than $$$n$$$.
This gives us an $$$O(72 \sqrt{n})$$$ solution. The $$$72$$$ comes from the fact that for any integer $$$n \leq 10^5$$$ there is a prime in $$$(n, n + 72]$$$.
#include <cstdio>

// primality test by trial division
int isp(int x) {
    for (int i = 2; i * i <= x; i++) {
        if (x % i == 0) return 0;
    }
    return 1;
}

int main() {
    int n;
    scanf("%d", &n);
    do {
        n++;
    } while (isp(n) == 0);
    printf("%d\n", n);
    return 0;
}
E. Cyclic Shifts
How many elements change between two adjacent cyclic shifts?
First, notice that there always exists an optimal series of operations where the cycle operation, if performed at all, is done first. There are only $$$\mathcal{O}(n)$$$ possible different cycle operations, and the answer afterwards is the sum over the absolute differences between $$$a$$$ and $$$b$$$ for each index.
Consider the first and second such cyclic shifts of length $$$5$$$ on an array $$$a=[1, 2, \dots, 8]$$$: they are $$$[2, 3, 4, 5, 1, 6, 7, 8]$$$ and $$$[1, 3, 4, 5, 6, 2, 7, 8]$$$ respectively.
Observe that between the first and second shift, only a small number of elements change positions. In fact, for a cyclic shift of length $$$k$$$ starting at $$$i$$$, only the elements at positions $$$i$$$, $$$i+k$$$, and $$$i+k-1$$$ change positions when transitioning to the cyclic shift at $$$i+1$$$. Thus, we can loop over all possible cyclic shifts, maintaining the sum and updating the indices accordingly. Because there are only a constant number of updates for each cyclic shift, this runs in $$$\mathcal{O}(n)$$$.
#include <bits/stdc++.h>

long long calculate_cost(int n, const std::vector<long long> &a, const std::vector<long long> &b) {
    long long ret = 0;
    for (int i = 0; i < n; i++)
        ret += llabs(a[i] - b[i]);
    return ret;
}

template<typename I>
void rotate_left(I l, I r) { std::rotate(l, std::next(l), r); }

int main() {
    using namespace std;
    int TC;
    cin >> TC;
    while (TC--) {
        int N, K;
        cin >> N >> K;
        vector<long long> A(N), B(N);
        for (auto &a : A)
            cin >> a;
        for (auto &b : B)
            cin >> b;
        // option 1: no cycle operation at all
        long long ans = calculate_cost(N, A, B);
        // option 2: apply the first cyclic shift, then slide it along
        rotate_left(A.begin(), A.begin() + K);
        long long cur = calculate_cost(N, A, B);
        ans = min(ans, cur + 1);  // +1 for the cycle operation itself
        for (int i = 1; i + K - 1 < N; i++) {
            // only positions i-1, i+K-2 and i+K-1 change between shifts
            cur -= llabs(A[i - 1] - B[i - 1]) +
                   llabs(A[i + K - 2] - B[i + K - 2]) +
                   llabs(A[i + K - 1] - B[i + K - 1]);
            auto tem = A[i - 1];
            A[i - 1] = A[i + K - 2];
            A[i + K - 2] = A[i + K - 1];
            A[i + K - 1] = tem;
            cur += llabs(A[i - 1] - B[i - 1]) +
                   llabs(A[i + K - 2] - B[i + K - 2]) +
                   llabs(A[i + K - 1] - B[i + K - 1]);
            ans = min(ans, cur + 1);
        }
        cout << ans << '\n';
    }
}
F. Great Common Multiple
Let's define $$$a, b, c, d$$$ as they were defined in the statement. We must first pick $$$d$$$ as a multiple of $$$l = lcm(a, b)$$$; this guarantees that $$$d$$$ is divisible by both $$$a$$$ and $$$b$$$. Then, instead of thinking of taking some number $$$p \cdot l$$$ mod $$$c$$$, we can imagine it as $$$p \cdot l - q \cdot c$$$. By Bezout's identity it is always possible to represent $$$g = \gcd(c, l)$$$ with some choice of $$$p$$$ and $$$q$$$. As a consequence, $$$g, 2g, 3g, \dots, (\frac{c}{g}-1)g$$$ are all possible results modulo $$$c$$$ for some choice of $$$d$$$, and the maximum of these is $$$(\frac{c}{g}-1)g = c - g$$$.
This gives us an $$$O(\log(\max(a, b, c)))$$$ solution per test case: just output $$$c - \gcd(c, \operatorname{lcm}(a, b))$$$.
#include <iostream>
using ll = long long;
using namespace std;

ll gcd(ll a, ll b) {
    if (b == 0) return a;
    return gcd(b, a % b);
}

int main() {
    cin.tie(0)->sync_with_stdio(0);
    int T;
    cin >> T;
    while (T--) {
        ll a, b, c;
        cin >> a >> b >> c;
        ll g = gcd(a, b);
        ll lcm = a / g * b;  // divide first to avoid overflow
        ll step = gcd(lcm, c);
        cout << c - step << "\n";
    }
}
G. Daggers
Can you calculate the complexity of simulating the process greedily?
You can use harmonic series to calculate it.
We will use a greedy algorithm. We always go to the farthest shield at a distance $$$\le t$$$ from where we are and stay there until a dagger is thrown. We repeat this process until we are at a distance $$$\le t$$$ from $$$n$$$, in which case we can simply walk to $$$n$$$ and pass the level, or until there is no new shield at a distance $$$\le t$$$ from where we are, in which case we can't pass the level. This is optimal because we spend the least possible amount of time waiting in the same position.
The next step is to calculate how many shields we actually stop at in each level. We claim that the answer is at most $$$2\cdot \lceil \frac{n}{t} \rceil$$$.
Let's group the points into chunks of size $$$t$$$; there are $$$\lceil \frac{n}{t} \rceil$$$ such groups. We will visit at most $$$2$$$ shields from each group: from any shield, we either go to the last shield in that same group or to a shield in the next group.
Now we know that the number of shields that we will visit after the $$$n$$$ levels is at most $$$2 \cdot \sum \limits_{t=1}^{n} \lceil \frac{n}{t} \rceil$$$. Using harmonic series we get to the conclusion that after the $$$n$$$ levels we will have stopped at around $$$n \ln n$$$ shields, which actually isn't much and can lead to fast solutions.
All that's left now is to quickly find the location of the farthest shield at a distance $$$\le t$$$ from where we are. We can do this with a set and the function upper_bound(), which has a complexity of $$$\mathcal{O}(\log{n})$$$. That results in an overall complexity of $$$\mathcal{O}(n \log^2{n})$$$.
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    int n, q;
    cin >> n >> q;
    set<int> shields;
    for (int t = 1; t <= q; t++) {
        int x;
        cin >> x;
        shields.insert(x);
        long long ans = 0;
        int curr = 0;
        bool done = 0;
        while (!done) {
            if (n - curr <= t) {
                // the end is within reach before the next dagger
                cout << ans + n - curr << "\n";
                done = 1;
            } else {
                // farthest shield at distance <= t
                auto it = shields.upper_bound(curr + t);
                if (it == shields.begin() || *(--it) == curr) {
                    cout << -1 << "\n";  // stuck: no new shield in reach
                    done = 1;
                } else {
                    curr = *it;
                    ans += t;
                }
            }
        }
    }
}
H. A Certain Scientific Tree Problem
There are many solutions to this problem, some simpler than others, but I'll present the intended solution which involves the distance formula between two nodes.
$$$dist(i, j) = dep[i] + dep[j] - 2 \cdot dep[lca(i, j)]$$$
How does splitting into $$$3$$$ terms help? Can we count each one individually?
Let's define $$$d$$$ as the depth of the tree and $$$n = 2^d - 1$$$ as the number of nodes in the tree.
We want to find $$$\sum_{i=1}^{n} \sum_{j=1}^{n} dist(i, j) = \sum_{i=1}^{n} \sum_{j=1}^{n} \left(dep[i] + dep[j] - 2 \cdot dep[lca(i, j)]\right).$$$
Let's try to count, for each node $$$u$$$, how many times $$$dep[u]$$$ is counted by this sum. It is counted $$$2n$$$ times from the pairs with $$$i = u$$$ or $$$j = u$$$, but counting the pairs with $$$lca(i, j) = u$$$ will take a little more work.
For $$$lca(i, j) = u$$$ we need $$$i$$$ and $$$j$$$ to be in different subtrees of $$$u$$$ (or $$$i = u$$$ or $$$j = u$$$). Define $$$sub[u] = 2^{(d - dep[u] + 1)} - 1$$$, the number of nodes in node $$$u$$$'s subtree. We can also define $$$chsub[u] = \frac{sub[u]-1}{2}$$$, the number of nodes in the subtree of each child of $$$u$$$.
Then the number of ordered pairs $$$(i, j)$$$ with $$$lca(i, j) = u$$$ is
$$$2 \cdot chsub[u] \cdot (sub[u] - chsub[u]) + sub[u] = sub[u]^2 - 2 \cdot chsub[u]^2,$$$
i.e. all ordered pairs inside $$$u$$$'s subtree minus those whose LCA lies strictly inside one of the two child subtrees.
The final observation is that all nodes with the same depth behave the same, so it suffices to find the total contribution of one node at some depth and multiply it by the number of nodes at that depth.
Our final sum will be
$$$\sum_{k=0}^{d-1} 2^{k} \cdot k \cdot \left(2n - 2 \cdot P_k\right), \qquad P_k = 2 \cdot chsub_k \cdot (sub_k - chsub_k) + sub_k,$$$
where depths are $$$0$$$-indexed from the root as in the implementation below (the depth offset cancels globally since every pair has exactly one LCA), there are $$$2^k$$$ nodes at depth $$$k$$$, $$$sub_k = 2^{d-k} - 1$$$, $$$chsub_k = \frac{sub_k - 1}{2}$$$, and $$$P_k$$$ is the number of ordered pairs whose LCA is a fixed node at depth $$$k$$$.
This formula can be evaluated in $$$O(d)$$$ or $$$O(d \log d)$$$ per test case depending on whether or not you precompute powers of $$$2$$$.
There is another, much more beautiful solution to this problem that involves counting the contribution of each edge but I won't explain it here :).
#include <iostream>
using ll = long long;
using namespace std;

const ll mod = 1e9 + 7;
const int MX = 2e5 + 10;
int two[MX];  // powers of 2 modulo mod

void solve() {
    int d;
    cin >> d;
    ll ans = 0;
    ll n = two[d] - 1;  // number of nodes
    for (int i = 0; i < d; i++) {
        ll ch = (i == d - 1) ? 0 : 2;    // number of children at this depth
        ll ch_sub = two[d - 1 - i] - 1;  // child subtree size
        ll sub = ch * ch_sub + 1;        // subtree size
        ll cnt = two[i];                 // nodes at depth i
        // ordered pairs with lca at a fixed node of this depth
        ll pairs = (sub + ch * ch_sub % mod * (sub - ch_sub)) % mod;
        // dep contributes +2n as an endpoint and -2 per lca pair
        ans += cnt * ((2 * n % mod * i % mod + (mod - 2) * i % mod * pairs) % mod) % mod;
        ans %= mod;
    }
    cout << ans << "\n";
}

int main() {
    cin.tie(0)->sync_with_stdio(0);
    two[0] = 1;
    for (int i = 1; i <= 1e5; i++) {
        two[i] = 2 * two[i - 1] % mod;
    }
    int T = 1;
    cin >> T;
    for (int i = 1; i <= T; i++) {
        solve();
    }
    return 0;
}
I. Mountain Climbing Hard
Let's first solve the subtask with negligible fog to build up to our final solution. We need to find the nearest point to the left and right of a given point $$$p$$$ with an altitude at least as high as that of $$$p$$$. There are a couple of ways of doing this, but using a stack we can precompute these points efficiently. We iterate through the points from left to right, and at each point $$$p$$$, we remove all stack elements with altitude at most that of $$$p$$$ before adding $$$p$$$ to the stack. Each removed point $$$q$$$ thereby records $$$p$$$ as the first point to its right that blocks its view. We repeat the same process from right to left, so that for each point $$$p$$$ we know the nearest points on both sides with altitude greater than or equal to that of $$$p$$$. Note that the stack remains sorted throughout the process and each point is pushed and popped at most once per direction, resulting in linear complexity.
To incorporate the fog in our solution, we need to find the number of points within visible range for point $$$p$$$ excluding points below the fog altitude (given that the fog is not above $$$p$$$). Note that we are essentially counting the elements of a subarray that are less than a certain value. A Binary Indexed Tree (BIT) might come to mind, but it's not immediately clear how updates would work. Here, it's important to observe that the queries can be answered offline. If we answer the queries in increasing order of fog altitude, we can maintain a BIT in which every point below the current fog line has value 1 and all other points have value 0. We can then query the visible range of point $$$p$$$ to see how many points are hidden by the fog, and subtract that from the stack method's result to get the final answer. The overall complexity is $$$O((N + Q) \log N)$$$.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

#define MAXN 1000001
#define MAXQ 100001

vector<pair<int, int> > points;
vector<pair<pair<int, int>, int> > qs;
int N, Q, a[MAXN], BIT[MAXN], r[MAXN], l[MAXN], sols[MAXQ];

// prefix sum over [0, index]
int getSum(int index) {
    int sum = 0;
    index = index + 1;
    while (index > 0) {
        sum += BIT[index];
        index -= index & (-index);
    }
    return sum;
}

// activate the point at index (add 1)
void updateBIT(int index) {
    index = index + 1;
    while (index <= N) {
        BIT[index] += 1;
        index += index & (-index);
    }
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(0);
    cin >> N >> Q;
    for (int i = 0; i < N; i++) {
        cin >> a[i];
        points.push_back({ a[i], i });  // used to iterate in increasing altitude
    }
    for (int i = 0; i < Q; i++) {
        int p, f;
        cin >> p >> f;
        qs.push_back({ {f, p - 1}, i });  // offline queries
    }
    for (int i = 0; i < N; i++) r[i] = N, l[i] = -1;
    // monotonic stacks: nearest >= neighbor to the right and to the left
    vector<pair<int, int> > sr, sl;
    for (int i = 0; i < N; i++) {
        int alt = a[i];
        while (sr.size() && sr.back().first <= alt) {
            r[sr.back().second] = i;
            sr.pop_back();
        }
        sr.push_back(points[i]);
        int ind = N - i - 1;
        alt = a[ind];
        while (sl.size() && sl.back().first <= alt) {
            l[sl.back().second] = ind;
            sl.pop_back();
        }
        sl.push_back(points[ind]);
    }
    sort(qs.begin(), qs.end());
    sort(points.begin(), points.end());
    int ind = 0;
    for (int i = 0; i < Q; i++) {
        int fog = qs[i].first.first;
        // activate every point strictly below the fog line
        while (ind < N && points[ind].first < fog) {
            updateBIT(points[ind].second);
            ind++;
        }
        // point index of query
        int qp = qs[i].first.second;
        // no fog
        int ans = r[qp] - l[qp] - 1;
        // amount hidden by the fog
        int sub = getSum(r[qp] - 1) - getSum(l[qp]);
        if (a[qp] >= fog) ans -= sub;
        sols[qs[i].second] = ans;
    }
    for (int i = 0; i < Q; i++) cout << sols[i] << "\n";
    return 0;
}