1842A - Tenzing and Tsondu
Let's view it as follows: when monsters $$$x$$$ and $$$y$$$ fight, their health changes to $$$\max(x-y,0)$$$ and $$$\max(y-x,0)$$$ respectively, and any monster with $$$0$$$ health is considered dead. Therefore, a player loses when the health of all his monsters is $$$0$$$.
Notice that $$$\max(x-y,0)=x-\min(x,y)$$$ and $$$\max(y-x,0)=y-\min(x,y)$$$. Therefore, after each fight, the sum of the health of the monsters decreases by the same amount for both players.
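For example, if monsters with health $$$5$$$ and $$$3$$$ fight, their health becomes $$$2$$$ and $$$0$$$: both players' totals decrease by $$$\min(5,3)=3$$$.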
Therefore, we only need to know $$$\sum a_i$$$ and $$$\sum b_i$$$ to determine who wins. If $$$\sum a_i > \sum b_i$$$, Tsondu wins. Else if $$$\sum a_i < \sum b_i$$$, Tenzing wins. Else, it is a draw.
#include <iostream>
using namespace std;
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
int T;
cin >> T;
while (T--) {
int n, m, a[50], b[50];
long long sumA = 0, sumB = 0;
cin >> n >> m;
for (int i = 0; i < n; i++)
cin >> a[i], sumA += a[i];
for (int i = 0; i < m; i++)
cin >> b[i], sumB += b[i];
if (sumA > sumB) cout << "Tsondu\n";
if (sumA < sumB) cout << "Tenzing\n";
if (sumA == sumB) cout << "Draw\n";
}
}
1842B - Tenzing and Books
Observe the bitwise OR: if a bit of the knowledge changes to $$$1$$$, it will never become $$$0$$$.
It tells us, if a book has difficulty rating $$$y$$$, and $$$x|y \neq x$$$, Tenzing will never read this book because it will change a $$$0$$$ bit in $$$x$$$ to $$$1$$$.
We call a number $$$y$$$ valid if $$$x|y=x$$$. For each sequence, find the longest prefix in which all numbers are valid. Take the bitwise OR of the three prefixes and check whether it equals $$$x$$$.
Time complexity: $$$O(n)$$$ per test case.
A naive approach is to enumerate the prefixes of the three stacks, which is an $$$O(n^3)$$$ enumeration. For each stack, the bitwise OR of the prefix takes at most $$$31$$$ different values (including the empty prefix), because the bitwise OR of the prefix is non-decreasing, and each change increases the number of 1s in its binary representation. Since the number of 1s cannot exceed $$$30$$$, it can change at most $$$30$$$ times. Therefore, the enumeration is reduced to $$$\min(n,31)^3$$$. In the worst case, the time complexity is $$$O(\sum n \cdot 31^2)$$$.
#include <iostream>
using namespace std;
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
int T;
cin >> T;
while (T--) {
int n, x, a[100000];
cin >> n >> x;
int s = 0;
for (int i = 0; i < 3; i++) {
for (int j = 0; j < n; j++) cin >> a[j];
for (int j = 0; j < n; j++) {
if ((x | a[j]) != x) break;
s |= a[j];
}
}
if (s == x) cout << "YES\n";
else cout << "NO\n";
}
}
#include <bits/stdc++.h>
using namespace std;
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
int T;
cin >> T;
while (T--) {
int n, x, ai;
cin >> n >> x;
vector<int> pre[3];
for (int i = 0; i < 3; i++) {
int s = 0;
pre[i].push_back(s);
for (int j = 0; j < n; j++) {
cin >> ai;
if ((s | ai) != s)
s |= ai, pre[i].push_back(s);
}
}
bool ans = 0;
for (int A : pre[0]) for (int B : pre[1]) for (int C : pre[2])
ans |= (A | B | C) == x;
cout << (ans ? "YES\n" : "NO\n");
}
}
1842C - Tenzing and Balls
Let us write down the original indices of the ranges we delete.
Firstly, it is impossible to delete ranges $$$(a,c)$$$ and $$$(b,d)$$$ where $$$a<b<c<d$$$.
Secondly, if we delete ranges $$$(a,d)$$$ and $$$(b,c)$$$ where $$$a<b<c<d$$$, we must have deleted range $$$(b,c)$$$ before deleting range $$$(a,d)$$$. Yet, the effect of deleting range $$$(b,c)$$$ and then $$$(a,d)$$$ is the same as only deleting $$$(a,d)$$$.
Therefore, we can assume that in an optimal solution, the ranges we delete are all disjoint. Therefore, we want to find some disjoint range $$$[l_1,r_1],[l_2,r_2],...,[l_m,r_m]$$$ such that $$$a_{l_i}=a_{r_i}$$$ and $$$\sum (r_i-l_i+1)$$$ is maximized.
We can write a DP. $$$dp_i$$$ denotes the minimum number of points we do not delete when considering the subarray $$$a[1\ldots i]$$$. We have $$$dp_i=\min(dp_{i-1}+1,\min\{dp_j|a_{j+1}=a_i,j+1<i\})$$$.
This dp can be calculated in $$$O(n)$$$ since we can store, for each $$$x$$$, the minimum $$$dp_j$$$ that satisfies $$$a_{j+1}=x$$$.
#include <bits/stdc++.h>
using namespace std;
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
int T;
cin >> T;
while (T--) {
const int N = 200000 + 5;
int n, a[N], dp[N], buc[N];
cin >> n;
dp[0] = 0;
for (int i = 1; i <= n; i++) buc[i] = 0x3f3f3f3f;
for (int i = 1; i <= n; i++) {
cin >> a[i];
dp[i] = min(dp[i - 1] + 1, buc[a[i]]);
buc[a[i]] = min(buc[a[i]], dp[i - 1]);
}
cout << n - dp[n] << '\n';
}
}
1842D - Tenzing and His Animal Friends
Consider the restrictions on $$$u$$$, $$$v$$$, and $$$y$$$ as a weighted edge between $$$u$$$ and $$$v$$$ with weight $$$y$$$. Obviously, the final answer will not exceed the shortest path from $$$1$$$ to $$$n$$$.
One possible approach to construct the solution is to start with the set $$$\{1\}$$$ and add vertices one by one. If $$$i$$$ is added to the set at time $$$T_i$$$, then we need to ensure that $$$|T_u-T_v|\leq y$$$ for any edge between $$$u$$$ and $$$v$$$ in the set. This can be modeled as a system of difference constraints and solved using shortest path algorithms.
To be more specific, we can add vertices in increasing order of their distances from $$$1$$$. The time for each set can be calculated as the difference between the distances from $$$1$$$ to the two adjacent vertices in the sorted order.
#include <bits/stdc++.h>
using namespace std;
int n, m;
long long dis[100][100];
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
cin >> n >> m;
memset(dis, 0x3f, sizeof dis);
while (m--) {
int u, v, w;
cin >> u >> v >> w, u--, v--;
dis[u][v] = dis[v][u] = w;
}
for (int i = 0; i < n; i++) dis[i][i] = 0;
for (int k = 0; k < n; k++)
for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++)
dis[i][j] = min(dis[i][j], dis[i][k] + dis[k][j]);
if (dis[0][n - 1] > 1e18)
cout << "inf", exit(0);
int ord[100];
iota(ord, ord + n, 0);
sort(ord + 1, ord + n, [](int a, int b) {
return dis[0][a] < dis[0][b];
});
string s(n, '0');
vector<pair<string, int>> ans;
for (int i = 0; i < n - 1; i++) {
int u = ord[i], v = ord[i + 1];
s[u] = '1';
ans.emplace_back(s, dis[0][v] - dis[0][u]);
if (v == n - 1) break;
}
cout << dis[0][n - 1] << ' ' << ans.size() << '\n';
for (auto [s, t] : ans)
cout << s << ' ' << t << '\n';
}
1842E - Tenzing and Triangle
Observe that the triangles can be assumed pairwise disjoint: if two triangles were not disjoint, we could merge them into a single larger one so that the cost used is less. Therefore, we can consider doing DP.
The oblique side of the triangle is a segment on the line $$$y=k-x$$$. Therefore, we use the interval $$$[L,R]$$$ to represent the triangle with the upper left corner at $$$(L,k-L)$$$ and the lower right corner at $$$(R,k-R)$$$.
First, suppose that all points will generate costs. After covering points with a triangle, the costs can be reduced. Let $$$f(L,R)$$$ represent the sum of costs of points covered by triangle $$$[L,R]$$$ minus $$$A\times (R-L)$$$. We need to find several intervals $$$[l_1,r_1],[l_2,r_2],\cdots,[l_m,r_m]$$$ without common parts and maximize $$$\sum f(l_i,r_i)$$$.
Let $$$dp_i$$$ represent the maximum value of $$$\sum f(l_i,r_i)$$$ when considering the prefix $$$[1,i]$$$. There are two transitions:
- If $$$i$$$ is not covered by any interval, then $$$dp_i\leftarrow dp_{i-1}$$$.
- If $$$i$$$ is covered by interval $$$[j+1,i]$$$, then $$$dp_i\leftarrow dp_j+f(j+1,i)$$$.
Enumerate $$$i$$$ from small to large, and maintain $$$g_j=dp_j+f(j+1,i)$$$. When moving from $$$i-1$$$ to $$$i$$$, $$$g$$$ changes as follows:
- Subtract $$$A$$$ from $$$g_{0\dots i-1}$$$.
- For each point $$$(x,k-i)$$$, add the cost of the point to $$$g_{0\dots x}$$$.
$$$g$$$ needs to support interval addition and global maximum value (assuming that illegal positions are 0), which can be achieved using a segment tree.
Time complexity is $$$O((n+k)\log k)$$$.
Please first understand the approach in the tutorial. A triangle $$$[L, R]$$$ can cover a point $$$(x, y)$$$ iff $$$L\le x$$$ and $$$k-y\le R$$$. Therefore, point $$$(x,y)$$$ can be regarded as interval $$$[x,k-y]$$$. Now the problem is transformed into one where some intervals $$$[l,r]$$$ have a cost of $$$w$$$, and you can place some non-overlapping intervals. If $$$[l,r]$$$ is not included in any placed interval, then you need to pay a cost of $$$w$$$. Further transformation: you can pay a cost of $$$A$$$ to cover $$$[i,i+1]$$$, and for an interval $$$[l,r]$$$ with a cost of $$$w$$$, if interval $$$[l,r]$$$ is not completely covered, then you need to pay a cost of $$$w$$$. Try to use minimum cut to solve this problem:
- First establish a bipartite graph with $$$n$$$ left nodes representing the points in the original problem and $$$k$$$ right nodes representing intervals $$$[i,i+1]$$$.
- The source node is connected to each left node with an edge capacity equal to the node's cost.
- Each right node is connected to the sink node with an edge capacity of $$$A$$$.
- For each left node representing interval $$$[l,r]$$$, it is connected to right nodes $$$l,l+1,l+2,...,r-1$$$ with an edge capacity of infinity.
According to the maximum flow minimum cut theorem, let's consider how to find the maximum flow of this graph.
This is basically a maximum matching problem on a bipartite graph. Each left node can match positions on the right side within its interval, and each node has a limit on how many times it can be matched. A greedy algorithm: first sort all intervals in increasing order of the right endpoint, then consider each interval in turn and match it with positions within the interval from left to right. Specifically, let $$$cnt_i$$$ represent how many more times the $$$i$$$-th position on the right side can be matched. Initially, $$$cnt_{0\dots k-1}=A$$$. For each interval $$$[l,r]$$$ that can be matched at most $$$w$$$ times, repeatedly find the smallest $$$i$$$ in $$$[l,r-1]$$$ such that $$$cnt_i\ne 0$$$, and match $$$\min(cnt_i,w)$$$ times with $$$i$$$.
Use a Disjoint Set Union to query the smallest $$$i\ge l$$$ such that $$$cnt_i\ne 0$$$. The time complexity is $$$O(n\alpha(n))$$$.
The Method of Four Russians can also be used to achieve $$$O(n)$$$ time complexity. Divide the sequence into blocks of $$$64$$$, use bit operations and __builtin_ctzll for searching within a block, and use the Disjoint Set Union to skip blocks where $$$cnt_i$$$ is all $$$0$$$. In this way, the union operation is performed only $$$O(\frac n{\log n})$$$ times and the find operation only $$$O(n)$$$ times. It can be proved that in this case, the time complexity of the Disjoint Set Union is $$$O(n)$$$ instead of $$$O(n\alpha(n))$$$.
#include <bits/stdc++.h>
using namespace std;
const int N = 2e5 + 8, M = 1 << 18;
int n, k, A, c[M << 1][2], dp[N];
vector<pair<int, int>> p[N];
void insert(int i, int v) {
for (i += M; i; i >>= 1) c[i][0] = v, v = max(v, c[i ^ 1][0]);
}
void add(int i, int v) {
for (i += M + 1; i > 1; i >>= 1) {
int v1 = i & 1 ? v : 0;
c[i ^ 1][0] += v1, c[i ^ 1][1] += v1;
c[i >> 1][0] = max(c[i][0], c[i ^ 1][0]) + c[i >> 1][1];
}
}
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
cin >> n >> k >> A;
int sum = 0;
while (n--) {
int x, y, w;
cin >> x >> y >> w;
p[k - y].emplace_back(x, w);
sum += w;
}
for (int i = 1; i <= k; i++) {
add(i - 1, -A);
for (auto [x, w] : p[i]) add(x, w);
dp[i] = max(dp[i - 1], c[1][0]);
insert(i, dp[i]);
}
cout << sum - dp[k] << '\n';
}
#include <bits/stdc++.h>
using namespace std;
const int N = 2e5 + 8;
int n, k, A, cnt[N], pa[N];
vector<pair<int, int>> v[N];
int find(int x) {
return !pa[x] ? x : pa[x] = find(pa[x]);
}
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
cin >> n >> k >> A;
while (n--) {
int x, y, w;
cin >> x >> y >> w;
v[k - y].emplace_back(x, w);
}
for (int i = 0; i < k; i++) cnt[i] = A;
int ans = 0;
for (int r = 1; r <= k; r++)
for (auto [l, w] : v[r])
while (l < r) {
int tmp = min(w, cnt[l]);
ans += tmp, cnt[l] -= tmp, w -= tmp;
if (!cnt[l]) l = pa[l] = find(l + 1);
if (!w) break;
}
cout << ans << '\n';
}
#include <bits/stdc++.h>
struct IO {
static const int inSZ = 1 << 17;
char inBuf[inSZ], *in1, *in2;
inline __attribute((always_inline))
int read() {
if(__builtin_expect(in1 > inBuf + inSZ - 32, 0)) {
auto len = in2 - in1;
memcpy(inBuf, in1, len);
in1 = inBuf, in2 = inBuf + len;
in2 += fread(in2, 1, inSZ - len, stdin);
if(in2 != inBuf + inSZ) *in2 = 0;
}
int res = 0;
unsigned char c = *in1++;
while(res = res * 10 + (c - 48), (c = *in1++) >= 48);
return res;
}
IO() {
in1 = inBuf;
in2 = in1 + fread(in1, 1, inSZ, stdin);
}
} IO;
inline int read() { return IO.read(); }
using namespace std;
const int N = 2e5 + 8, N2 = N / 64 + 8;
int n, k, A, pts[N][3], buc[N], LW[N][2];
int cnt[N], pa[N2], Rp[N];
uint64_t mask[N2];
int find(int x) {
return pa[x] < 0 ? x : pa[x] = find(pa[x]);
}
int main() {
n = read(), k = read(), A = read();
for (int i = 0; i < n; i++) {
pts[i][1] = read();
pts[i][0] = k - read();
pts[i][2] = read();
buc[pts[i][0]]++;
}
for (int i = 1; i <= k; i++) buc[i + 1] += buc[i];
for (int i = 0; i < n; i++) {
int t = --buc[pts[i][0]];
memcpy(LW[t], pts[i] + 1, 8);
}
for (int i = 0; i < k; i++) cnt[i] = A;
memset(mask, 0xff, sizeof mask);
memset(pa, -1, sizeof pa);
iota(Rp, Rp + N2, 0);
int ans = 0;
for (int r = 1; r <= k; r++)
for (int i = buc[r]; i < buc[r + 1]; i++) {
int l = LW[i][0], w = LW[i][1];
ans += w;
int lb = Rp[find(l >> 6)], rb = r >> 6;
auto S0 = mask[lb];
if (lb == l >> 6) S0 &= ~0ULL << (l & 63);
while (lb < rb) {
auto S = S0;
for (; S; S &= S - 1) {
int p = lb * 64 + __builtin_ctzll(S);
int tmp = min(w, cnt[p]);
cnt[p] -= tmp, w -= tmp;
if (!w) break;
}
mask[lb] ^= S0 ^ S;
if (!w) break;
int nxt = find(lb + 1);
if (!mask[lb]) {
lb = find(lb);
if (pa[nxt] > pa[lb]) swap(nxt, lb), Rp[nxt] = Rp[lb];
pa[nxt] += pa[lb], pa[lb] = nxt;
}
lb = Rp[nxt], S0 = mask[lb];
}
if (w != 0 && lb == rb) {
S0 &= (1ULL << (r & 63)) - 1;
auto S = S0;
for (; S; S &= S - 1) {
int p = lb * 64 + __builtin_ctzll(S);
int tmp = min(w, cnt[p]);
cnt[p] -= tmp, w -= tmp;
if (!w) break;
}
mask[lb] ^= S0 ^ S;
}
ans -= w;
}
cout << ans << '\n';
}
1842F - Tenzing and Tree
Let $$$root$$$ be the centroid of all black vertices and $$$size_i$$$ be the number of black vertices in the subtree of node $$$i$$$.
Then the value is $$$\sum_{i\neq root} k-2\cdot size_i=(n-1)\cdot k-2\cdot\sum_{i\neq root} size_i$$$.
Consider painting node $$$i$$$ black; the total contribution to all of its ancestors is $$$2\cdot depth_i$$$, where $$$depth_i$$$ is the distance from $$$root$$$ to $$$i$$$.
Since we want to maximize the value, we can greedily select the node with the smallest $$$depth_i$$$.
But how do we ensure that $$$root$$$ is the centroid after selecting other black vertices? We can simply take the maximum of all possible $$$root$$$ because if a node is not the centroid, some edges will have a negative weight, making it worse than the optimal answer.
Enumerate all possible $$$root$$$ and use BFS to add vertices based on their distance to $$$root$$$. The complexity is $$$O(n^2)$$$.
#include <bits/stdc++.h>
using namespace std;
const int N = 5000 + 8;
int n, ans[N];
vector<int> G[N];
void bfs(int u) {
static int q[N], dis[N];
memset(dis, -1, sizeof dis);
q[1] = u, dis[u] = 0;
for (int l = 1, r = 2; l < r; l++) {
u = q[l];
for (int v : G[u]) if (dis[v] < 0)
dis[v] = dis[u] + 1, q[r++] = v;
}
int sum = 0;
for (int i = 1; i <= n; i++) {
sum += dis[q[i]];
ans[i] = max(ans[i], (n - 1) * i - sum * 2);
}
}
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
cin >> n;
for (int i = 0; i < n - 1; i++) {
int u, v;
cin >> u >> v, u--, v--;
G[u].push_back(v), G[v].push_back(u);
}
for (int i = 0; i < n; i++) bfs(i);
for (int i = 0; i <= n; i++)
cout << ans[i] << ' ';
}
1842G - Tenzing and Random Operations
Before starting to solve this problem, let's establish two basic properties:
- For two completely independent random variables $$$x_1,x_2$$$, we have $$$E(x_1x_2) = E(x_1)E(x_2)$$$.
- For $$$(a+b)\times (c+d)$$$, we have $$$E((a+b)\times (c+d)) = E(ac) + E(ad) + E(bc) + E(bd)$$$.
Returning to this problem, let $$$x_{i,j}$$$ be a random variable: its value is $$$v$$$ when the $$$i$$$-th operation sets $$$a_j$$$ to $$$a_j + v$$$, otherwise it is $$$0$$$.
Then note that the answer is the expected value of $$$\prod_{i=1}^{n}(a_i+\sum_{j=1}^{m}x_{j,i})$$$.
Applying the second property above to split the product, each term is a product of some $$$a_i$$$ and $$$x$$$.
Specifically, each term has $$$n$$$ factors, and for each $$$i\in [1,n]$$$, either $$$a_i$$$ is one of its factors, or some $$$x_{j,i}$$$ is one of its factors.
Let's investigate the expectation of a specific term. Note that if $$$i_1\lt i_2$$$, then $$$E(x_{j,i_1}\times x_{j,i_2}) = E(x_{j,i_1})\times v$$$, that is, if $$$x_{j,i_1}$$$ is $$$0$$$ then the whole product is $$$0$$$, and if $$$x_{j,i_1}$$$ is $$$v$$$ then $$$x_{j,i_2}$$$ must be $$$v$$$.
Therefore, for all the $$$x$$$ factors in a term, we categorize them by the first index, i.e. we group all $$$x_{j,...}$$$ into category $$$j$$$. For each category, we only need to focus on the first variable. If it's $$$v$$$, then the remaining variables take value $$$v$$$, otherwise the result is $$$0$$$. Note that the variables in different categories are completely independent (because their values are determined by operations in two different rounds), so the expected product of the variables in two categories can be split into the product of the expected products of the variables within each category.
Our goal is to compute the expected sum of all the terms, which can be nicely combined with DP:
Let $$$dp(i,j)$$$ be the total expected value over all terms whose first $$$i$$$ factors have been determined and in which $$$j$$$ categories have appeared at least once (a variable from a category that has already appeared contributes $$$v$$$, while the first variable of a new category at position $$$i+1$$$ contributes $$$\frac{i+1}{n}\cdot v$$$). The transition can be calculated in $$$O(1)$$$, depending on whether we append $$$a_{i+1}$$$ or some $$$x_{\cdot,i+1}$$$ to each term; in the latter case, we distinguish whether the variable belongs to one of the $$$j$$$ categories that have already appeared or to the other $$$m-j$$$ categories. The time complexity is $$$O(n\times \min(n,m))$$$.
#include <bits/stdc++.h>
using namespace std;
const int N = 5000 + 8, P = 1e9 + 7;
int n, m, v, a[N], coef[N], dp[N][N];
long long Pow(long long a, int n) {
long long r = 1;
while (n) {
if (n & 1) r = r * a % P;
a = a * a % P, n >>= 1;
}
return r;
}
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
cin >> n >> m >> v;
for (int i = 1; i <= n; i++) cin >> a[i];
dp[0][0] = 1;
for (int i = 1; i <= n; i++) {
auto coef = i * Pow(n, P - 2) % P * v % P;
for (int j = 0; j < i; j++) {
dp[i][j + 1] = dp[i - 1][j] * coef % P * (m - j) % P;
dp[i][j] = (dp[i][j] + dp[i - 1][j] * (a[i] + 1LL * j * v % P)) % P;
}
}
int ans = 0;
for (int i = 0; i <= n; i++)
(ans += dp[n][i]) %= P;
cout << ans << '\n';
}
1842H - Tenzing and Random Real Numbers
Ignoring time complexity for now, how do we calculate the answer? The first step is to enumerate which variables are $$$<0.5$$$ and which variables are $$$>0.5$$$.
Sort all variables $$$x_i$$$ by $$$\min(x_i,1-x_i)$$$ and enumerate the sorted result (referring to the permutation).
Suppose no variable equals $$$0.5$$$ (the probability of being equal to $$$0.5$$$ is $$$0$$$). Variables less than $$$0.5$$$ are called white vertices, and those greater than $$$0.5$$$ are called black vertices. Each black-and-white coloring is equiprobable, so we can calculate the probability of satisfying all conditions for each coloring and then take the average.
For two variables less than $$$0.5$$$, the condition of $$$\le 1$$$ is always satisfied, and the condition of $$$\ge 1$$$ is never satisfied. Therefore, we do not need to consider the conditions between same-colored points. The condition between white vertex $$$u$$$ and black vertex $$$v$$$, $$$x_u+x_v\le 1$$$, is satisfied only when $$$x_u\le 1-x_v$$$. Let $$$y_u=\min(x_u,1-x_u)=\begin{cases}x_u&(u\text{ is white})\\1-x_u&(u\text{ is black})\end{cases}$$$, then $$$y_u$$$ can be regarded as a random variable in $$$[0,0.5)$$$, for $$$\le 1$$$ condition, the white vertex's $$$y$$$ must be less than or equal to the black vertex's $$$y$$$, so we add an edge from the white vertex to the black vertex; for $$$\ge 1$$$ condition, we add an edge from the black vertex to the white vertex.
We get a directed graph that restricts the size relationship of $$$y_{1\cdots n}$$$. Suppose that sorting $$$y_{1\cdots n}$$$ from small to large is $$$y_{p_1},y_{p_2},\cdots,y_{p_n}$$$, then each permutation $$$p$$$ is equiprobable, and this $$$p$$$ contributes if and only if it is a topological sort, so the probability that satisfies all conditions is the number of topological sorts divided by $$$n!$$$.
Now the problem has been transformed into a counting problem. For each coloring, count the total number of topological sorts. Instead of enumerating colorings directly, we enumerate topological sorts directly: enumerate a permutation $$$p$$$ such that $$$y_{p_1}<y_{p_2}<\cdots<y_{p_n}$$$ and count the number of colorings that satisfy the conditions. It can be found that a $$$\le 1$$$ condition forces the variable in the earlier position of $$$p$$$ to be less than $$$0.5$$$, and a $$$\ge 1$$$ condition forces the variable in the earlier position of $$$p$$$ to be greater than $$$0.5$$$.
Then we can use bit-mask DP. Let $$$dp_{mask}$$$ represent that we have added all vertices in $$$mask$$$ into the topological sort. We enumerate the newly added vertex $$$u$$$ for the transition. If all variables having $$$\le 1$$$ conditions with it are included in $$$mask$$$, it can be colored black; if all variables having $$$\ge 1$$$ conditions with it are included in $$$mask$$$, it can be colored white.
Time complexity is $$$O(2^n n)$$$.
#include <iostream>
const int P = 998244353;
long long Pow(long long a, int n) {
long long r = 1;
while (n) {
if (n & 1) r = r * a % P;
a = a * a % P, n >>= 1;
}
return r;
}
inline void inc(int& a, int b) {
if((a += b) >= P) a -= P;
}
int n, m, G[20][2], f[1 << 20];
int main() {
std::cin >> n >> m;
while (m--) {
int t, i, j;
std::cin >> t >> i >> j;
i--, j--;
G[i][t] |= 1 << j;
G[j][t] |= 1 << i;
}
f[0] = 1;
for (int S = 0; S < 1 << n; S++)
for (int i = 0; i < n; i++) if (~S >> i & 1) {
if ((G[i][0] | S) == S) inc(f[S | 1 << i], f[S]);
if ((G[i][1] | S) == S) inc(f[S | 1 << i], f[S]);
}
long long t = 1;
for (int i = 1; i <= n; i++) t = t * i * 2 % P;
std::cout << f[(1 << n) - 1] * Pow(t, P - 2) % P << '\n';
}
1842I - Tenzing and Necklace
How to solve it if you want to cut exactly $$$m$$$ edges ?
DP+divide and conquer
Add a constraint: "you must cut off $$$m$$$ edges".
Consider enumerating the minimum cut edges from small to large.
Suppose the minimum cut edge chosen is $$$a_1$$$, and the subsequent optimal solution is $$$a_2, a_3, ..., a_m$$$.
If another minimum cut edge is selected: $$$b_1$$$, and the subsequent optimal solution is $$$b_2, b_3, ..., b_m$$$.
Assume that $$$a_i<a_{i+1}, b_i<b_{i+1}, b_1>a_1$$$.
1. It is possible to only adjust $$$b_2,b_3, ..., b_m$$$, so that $$$\forall_{1\leq i\leq m} b_i\geq a_i$$$, and the total cost after adjustment remains unchanged.
The adjustment method is as follows:
Find the smallest $$$i$$$ such that $$$b_i<a_i$$$, and find the first $$$j$$$ such that $$$b_j\geq a_j$$$ after $$$i$$$, if it does not exist, let $$$j=m+1$$$.
It can be observed that $$$(b_i,b_{i+1},b_{i+2},...,b_{j-1})$$$ can be replaced with $$$(a_i,a_{i+1},a_{i+2},...,a_{j-1})$$$, which is still a valid solution. Moreover, the solution $$$(a_i,a_{i+1},a_{i+2},...,a_{j-1})$$$ can also be replaced with $$$(b_i,b_{i+1},b_{i+2},...,b_{j-1})$$$, because $$$b_{i-1}\geq a_{i-1}$$$ and $$$b_j\geq a_j$$$.
Since $$$a$$$ and $$$b$$$ are both optimal solutions with fixed $$$a_1$$$ and $$$b_1$$$, $$$w_{b_i}+w_{b_{i+1}}+...+w_{b_{j-1}}=w_{a_i}+w_{a_{i+1}}+...+w_{a_{j-1}}$$$. Therefore, replacing $$$(b_i,b_{i+1},b_{i+2},...,b_{j-1})$$$ with $$$(a_i,a_{i+1},a_{i+2},...,a_{j-1})$$$ does not increase the total cost.
Repeat the above adjustment until there is no $$$b_i<a_i$$$.
Similarly, it can be proven that only adjusting $$$a_2,a_3,...,a_m$$$ is feasible, so that $$$\forall_{1\leq i\leq m} b_i\geq a_i$$$, and the total cost after adjustment remains unchanged.
2. If $$$\forall_{1\leq i\leq m} b_i\geq a_i$$$ is already satisfied, it is possible to only adjust $$$b_2,b_3, ..., b_m$$$, so that $$$\forall_{1\leq i<m}a_i\leq b_i\leq a_{i+1}$$$, and the total cost after adjustment remains unchanged. Assume that $$$a_1<b_1\leq a_2$$$.
The adjustment method is as follows:
Find the smallest $$$i$$$ such that $$$b_i> a_{i+1}$$$, and find the first $$$j$$$ such that $$$b_j\leq a_{j+1}$$$ after $$$i$$$ (let $$$a_{m+1}=+\infty$$$).
$$$(b_i,b_{i+1},b_{i+2},...,b_{j-1})$$$ can be replaced with $$$(a_{i+1},a_{i+2},a_{i+3},...,a_{j})$$$, which is still a valid solution. Moreover, the solution $$$(a_{i+1},a_{i+2},a_{i+3},...,a_{j})$$$ can also be replaced with $$$(b_i,b_{i+1},b_{i+2},...,b_{j-1})$$$, because $$$b_{i-1}\leq a_{i}$$$ and $$$b_j\leq a_{j+1}$$$.
Since $$$a$$$ and $$$b$$$ are both optimal solutions with fixed $$$a_1$$$ and $$$b_1$$$, $$$w_{b_i}+w_{b_{i+1}}+...+w_{b_{j-1}}=w_{a_{i+1}}+w_{a_{i+2}}+...+w_{a_j}$$$. Therefore, replacing $$$(b_i,b_{i+1},b_{i+2},...,b_{j-1})$$$ with $$$(a_{i+1},a_{i+2},a_{i+3},...,a_{j})$$$ does not increase the total cost.
Similarly, it can be proven that only adjusting $$$a_2,a_3,...,a_m$$$ is feasible, so that $$$\forall_{1\leq i<m}a_i\leq b_i\leq a_{i+1}$$$, and the total cost after adjustment remains unchanged.
3. If $$$b_1> a_2$$$, it is possible to adjust $$$b_1,b_2,...,b_m$$$, so that $$$b_1\leq a_2$$$, and the total cost does not increase.
The adjustment method is as follows:
Find the smallest $$$j$$$ such that $$$b_j\leq a_{j+1}$$$ (let $$$a_{m+1}=+\infty$$$).
It can be observed that $$$(a_{2},a_{3},a_{4},...,a_{j})$$$ can be replaced with $$$(b_1,b_{2},b_{3},...,b_{j-1})$$$, which is still a valid solution. Moreover, the solution $$$(b_1,b_{2},b_{3},...,b_{j-1})$$$ can also be replaced with $$$(a_{2},a_{3},a_{4},...,a_{j})$$$, because $$$b_j\leq a_{j+1}$$$.
Since $$$a$$$ is the optimal solution with fixed $$$a_1$$$ and $$$b_1$$$, $$$w_{b_1}+w_{b_{2}}+...+w_{b_{j-1}}\geq w_{a_{2}}+w_{a_{3}}+...+w_{a_j}$$$. Therefore, replacing $$$(b_1,b_{2},b_{3},...,b_{j-1})$$$ with $$$(a_{2},a_{3},a_{4},...,a_{j})$$$ does not increase the total cost.
Combining the above conclusions, we can obtain a solution that must cut off $$$m$$$ edges:
Let $$$a_1=1$$$, find the optimal solution $$$a_1,a_2,a_3,...,a_m$$$.
Then, it can be assumed that all $$$b_i$$$ satisfy $$$a_i\leq b_i\leq a_{i+1}$$$.
A divide-and-conquer algorithm can be used. Let $$$solve((l_1,r_1),(l_2,r_2),(l_3,r_3),...,(l_m,r_m))$$$ represent the optimal solution for all $$$l_i\leq b_i\leq r_i$$$.
If $$$l_1>r_1$$$, then we are done. Otherwise, let $$$x=\lfloor\frac{l_1+r_1}{2}\rfloor$$$, we can use DP to calculate the cost and solution for $$$b_1=x$$$ in $$$O(\sum r_i-l_i+1)$$$ time complexity. Then, recursively calculate $$$solve((l_1,b_1-1),(l_2,b_2),(l_3,b_3),...,(l_m,b_m))$$$ and $$$solve((b_1+1,r_1),(b_2,r_2),(b_3,r_3),...,(b_m,r_m))$$$.
Time complexity analysis: $$$\sum r_i-l_i+1=(\sum r_i-l_i)+m$$$. If the total length of two adjacent parts were $$$\leq k$$$, they could be merged, so such a solution is definitely not optimal. Therefore, $$$m\leq 2\lceil\frac{n}{k}\rceil$$$. Assuming that the length of the first segment is $$$r_1-l_1+1=O(k)$$$, the time complexity is $$$O(n\log k+mk)=O(n\log k)$$$.
Finally, we need to calculate the solution for all possible $$$m$$$ and take the $$$\min$$$ as the final answer. After fixing the first cut edge, if the optimal solution cuts $$$m'$$$ edges then, similarly to the previous proof, any other solution can be adjusted to satisfy $$$|m-m'|\leq 1$$$ without increasing the total cost.
#include <bits/stdc++.h>
using namespace std;
const int N = 5e5 + 8;
int n, K, a[N], pre[N * 2];
long long dp[N * 2], ans;
vector<int> trim(vector<int> a, int L, int R) {
return vector(a.begin() + L, a.end() - R);
}
vector<int> init() {
static int q[N];
q[1] = 0;
for (int i = 1, l = 1, r = 1; i <= n; i++) {
if (q[l] < i - K) l++;
dp[i] = dp[q[l]] + a[i], pre[i] = q[l];
while (l <= r && dp[i] < dp[q[r]]) r--;
q[++r] = i;
}
ans = dp[n];
vector<int> res;
for (int i = n; i; i = pre[i]) res.push_back(i);
res.push_back(0), reverse(res.begin(), res.end());
return res;
}
vector<int> solve(int first, vector<int> L, vector<int> R) {
dp[first] = a[first];
int l = first, r = first;
long long val; int p;
auto checkMin = [&](int i) {
if (dp[i] < val) val = dp[i], p = i;
};
for (int i = 0; i < L.size(); i++) {
val = 1e18, p = 0;
for (int j = R[i]; j >= L[i]; j--) {
for (; r >= max(l, j - K); r--) checkMin(r + i);
dp[j + i + 1] = val + a[j];
pre[j + i + 1] = p;
}
l = L[i], r = R[i];
}
val = 1e18, p = 0;
for (int i = max(L.back(), n - K + first); i <= R.back(); i++)
checkMin(i + L.size());
ans = min(ans, val);
vector<int> res;
for (int i = L.size(); i; i--) res.push_back(p - i), p = pre[p];
reverse(res.begin(), res.end());
return res;
}
void divide(int l, int r, vector<int> L, vector<int> R) {
if (l > r) return;
int mid = (l + r) >> 1;
auto M = solve(mid, L, R);
divide(l, mid - 1, L, M), divide(mid + 1, r, M, R);
}
void divide(vector<int> p) {
p.push_back(n), divide(1, p[0], trim(p, 0, 1), trim(p, 1, 0));
}
int main() {
ios::sync_with_stdio(false), cin.tie(nullptr);
int T;
cin >> T;
while (T--) {
cin >> n >> K;
for (int i = 1; i <= n; i++) cin >> a[i];
a[0] = a[n];
auto p = init();
divide(trim(p, 1, 1));
divide(solve(0, trim(p, 0, 1), trim(p, 1, 0)));
if ((p.size() - 2) * K >= n)
divide(solve(0, trim(p, 1, 2), trim(p, 2, 1)));
cout << ans << '\n';
}
}
Problem D :|
What's wrong with it?
Because the idea is "do some greedy". I am pretty sure that 90% of those who solved it wrote a 100-200 line greedy, without (ANY) proof of why it works.
Isn't that the case with 90% of the greedy problems?
Editorial of D makes it seem easy, but I could not get it during the contest.
gawd bro
Can anyone explain problem E in detail?
I didn't understand the tutorial either.
it's a standard segtree + dp problem. my approach was moving along the x-axis from k to 0. let dp[i] be the optimum cost for points from i to k. to optimize dp[i] we can either draw a triangle from i to k-y where 0<=y<k-i (obviously), or we can just do dp[i] = dp[i+1] + c[i], where c[i] is the total cost of points having x coordinate i. let j (j>i) be the x coordinate till which we draw the triangle, so dp[i] = (j-i)*A + dp[j] + c[i<=x<j, 0<=y<k-j], where the last term is the cost of points strictly below the triangle and strictly to the left of j. we can write dp[i] = (-i*A) + (dp[j] + j*A + c[i<=x<j, 0<=y<k-j]), so as we move to the left we must precompute the second term for all the coordinates after i so that we can do a min query at i. the term dp[j] + j*A can be added just after computing dp[j], but the cost of points in c[i<=x<j, 0<=y<k-j] is added using lazy propagation whenever we encounter a new point.
My submission: 211064886
you can find similar problems at problem1, problem2; they don't involve a lazy tree so they are comparatively easier
eklavya_k sorry but can you explain how to do the desired stuff with the segment tree? whenever we encounter a point (x, y) we will add its cost to the interval [i, j], but how will we maintain the constraint on y, that is 0 <= y < k - j? It might be possible that some points in range [i, j] violate this.
if a point is at (i, y) you should update the cost in range [i+1, k-y-1], i.e. till j = k-y-1, so k-j = y+1 > y. other positions to the left of j will certainly have their y-axis range greater than y+1, so no range is violated. Draw a graph to understand better rather than equations.
Several fun facts:
Thanks for the round; the problems were good even if there were some issues with the tests (fwiw I've heard rumors that systests are weak for C and H, but I haven't had the chance to look at the submissions myself). Rough way to lose LGM, though ;-;
To add some context re: D, I believe my solution should FST anytime (a) vertices $$$1$$$ and $$$n$$$ are connected (i.e., the answer is not infinity) and (b) there exists a vertex connected to $$$n$$$ with weight $$$0$$$. This seems like the sort of thing that could easily be covered by multitesting, rather than a niche edge case that isn't likely to come up in pretests (in fact, even without multitesting I'm somewhat surprised it doesn't happen sooner).
My suggestion (for CF policy in general, not specific to this round) is that there should be a valid documented justification anytime multitesting is not used. The most common justification I can imagine is that pretests and systests are identical (and ideally, it would be nice if this could be enforced within Polygon if this is the reason multitesting is not being used, to prevent extra tests from accidentally being added to systests later). I could imagine allowing non-multitesting if e.g. multitesting makes the complexity analysis more confusing (though in such cases it seems like the problem would usually already be difficult enough to allow pretests = systest).
I think problem A is difficult too.
me too
In Problem A's solution: if sum(ai) < sum(bi), then the answer is Tenzing... typing mistake in tutorial (sorry for bad English). platelet
Thank you, it has been fixed.
How to solve G, H, I?
Here is how I solve G:
First, somehow I defined the answer as $$$\textsf{E}\left[\prod_{i=1}^{n}\left(a_i + v(X_i + Y_i + Z_i + \cdots)\right)\right]$$$,
where $$$X_i, Y_i, Z_i, \ldots \sim \textsf{Ber}(i / n)$$$. Note that $$$X_i, Y_j$$$ are independent, but $$$X_i, X_j$$$ are dependent.
It is clear that once we expand everything (and have $$$(m + 1)^n$$$ terms), we can use linearity of expectation to find the answer. Then I attempted to expand that on small $$$n$$$ and $$$m$$$, hoping to see the pattern.
I observed that for $$$i < j$$$, $$$\textsf{E}[X_j | X_i] = X_i$$$. That is, once we have some earlier term $$$X_i$$$, we don't have to worry about $$$X_j$$$ at all.
Then I came up with a dynamic programming solution, trying to calculate the answer for each suffix. By the symmetry of $$$X, Y, Z, \ldots$$$, I can define a state to be the length of the suffix and the number of variables ($$$X, Y, Z$$$) that were previously in our expression (a.k.a. "free") at that point. There are $$$\mathcal{O}(n^2)$$$ states, and all transitions can be done in $$$\mathcal{O}(1)$$$, as there are three cases to handle: the contribution comes from $$$a_i$$$, from a "free" $$$X_i$$$, or from a "new" $$$Y_i$$$.
Nvm
For G: I am aware of two ways. Someone in the AC discord mentioned this blog: https://codeforces.net/blog/entry/98624 for a good approach that does not use that much machinery.
My way used generating functions. I 1-index $$$a$$$ in the math below. Let $$$dp_{i}(m)$$$ (the naive dp, which can be easily done in something like $$$nm^2$$$) be sum of the values of the product of the first $$$i$$$ elements given that $$$m$$$ operations land there. We have that
It is easy enough to write this as a convolution: let $$$y_i = \sum_k dp_i(k)x^k/k!$$$
so that
(I am skipping some steps, this is just a rough sketch--the idea that I didn't see in contest was to send $$$vm$$$ to $$$v(m-c)+vc$$$, haha)
Now substitute: $$$z_i = e^{-ix}y_i$$$, and we have $$$z_i=(a_i+ivx)z_{i-1}+vxz'_{i-1}$$$.
Conveniently $$$z_0=1$$$. It is enough to compute the polynomials $$$z_i$$$ for $$$i\in [n]$$$ and just take
where the falling factorial on the right is easily maintained.
Beautiful solution! I have learned a lot from this. However, it should be $$$z_i=(a_i+ivx)z_{i-1}+vxz'_{i-1}$$$ (:
Oops! Fixed.
Try hacking this submission
Upd: Uphacked, yay!
For problem D, I brute forced the problem. I first noticed that if node 1 and node n weren't in the same component, then the game would last for an infinite amount of time. So from now on, we can assume that node 1 and n are in the same component. I started by looking at the set of all the nodes in the component except node n. If you look at the weighted edges between the set of current nodes and node n, you can see that we can set the minimum of those edges to be the amount of time spent playing with the set of current nodes. Then we have to update those edges by decrementing all of them by that value. This will force some new edges to have a value of 0. The nodes connected to those edges will have to be removed from the set of current nodes. In this way, you can keep simulating the process by looking at the edges between the set of current nodes and the set of removed nodes. Once node 1 is removed, you can end the process and print the answer. I'm not quite confident on how to prove that this will give the best answer, but I guess it worked. 210933986
can you tell me the reason why, if node 1 and node n are not in the same component, the game will last for an infinite amount of time?
we can use all nodes that are in the same component as node 1, and that can last an infinite amount of time.
I nearly got D — used int instead of long long :P
What a pity!
For A:
If $$$\sum a_i > \sum b_i$$$, Tsondu wins. Else if $$$\sum a_i < \sum b_i$$$, Tsondu wins.
I think the first "Tsondu" should be "Tenzing". Is it a typo?
Thank you, it has been fixed.
There is another solution for E. Let dp[i] be the answer for all points with x < i. Also, let f(i, j) be the sum of costs of points with j <= x < i and y < k - i. As triangles do not intersect, dp[i] = min(dp[i - 1] + f(i, i - 1), min(dp[j] + (i - j)A + f(i, j))) for all j < i. The first part of the formula means simply adding points with x = i - 1 to dp[i - 1]. The second part means there is a triangle with a vertex at (j, k - i). f(i, j) can be calculated with a Persistent Segment Tree in O(nlogn) precalculation and O(logn) per query. min(dp[j] + (i - j)A + f(i, j)) can be calculated with a Li Chao Tree. So the total time complexity is O(nlog^2n). Code: 210975752
Sadly, I misread E and did not know that k is given. Can the problem be solved in polylog time then?
As in choose some k then solve it? If so, then the min possible k is most likely optimal.
I mean you can choose any K in one operation
then you choose k=x+y at each step and remove each point at 0 cost
Yes. I am stupid.
A version where you're limited to some amount of triangles or that each triangle has a base cost added to the edge cost is interesting. It's easy to solve it in polynomial time using max flow I think, but it's quite a big polynomial. I wonder if there's some way to solve this one fast.
Video solutions:
A. Tenzing and Tsondu
B. Tenzing and Books
C. Tenzing and Balls
:(
I actually really liked problem D. I don't know how to explain it, it's just fun to implement problems like this one (At least in the messy way I did it).
Buuut the statement is just toooo unclear. It also seems like many people had their solutions broken by system tests.
:(
Yes, I still can not understand problem D at all!
which part of the statement was unclear to you (im kinda curious). I thought the statement was perfect and defined everything well.
For me, the ambiguous part is restriction No. 3. I can read the sentence more than a dozen times without really getting it. I imagine that if the problem were described more mathematically, it would be easier for readers to get the precise meaning in a shorter time.
really? is "sum of time among all sets such that $$$u \in S$$$ and $$$v \notin S$$$, plus time of all sets such that $$$u \notin S$$$ and $$$v \in S$$$" easier to understand than "time of exactly one of $$$u$$$, $$$v$$$"?
I read it as exactly one of --> "The total time that u plays" OR "The total time that v plays" must not exceed yi. Ofc this is incorrect, even the test cases throw that under the bus, but if one has to experiment with testcases to decipher the meaning of a problem, it clearly isn't clear. Thank you for your explanation of the statement, ironically it really was simpler and better and hence my upvote for putting this information.
Note that I am relatively inexperienced, and English is my 1.5-ish language.
That saved me a lot of time, but I really think it would be better if it were defined mathematically in problems, like you did. Otherwise, I lose a lot of time thinking in the wrong way.
I just forgot visited array in D and still passed.
Why am I the one to always get stuck on the weak pretests.
Anyways next round rated lol
Like these problems and +118 rating points:)
I did D 1842D - Tenzing and His Animal Friends using simulation. 210925507
Approach: maintain two sets, yes and no. Initially yes = {1, 2, 3, ..., n-1} and no = {n}.
i. Query mn: the minimum edge connecting vertices inside different sets
ii. If no such edge exists then output inf and exit
iii. Else if mn == 0 then we need to move the corresponding vertices from yes to no
iv. Else we 'play' a game with length=mn with all people inside yes and subtract mn from all edges between the sets
v. Break if 1 gets removed from yes
This means each iteration removes k>=1 friends from yes, which means at most n games are needed, and at most max(n)*max(yi) == 1e11 total play time.
Implementation-wise, I used an adjacency matrix for the graph and a binary string to indicate the yes set, resulting in O(n**3).
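For reference, a compact sketch of this simulation (my own reconstruction from the steps above, not the linked submission; mn is subtracted only from edges crossing the two sets, and the output format follows the editorial code):
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e18;
int main() {
    int n, m;
    cin >> n >> m;
    vector<vector<ll>> w(n + 1, vector<ll>(n + 1, INF));
    while (m--) {
        int u, v; ll y;
        cin >> u >> v >> y;
        w[u][v] = w[v][u] = y;
    }
    vector<bool> yes(n + 1, true);
    yes[n] = false; // the "no" set starts as {n}
    ll total = 0;
    vector<pair<string, ll>> games;
    while (yes[1]) {
        // mn: minimum remaining weight over edges crossing (yes, no)
        ll mn = INF;
        for (int u = 1; u <= n; u++) if (yes[u])
            for (int v = 1; v <= n; v++) if (!yes[v])
                mn = min(mn, w[u][v]);
        if (mn == INF) { cout << "inf\n"; return 0; } // friend 1 can play forever
        if (mn > 0) { // play with the whole "yes" set for mn units
            string s(n, '0');
            for (int u = 1; u <= n; u++) if (yes[u]) s[u - 1] = '1';
            games.push_back({s, mn});
            total += mn;
            for (int u = 1; u <= n; u++) if (yes[u])
                for (int v = 1; v <= n; v++) if (!yes[v] && w[u][v] < INF)
                    w[u][v] -= mn, w[v][u] -= mn;
        }
        // move every vertex with an exhausted crossing edge into "no"
        for (int u = 1; u <= n; u++) if (yes[u])
            for (int v = 1; v <= n; v++)
                if (!yes[v] && w[u][v] == 0) { yes[u] = false; break; }
    }
    cout << total << ' ' << games.size() << '\n';
    for (auto& [s, t] : games) cout << s << ' ' << t << '\n';
}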
Shouldn't it be O(n^2) as done here 211001107
I managed to get full points on problem A by literally implementing the game. I took the monsters from both Tenzing and Tsondu, quicksorted them, and took the maximum of each array. If one monster dies, decrease the number of monsters. Rinse and repeat.
For the second question, I am extremely unfamiliar with bitmasks, so I wasn't able to solve it :(
Oh wow. I just read the A tutorial. I also simply implemented the game. :)
As for problem B, well, here's a reason for you to study/practice bitmasks more!
Dear MikeMirzayanov, platelet, errorgorn, Alexdat2000
I have pinged you to report some suspicious submissions from 3 Indian guys from the same college (IIT Roorkee) in this contest. Their submissions are very similar in style and it looks like they've just changed variable names. Linking the submissions for reference:
Numinous — 210931590
fyre — 210935593
nano_nish — 210924576
all of them got similar ranks. Even their code for Problem C is the same.
in C I did a non-optimized DP and it worked in 998ms. I hope you fix the test cases. my solution tries the first 100 elements and the last 100 elements of each number. this is my code: https://codeforces.net/contest/1842/submission/210953786
Video Solutions for A-E
Hi ladies/bros I am facing difficulty understanding the editorial and the DP approach of $$$G$$$. Maybe because I am dumb... But would you please help me? I posted a blog on the MSE site.
Wow, I love E's code; it is elegant and precise! I will learn this style for fast and debug-free coding. Thank you, authors ^^
For Problem E, I defined dp[i] as the cost of removing all elements from the triangle of height i (ending at (k,0)).
The transitions will look like dp[i] = min((i-j-1)*A + dp[j] + sum[X = {k-i, k-j-1} : Y = {0 : j}]) (assuming that from (i) -> (j+1) is the hypotenuse of a type 1 triangle). We can store this information in a segment tree and query it; later we can update the i'th index with (dp[i] - i*A), and for the sum[X = {k-i, k-j-1} : Y = {0 : j}] term, we can update it when we go from i -> i+1 and get some new points to take care of, using lazy propagation.
Video Solution — https://www.youtube.com/watch?v=l1j3dwvopZs
Submission — https://codeforces.net/contest/1842/submission/211025258
Can someone explain task B in more detail? Where and why is x|y != x needed? Thank you.
If x|y != x , then y has a bit that x doesn’t have
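For example, with $$$x=5=101_2$$$ and $$$y=2=010_2$$$ we get $$$x|y=7\neq x$$$: bit $$$1$$$ is set in $$$y$$$ but not in $$$x$$$, so reading that book would flip a $$$0$$$ bit of the knowledge to $$$1$$$.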
I still can't understand the solution to $$$E$$$. The editorial for it is bad. Can someone explain it in an understandable way? ("It is possible to do it with a segment tree" is not an understandable way)
Alright, I solved it and will try to explain.
Firstly, let's mirror the picture across the line $$$y = k / 2$$$, so point $$$(x, y)$$$ becomes $$$(x, k - y)$$$. It will be easier to work with. Triangles can't intersect: it is better to merge them, as any point that belongs to one of the original two will also belong to the merged one.
We also will do one more preparation step. In the original problem each point either belongs to one of the triangles, or its cost has to be added to the answer. Change the problem: for every point, if it belongs to one of the triangles, we will subtract its cost from the answer, otherwise we do nothing. Then we will simply add the sum of costs of all points to this answer.
Let $$$dp[i]$$$ be the cost for all points whose $$$y \le i$$$ (don't forget, we flipped the $$$y$$$ coordinate); all these dp values will be non-positive, as we can just not make any triangles. Now we choose some $$$j \le i$$$: the $$$y$$$ coordinate down to which we draw a triangle (if $$$j = i$$$ the triangle is "empty", there will be no problems with it later). So the triangle is bounded by the lines $$$x = j$$$ and $$$y = i$$$. We can't make any additional triangle between $$$j$$$ and $$$i$$$, so we can add to the result only $$$dp[j]$$$. And finally, there are some points which are covered by this new triangle. Let $$$C[i][j]$$$ be the negative sum of costs of all points $$$(x, y)$$$ such that $$$x \ge j$$$ and $$$y \le i$$$. We are ready to write the initial transition formula: $$$dp[i] = \min_{j \le i}(dp[j] + (i - j) \cdot A + C[i][j])$$$. Rewrite: $$$dp[i] = i \cdot A + \min_{j \le i}(dp[j] - j \cdot A + C[i][j])$$$.
How do we find that $$$\min_{j \le i}(dp[j] - j \cdot A + C[i][j])$$$? Let's store these values in a segment tree. Then for each $$$i$$$ we do the following. Firstly, we add all points with $$$y = i$$$ to the structure. Assume we add point $$$(x, i)$$$. Then by the definition of $$$C$$$, its cost has to be added to (subtracted from) $$$C[i][j]$$$ for all $$$j \le x$$$ (we will not store the $$$i$$$ dimension of $$$C[i][j]$$$). Now we ask for the minimum on the segment $$$[0, i]$$$. Let it be $$$X$$$. We update the global answer with $$$X + i \cdot A$$$. And finally, we set the value at position $$$i$$$ to $$$X + i \cdot A - i \cdot A = X$$$.
Hope this is understandable. I'm quite confused by the number of submissions to this problem; it looks difficult to me.
Upd. Ough, they updated editorial. Now it is understandable.
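To make this concrete, here is a minimal self-contained sketch of the dp above using an ordinary lazy segment tree (range add, range min). It is my own illustration written from the explanation, not the commenter's code, and it assumes, as in the statement, that every point satisfies x + y < k:
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e18;
struct SegTree {
    int n;
    vector<ll> mn, lz;
    SegTree(int n) : n(n), mn(4 * n, INF), lz(4 * n, 0) {}
    void apply(int x, ll v) { mn[x] += v, lz[x] += v; }
    void push(int x) { apply(2 * x, lz[x]), apply(2 * x + 1, lz[x]), lz[x] = 0; }
    void add(int x, int lo, int hi, int l, int r, ll v) { // add v on [l, r]
        if (r < lo || hi < l) return;
        if (l <= lo && hi <= r) return apply(x, v);
        push(x);
        int mid = (lo + hi) / 2;
        add(2 * x, lo, mid, l, r, v), add(2 * x + 1, mid + 1, hi, l, r, v);
        mn[x] = min(mn[2 * x], mn[2 * x + 1]);
    }
    ll query(int x, int lo, int hi, int l, int r) { // min on [l, r]
        if (r < lo || hi < l) return INF;
        if (l <= lo && hi <= r) return mn[x];
        push(x);
        int mid = (lo + hi) / 2;
        return min(query(2 * x, lo, mid, l, r), query(2 * x + 1, mid + 1, hi, l, r));
    }
    void add(int l, int r, ll v) { add(1, 0, n - 1, l, r, v); }
    ll query(int l, int r) { return query(1, 0, n - 1, l, r); }
    void assign(int p, ll v) { add(p, p, v - query(p, p)); } // point assignment via a delta
};
int main() {
    int n, k;
    ll A;
    cin >> n >> k >> A;
    vector<vector<pair<int, ll>>> pts(k + 1); // pts[i]: points whose flipped coordinate k - y equals i
    ll total = 0;
    for (int i = 0; i < n; i++) {
        int x, y;
        ll c;
        cin >> x >> y >> c;
        pts[k - y].push_back({x, c});
        total += c;
    }
    // position j stores dp[j] - j*A - (costs of points with j < k-y <= current i and x >= j)
    SegTree st(k + 1);
    vector<ll> dp(k + 1, 0); // dp[i] <= 0: best "savings minus triangle cost" over points with k-y <= i
    st.assign(0, 0);
    for (int i = 1; i <= k; i++) {
        for (auto [x, c] : pts[i]) st.add(0, x, -c); // covered by any triangle [j, i] with j <= x
        dp[i] = min(dp[i - 1], i * A + st.query(0, i - 1)); // no triangle ends at i, or triangle [j, i] does
        st.assign(i, dp[i] - i * A);
    }
    cout << total + dp[k] << '\n'; // every point pays its cost unless some triangle covered it
}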
I did 1842D — Tenzing and His Animal Friends using Dijkstra which feels very intuitive. 211041880
1- Initially you only have 1; you propagate the edges you have and push them into a priority queue
2- once you find a new friend at time T you push the old string, and the time that string is going to be played is just T - last; after that you make this friend '1' in your string because you can no longer play without him and not violate the rules
3- terminate when you find the nth friend or the pq is empty
4- if you didn't visit the nth friend print inf, else print the sum of times and the strings played
I think this approach is more intuitive since you are essentially simulating the process and playing with the lowest number of friends until you can't any more
For problem D: can anybody explain why this greedy solution works?
If 1 and n are not connected then answer is inf as Tenzing could always play with 1 for eternity
Using two vectors bad and good. Bad stores all the friends that cannot play any further and good stores the friends that can play now (Thus, at the start bad will contain only the last element n and good will have other elements)
Link to AC code
I believe we can prove it with induction. Let me know if I'm missing something, I found the problem quite hard for the number of solves.
Let the move $$$(S, t)$$$ represent the move "Select the subset $$$S$$$ with time $$$t$$$". We model the problem as in the editorial (with a weighted graph $$$G = (\{1,\dots, n\}, E)$$$)
Set $$$S_0 := \{1,\dots, n-1\}$$$. All the edges of the form $$$e = (\cdot, n)$$$ will only be affected by moves of the form $$$(S_0, \cdot)$$$. Thus we can't make optimal moves without doing the operation $$$(S_0, \cdot)$$$ as much as we can. So in other words, in the optimal moveset there will be the move(s) $$$(S_0, \cdot)$$$ somewhere (notice that the order in a valid moveset does not matter). So let's do these operations first, as much as we can (in the simulation/algorithm).
Now since we exhausted all the edges $$$e = (\cdot, n)$$$ (if one of them gets affected, all of them get affected) we can't make any more moves $$$(S_0, \cdot)$$$. This means that in the new moves $$$S'$$$, if $$$(u, n)\in E$$$ then $$$u\not\in S'$$$ (we can't pick such $$$u$$$ anymore).
The current state of the graph is equivalent to the graph state where all the edges connecting $$$v$$$ and $$$u$$$ instead connected $$$v$$$ and $$$n$$$ (with the same weight). This is because any valid move would still be valid and any non-valid move would, again, still be non-valid. So we can delete vertices $$$u$$$ such that $$$(u, n)\in E$$$ (since they cannot get picked) and for every edge $$$(v, u)$$$ ($$$v\neq n$$$) add $$$(v, n)$$$ (with the same weight). We can then move the induction step to the new graph (which will have fewer vertices).
I can't seem to understand problem G at all. Which are the factors of each term? Won't some of the terms' products consist of different $$$a_i$$$ or different $$$x$$$? Please can someone explain the solution in more detail?
I'd been stuck on F for a long time... I feel a little confused now.
For a case where the $$$root$$$ is not actually the centroid, the subset of $$$\{1, 2, \cdots, n\}$$$ we choose as black nodes is itself valid, but we used a wrong way to calculate the value of the tree.
Because, for every subset we choose, the result from the wrong calculation is always less than the result from a proper calculation, the wrong result doesn't affect the final answer.
Above is the part I can understand. And here is my problem:
We've proved that the wrong results are nothing to worry about, but how do we prove that we can reach the optimal result in the end?
Because if we pick the correct centroid the answer is correct
sry, where is the "Statements and editorials will be available in Chinese (Simplified) after the contest."? please, thank you. QAQ
On the announcement
thank you!
What is the meaning of $$$dp(i)=\min(dp(i-1)+1,\min\{dp(j)\mid a[j+1]=a[i],j+1<i\})$$$ in the C tutorial?
If we keep the $$$i$$$-th ball, then the value is $$$dp(i-1)+1$$$.
If we erase the $$$(j+1,j+2,\ldots,i)$$$-th balls, then the value is $$$\min\{dp(j)\mid a[j+1]=a[i],j+1<i\}$$$.
To determine the index j + 1, do we have to traverse from 0 to i - 2, or through every duplicate in the prefix? But that is O(N^2), right?
We can save the minimum $$$dp_j$$$ such that $$$a_{j+1}=x$$$, for every value $$$x$$$.
Didn't get it like how tho?
refer to the standard solution for better understanding
why do we add plus one to dp[i-1]? isn't dp[i] storing the max number of balls that can possibly be removed at index i?
no, dp[i] stores the min number of balls that can be kept.
It appears that the time limit of 2 seconds for problem 1842F is too tight for languages other than C++. I have tried all the optimizations I can think of in a Java implementation, but still see TLEs (and a bunch of tests taking more than 1900 ms). Another piece of evidence: as of now, there are 10 pages of Accepted submissions, all in C++.
I managed to get Accepted once with https://codeforces.net/contest/1842/submission/211112289. There are at least 5 tests completed in 1996 ms, under the limit by only 4 ms! The problem would be friendlier with a time limit of 3 seconds or so instead.
Upd: Hacked.
In fact, there is an $$$O(n)$$$ solution for problem E (submission). I have explained it in the "Alternative Solution" section.
Who can translate problem D in a clear way for me? I can not understand it at all.
The solution is that we can construct the sets $$$s_1,s_2,s_3,...,s_k$$$, such that $$$s_i\subset s_{i+1}$$$.
So we only care about when each element is added to the set.
problem D is too difficult to understand:
Can any one explain for me the #3 restriction ? 1842D - Tenzing and His Animal Friends
In the third condition "total time that exactly one of $$$u_i$$$ and $$$v_i$$$ is playing with Tenzing" refers to the sum (time that $$$u_i$$$ is playing and $$$v_i$$$ is not playing) + (time that $$$v_i$$$ is playing and $$$u_i$$$ is not playing).
For Problem C:
Maybe we can directly calculate the balls that are selected.
We can do DP in the following way:
$$$f[i]=\max(f[i-1],\max\{f[j-1]+i-j+1 \mid a[i]=a[j],\ j<i\})$$$
The problem then turns into calculating the second part of the first max function.
By observing that part we can see that only $$$f[j-1]-j+1$$$ is needed.
So we can use $$$g[x]$$$ to store the position $$$j$$$ with $$$a[j]=x$$$ that makes $$$f[j-1]-j+1$$$ maximum.
Actually it's similar to the tutorial, but just happy to share a maybe different solution!
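A minimal sketch of this maximization variant (my own illustration of the recurrence above, not the commenter's code; here g[x] stores the best value f[j-1]-j+1 directly rather than the position j):
#include <bits/stdc++.h>
using namespace std;
int main() {
    ios::sync_with_stdio(false), cin.tie(nullptr);
    int T;
    cin >> T;
    while (T--) {
        int n;
        cin >> n;
        vector<int> a(n + 1), f(n + 1, 0);
        vector<int> g(n + 1, INT_MIN); // g[x] = max of f[j-1] - j + 1 over j < i with a[j] == x
        for (int i = 1; i <= n; i++) cin >> a[i];
        for (int i = 1; i <= n; i++) {
            f[i] = f[i - 1]; // keep ball i
            if (g[a[i]] != INT_MIN)
                f[i] = max(f[i], g[a[i]] + i); // erase a segment [j, i] with a[j] == a[i], j < i
            g[a[i]] = max(g[a[i]], f[i - 1] - i + 1); // ball i may serve as a left endpoint j later
        }
        cout << f[n] << '\n'; // maximum number of removed balls
    }
}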
The Disjoint Set Union used in the alternative solution of E is special (at least I do think so). I didn't understand how to use it at first sight. I use set instead.
Disjoint Set Union is used in this way, making the representative element of each position equal to the first non-zero position after this position. You can check out the Alternative Code $$$O(n\alpha (n))$$$.
I solved C by maximizing the number of balls selected.
dp[i] = max balls we can select till ith index.
We keep track of the last index with value a[i], unordered_map<int, int> last.
We iterate the balls. dp[i] = dp[i-1]
if last.count(a[i]) > 0 then
let's say j is the last index of value a[i].
We have 2 options for choosing balls:
Take all balls from j to i + dp[j-1]
Take all balls from j+1 to i + dp[j]
Example:
a[] = {1, 4, 1, 2, 1}
index = 1, 2, 3, 4, 5
at index 5:
We can take balls at index 4, 5 + dp[3].
We can take balls at index 3, 4, 5 + dp[2]
Hey, I know I am replying to a 6 month old comment, but I was solving this problem now. I thought of exactly the same approach, but I don't get why you considered j to i and j+1 to i, I only considered j to i, and hence was failing. Here is my code for your reference. Glad, if you could explain why you took the latter case. Thanks in advance!
Bad round! Problem B is harder than 1100, more like 1300. I think C is easier than B! Geometry in E? Are you kidding me? Random in G and H? They are impossible! Codeton 4 was better :(
In editorial for H:
I think here the limitation should be on the previous variables.
ok, fixed.
Elaborating on problem F:
Firstly, observe that all the black nodes will form a single connected component, otherwise the tree value is suboptimal.
Let $$$\mathrm{sub}_v$$$ denote the number of black nodes in the subtree of $$$v$$$. The value of the tree is defined as $$$\sum |(k - \mathrm{sub}_v) - \mathrm{sub}_v|$$$.
Life would be easier if it were just $$$\sum (k - \mathrm{sub}_v) - \mathrm{sub}_v$$$ (without the absolute value). But there isn't always such a case, is there? It turns out that $$$(k - \mathrm{sub}_v) \ge \mathrm{sub}_v$$$ is always true if your root is the centroid of the component of black nodes (since by definition, every subtree of the centroid contains at most $$$\frac{k}{2}$$$ of the $$$k$$$ black nodes).
So now, assuming the root of the tree is this centroid, the value is given by $$$\sum (k - 2 \cdot \mathrm{sub}_v)$$$, which is $$$(n - 1) \cdot k - 2 \cdot \sum \mathrm{sub}_v$$$.
Given this, let's try every node as said root and fill black nodes around it one at a time, updating the answer for each $$$k$$$. Each black node $$$v$$$ contributes $$$-2 \cdot \mathrm{depth}_v$$$ to the answer. To maximize this, we choose the next black node to be the one with minimum depth at each stage.
In doing so, for every $$$k$$$ we will have found the answer for the case where the root also happens to be the centroid, and this value will be better (greater) than the values in all cases when it is not. So we don't have to worry about them.
Isn't the decimal value of 0x3f equal to 63? How is that used to initialize the dis matrix in problem D? Is it related to LLMAX in any way?
memset works not by setting all integers equal to the given value, but instead it sets all bytes to the given value. A long long variable is 64-bit and thus contains 8 bytes. Each of these bytes is set to 0x3f = 0b00111111. This means that each long long in the array will be set to 0b0011111100111111001111110011111100111111001111110011111100111111 = 0x3f3f3f3f3f3f3f3f, or $$$4\,557\,430\,888\,798\,830\,399$$$ in decimal. This is not related to LLMAX, but it's large enough to be considered "infinity".
Thanks for putting it out so clearly
I think the solution to D is too vague, and doesn't explain the specific meaning of the dist array!
Can somebody tell me why my code gives TLE on the 2nd test in C?: 216987462. Shouldn't the time complexity of the code be O(3*N + N*sqrt(N)) (input - N, calc - N, reset - N, solve - N*sqrt(N))?
UPD: I just shouldn't have used sqrt decomposition
Can you help with why this gives TLE: https://codeforces.net/contest/1842/submission/243680975
Please help me understand problem D. I think I have misunderstood the question.
Why is this a wrong solution to the sample case ? The total times of all animals are -
S[1] = 5, S[2] = 1, S[3] = 2, S[4] = 2
For constraints -
Exactly one of the two satisfy. What am I understanding wrongly?
Even though it's simple, in problem F, the proof for the black nodes being a connected component in an optimal coloring should have been mentioned in the editorial.
Let's assume we have two black nodes $$$u$$$ and $$$v$$$ in an optimal coloring such that the path between them has only white nodes.
Observation 1: Uncoloring either of $$$u$$$ or $$$v$$$ and coloring some other white node on the path from $$$u$$$ to $$$v$$$ in place of them only affects the value of edges lying on the path from $$$u$$$ to $$$v$$$.
An informal proof for this observation is that one can imagine all other nodes of the tree to be parts of trees hanging from the nodes on path from $$$u$$$ to $$$v$$$. All other edges will be part of these trees, and their value depends only on the number of black nodes in the subtree of their lower node, which is not affected by uncoloring/recoloring nodes on the path from $$$u$$$ to $$$v$$$.
Now, let's consider the edge $$$(u, w)$$$ such that $$$w$$$ is the first node on the path from $$$u$$$ to $$$v$$$. Let $$$x$$$ be the number of black nodes in the component on the side of $$$u$$$ and $$$y$$$ be the number of black nodes in the component on the side of $$$v$$$.
So, in both cases (which are mutually exclusive and exhaustive), we have shown that the coloring was not optimal, hence our assumption was false, and there cannot exist any two black nodes in an optimal coloring which have only white nodes on the path between them.
It follows that all black nodes must form a connected component in an optimal coloring.
Note: Even though this is simpler, another proof (which I used when solving this) is to root the tree and consider the lowest node in the optimal coloring at which the number of black nodes in its subtree becomes $$$\geq k/2$$$. This proof immediately leads to the rest of the solution.
Is there a problem in the checker's code for problem D?
Even the output and answer values match, but it's showing wrong answer.
Here is my submission:- https://codeforces.net/contest/1842/submission/238336885
Could anyone please help with why it is judged wrong answer even though the total time output is correct?
The checker is correct. It says that the total time does not match, so if you sum up the second number on each row, it won't be equal to the total time you printed at the start.
D greedy: keep a PQ of the constraints where one friend from the constraint is out of the set. Then you can always find the next constraint that will be met and update $$$s$$$, $$$t$$$, and the PQ accordingly. 239816163
problem D: its statement is way too ambiguous
Can anyone explain the m' of problem I in detail?
I can't understand the last sentence of the official editorial