1875A - Jellyfish and Undertale
We can use one tool each time the timer reaches $$$1$$$; then the answer will be $$$\sum_{i=1}^n \min(a - 1, x_i) + b$$$. This can be proven optimal: for each tool, if we use it when the timer is $$$c$$$, its contribution to the answer is $$$\min(x_i, a - c)$$$, and we can't use a tool when the timer is less than or equal to $$$0$$$ because the bomb will have exploded before that, so $$$c=1$$$ is optimal.
Time complexity: $$$O(n)$$$ per test case.
Memory complexity: $$$O(1)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
int n = 0, a = 0, b = 0;
long long ans = 0;
inline void solve(){
scanf("%d %d %d", &a, &b, &n);
ans = b;
for(int i = 0, x = 0 ; i < n ; i ++){
scanf("%d", &x);
ans += min(a - 1, x);
}
printf("%lld\n", ans);
}
int T = 0;
int main(){
scanf("%d", &T);
for(int i = 0 ; i < T ; i ++) solve();
return 0;
}
1874A - Jellyfish and Game
Let us define $$$\min(a)$$$ to be the minimum value of $$$a$$$ in the current round, $$$\max(a)$$$ to be the maximum value of $$$a$$$ in the current round, $$$\min(b)$$$ to be the minimum value of $$$b$$$ in the current round, $$$\max(b)$$$ to be the maximum value of $$$b$$$ in the current round, $$$\text{MIN}$$$ to be the minimum value of all the apples, $$$\text{MAX}$$$ to be the maximum value of all the apples.
By greedy and induction, we come to the following conclusion:
- If Jellyfish is the one operating this round: if $$$\min(a) < \max(b)$$$, she will swap these two apples; otherwise she will do nothing.
- If Gellyfish is the one operating this round: if $$$\max(a) > \min(b)$$$, he will swap these two apples; otherwise he will do nothing.
We consider who $$$\text{MAX}$$$ and $$$\text{MIN}$$$ will belong to after the first round.
In the first round:
- If $$$\max(a) < \max(b)$$$, then $$$\text{MAX} = \max(b)$$$. Because $$$\min(a) < \max(b)$$$, Jellyfish will swap these two apples, so $$$\text{MAX}$$$ belongs to Jellyfish.
- If $$$\max(a) > \max(b)$$$, then $$$\text{MAX} = \max(a)$$$. If $$$\min(a)=\max(a)$$$, then $$$\min(a) > \max(b)$$$ and Jellyfish will do nothing; otherwise Jellyfish won't swap away the apple with value $$$\text{MAX}$$$. In either case $$$\text{MAX}$$$ belongs to Jellyfish.
We can also show that $$$\text{MIN}$$$ belongs to Gellyfish, and the proof is symmetric with the above.
So in the second round $$$\min(b) = \text{MIN}$$$ and $$$\max(a) = \text{MAX}$$$, and we have $$$\text{MIN}<\text{MAX}$$$, so Gellyfish will swap these two apples. In the third round $$$\min(a)=\text{MIN}$$$ and $$$\max(b) = \text{MAX}$$$, so Jellyfish will swap these two apples.
So after the first round the game repeats with period two, with the two players swapping the minimum-value apple and the maximum-value apple back and forth.
So we only need to know the answer for $$$k = 1$$$ and $$$k = 2$$$.
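For instance (an illustrative example, not from the statement): with $$$a = [1, 5]$$$ and $$$b = [2, 3]$$$, Jellyfish swaps $$$1$$$ and $$$3$$$ in the first round, giving $$$a = [3, 5]$$$; Gellyfish swaps $$$5$$$ and $$$1$$$ in the second round, giving $$$a = [3, 1]$$$; the third round undoes the second, and so on. So the answer is $$$8$$$ for every odd $$$k$$$ and $$$4$$$ for every even $$$k$$$.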
Time complexity: $$$O(n + m)$$$ per test case.
Memory complexity: $$$O(n + m)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
const int N = 1000 + 5;
int n = 0, m = 0, k = 0, x = 0, y = 0, a[N] = {}, b[N] = {};
inline void solve(){
scanf("%d %d %d", &n, &m, &k); k --;
for(int i = 0 ; i < n ; i ++) scanf("%d", &a[i]);
for(int i = 0 ; i < m ; i ++) scanf("%d", &b[i]);
x = y = 0;
for(int i = 1 ; i < n ; i ++) if(a[i] < a[x]) x = i;
for(int i = 1 ; i < m ; i ++) if(b[i] > b[y]) y = i;
if(b[y] > a[x]) swap(a[x], b[y]);
if(k & 1){
x = 0, y = 0;
for(int i = 1 ; i < n ; i ++) if(a[i] > a[x]) x = i;
for(int i = 1 ; i < m ; i ++) if(b[i] < b[y]) y = i;
swap(a[x], b[y]);
}
long long ans = 0;
for(int i = 0 ; i < n ; i ++) ans += a[i];
printf("%lld\n", ans);
}
int T = 0;
int main(){
scanf("%d", &T);
for(int i = 0 ; i < T ; i ++) solve();
return 0;
}
1875C - Jellyfish and Green Apple
Firstly, if $$$n \geq m$$$, we can reduce the problem to the case $$$n < m$$$ by giving each person one whole apple at a time until there are not enough apples left.
We can calculate the mass of apples that each person ends up with as $$$\frac n m$$$.
Since it's cut in half every time, if $$$\frac m {\gcd(n, m)}$$$ is not an integral power of $$$2$$$, there's no solution.
Since each cut increases the number of apple pieces by exactly $$$1$$$, we just need to minimize the final number of apple pieces.
After the transformation, $$$\frac n m$$$ is less than $$$1$$$ and $$$\frac m {\gcd(n, m)}$$$ is an integral power of $$$2$$$, so we can uniquely find a set of positive integers $$$S$$$ satisfying $$$\frac n m = \sum_{i \in S} \frac 1 {2^i}$$$. This method can be proven optimal: if we take another multiset $$$T$$$ with the same sum and $$$T \neq S$$$, then for every element $$$x$$$ that appears twice or more we can improve the answer by removing two copies of $$$x$$$ from $$$T$$$ and adding one $$$x - 1$$$ to $$$T$$$; by repeating this process, $$$T$$$ eventually becomes $$$S$$$.
We can use $$$\text{__builtin_popcount()}$$$ to get $$$|S|$$$; the answer is $$$m \times |S|-n$$$.
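For example (an illustration, not a sample from the statement): for $$$n = 3, m = 8$$$ we have $$$\frac n m = \frac 3 8 = \frac 1 4 + \frac 1 8$$$, so $$$|S| = 2$$$ and the answer is $$$8 \times 2 - 3 = 13$$$: cut the $$$3$$$ apples into $$$12$$$ quarters ($$$3 + 6 = 9$$$ cuts), hand out $$$8$$$ of them, then cut the remaining $$$4$$$ quarters into $$$8$$$ eighths ($$$4$$$ more cuts) and hand those out.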
Time complexity: $$$O(\log m)$$$ per test case.
Memory complexity: $$$O(1)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
int n = 0, m = 0;
inline void solve(){
scanf("%d %d", &n, &m); n %= m;
int a = n / __gcd(n, m), b = m / __gcd(n, m);
if(__builtin_popcount(b) > 1) printf("-1\n");
else printf("%lld\n", 1ll * __builtin_popcount(a) * m - n);
}
int T = 0;
int main(){
scanf("%d", &T);
for(int i = 0 ; i < T ; i ++) solve();
return 0;
}
1875D - Jellyfish and Mex
We only care about the operations before $$$\text{MEX}(a)$$$ reaches $$$0$$$, because after that $$$m$$$ will never change.
Lemma. Before $$$\text{MEX}(a)$$$ reaches $$$0$$$, each time we choose a non-negative integer $$$x$$$ satisfying $$$x < \text{MEX}(a)$$$ and delete all occurrences of $$$x$$$ from $$$a$$$; $$$\text{MEX}(a)$$$ will then become $$$x$$$.
Proof. If $$$x > \text{MEX}(a)$$$, we can postpone this operation until after $$$\text{MEX}(a)$$$ becomes $$$0$$$; and if we don't delete all occurrences of an $$$x$$$, $$$\text{MEX}(a)$$$ won't change, so we can also postpone that operation.
So before $$$\text{MEX}(a)$$$ reaches $$$0$$$, the $$$x$$$ we delete is non-increasing.
This means we can solve the problem by dynamic programming. Let $$$dp_i$$$ represent the minimum value of $$$m$$$ when $$$\text{MEX}(a)=i$$$ and we haven't deleted any $$$x$$$ satisfying $$$x < i$$$.
Let $$$c_i$$$ represent the number of times $$$i$$$ appears in $$$a$$$; the transition is: $$$\forall j < i, dp_j \leftarrow dp_i + i \times (c_j - 1) + j$$$.
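For example (an illustration, not a sample from the statement): for $$$a = [0, 0, 1, 2, 3]$$$ we have $$$\text{MEX}(a) = 4$$$, $$$c_0 = 2$$$ and $$$c_1 = 1$$$. Deleting along $$$4 \to 1 \to 0$$$ gives $$$dp_1 = 0 + 4 \times (1 - 1) + 1 = 1$$$ and then $$$dp_0 = 1 + 1 \times (2 - 1) + 0 = 2$$$, while deleting the zeros immediately ($$$4 \to 0$$$) costs $$$4 \times (2 - 1) + 0 = 4$$$, so the answer is $$$2$$$.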
Time complexity: $$$O(n^2)$$$ per test case.
Memory complexity: $$$O(n)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll N = 5000 + 5, Inf = 0x3f3f3f3f3f3f3f3f;
ll n = 0, m = 0, a[N] = {}, dp[N] = {};
inline void init(){
for(ll i = 0 ; i <= n ; i ++) a[i] = 0, dp[i] = Inf;
n = m = 0;
}
inline void solve(){
scanf("%lld", &n);
for(ll i = 1, x = 0 ; i <= n ; i ++){
scanf("%lld", &x);
if(x < n) a[x] ++;
}
while(a[m]) m ++;
dp[m] = 0;
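// note: the transition below uses dp[i] + i * a[j] instead of dp[i] + i * (a[j] - 1) + j; the -i and +j terms telescope along the chain of chosen values, leaving only -(initial MEX), which is subtracted when printing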
for(ll i = m ; i >= 1 ; i --) for(ll j = 0 ; j < i ; j ++) dp[j] = min(dp[j], dp[i] + i * a[j]);
printf("%lld\n", dp[0] - m);
}
ll T = 0;
int main(){
memset(dp, 0x3f, sizeof(dp));
scanf("%lld", &T);
for(ll i = 0 ; i < T ; i ++) init(), solve();
return 0;
}
1874B - Jellyfish and Math
First of all, since $$$\text{and}, \text{or}, \text{xor}$$$ are all bitwise operations, each bit is independent of the others.
We define $$$a_i$$$ as the $$$i$$$-th bit of $$$a$$$, $$$b_i$$$ as the $$$i$$$-th bit of $$$b$$$, $$$c_i$$$ as the $$$i$$$-th bit of $$$c$$$, $$$d_i$$$ as the $$$i$$$-th bit of $$$d$$$, $$$m_i$$$ as the $$$i$$$-th bit of $$$m$$$, $$$x_i$$$ as the $$$i$$$-th bit of $$$x$$$, $$$y_i$$$ as the $$$i$$$-th bit of $$$y$$$. (in binary)
Lemma. For all $$$i \neq j$$$, if $$$(a_i, b_i, m_i) = (a_j, b_j, m_j)$$$ and $$$(c_i, d_i) \neq (c_j, d_j)$$$, the goal cannot be achieved.
Proof. Because after every operation we will still have $$$(x_i, y_i) = (x_j, y_j)$$$, we can't achieve the goal.
Since $$$(a_i, b_i, m_i)$$$ has only $$$2^3=8$$$ cases and $$$(c_i, d_i)$$$ has only $$$2^2=4$$$ cases, and some triples $$$(0/1, 0/1, 0/1)$$$ may not appear in $$$\{(a_i, b_i, m_i)\ |\ 0 \leq i \leq \log \max(a, b, c, d, m)\}$$$ at all (for such a triple there is no constraint on $$$(c_i, d_i)$$$, which gives a fifth state besides the four concrete values of $$$(c_i, d_i)$$$), there are only $$$(4+1)^8<4\times 10^5$$$ essentially different inputs. We can use BFS (breadth-first search) for preprocessing.
Time complexity: $$$O(5^8)$$$ for preprocessing and $$$O(\log \max(a, b, c, d, m))$$$ per test case.
Memory complexity: $$$O(5^8)$$$ for preprocessing and $$$O(1)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
const int S = 4e5 + 5, Inf = 0x3f3f3f3f;
int pw5[10] = {}, dp[S] = {};
queue<int> Q;
inline void checkmin(int &x, int y){
if(y < x) x = y;
}
inline int w(int mask, int i){
return (mask / pw5[i]) % 5;
}
inline int f(int a, int b, int m){
return (a << 2) | (b << 1) | m;
}
inline int g(int c, int d){
return (c << 1) | d;
}
inline int work(int mask, int opt){
int ret = 0;
for(int a = 0 ; a < 2 ; a ++) for(int b = 0 ; b < 2 ; b ++) for(int m = 0 ; m < 2 ; m ++){
int x = w(mask, f(a, b, m)), c = x >> 1, d = x & 1;
if(opt == 1) c = c & d;
else if(opt == 2) c = c | d;
else if(opt == 3) d = c ^ d;
else d = m ^ d;
ret += pw5[f(a, b, m)] * g(c, d);
}
return ret;
}
inline void init(){
pw5[0] = 1;
for(int i = 1 ; i <= 8 ; i ++) pw5[i] = pw5[i - 1] * 5;
memset(dp, 0x3f, sizeof(dp));
int mask = 0;
for(int a = 0 ; a < 2 ; a ++) for(int b = 0 ; b < 2 ; b ++) for(int m = 0 ; m < 2 ; m ++) mask += pw5[f(a, b, m)] * g(a, b);
dp[mask] = 0, Q.push(mask);
while(Q.size()){
int s = Q.front(); Q.pop();
for(int opt = 0 ; opt < 4 ; opt ++){
int t = work(s, opt);
if(dp[t] == Inf) dp[t] = dp[s] + 1, Q.push(t);
}
}
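// a base-5 digit equal to 4 at position (a, b, m) means this pattern does not appear in the input, so (c, d) is unconstrained there: take the minimum over the four concrete values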
for(int mask = 0 ; mask < pw5[8] ; mask ++) for(int i = 0 ; i < 8 ; i ++) if(w(mask, i) == 4){
for(int x = 1 ; x <= 4 ; x ++) checkmin(dp[mask], dp[mask - x * pw5[i]]);
break;
}
}
inline void solve(){
int A = 0, B = 0, C = 0, D = 0, M = 0, mask = pw5[8] - 1;
scanf("%d %d %d %d %d", &A, &B, &C, &D, &M);
for(int i = 0 ; i < 30 ; i ++){
int a = (A >> i) & 1, b = (B >> i) & 1, c = (C >> i) & 1, d = (D >> i) & 1, m = (M >> i) & 1;
if(w(mask, f(a, b, m)) == 4) mask -= (4 - g(c, d)) * pw5[f(a, b, m)];
else if(w(mask, f(a, b, m)) != g(c, d)){
printf("-1\n");
return;
}
}
printf("%d\n", (dp[mask] < Inf) ? dp[mask] : -1);
}
int T = 0;
int main(){
init();
scanf("%d", &T);
for(int i = 0 ; i < T ; i ++) solve();
return 0;
}
1874C - Jellyfish and EVA
Let's solve this problem by dynamic programming. Let $$$f_u$$$ denote the maximum probability of reaching city $$$n$$$ starting from city $$$u$$$.
The problem is how to transition. For city $$$u$$$, assume there are $$$d$$$ roads from city $$$u$$$, and the $$$i$$$-th road from city $$$u$$$ leads to city $$$v_i$$$.
Let's sort the $$$v_i$$$ using $$$f_{v_i}$$$ as the key, with larger $$$f_{v_i}$$$ coming first.
Define an array $$$a$$$ of $$$d$$$ elements equal to $$$[f_{v_1}, f_{v_2}, \dots, f_{v_d}]$$$; by this definition, $$$a$$$ is non-increasing.
That is, next we want to find an optimal solution such that the probability of going to city $$$v_i$$$ is $$$p_i$$$, maximizing the value of $$$\sum_{i=1}^d{a_i \times p_i}$$$.
Tips: the sum of $$$p_i$$$ may not be $$$1$$$, because it is possible to stay at city $$$u$$$ when $$$d$$$ is even.
For two choices of $$$p$$$, say $$$p_1$$$ and $$$p_2$$$, sometimes we can't say which one is better. For example:
$$$p_1 = [0.6, 0.2, 0.2], p_2 = [0.5, 0.5, 0]$$$
when $$$a=[1.0, 0.2, 0.2]$$$, $$$p_1$$$ is better than $$$p_2$$$. But when $$$a = [1.0, 0.8, 0]$$$, $$$p_2$$$ is better than $$$p_1$$$.
Tips. In fact, when $$$d=3$$$, $$$p$$$ has only one choice $$$[\frac 1 3, \frac 1 3, \frac 1 3]$$$, the above is just a hypothetical to illustrate the problem.
So when can we say, $$$p_1$$$ is always not worse than $$$p_2$$$?
Lemma. If there are two arrays $$$p_1$$$ and $$$p_2$$$, satisfying $$$\forall i \leq d, \sum_{j=1}^i{p_1}_j \geq \sum_{j=1}^i{p_2}_j$$$, $$$p_1$$$ is always not worse than $$$p_2$$$.
Proof. Consider $$$a_{d+1}$$$ as $$$0$$$ and define $$$a'_i = a_i - a_{i+1}$$$, $$${p_1'}_i = \sum_{j=1}^i{p_1}_j$$$ and $$${p_2'}_i = \sum_{j=1}^i{p_2}_j$$$. By Abel summation, $$$\sum_{i=1}^d a_i \times {p_1}_i = \sum_{i=1}^d a'_i \times {p_1'}_i$$$ and $$$\sum_{i=1}^d a_i \times {p_2}_i = \sum_{i=1}^d a'_i \times {p_2'}_i$$$. Because $$$a$$$ is non-increasing, $$$a'_i \geq 0$$$, and we have $$$\forall i \leq d, {p_1'}_i \geq {p_2'}_i$$$, so $$$\sum_{i=1}^d a_i \times {p_1}_i \geq \sum_{i=1}^d a_i \times {p_2}_i$$$.
Now the question is whether, for every $$$d$$$, there exists an array $$$p$$$ that is not worse than any other array.
For $$$d = 1$$$, $$$p = [1.0]$$$ satisfies the condition.
For $$$d = 2$$$, $$$p = [0.5, 0]$$$ satisfies the condition.
What about $$$d>2$$$? After the roads are chosen for the first time, the size of the problem becomes $$$d - 2$$$.
For example, when $$$d = 4$$$, assume Jellyfish chooses $$$v_1$$$ for the first time; then there are $$$4$$$ situations:
- When Asuka chooses $$$v_1$$$ they will go to $$$v_1$$$.
- When Asuka chooses $$$v_2$$$, the problem changes into the subproblem with $$$[v_3, v_4]$$$.
- When Asuka chooses $$$v_3$$$, the problem changes into the subproblem with $$$[v_2, v_4]$$$.
- When Asuka chooses $$$v_4$$$, the problem changes into the subproblem with $$$[v_2, v_3]$$$.
By calculation, $$$p = [0.25, 0.25, 0.125, 0]$$$, which also satisfies the condition.
Let's define $$$g_k$$$ as the best array $$$p$$$ when $$$d=k$$$, and $$$g'_k$$$ as the probabilities of going to each city except the city Jellyfish chooses first. By recursion, we can get $$$g'_k$$$ from $$$g_{k-2}$$$. After that, we insert $$$\frac 1 k$$$ into $$$g'_k$$$, keeping it non-increasing, and obtain $$$g_k$$$.
By calculating, we will find $$$\frac 1 k$$$ is always the first element in $$$g_k$$$. So when $$$k > 2$$$, we have the following transition:
$$$g_{k, 1} = \frac 1 k$$$
$$$\forall 1 < i \leq k, g_{k, i} = g_{k-2, i - 2} \times \frac {i - 2} k + g_{k-2, i - 1} \times \frac {k - i} k$$$
In the above we consider $$$g_{k, x}$$$ as $$$0$$$ for all $$$x = 0$$$ or $$$x > k$$$.
Because $$$g_{k-2}$$$ satisfies the condition, by greedy and induction we can show that $$$g_k$$$ satisfies the condition.
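As a quick check (a verification, not part of the original proof): from $$$g_2 = [\frac 1 2, 0]$$$ the transition gives $$$g_{4,1} = \frac 1 4$$$, $$$g_{4,2} = \frac 1 2 \times \frac 2 4 = \frac 1 4$$$, $$$g_{4,3} = \frac 1 2 \times \frac 1 4 = \frac 1 8$$$ and $$$g_{4,4} = 0$$$, matching the hand computation $$$p = [0.25, 0.25, 0.125, 0]$$$ above.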
Time complexity: $$$O(\max(n)^2)$$$ for preprocessing and $$$O(m \log n)$$$ per test case.
Memory complexity: $$$O(\max(n)^2)$$$ for preprocessing and $$$O(m)$$$ per test case.
#include<bits/stdc++.h>
using namespace std;
const int N = 5000 + 5;
int n = 5000, m = 0;
vector<vector<int> > G(N), Gx(N);
long double f[N] = {}, g[N][N] = {};
inline bool cmp(int u, int v){
return f[u] > f[v];
}
inline void work(int u){
if(u == n){
f[u] = 1.00;
return;
}
sort(G[u].begin(), G[u].end(), cmp);
int k = G[u].size();
for(int i = 0 ; i < k ; i ++){
int v = G[u][i];
f[u] += f[v] * g[k][i + 1];
}
}
inline void init(){
for(int u = 1 ; u <= n ; u ++){
f[u] = 0.00;
G[u].clear(), Gx[u].clear();
}
n = m = 0;
}
inline void solve(){
scanf("%d %d", &n, &m);
for(int i = 1, u = 0, v = 0 ; i <= m ; i ++){
scanf("%d %d", &u, &v);
if(u != n){
G[u].push_back(v);
Gx[v].push_back(u);
}
}
for(int u = n ; u >= 1 ; u --) work(u);
printf("%.12Lf\n", f[1]);
}
int T = 0;
int main(){
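// precompute g[d][i]: the probability of eventually travelling along the i-th best road when the current city has d outgoing roads (the array g from the editorial)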
for(int i = 1 ; i <= n ; i += 2) for(int j = 1 ; j <= i ; j ++) g[i][j] = 1.00 / i;
for(int i = 2 ; i <= n ; i += 2){
g[i][1] = 1.00;
for(int j = 1 ; j <= i ; j ++) g[i][j] /= i;
if(i + 2 <= n) for(int j = 1 ; j <= i ; j ++) g[i + 2][j + 1] += g[i][j] * (i - j + 1), g[i + 2][j + 2] += g[i][j] * j;
}
scanf("%d", &T);
for(int i = 1 ; i <= T ; i ++) init(), solve();
return 0;
}
1874D - Jellyfish and Miku
Let's assume that $$$a$$$ is given. We can use dynamic programming to solve the problem.
Let's define $$$f_i$$$ as the expected number of days Jellyfish needs to reach city $$$i$$$ from city $$$0$$$.
We will have:
$$$f_0=0, f_1=1$$$
$$$\forall i > 1, f_i=f_{i-1}+1+\frac {a_{i-1}} {a_{i-1}+a_i} \times (f_i - f_{i-2})$$$
Let's make it better:
- $$$f_i = f_{i-1} + 1 + \frac {a_{i-1}} {a_i} \times (f_{i-1}-f_{i-2}+1)$$$
What does this inspire us to do? Let's define $$$g_i=f_i-f_{i-1}$$$, then we will get:
$$$g_1=1$$$
$$$\forall i > 1, g_i = 1 + \frac {a_{i-1}} {a_i} \times (g_{i-1}+1)$$$
By induction, we will find $$$g_i = 1 + 2 \times \sum_{j=1}^{i-1} \frac {a_j} {a_i}$$$. According to the definition, $$$f_n = \sum_{i=1}^n g_i = n + 2 \times \sum_{i=1}^n\sum_{j=1}^{i-1} \frac{a_j}{a_i}$$$.
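As a quick sanity check (an illustration, not a sample from the statement): for $$$n = 2$$$ and $$$a = [1, 2]$$$ the formula gives $$$f_2 = 2 + 2 \times \frac{a_1}{a_2} = 3$$$, which matches the direct computation: one day to reach city $$$1$$$, and from city $$$1$$$ the expected time $$$E$$$ to reach city $$$2$$$ satisfies $$$E = 1 + \frac 1 3 (1 + E)$$$, i.e. $$$E = 2$$$.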
Then we can use dynamic programming to solve the problem itself.
Let's define $$$s_i = \sum_{j=1}^i a_j$$$, then $$$f_n = n + 2 \times \sum_{i=1}^n \frac{s_{i-1}}{a_i}$$$.
Let's define $$$dp_{i, x}$$$ as the minimum value of $$$\sum_{j=1}^i \frac{s_{j-1}}{a_j}$$$ when $$$s_i = x$$$.
We transition by enumerating the values of $$$a_{i+1}$$$; the transition is: $$$dp_{i+1, x+y} \leftarrow dp_{i, x} + \frac x y$$$.
But the time complexity is $$$O(n m^2)$$$; how can it become faster?
Let's take a closer look at $$$\sum_{i=1}^n\sum_{j=1}^{i-1} \frac{a_j}{a_i}$$$. If there exists $$$i < n$$$ satisfying $$$a_i > a_{i+1}$$$, we can swap $$$a_i$$$ and $$$a_{i+1}$$$ and the answer becomes better! So $$$a$$$ is non-decreasing in an optimal solution, which means $$$a_i \leq \frac m {n - i + 1}$$$.
Because $$$\sum_{i=1}^n \frac m i$$$ is $$$O(m \log n)$$$, if we only enumerate the possible values of $$$a_i$$$, the time complexity will be $$$O(m^2 \log m)$$$.
Time complexity: $$$O(m^2 \log m)$$$
Memory complexity: $$$O(nm)$$$
#include<bits/stdc++.h>
using namespace std;
const int N = 3000 + 5;
const long double Inf = 1e18;
int n = 0, m = 0;
long double dp[N][N] = {};
int main(){
scanf("%d %d", &n, &m);
for(int i = 0 ; i <= n ; i ++) for(int x = 0 ; x <= m ; x ++) dp[i][x] = Inf;
dp[0][0] = 0.00;
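// y enumerates a_i; since a is non-decreasing in an optimal solution, the remaining n - i + 1 values are all at least y, hence the bound x + y * (n - i + 1) <= m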
for(int i = 1 ; i <= n ; i ++) for(int x = 0 ; x <= m ; x ++) for(int y = 1 ; x + y * (n - i + 1) <= m ; y ++) dp[i][x + y] = min(dp[i][x + y], dp[i - 1][x] + 1.00 * x / y);
printf("%.12Lf", 2 * dp[n][m] + n);
return 0;
}
1874E - Jellyfish and Hack
Firstly, if $$$lim > \frac {n(n+1)} 2$$$, the answer will be $$$0$$$, so we only need to solve the case $$$lim \leq \frac{n(n+1)} 2$$$.
We can use dynamic programming to solve the problem:
Let's define $$$dp_{i, a}$$$ as the number of permutations $$$P$$$ of $$$[1, 2, \dots, i]$$$ satisfying $$$\mathrm{fun}(P) = a$$$.
Since only relative rankings are useful, we have the following transition:
$$$dp_{0, 0} = 1$$$
$$$\forall i > 0, dp_{i, a} = \sum_{j=1}^i \binom {i-1} {j-1} \sum_{b=0}^{a - i} dp_{j - 1, b} \times dp_{i - j, a - b - i}$$$
The time complexity is $$$O(n^6)$$$, because $$$a \leq \frac {i(i+1)} 2$$$. How can it become faster?
FFT (Fast Fourier Transform) might come to mind first. If we use FFT instead of enumerating $$$b$$$, the time complexity becomes $$$O(n^4 \log n)$$$. This is not enough, because $$$n$$$ is large and $$$10^9+7$$$ is not a modulus suitable for NTT (Number-theoretic Transform).
Tips. Also, if you use divide and conquer together with FFT, the time complexity will be $$$O(n^3 \text{poly}(\log n))$$$, but because FFT has a big constant factor, this still can't pass the problem.
But we can do something similar to what the FFT does. We define $$$F_i = \sum_{a=0}^{\frac {n(n + 1)} 2} dp_{i, a} \times x^a$$$, then we have the following transition:
$$$F_0=1$$$
$$$F_i = x^i \times \sum_{j=1}^i \binom{i - 1}{j - 1} F_{j-1} \times F_{i-j}$$$
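As a small sanity check, derived only from the recurrence above: $$$F_1 = x \times \binom 0 0 F_0 F_0 = x$$$ and $$$F_2 = x^2 \times \left(\binom 1 0 F_0 F_1 + \binom 1 1 F_1 F_0\right) = 2x^3$$$, i.e. both permutations of $$$[1, 2]$$$ have $$$\mathrm{fun} = 3$$$.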
For all $$$1 \leq i \leq n$$$, the degree of $$$F_i$$$ won't exceed $$$\frac {n(n+1)} 2$$$. So if we know the values of $$$F_n$$$ at $$$\frac{n(n+1)} 2 + 1$$$ points, we can recover $$$F_n$$$ in time $$$O(n^4)$$$ using Lagrange interpolation. The transition only requires pointwise products of the polynomials' values, so the time complexity of the transition is also $$$O(n^4)$$$.
Time complexity: $$$O(n^4)$$$
Memory complexity: $$$O(n^3)$$$
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll N = 200 + 5, Mod = 1e9 + 7;
inline ll power(ll x, ll y){
ll ret = 1;
while(y){
if(y & 1) ret = ret * x % Mod;
x = x * x % Mod, y >>= 1;
}
return ret;
}
ll n = 0, k = 0, lim = 0;
ll C[N][N] = {}, pw[N * N][N] = {}, iv[N * N] = {}, ifac[N * N] = {}, dp[N][N * N] = {};
ll a[N * N] = {}, b[N * N] = {}, ans = 0;
int main(){
scanf("%lld %lld", &n, &k); lim = n * (n + 1) / 2;
for(ll i = 0 ; i <= n ; i ++){
C[i][0] = 1;
for(ll j = 1 ; j <= i ; j ++) C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % Mod;
}
ifac[0] = 1;
for(ll x = 0 ; x <= lim ; x ++){
pw[x][0] = 1;
if(x) iv[x] = power(x, Mod - 2), ifac[x] = ifac[x - 1] * iv[x] % Mod;
for(ll i = 1 ; i <= n ; i ++) pw[x][i] = pw[x][i - 1] * x % Mod;
}
for(ll x = 0 ; x <= lim ; x ++) dp[0][x] = 1;
for(ll i = 1 ; i <= n ; i ++){
for(ll j = 0 ; j < i ; j ++) for(ll x = 0 ; x <= lim ; x ++) dp[i][x] = (dp[i][x] + dp[j][x] * dp[i - 1 - j][x] % Mod * C[i - 1][j]) % Mod;
for(ll x = 0 ; x <= lim ; x ++) dp[i][x] = dp[i][x] * pw[x][i] % Mod;
}
a[0] = 1;
for(ll i = 0 ; i <= lim ; i ++){
for(ll x = lim ; x >= 1 ; x --) a[x] = (a[x - 1] + a[x] * (Mod - i)) % Mod;
a[0] = a[0] * (Mod - i) % Mod;
}
for(ll i = 1 ; i <= lim ; i ++) if(dp[n][i]){
ll w = dp[n][i] * ifac[i] % Mod * ifac[lim - i] % Mod;
if((lim - i) & 1) w = Mod - w;
b[0] = a[0] * (Mod - iv[i]) % Mod;
for(ll x = 1 ; x <= lim ; x ++) b[x] = (a[x] - b[x - 1] + Mod) * (Mod - iv[i]) % Mod;
for(ll x = k ; x <= lim ; x ++) ans = (ans + b[x] * w) % Mod;
}
printf("%lld\n", ans);
return 0;
}
1874F - Jellyfish and OEIS
Let's call a section $$$[l, r]$$$ bad if $$$[p_l, p_{l+1}, \dots, p_{r-1}, p_r]$$$ is a permutation of $$$[l, l + 1, \dots, r - 1, r]$$$ and $$$l \leq r \leq m_l$$$.
Let's call a section $$$[l, r]$$$ primitive if it is a bad section and there is no bad section $$$[l', r'] \neq [l, r]$$$ such that $$$[l, r]$$$ covers $$$[l', r']$$$.
Lemma. No two distinct primitive sections intersect.
Proof. If $$$[l_1, r_1]$$$ is a bad section, $$$[l_2, r_2]$$$ is a bad section and $$$l_1 < l_2 < r_1 < r_2$$$, the section $$$[l_2, r_1]$$$ is also a bad section; but both $$$[l_1, r_1]$$$ and $$$[l_2, r_2]$$$ cover $$$[l_2, r_1]$$$, so $$$[l_1, r_1]$$$ and $$$[l_2, r_2]$$$ can't both be primitive.
Let's use the principle of inclusion-exclusion and dynamic programming to solve the problem.
Let's define $$$f(l, r)$$$ as the number of ways to fill $$$p_l, p_{l+1}, \dots, p_{r-1}, p_r$$$ such that $$$[l, r]$$$ is a primitive section. By the principle of inclusion-exclusion, we count the ways to fix $$$k$$$ primitive sections inside the range $$$[l, r]$$$ and arrange the rest arbitrarily.
Then we can define $$$g_{1/2}(l, r, x)$$$ as the number of ways to fix $$$k$$$ ($$$k$$$ odd/even) primitive sections in the range $$$[l, r]$$$ such that the number of positions not covered by any fixed primitive section is $$$x$$$.
Finally we define $$$g(l, r, x) = g_2(l, r, x) - g_1(l, r, x)$$$. We have the following transitions:
$$$\forall l \leq r \leq m_l, f(l, r) = \sum_{x=0}^{r-l+1} g(l, r, x) \times x! + f(l, r)$$$
$$$g(l, r, x) = g(l, r - 1, x - 1) - \sum_{mid=l}^r g(l, mid - 1, x )\times f(mid, r)$$$
Tips. In the transition of $$$f(l, r)$$$ we add $$$f(l, r)$$$ because $$$f(l, r)$$$ contributes to $$$g(l, r, 0)$$$ but should not contribute to $$$f(l, r)$$$ itself; this can be handled by ordering the transitions (compute $$$f(l, r)$$$ before adding the $$$-f(l, r)$$$ term to $$$g(l, r, 0)$$$).
According to the definition, $$$\sum_{x=0}^{n} g(1, n, x) \times x!$$$ is the answer.
Time complexity: $$$O(n^4)$$$
Memory complexity: $$$O(n^3)$$$
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll N = 200 + 5, Mod = 1e9 + 7;
ll n = 0, m[N] = {}, fac[N] = {}, f[N][N] = {}, g[N][N][N] = {};
int main(){
scanf("%lld", &n);
for(ll i = 1 ; i <= n ; i ++) scanf("%lld", &m[i]);
if(m[1] == n){
printf("0");
return 0;
}
fac[0] = 1;
for(ll i = 1 ; i <= n ; i ++) fac[i] = fac[i - 1] * i % Mod;
for(ll i = 1 ; i <= n ; i ++) g[i][i - 1][0] = 1;
for(ll l = n ; l >= 1 ; l --) for(ll r = l ; r <= n ; r ++){
for(ll x = 1 ; x <= r - l + 1 ; x ++) g[l][r][x] = g[l][r - 1][x - 1];
for(ll mid = l ; mid < r ; mid ++) if(r <= m[mid + 1]) for(ll x = 0 ; x <= mid - l + 1 ; x ++) g[l][r][x] = (g[l][r][x] + g[l][mid][x] * (Mod - f[mid + 1][r])) % Mod;
for(ll x = 0 ; x <= r - l + 1 ; x ++) f[l][r] = (f[l][r] + g[l][r][x] * fac[x]) % Mod;
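// only after f[l][r] has been computed from g do we add the -f[l][r] term (the whole [l, r] fixed as a primitive section) to g[l][r][0], as explained in the Tips above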
if(r <= m[l]) g[l][r][0] = (g[l][r][0] + (Mod - f[l][r])) % Mod;
}
printf("%lld", f[1][n]);
return 0;
}
1874G - Jellyfish and Inscryption
For convenience, let's define $$$k = \max(a, b, x, y)$$$ and use the following names:
- operation 1: "You will get a card with $$$a$$$ HP and $$$b$$$ damage";
- operation 2: "If you have at least one card, choose one of your cards and increase its HP by $$$x$$$";
- operation 3: "If you have at least one card, choose one of your cards and increase its damage by $$$y$$$";
- operation 4: "You will get a prop with $$$w$$$ power";
- operation 5: "You can choose at most one of your cards and multiply its damage by $$$10^9$$$".
We call the card we get from vertex $$$i$$$ "the $$$i$$$-th card" (so there must be an operation 1 on vertex $$$i$$$), and "doing operation 2/3/5 onto card $$$i$$$" means choosing card $$$i$$$ when performing that operation and increasing its HP or damage accordingly.
Let us consider the maximum possible answer for the problem without operation 5. If you want to maximize the sum of the power of your cards, the answer will not exceed $$$(100 \times 200)^2 = 4 \times 10^8$$$; if you want to maximize the sum of the power of your props, the answer will not exceed $$$200 \times 10^6 = 2 \times 10^8$$$. Because $$$4 \times 10^8 + 2 \times 10^8 = 6 \times 10^8 < 10^9$$$, operation 5 is the most important one. Let's call the card we will do operation 5 onto the "flash card". Using meet-in-the-middle, the problem is divided into two subproblems — the game before we get the flash card and the game after we get the flash card.
1. The game after we get the flash card
It's an easy problem; we can use dynamic programming to solve it.
Since the most important thing is the HP and damage of the flash card, we define the following dynamic programming states:
- $$$dp_b(u, a)$$$ means we are currently at vertex $$$u$$$ and the current HP of the flash card is $$$a$$$; the value is the maximum possible damage of the flash card.
- $$$dp_c(u, a)$$$ means we are currently at vertex $$$u$$$, the current HP of the flash card is $$$a$$$ and its damage is $$$dp_b(u, a)$$$; the value is the maximum sum of the power of all the other cards and props.
Since $$$a \leq nk$$$, the time complexity of the transition is $$$O(m \times n \times k)$$$.
2. The game before we get the flash card
This is the key part of the problem, and since it's much more difficult, we first consider some subproblems.
I. If the graph is a chain, and all the operations 2 and 3 come after all the operations 1
Lemma. We will do all the operations 2 onto a single card; symmetrically, we will also do all the operations 3 onto a single card.
Proof. Consider any sequence of choices, and take the card with the maximum damage after all the operations. If we do all the operations 2 onto this card instead, the answer won't be worse. Then we make the symmetric adjustment for the operations 3; again the answer won't be worse, and now all the operations 2 are done onto one card and all the operations 3 are done onto one card.
II. If the graph is a chain
It's similar to subproblem I. If subproblem I is like a "global max value", then subproblem II is like a "prefix max value".
Let's define $$$p_i$$$ as the index of the card onto which we do the operation 2/3 on vertex $$$i$$$.
Lemma 1. If there is an operation 2 on vertex $$$i$$$ and an operation 2 on vertex $$$j$$$, and $$$p_i < j < i$$$, then $$$p_j = p_i$$$.
Proof. Consider the final HP and damage of the cards after all the operations. Because we do the operation 2 on vertex $$$i$$$ onto the $$$p_i$$$-th card, for all $$$i' < i$$$ the damage of the $$$i'$$$-th card is not larger than that of the $$$p_i$$$-th card. So for vertex $$$j$$$, if we don't do its operation 2 onto the $$$p_i$$$-th card, we can do it onto the $$$p_i$$$-th card instead and the answer won't be worse.
Symmetrically, Lemma 1 also holds for operation 3.
Now we can use dynamic programming to solve the problem; we define the following dynamic programming states:
- $$$f(u, a, b)$$$ means we are currently at vertex $$$u$$$, we will do the next several operations 3 onto a card that has $$$a$$$ HP after all the operations, we will do the next several operations 2 onto a card that has $$$b$$$ damage after all the operations, and these two cards are not the same.
- $$$g(u, a, b)$$$ means we are currently at vertex $$$u$$$, and we will do the next several operations 2 and 3 onto a single card that currently has $$$a$$$ HP and $$$b$$$ damage.
The time complexity is $$$O(m \times n^2 \times k^2)$$$, but it's not enough.
Lemma 2. If a card has $$$a$$$ HP and $$$b$$$ damage after all the operations and it is not the flash card, then $$$\min(a, b) \leq k$$$.
Proof. Otherwise, if we used this card as the flash card instead and did all the operations done onto the flash card onto this card, the power of the flash card would be larger. So such a card can't exist in this half of the problem.
Now the time complexity becomes $$$O(m \times n \times k^2)$$$, which is still not enough.
Lemma 3. Define $$$A$$$ as the maximum HP over all the cards except the flash card after all the operations, and $$$B$$$ as the maximum damage over all the cards except the flash card after all the operations. Then $$$\min(A, B) \leq k$$$.
Proof. Assume the $$$i$$$-th card has $$$A$$$ HP after all the operations, the $$$j$$$-th card has $$$B$$$ damage after all the operations, and $$$A > k, B > k$$$. If $$$i = j$$$, it conflicts with Lemma 2. If $$$i < j$$$: by Lemma 2, the HP of the $$$j$$$-th card after all the operations won't exceed $$$k$$$; let $$$A'$$$ be this HP. Since $$$B > k$$$, we have done some operations 3 onto the $$$j$$$-th card; but because $$$A > k \geq A'$$$, doing these operations onto the $$$i$$$-th card instead makes the answer better. The case $$$i > j$$$ is symmetric.
But we can make the dynamic programming states better:
$$$f(u, a, b)$$$ means we are currently at vertex $$$u$$$, we will do the next several operations 3 onto a card that has $$$a$$$ HP after all the operations, we will do the next several operations 2 onto a card that has $$$b$$$ damage after all the operations, and these two cards are not the same.
$$$g_a(u, a, a')$$$ means we are currently at vertex $$$u$$$, we will do the next several operations 3 onto a card that has $$$a$$$ HP after all the operations, and we will do the next several operations 2 onto a card that still needs $$$a'$$$ more HP from these operations to reach its HP after all the operations.
$$$g_b(u, b, b')$$$ is symmetric to $$$g_a$$$, with "HP" and "damage" swapped.
Let $$$A$$$ and $$$B$$$ be defined as in Lemma 3. The values $$$a, a', b, b'$$$ needed in $$$f, g_a, g_b$$$ are all $$$O(k)$$$, because if $$$A \leq k$$$ we can get the right answer using $$$g_a$$$, and if $$$B \leq k$$$ we can get the right answer using $$$g_b$$$.
The time complexity of the transition is $$$O(m \times k^2)$$$.
The transition is quite involved; you can see more details in my code : )
In my code, I use $$$g(u)$$$ to simplify the transition. When $$$a'$$$ or $$$b'$$$ reaches $$$0$$$ and we reach a vertex $$$u$$$ with an operation 1 on it, I don't enumerate the values of the next $$$a$$$ and $$$b$$$ while transitioning; I transit into $$$g(u)$$$ first and enumerate the values of the next $$$a$$$ and $$$b$$$ together afterwards.
III. The problem itself
Since any path in the graph is a chain and we solve the problem by dynamic programming over the vertices, there's no difference whether the graph itself is a chain.
Time complexity: $$$O(m(nk+k^2))$$$
Memory complexity: $$$O(n^2k+nk^2)$$$
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
const int N = 200 + 5;
const ll lim = 1e9;
inline void checkmax(int &x, int y){
if(y > x) x = y;
}
inline void checkmax(ll &x, ll y){
if(y > x) x = y;
}
inline void checkmax(pair<int, int> &x, pair<int, int> y){
if(y > x) x = y;
}
int n = 0, m = 0, k = 0, opt[N] = {}, a[N] = {}, b[N] = {}, w[N] = {};
int f[N][N][N] = {}, g[N] = {}, g_a[N][N][N] = {}, g_b[N][N][N] = {};
pair<int, int> dp[N][N * N] = {};
vector<vector<int> > G(N);
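// f, g_a, g_b: DP states for the game before obtaining the flash card (the states described in the editorial above)
// dp[u][x]: after obtaining the flash card, at vertex u with flash-card HP x, stores (maximum flash-card damage, maximum total power of everything else), i.e. (dp_b, dp_c)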
int main(){
scanf("%d %d", &n, &m);
for(int u = 1 ; u <= n ; u ++){
scanf("%d", &opt[u]);
if(opt[u] == 1) scanf("%d %d", &a[u], &b[u]);
else if(opt[u] == 2) scanf("%d", &a[u]);
else if(opt[u] == 3) scanf("%d", &b[u]);
else if(opt[u] == 4) scanf("%d", &w[u]);
k = max(k, max(a[u], b[u]));
}
for(int i = 1, u = 0, v = 0 ; i <= m ; i ++){
scanf("%d %d", &u, &v);
G[u].push_back(v);
}
memset(f, -1, sizeof(f));
memset(g, -1, sizeof(g)), memset(g_a, -1, sizeof(g_a)), memset(g_b, -1, sizeof(g_b));
memset(dp, -1, sizeof(dp));
f[1][0][0] = 0;
for(int u = 1 ; u <= n ; u ++){
if(opt[u] == 1 && g[u] != -1){
for(int x = a[u] ; x <= k ; x ++) checkmax(g_a[u][x][x - a[u]], g[u] + x * b[u]);
for(int x = b[u] ; x <= k ; x ++) checkmax(g_b[u][x][x - b[u]], g[u] + x * a[u]);
checkmax(dp[u][a[u]], make_pair(b[u], g[u]));
}
for(int v : G[u]){
if(opt[v] == 1){
for(int x = 0 ; x <= k ; x ++) for(int y = 0 ; y <= k ; y ++){
if(f[u][x][y] != -1){
checkmax(f[v][max(x, a[v])][max(y, b[v])], f[u][x][y] + a[v] * b[v]);
checkmax(g[v], f[u][x][y]);
}
if(g_a[u][x][y] != -1){
checkmax(g_a[v][max(x, a[v])][y], g_a[u][x][y] + a[v] * b[v]);
if(!y){
checkmax(g[v], g_a[u][x][y]);
checkmax(f[v][x][b[v]], g_a[u][x][y] + a[v] * b[v]);
}
}
if(g_b[u][x][y] != -1){
checkmax(g_b[v][max(x, b[v])][y], g_b[u][x][y] + a[v] * b[v]);
if(!y){
checkmax(g[v], g_b[u][x][y]);
checkmax(f[v][a[v]][x], g_b[u][x][y] + a[v] * b[v]);
}
}
}
for(int x = 0 ; x <= n * k ; x ++) if(dp[u][x] != make_pair(-1, -1)){
int y = dp[u][x].first, z = dp[u][x].second;
checkmax(dp[v][x], make_pair(y, z + a[v] * b[v]));
}
}
else if(opt[v] == 2){
for(int x = 0 ; x <= k ; x ++) for(int y = 0 ; y <= k ; y ++){
if(f[u][x][y] != -1) checkmax(f[v][x][y], f[u][x][y] + a[v] * y);
if(g_a[u][x][y] != -1 && y >= a[v]) checkmax(g_a[v][x][y - a[v]], g_a[u][x][y]);
if(g_b[u][x][y] != -1) checkmax(g_b[v][x][y], g_b[u][x][y] + a[v] * x);
}
for(int x = 0 ; x <= n * k ; x ++) if(dp[u][x] != make_pair(-1, -1)){
int y = dp[u][x].first, z = dp[u][x].second;
checkmax(dp[v][x + a[v]], make_pair(y, z));
}
}
else if(opt[v] == 3){
for(int x = 0 ; x <= k ; x ++) for(int y = 0 ; y <= k ; y ++){
if(f[u][x][y] != -1) checkmax(f[v][x][y], f[u][x][y] + x * b[v]);
if(g_a[u][x][y] != -1) checkmax(g_a[v][x][y], g_a[u][x][y] + x * b[v]);
if(g_b[u][x][y] != -1 && y >= b[v]) checkmax(g_b[v][x][y - b[v]], g_b[u][x][y]);
}
for(int x = 0 ; x <= n * k ; x ++) if(dp[u][x] != make_pair(-1, -1)){
int y = dp[u][x].first, z = dp[u][x].second;
checkmax(dp[v][x], make_pair(y + b[v], z));
}
}
else{
for(int x = 0 ; x <= k ; x ++) for(int y = 0 ; y <= k ; y ++){
if(f[u][x][y] != -1) checkmax(f[v][x][y], f[u][x][y] + w[v]);
if(g_a[u][x][y] != -1) checkmax(g_a[v][x][y], g_a[u][x][y] + w[v]);
if(g_b[u][x][y] != -1) checkmax(g_b[v][x][y - b[v]], g_b[u][x][y] + w[v]);
}
for(int x = 0 ; x <= n * k ; x ++) if(dp[u][x] != make_pair(-1, -1)){
int y = dp[u][x].first, z = dp[u][x].second;
checkmax(dp[v][x], make_pair(y, z + w[v]));
}
}
}
}
ll ans = f[n][0][0];
for(int x = 0 ; x <= n * k ; x ++) if(dp[n][x] != make_pair(-1, -1)){
int y = dp[n][x].first, z = dp[n][x].second;
checkmax(ans, lim * x * y + z);
}
printf("%lld", ans);
return 0;
}
I apologize for the unreasonable difficulty. I'm sorry :(
UPD: https://codeforces.net/blog/entry/120967
chill
thanks for the round
Thanks :)
amazing round loved it
Wish you good luck in the future :)
chill, in my opinion the round was great, ABCD and F were great problems (although D might be a bit too easy for a D?). It seemed that div1 was too hard but I guess that such things happen and can't be predicted.
I'm looking forward to compete into one of your rounds again :)
The scoring for D justifies its difficulty. There was a huge difficulty gap between D and E though and that's the only issue. Nevertheless, amazing problems.
Next time I'll pay attention to the difficulty gap of adjacent problems :)
Thanks, welcome to the next Jellyfish Round :)
it was a great contest the problems were very interesting I had so much fun so thank you <3
And thank you for supporting the round! :)
It was my first contest, littrally I solved nothing. But it was a nice experience
Wish you get a higher score in the next round :)
maybe you are the king of dp :) (doge)
In fact, I personally think it's the number of dp problems is too much, but it seems like a lot of people thinks it's a math-y round X_X
No need to apologize at all.
I really enjoyed the round and think that no one should be blamed after their hard work.
I admire your work.
+1
Don't feel guilty...
Some people will never appreciate your hard work <3
At some point I have just started guessing C and it worked 226022161 (only if I didn't forget to take n modulo m first)
pls explain me why in Problem C my solution passed for only 60 iterations but otherwise it showed tle
Can you explain why u have taken max 300 operations
Why my O(n^2) solution fails 226039678? Thanks in advance
https://codeforces.net/contest/1875/submission/226075447
Change ur code a little bit(just removing maps cuz of shit const factor and adding vector as cnt). Always be careful with maps(never use it when it isn't necessary)
Also it's not good to have #define int long long cuz sometimes u can get TLE cuz of that.
I think you can solve Div1.D in $$$O(nm)$$$: https://codeforces.net/contest/1874/submission/225975684
tilted asf, how to become better on coming up with ideas for DP, ABC superfast and ... no way, i wasted 2.5 h on D and didnt solve it, but came up with O(n^3) solution first, then O(n^2*logn), tl..
Practice DP!!
Can you explain your n^2log(n) solution of Div2 D. Jellyfish and Mex.
I have one doubt . in Div2C how do we know, that for the given set of powers it is always possible to divide the apples , in a away that all m people will get that set of powers?
Each person needs $$$\frac{n}{m}$$$ apples. We can divide each of the $$$n$$$ apples into $$$m$$$ pieces, then each piece is $$$\frac{1}{m}$$$, Each person takes $$$n$$$ pieces.
This is only possible since $$$m$$$ is a power of two, otherwise we would not be able to divide an apple into $$$m$$$ pieces.
Thanks
thats almost right...consider n = 3 and m = 24 here m is not a power of two but its still possible....thats why actually m/gcd(m,n) should be a power of two ;)
Yes what I did was take gcd, divide it from m and n at the beginning, then multiply back at the end
In Div2C, Why the answer in the tutorial is m×|S|−n. Anyone help me pls, I was thinking from yesterday.
Everyone should have $$$|s|$$$ pieces( $$$\frac n m = \sum_{i \in S} \frac 1 {2^i}$$$ ), there are $$$m$$$ people and $$$n$$$ pieces (each one is a whole apple), and the number of apple pieces is added to exactly $$$1$$$ for each cut.
I can't understand this statement:
I just made 2000 iterations simulating the process:
suppose after some large no (let it be a) of iterations, let x be (n*2^a) then -> x%m (according to your code only)
now, x%m to be zero m should be the divisor of x.
now m can be in the form of something * 2^(something else).
now this something cannot come from 2^a so it should come from n (see x=n*2^a) therefore m/(__gcd(n, m)) should not have any other prime factors other than two because 2 (in other words __gcd(n,m) should take all factors from m which are not 2) can be taken care by 2^a, but remaining factors have to be taken care by n itself.
explanation might be bad english not my native language.
Thank you I got it.
Help me understand this in a better way, please.
If you can give everyone the same piece then (n * 2^k = 0) mod m, or n * 2^k = q * m. You can factor out gcd(n, m) to get
Now if b is a power of 2, then q = a and 2^k = b satisfies the equation. But if b is not a power of 2, then b has some additional factor(s) that is not found in a (because a is coprime to b) and 2^k (since b is not a power of 2).
Also https://youtu.be/UvnVghpIjwk?si=rmsoD_dtVNUFOtLx&t=668
For 1875C — Jellyfish and Green Apple can you please explain: why __builtin_popcount(a) is equal to |S|?
Because m is 2 's interger power, (a/m) is equal to |S|,and so do(a);
THANK YOU !!
In problem $$$D$$$, ternary search for finding the optimal $$$x$$$ works for some reason.
In problem $$$E$$$, $$$FFT$$$ does pass without any D&C, just by computing the convolution of $$$dp[i-1]$$$ and $$$dp[n-i]$$$ for every $$$i$$$ (used the fastest 1000 lines long implementation from yosupo judge).
"Note that the sum of n over all test cases is not bounded." what does this mean in Div2 A problem?
Usually the sum of $$$n$$$ over all test cases is such that an $$$O(n)$$$ solution would remain $$$O(n)$$$ (not $$$O(nt)$$$) even though you're answering $$$t$$$ test cases. However if $$$n$$$ is unbound, this doesn't stay true.
why does my div2 D python code TLE (link)
Div2D problem can be solved in $$$O(n)$$$ time and memory.
Let's say that mex values we add to the total score are interesting values. So the main observation is that there are only $$$O(\sqrt{n})$$$ values that can be interesting. Proof: Define $$$cnt(x)$$$ as the count of $$$x$$$ in the array. Suppose that $$$x < y$$$ and $$$cnt(x) = cnt(y)$$$. We will not select both of them as interesting because it's enough to choose one. If we choose $$$y$$$ its contribution to the answer is $$$y * cnt(y)$$$ and if we choose $$$x$$$ the contribution is $$$x * cnt(x)$$$, and as $$$cnt(x) = cnt(y)$$$ it's optimal to choose $$$x$$$. So for every value of $$$cnt$$$ it's enough to choose the minimal $$$i$$$ with that count as an interesting value, and as $$$1 + 2 + 3 + \dots + 2 \sqrt{n} \ge n$$$ there are at most $$$2 \sqrt{n}$$$ interesting values.
We can do DP as in editorial but only with values that can be interesting. Total complexity is $$$O(\sqrt{n}^2) = O(n)$$$.
i did not understand it clearly could you explain with this example??
0 0 0 0 0 1 1 1 1 2 2 2 3 3 4
For this example, $$$n = 15$$$. Also $$$cnt(4) = 1$$$, $$$cnt(3) = 2$$$, $$$cnt(2) = 3$$$, $$$cnt(1) = 4$$$ and $$$cnt(0) = 5$$$. So 0, 1, 2, 3, and 4 (5 numbers) can be interesting values. And $$$5 \leq 2 \sqrt{15} \approx 8 $$$. We do DP only for these numbers.
Let's see another example. $$$n = 20$$$ 0 1 2 2 3 3 4 4 4 5 5 5 6 6 6 6 7 7 7 7. For this example only numbers 0, 2, 4, 6 (4 numbers) can be interesting. $$$4 \leq 2 \sqrt{20} \approx 9 $$$.
And for any array count of possible interesting numbers won't exceed $$$2 \sqrt{n}$$$.
oh i got it! great solution bro loved it!!
It is a really nice idea and it is sad that it's not possible to make this problem use it because convex hull trick will pass anyway.
why is the contribution y* cnt(y) shouldn't it be mex * cnt(y)
Actually complexity is $$$O(log(n)n)$$$ or $$$O(\sqrt n n)$$$. Because you need too count numbers.
you don't need an extra log to count numbers, because we can ignore all values which are not less than n
Yep, but first you need to sort
again, you can use count sort, so it is still O(n) )
True, thx
There is also o(1) approach for problem b https://codeforces.net/contest/1875/submission/226021601
Reading array of n numbers already requires O(n). For a reason it can't be O(1).
Yes! Thanks for explanation.
it's not O(1) you are summing the values.
Div2 D works with convex hull trick!
Can you elaborate?
how the equation m×|S|−n come out in the problem Div2C? Can anyone explain this? There is no any additional comments about this in Editorial.
|S| is the number of pieces of apple each person is going to get, so m*|S| will be the total number of pieces they are going to get. Since each cut increases the number of pieces by 1 and initially there are n pieces when there are 0 cuts, for x pieces we have to make x-n cuts, and hence the number of cuts will be m*|S| - n.
I was also stuck in how builtin_popcount_ ( or the number of set bits in a) is equal to the value of m. I got it like this:
If each person gets a/b (simplest form of n/m) weight of apple: It is represented as 1/2^k + 1/2^l + .... and sum of all these individual parts for a person is equal to a/b. a will be the numerator . Also, we notice that on adding these we will get just some powers of 2 in numerator . and lets say 2^p + 2^q +.... =a So number of pieces for each person can be obtained as number of different powers of 2 that add up to form the numerator . ie (a), and this is just equal to the number of set bits in 'a'. Also, in one cut we get 1 extra pieces , so for m*|S| pieces , we need to make m*|s| -n cuts, as we already had n pieces when 0 cuts were made (as explained by the person above)
For Div1C, can someone please explain why $$$p_i$$$ is different for each $$$v_i$$$, I don't get it, it should be same according to me.
It depends on what is $$$p_i$$$?
$$$p_i$$$ is probability that both of them chose $$$i$$$, possibility after destroying some roads.
Let's say graph has 4 outgoing edges from 1, aka $$$1->2$$$,$$$1->3$$$,$$$1->4$$$,$$$1->5$$$,
Let $$$f_2=0.4$$$, $$$f_3=0.2$$$, $$$f_4=0.1$$$, $$$f_5=0.5$$$.
During the first turn, Asuka will choose all nodes with equal probability, so for maximum probability to reach n, Jellyfish should choose $$$5$$$, there $$$p_5=0.25$$$.
The second turn happens only if Asuka does not choose $$$5$$$ in the first turn.
If $$$2$$$ still exists for the second turn, Jellyfish should choose it. $$$p_2=0.5*0.5$$$, (Asuka chose 3/4 in the first turn and 2 in the second turn).
If $$$2$$$ does not exist, then Jellyfish should choose $$$3$$$, $$$p_3=0.25*0.5$$$. (Asuka chose 2 in the first turn, and then 3 in the second turn)
Jellyfish will not get a third turn; he will never choose $$$4$$$, $$$p_4=0$$$,
$$$f_1= p_2*f_2+ p_3*f_3+p_4*f_4+p_5*f_5$$$
$$$f_1= 0.25*0.4+ 0.125*0.2+0*0.1+0.25*0.5$$$
Can you explain why f[i] = $$$\sum$$$ f[j]*p[j] ?
It's via linearity of expectation/probability.
If one is at $$$i$$$, the next possible outcome is one move to one of the nodes ($$$j$$$) with an edge and eventually moves to $$$n$$$, or one remains stuck there.
So, the probability of reaching $$$n$$$ from $$$i$$$ will be the probability of moving from $$$i$$$ to one of the adjacent nodes multiplied by their probability to reach $$$n$$$, and $$$0$$$ (probability to reach $$$n$$$ after being struck on node $$$u$$$) multiplied by probability to remain stuck there.
Sure enough, you can also add other non-adjacent nodes in this equation but the probability of reaching them from $$$i$$$ is $$$0$$$.
Thanks, I see
By calculating, p=[0.25,0.25,0.125,0] , it also satisfies the condition.
how can we be sure it satisfies the condition without checking all cases??
One cannot find all probabilities without checking all cases, just that you can memorise the subproblems via appropriate dp.
Thanks, I misunderstood the problem twice, but your explanation made it very clear why the $$$p_i$$$'s would be different. But, the only issue is to calculate these $$$p_i$$$'s in general, how did you extended this approach for any general $$$d$$$ outgoing edges?
Asuka chooses randomly with equal probability; by exchanging arguments, one can conclude that Jellyfish should always choose the one with the highest $$$f_j$$$, still around.
Sort all the possible nodes in the adjacency list in descending order; then, we want to find $$$p_j$$$ of each of them, with the condition that Jellyfish will always choose the first available one.
We can define the following $$$dp[L][R]$$$ as the probability of selecting $$$L+1$$$ th person in a sequence of $$$L+1+R$$$ people.
If $$$L$$$ is zero, then Jellyfish will choose the first person, Asuka will choose it with probability $$$1/(R+1)$$$, and if Auska does not choose it, it cannot be chosen later.
If $$$L$$$ is non-zero, then Jellyfish will choose the first element,
- Asuka, chose the first element with probability $$$1/(L+1+R)$$$, the process ends,
- Asuka, chose one of the left but not the first element, now we are left with $$$dp[L-2][R]$$$ subproblem with probability $$$(L-1)/(L+1+R)$$$
- Asuka, chose $$$L+1$$$ node; now, $$$L+1$$$ node will be destroyed and cannot be chosen later.
- Asuka, chose one of the right, and now we are left with $$$dp[L-1][R-1]$$$ subproblem with probability $$$R/(L+1+R)$$$
Probably the editorial was written on the same idea, but I was not able to understand it until you dropped a much simpler explanation. Thanks!
thanks dude, your explanation is really clear and understandable. and maybe a typo error that in the case " chose one of the left ", the probability should be (L-1)/(L+1+R) instead of (L-2)/(L+1+R)
Fixed. :)
Sorry for necroposting but is there an easy way to prove that the probability of choosing the element we want is non-increasing? Or maybe I'm not understanding the proof by exchanging arguments (that obviously works in the first step since every edge has $$$1/d$$$ probability but I can't extend it for subsequent turns...).
I don't understand why the strategy of always going for the highest $$$f_j$$$ still around works. Won't there be cases where it's better to wait for some edges to be deleted and then choose the highest $$$f_j$$$ among all of them? In that case, we should first go for some other edge with lesser $$$f_i$$$ and then, after some turns, go for the highest $$$f_j$$$, since it would have higher probability.
The only way I can think of is proving that $$$dp[L][N-L-1] \geq dp[L+1][N-L-2]$$$ but the formulas blow up recursively.
How do you calculate p2? If Asuka chooses 3/4 in the first move, then this probability = 2/5 = 0.4, and from there, for Asuka to pick 2, it should be 1/3. So p2 should be (2/5)*(1/3) right?
There are only $$$4$$$ outgoing edges from $$$1$$$, so possible choices are $$$4 (2,3,4,5)$$$ instead of $$$5$$$. So, in the first move, Asuka has to select $$$3$$$ or $$$4$$$, which has $$$2/4$$$ probability.
In the second move, the remaining edges will be $$$1->2$$$ and ($$$1->3$$$ or $$$1->4$$$); selecting $$$1->2$$$ will have probability $$$1/2$$$.
Overall $$$2/4*1/2$$$
You can easily optimise the Div2 D solution to O(N) by noticing that you will ever only consider deleting O(sqrt(N)) distinct elements. As the author mentioned, the deletions will be of strictly descending value but they will also be of strictly ascending count of appearance. Of course the sum of the counts of the elements we erase is bounded by N and since the counts will be distinct as mentioned earlier, the number of elements we consider erasing is O(sqrt(N)) (this is because in the worst case the counts are 1, 2, 3, .. and the sum of this series is quadratic). Then we can just do the same DP with complexity O(sqrt(N)^2) = O(N).
Great observation! We could have had a harder version of the problem with $$$n \le 2 \cdot 10 ^ 5$$$
Here is a Video Editorial for Div2 B/Div1 A
And Div2 A too
For question C, turns out 1000 operations fits under the time limit and satisfies an "infinite" number of operations 226038405 :)
You can even go down to a number as small as 100, since 2^100 is way greater than 10^9! Great solution, I didn't even think about brute forcing.
Number of Submissions(E)*Submissions(F)*Submissions(G)<Submissions(D)
finally got it thanks passwordisa
Try this test case:
Please someone explain the mathematics behind the Div2C problem as mentioned in editorial in detail.
Since it's cut in half every time, if m/gcd(n,m) is not an integral power of 2, there's no solution. (Div 2C) Can anyone explain this statement?
The denominator on each slice needs to be a power of two. For example, you couldn't make $$$\frac{1}{3}$$$ of an apple with slices of $$$\frac{1}{2}, \frac{1}{4}, \frac{1}{8}\cdots$$$, thus if $$$n=1$$$ and $$$m=3$$$ you can immediately say it's not possible.
For Div1 C, you can also think of the solution as writing n/m in binary such that the mantissa is not recurring. Why? Because you will divide the apples in powers of 2 and sum them to get n/m, which is possible in finite steps only if the sequence terminates.
And how do we know that if the mantissa is recurring? Well, if you get remainder 0 at some step then it terminates. And if you get the same reminder again, then it doesn't. To avoid storing digits, you can just check if the number is longer than 32 bits, if not, then it is recurring since its modulo is bounded and it can't extend to a digit longer than that.
Submission for reference
Can someone clarify the middle paragraph in Div1.B?
I don't understand why we need +1 in 4^8. Also are the nodes in the BFS functions?
thx in advance
I think tests for Div1C are not great because my solution runs in time sum of squares of degrees which is $$$O(n \cdot m)$$$ in the worst case, and I think it should be just about ok to pass the time limit. However, in reality my solution works in 75ms which is even faster than most of the other solutions that work in time $$$O(n^2 + m \log n)$$$.
Can someone explain me why my solution is wrong for div2 question 1. 226083957
Thought process is that whenever the timer comes to 1 we add the array element, now we have 1+array element seconds left until bomb explodes ( obviously if 1+array element is greater than permissible value then I will just take the permissible value, formally min(arr[i]+1, a) )
now we just sum this up for each element and the answer will be
initial timer value + sum
b + sum should be answer isn't it?
You have int overflow, use long long instead.
Thanks, that does solve the problem. But one thing I now have a question about: why was I subtracting n from my answer?
You do sum+=min(arr[i]+1,a); which actually adds one more second than the tool is supposed to. The correct statement would be sum+=min(arr[i],a-1); You then subtract $$$n$$$ at the end to correct the error.
This was the best contest I have ever faced in a while. I enjoyed the problems so much. Thank you Atomic-Jellyfish.
I solved Div. 1 D/Div. 2 G in $$$O(nm)$$$ using an observation I didn't try to prove.
First of all, I arrived at the formula that the answer is
and optimize the nested sum using DP.
I noted that the sum is simply
if $$$p_i = a_0 + a_1 + \dots + a_i$$$. I used a slightly different dp transition with states (current index, sum of weights so far) to obtain the following formula
which can therefore be altered to
Now, the observation is as follows:
Denote $$$\displaystyle f(c, k, x) = c + \frac{x}{k - x}$$$ defined on all real numbers $$$x < k$$$. Let $$$k_1 < k_2$$$.
If $$$c_1 = c_2$$$, then $$$f(c_1, k_1, x)$$$ and $$$f(c_2, k_2, x)$$$ intersect exactly once at $$$x = 0$$$, where $$$f(c_1, k_1, x) \le f(c_2, k_2, x)$$$ for all $$$x \ge 0$$$.
If $$$c_1 > c_2$$$, then $$$f(c_1, k_1, x)$$$ and $$$f(c_2, k_2, x)$$$ never intersect, where $$$f(c_1, k_1, x) \le f(c_2, k_2, x)$$$ for all $$$x \ge 0$$$.
If $$$c_1 < c_2$$$, then $$$f(c_1, k_1, x)$$$ and $$$f(c_2, k_2, x)$$$ intersect exactly once at $$$x = x_0$$$, where for all $$$x \ge x_0$$$ we have $$$f(c_2, k_2, x) \le f(c_1, k_1, x)$$$ and for all $$$x \le x_0$$$ we have $$$f(c_2, k_2, x) \ge f(c_1, k_1, x)$$$.
If there are $$$k_0, c_0$$$ such that $$$k_0 < k_1 < k_2$$$ and $$$c_0 < c_1 < c_2$$$, if the intersection of $$$f(c_0, k_0, x)$$$ and $$$f(c_1, k_1, x)$$$ is at $$$x = x_1$$$ and that of $$$f(c_1, k_1, x)$$$ and $$$f(c_2, k_2, x)$$$ is at $$$x = x_2$$$, then $$$x_1 < x_2$$$.
This is the observation stated formally. Now, what is this really trying to say is that we can treat each
as a function in the form of
with $$$c = dp[i + 1][k]$$$, and we plug in $$$x \mapsto S$$$. Now, we need a data structure to store the maintain the minimum of these functions at a given $$$S$$$.
Assume $$$dp[n][x] = 0$$$ for all $$$x$$$. We can calculate $$$dp[i][S]$$$ for each $$$i$$$ from $$$n - 1$$$ to $$$0$$$, and since the minimization is done over $$$S \le k \le m$$$, we may loop over $$$S$$$ from $$$m$$$ down to $$$0$$$ and we may insert the function $$$g(dp[i + 1][S], S)$$$ in the data structure for minimization.
So, we see that the data structure receives the functions in the form of $$$g(c, k)$$$ where $$$k$$$ keeps strictly decreasing. Now, to maintain what is the minimum function at $$$x = S$$$, we use the observation that we noted. We maintain a deque (or monotonic stack) containing pairs $$$(c, k)$$$ where $$$c_1 < c_2 < \dots$$$ and $$$k_1 < k_2 < \dots$$$.
When we receive a new $$$(c, k)$$$, it must be that $$$k < k_1$$$, and (1) and (2) in the observation tell us that if $$$c \ge c_1$$$, then we don't care since $$$g(c, k)$$$ will be minimal at every non-negative $$$x$$$. If $$$c < c_1$$$, we insert $$$(c, k)$$$ in the front of the deque, since (3) in the observation tells us that there is an intersection where $$$g(c, k)$$$ will become less than $$$g(c_1, k_1)$$$ and thus optimal.
Finally, we note from (4) that the intersections of the functions in our data structure are increasing, and between each two intersections one function overtakes the other function as being the optimal. So, we can note that while the current $$$S$$$ is greater than the intersection of the last two functions in the deque, we can just remove the last function. Then, we query at $$$x = S$$$ in the last function in the deque. This works in Amortized $$$O(nm)$$$.
Note: I calculated the intersections using equations from Wolfram Alpha.
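For completeness, the algebra behind those intersections is short. Setting $$$f(c_1, k_1, x) = f(c_2, k_2, x)$$$ and multiplying both sides by $$$(k_1 - x)(k_2 - x)$$$ gives
$$$(c_1 - c_2)(k_1 - x)(k_2 - x) + x(k_2 - x) - x(k_1 - x) = 0,$$$
i.e.
$$$(c_1 - c_2)\bigl(x^2 - (k_1 + k_2)x + k_1 k_2\bigr) + (k_2 - k_1)x = 0,$$$
a quadratic in $$$x$$$ that the quadratic formula solves directly.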
Here is my submission.
In Div2 D, why do we need to subtract the initial MEX from dp[0]?
Think of it this way:
$$$[a \cdot (\mathrm{cnt}[b] - 1) + b] + [b \cdot (\mathrm{cnt}[c] - 1) + c] = a \cdot \mathrm{cnt}[b] + b \cdot \mathrm{cnt}[c] + c - a$$$
But since the chain must end at $$$0$$$ (here $$$c = 0$$$), the leftover term is $$$-a$$$, and $$$a$$$ here is the original MEX $$$m$$$.
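Spelling the cancellation out for a longer chain (notation is mine): if the MEX values visited are $$$m = v_0 > v_1 > \dots > v_t = 0$$$ and each step costs $$$v_{j-1}(\mathrm{cnt}[v_j]-1)+v_j$$$ as above, then
$$$\sum_{j=1}^{t}\bigl[v_{j-1}(\mathrm{cnt}[v_j]-1)+v_j\bigr]=\sum_{j=1}^{t}v_{j-1}\,\mathrm{cnt}[v_j]-(v_0-v_t)=\sum_{j=1}^{t}v_{j-1}\,\mathrm{cnt}[v_j]-m,$$$
which is exactly why the initial MEX gets subtracted once at the end.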
Can anybody explain why my code for problem C was showing TLE,
but when I iterate in the while loop for fewer than 60 iterations it passes?
The sequence takes O(n) steps before it terminates. In your code, you check the whole sequence which goes up to 1e9, and the number of test cases is 2e4, so overall you do around 2e13 operations which gives you TLE.
ooohhh now i get it, thanks a lot
Can anyone prove that the precision error won't blow up in D1C?
Great contest...
thx for the round, really cool div1B, I love it.
Can you please explain? What are the nodes of the BFS? And how is the time complexity calculated? Thanks in advance.
Amazing Div1B !
I used bfs each time and it finally passed because I found there were only a few situations ($$$\leq 1000$$$). After reading the solution, I think it's a good problem.
exactly 9216
The quadrangle inequality, while observing that the beauty is non-decreasing.
What is quadrangle inequality?
Why does a pupil need to know that? Stop learning useless algorithms. Mr. bilibilitdasc can learn it because he's already IGM.
Who are you to stop me?
fake it till you make it, or so they say
If a function $$$w$$$ satisfies $$$\forall l_1 \leq l_2 \leq r_1 \leq r_2,\; w(l_1, r_1) + w(l_2,r_2) \leq w(l_1,r_2)+w(l_2,r_1)$$$, then $$$w$$$ is said to satisfy the quadrangle inequality.
Where is the solution of 1875B?
1874A is the same problem as 1875B
Can someone explain? I have a doubt about 1875C — Jellyfish and Green Apple.
I got up to this part: "So we can uniquely find a set of positive integers $$$S$$$ satisfying $$$\frac{n}{m}=\sum_{i\in S}\frac{1}{2^i}$$$."
But how do we get the answer from that? What is the logic behind "We can use std::__builtin_popcount() to get $$$|S|$$$, the answer is $$$m\times|S|-n$$$"?
And moreover, since we divided $$$n$$$ and $$$m$$$ by $$$\gcd(n,m)$$$, shouldn't we ideally first compute the number of operations for one group (the group we get by dividing $$$m$$$ by $$$\gcd(n,m)$$$) and then multiply that by $$$\gcd(n,m)$$$ to get the total number of operations?
Each cut increases the number of apple pieces by exactly 1. Since each person ends up with $$$|S|$$$ pieces, the final number of pieces will be $$$m\cdot|S|$$$. Since there were initially $$$n$$$ pieces (the whole apples), the number of cuts is the difference, hence the answer $$$m\cdot|S|-n$$$.
Also, the intuition for multiplying one group's answer by $$$\gcd(n,m)$$$ (as you say) goes by the following example. Suppose $$$n = 30$$$, $$$m = 48$$$. Then $$$a = n/\gcd(n,m) = 5$$$ and $$$b = m/\gcd(n,m) = 8$$$. The answer for one group is $$$8\cdot 2-5 = 11$$$. The number of groups is $$$\gcd(n,m) = 6$$$, hence the answer is $$$11\cdot 6 = 66$$$, which is the same as taking all of $$$n$$$ and $$$m$$$: $$$48\cdot 2 - 30 = 66$$$.
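To tie this together, here is a minimal sketch of the whole computation (my own illustration, not the editorial's code). It assumes the usual input format of $$$t$$$ test cases with $$$n$$$ and $$$m$$$ per line; the line n %= m hands out whole apples first, which only matters when $$$n > m$$$.
#include<bits/stdc++.h>
using namespace std;
int main(){
    int t; scanf("%d", &t);
    while(t--){
        long long n, m;
        scanf("%lld %lld", &n, &m);
        n %= m;                                    // whole apples need no cuts
        long long g = __gcd(n, m), b = m / g;      // n/m reduced to (n/g) / b
        if(b & (b - 1)){ puts("-1"); continue; }   // the reduced denominator must be a power of 2
        long long s = __builtin_popcountll(n / g); // |S| = set bits of the reduced numerator
        printf("%lld\n", m * s - n);               // final pieces minus remaining whole apples
    }
    return 0;
}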
Fun fact: we add 1B to prevent the large gap between 1A and 1C.
And Atomic-Jellyfish got the idea of 1G while playing a game. (xd
Can I ask what's the game title?
Inscryption! You can find it on steam, I really enjoy it!
Inscryption, just the title.
The funny thing is that without 1B the gap would've been smaller :)
This problem got too little user feedback, and micho solved it in a fairly short period of time, so errorgorn and I misjudged its difficulty X_X
errorgorn solved it very quickly, and he thinks this problem is very easy.
I can just orz errorgorn
In div1 B, why are there $$$5^8$$$ cases instead of $$$4^8$$$ cases?
Oh i understand
can you explain...i am still not getting it
Because you are going to record the shortest path's sources and destinations, and since there may be some types of $$$(a_i,b_i,m_i)$$$ that do not occur in $$$(a,b,m)$$$, then besides the four destinations $$$(c,d)$$$ of the form $$$(0/1, 0/1)$$$ there is another destination called "does not exist", so there are $$$(4+1)^8$$$ states in total.
What exactly do you mean by "some types of $$$(a_i,b_i,m_i)$$$ that will not exist in the $$$(a,b,m)$$$"? Can you give an example? Thanks a lot.
For example, $$$a=(1111)_2,b=(1111)_2,m=(0001)_2$$$; in this case the pairs $$$(1, 1, 0), (1, 1, 1)$$$ exist and others like $$$(0,0,1)$$$ do not exist, so you only need to consider the shortest paths from $$$(1, 1, 0), (1, 1, 1)$$$ to their specific $$$(c_i,d_i)$$$. So besides the $$$4$$$ states $$$(0/1,0/1)$$$, in the pre-calculation there is another state which is "does not exist".
Can you also attach your submission? The tutorial solution is too complicated to understand.
I haven't implemented this problem yet, maybe later.
Here is the link https://codeforces.net/contest/1874/submission/226266858
I think the author used some math (base 5 number representation and bitmask) to encode the states.
Let's see what $$$5^8$$$ means, again. (Thanks to the many helpful friends here who helped me understand it.)
First, let's clarify this: given the same $$$(a,b,m)$$$ state at two different bits $$$i$$$ and $$$j$$$, no matter what we do, the resulting bits at positions $$$i,j$$$ in $$$(x,y)$$$ must be the same.
From there on we only care about the distinct $$$(a,b,m)$$$ values. For multiple bits that have the same $$$(a_i,b_i,m_i)$$$, we can use one of them to represent the whole group. Now, effectively, all numbers can be seen as special 8-digit numbers.
Each base-5 digit, if it is in the range $$$0..3$$$, corresponds to one of the 4 cases of $$$(x_i, y_i)$$$ we can make from some $$$(a_i, b_i, m_i)$$$. If the digit equals $$$4$$$, it means that there isn't any $$$i$$$ such that $$$(a_i, b_i, m_i)$$$ forms the configuration $$$(a_{fixed}, b_{fixed}, m_{fixed})$$$ we're considering.
The digit position (the exponent), from $$$0$$$ to $$$7$$$, corresponds to the group I mentioned above.
Take a look at this example:
00004002
is a base-5 number corresponding to a state. The last digit is $$$2$$$: it means that the 0-th group, which is $$$(a,b,m) = (0,0,0)$$$, has $$$(x,y) = (1,0)$$$ (digit value $$$2$$$).
The 3rd digit is $$$4$$$: it means that the group $$$(a,b,m) = (0,1,1)$$$ contains no bit at all!
The remaining 6 groups all have $$$(x,y) = (0,0)$$$.
So why do we have to represent data this way? In this representation, we see:
The destination and the starting point are tied together (with the destinations $$$(x,y)$$$ as digit values, and the starting points $$$(a,b,m)$$$ as digit positions, or call it order, exponent, ...).
Although we kind of "separated" the individual bits to come up with the solution, in the solution they must still be stored together, because each operation (AND, OR, XOR) affects every bit of the numbers. They all transform after each operation, so their values must be recorded and observed together.
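To make the encoding concrete, here is a tiny sketch (my own illustration, not the author's code) that packs a configuration into a base-5 number, with digit position $$$4a+2b+m$$$ and digit value $$$2x+y$$$, or $$$4$$$ for a group that does not occur:
#include<bits/stdc++.h>
using namespace std;
// Pack a configuration into a base-5 number: digit position 4a + 2b + m identifies the
// group (a, b, m); the digit value is 2x + y for the bits (x, y) of that group, or 4 if
// the group does not occur in the input at all.
int encode(const int xy[8]){               // xy[pos] in {0, 1, 2, 3, 4}
    int code = 0, p = 1;
    for(int pos = 0; pos < 8; pos ++){ code += xy[pos] * p; p *= 5; }
    return code;                           // value in [0, 5^8)
}
void decode(int code, int xy[8]){
    for(int pos = 0; pos < 8; pos ++){ xy[pos] = code % 5; code /= 5; }
}
int main(){
    // The configuration "00004002" from above, listed from group 0 upwards:
    // group (0,0,0) has (x,y) = (1,0) -> digit 2, group (0,1,1) at position 3 is absent -> digit 4.
    int xy[8] = {2, 0, 0, 4, 0, 0, 0, 0};
    printf("%d\n", encode(xy));            // prints 502 = 2 + 4 * 5^3
    return 0;
}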
Thanks for the explanation. It's really helpful.
For E,how can I use BFS(breadth-first search) for preprocessing? I wonder why to use BFS and how to use BFS.
Suppose we start with $$$x = 1$$$, $$$y = 1$$$, $$$m = 0$$$. What are the possible $$$(x, y)$$$ reachable using the 4 operations given and the minimum number of operations to achieve that state? We use BFS for that purpose.
Each state of the numbers is a node, each operation changing from one state to another is an edge.
Therefore, a sequence of operations that transforms the init. configuration to the desired configuration, is a path between 2 nodes.
And the minimum number of operations is just the shortest path. In this case, it's an unweighted graph, so BFS is sufficient.
In problem 1B, why are there $$$5^8$$$ states instead of just $$$4^8$$$?
$$$2 \times 2$$$ for each possible $$$(c_i, d_i)$$$, and another $$$1$$$ corresponding to when there is no bit with that $$$(a_i, b_i, m_i)$$$.
Could somebody elaborate on the following part of the editorial for Div1D?
Does it mean that is there a way to directly evaluate $$$[x^i] F_n(x)$$$ for each $$$i$$$ in $$$O(n^2)$$$ time? And what does it have to do with dot product?
I think what is meant is that you store all $$$F_k(x), k \leq n$$$ in point value form $$$ (i, F_k(i))$$$, for $$$O(n^2)$$$ points.
It is convenient to use the points $$$i = 0,1,2,3,4, \dots$$$. All operations needed (summing, convolving two polynomials, multiplying the polynomial by $$$x^n$$$) can be done in time linear in the size of the representation, so in $$$O(n^2)$$$. Afterwards, the only thing left is to go from the point-value form of $$$F_n(x)$$$ back to the coefficient form. This can be done with the Lagrange interpolation formula in $$$O(n^4)$$$ with some tricks (you first compute, by brute force, the coefficients of the polynomial $$$x \cdot (x-1) \cdot (x-2) \cdot \dots$$$, and then do polynomial division to obtain each of the summands of the Lagrange interpolation polynomial).
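For reference, this is just the standard interpolation formula over the points $$$0, 1, \dots, D$$$ (with $$$D + 1 = O(n^2)$$$ stored values here):
$$$F_n(x) = \sum_{i=0}^{D} F_n(i) \prod_{j \ne i} \frac{x - j}{i - j} = \sum_{i=0}^{D} \frac{F_n(i)}{\prod_{j \ne i} (i - j)} \cdot \frac{P(x)}{x - i}, \qquad P(x) = \prod_{j=0}^{D} (x - j),$$$
so computing the coefficients of $$$P(x)$$$ once by brute force and doing one polynomial division by $$$(x - i)$$$ per summand recovers the coefficient form, as described above.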
Ah, now I understand! So let me rephrase it:
All that is left is to do Lagrange interpolation, which I understand well. Thanks a lot for the explanation!
Why does my solution 226099505 fail? Thanks in advance!
The input and output of my answer are the same as the example!!!
https://codeforces.net/contest/1875/submission/226099505 here is the link
You forgot to output the newline character.
use std::endl.
Can someone explain div2A? Why isn't it min(a-1, x-1) according to the condition?
We get the optimal solution if we use the tool when the timer is at 1. If we increase the timer past $$$a$$$, it will be set to $$$a$$$. We have two outcomes:
$$$(x + 1 \geq a):$$$ timer change $$$1 \to x + 1 \to a$$$ (capped at $$$a$$$), an increase of $$$a - 1$$$
$$$(x + 1 < a):$$$ timer change $$$1 \to x + 1$$$, an increase of $$$x$$$
Any better explanation for div2 C?
I'll try to explain it as best as I can.
When dividing the n apples among the m people, if n is greater than or equal to m, we can give each person n // m apples. After doing this, we can disregard how many apples each person has because they all have the same amount. We can now focus on the n % m remaining apples.
Let's say a = n % m. If we have no remaining apples (a == 0), then we are finished. Otherwise, we make a cuts, which gives us 2 * a half-pieces. If (2 * a) % m == 0, we can hand them out evenly and we are finished. Otherwise, we hand out as many complete rounds of m pieces as possible and repeat with the (2 * a) % m pieces that are left over, cutting those again. We keep doing this until (a * 2^k) % m == 0, where k is the minimum number that satisfies the equation.
Now we have to figure out when there is no answer. In order for there to be an answer, (a * 2^k) % m == 0 must hold for some k. This means that a * 2^k == q * m for some integer q. In order for this to be satisfiable, all of m's non-2 factors must also be in a. If m had a non-2 factor that wasn't in a, then m couldn't divide a * 2^k (m can have some factors of 2 that aren't in a because of the 2^k factor). This means that gcd(a, m) must have all of the non-2 factors that m has, so m / gcd(a, m) must be a power of 2. We can perform this check at the beginning of the program and then proceed.
Here is what I did: https://codeforces.net/contest/1875/submission/226131697
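For reference, the process described above can also be written directly as a simulation (a sketch of mine, not the linked submission; the usual t test cases with n and m per line are assumed):
#include<bits/stdc++.h>
using namespace std;
int main(){
    int t; scanf("%d", &t);
    while(t--){
        long long n, m;
        scanf("%lld %lld", &n, &m);
        long long a = n % m;                               // leftover apples after the even hand-out
        long long g = __gcd(a == 0 ? m : a, m);
        if((m / g) & (m / g - 1)){ puts("-1"); continue; } // m / gcd must be a power of 2
        long long cuts = 0;
        while(a != 0){                                     // terminates within about 31 iterations
            cuts += a;                                     // cut every leftover piece once
            a = (2 * a) % m;                               // hand out full rounds of m, keep the rest
        }
        printf("%lld\n", cuts);
    }
    return 0;
}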
Nice explanation :) thanks a lot
If m had a non-2 factor that wasn't in a, then m couldn't divide a * 2^k (m can have some factors of 2 that aren't in a because of the 2^k factor). This means that gcd(a, m) must have all of the non-2 factors that m has.
a * 2^k == q * m
Why was q not considered? Can it have non-2 factors?
Let's ignore the 2^k for now. If a = q * m, it means that both q and m are factors of a. Therefore a must have any factors that q and m has. If a didn't have a factor that m has, for example, we could have something like 3^2 = q * (3 * 5), and there is no positive integer solution here for q.
So, by this logic, we can see that all of m's factors must be present in a, so therefore gcd(a, m) = m, and m / gcd(a, m) = m / m = 1 (or in the original problem it must be 2^c for some c). So at the end of the day, q doesn't really matter at all.
Thank you so much, you saved my life. I'm really happy now that I found an explanation, I was so confused. I think you must be the maker of all editorials; without you I probably wouldn't have understood why we use m/gcd(m,n) at all. Thanks very much :)
i learned a lot from problem C, thank you Atomic-Jellyfish
^_^
:3
If max(a)>max(b), MAX=max(a). If min(a)=max(a), then min(a)>max(b), Jellyfish will do nothing. Otherwise, Jellyfish won't swap the apple with value MAX.
In the tutorial for 1874A — Jellyfish and Game, can anyone explain to me what "Otherwise" means? Are "do nothing" and "won't swap" different?
"do nothing" and "won't swap" are the same thing.
Think there's a mistake in 1874C — Jellyfish and EVA tutorial. The transition should be:
$$$ \forall 1 < i \leq k , g_{k, i} = g_{k-2, i-2} \times \frac{\mathbf{i - 2}}{k} + g_{k-2,i-1} \times \frac{k-i}{k} $$$
Typo, thanks for the correction!
For 1875C — Jellyfish and Green Apple, can you please explain why __builtin_popcount(a) is equal to |S|?
For example, if a = 13, b = 32, each person will get 3 pieces of apple: a is 1101 in binary, and 1101 = 1000 + 100 + 1 (in binary), which corresponds to an apple piece weighing 8/32 kg, one weighing 4/32 kg, and one weighing 1/32 kg.
OK, I got this, but why have you taken a = n / __gcd(n, m)?
We need to reduce it to its simplest fraction. For example, with n = 5, m = 20, n/m = 5/20 = 1/4; the popcount of n is 2, but the popcount of a is 1.
Can someone please explain the editorial of D1B?
Can someone please explain why we use gcd(m,n) in problem C Div.2?
Need a better explanation for div 2 E. It doesn't provide details about what preprocessing you're doing and how it helps in solving the tests.
Hi! https://codeforces.net/blog/entry/120969 I have provided a bit more explanation here! Please upvote if you find it useful!
Thanks that was way more clear. Atomic-Jellyfish please consider linking it in the editorial.
Atomic-Jellyfish, could you explain in a bit more detail how "by greedy and induction" we would arrive at those conclusions? What is the induction step? The greedy part, I think, is just that whoever makes the move wants to make their own sum larger in each round.
"By greedy and induction" means, we can get the result of k rounds of the game from the result of k-1 rounds of the game, for example, if k=2, Jellyfish will know what Gellyfish will do in the next round, which is the result of k=1, this is "induction". The result of k=1 is very easy, which is written in the tutorial — swap the max and min. Thus when k = 2, you can predict what your opponent will do next, for each of your operational scenarios, you will choose the best one, this is "greedy"
How do we prove that the optimal answer is m×|S|−n for Div2 C? .
Somehow I managed to prove it myself.
Let us assume that $$$\frac{m}{\gcd(n, m)}$$$ is a power of $$$2$$$, so $$$\frac{n}{m}$$$ can be finitely represented in binary as $$$\sum_{i \in S} \frac{1}{2^i}$$$. Each person is going to get $$$\frac{n}{m}$$$ weight of apples, and they will get this weight in denominations of $$$\frac{1}{2^i}$$$. From an apple of weight $$$1$$$ we can get $$$2^x$$$ pieces each weighing $$$\frac{1}{2^x}$$$, and this costs $$$2^x - 1$$$ cuts. For each $$$i \in S$$$ we need exactly $$$m$$$ pieces weighing $$$\frac{1}{2^i}$$$, which can be made from $$$\frac{m}{2^i}$$$ apples of initial weight $$$1$$$ at a cost of $$$2^i - 1$$$ cuts per apple, i.e. $$$\frac{m}{2^i}(2^i - 1)$$$ cuts. The total number of cuts is therefore $$$\sum_{i \in S} \frac{m}{2^i}(2^i - 1)$$$. Since $$$\frac{n}{m} = \sum_{i \in S} \frac{1}{2^i}$$$, we have $$$n = m \sum_{i \in S} \frac{1}{2^i}$$$, so $$$\sum_{i \in S} \frac{m}{2^i}(2^i - 1) = \sum_{i \in S} \left(m - \frac{m}{2^i}\right) = m|S| - n$$$.
"Since the number of apple pieces is added to exactly 1 for each cut, So we just need to minimize the final number of apple pieces." So $$$m \times |S|$$$ is the final number of apple pieces, $$$n$$$ is the number of apple pieces in the beginning.
Solution for Div2 D with comments:
Great Round! div.1 CD are both counting problems, typical style of CNOI ig
In problem 1874C — Jellyfish and EVA, in the second test case, I have no idea why the output is "0.625000000000". Please explain it to me.
I understand the implementation of div2G, but I still don't know how we get that formula. Can anyone help explain it?
Amazing solution for div2 C!
I didn't understand how the graph is constructed in the Jellyfish and Math problem; the author of this editorial didn't explicitly explain how it is done, and I am not able to figure it out from the code. Any help would be appreciated. Thank you.
I've understood the problem, and I'm going to write up the solution in complete detail, because I believe the provided solution leaves out a lot of details that are not obvious at all.
First we need to encode reachability information from the initial state $$$x=a, y=b$$$. We do that by focusing on each bit of $$$x, y$$$ and $$$m$$$ as follows (since the bits are independent and equal bit combinations behave identically).
We're going to represent reachability information from initial state as follows
For each possible combination of the initial state $$$(a, b, m)$$$ there are only 4 possible states it can reach, $$$(0/1,0/1)$$$, and there are 8 possible initial states, so we can represent each of them using an 8-digit base-4 number, and we only need $$$4^8$$$ possible configurations.
Example : for configuration
00001003
What is the initial configuration? It is simple: the digit corresponding to $$$(a, b, m)$$$ simply has the value $$$(ab)_2$$$.
Now, we want to find all such reachable configurations (because clearly not all $$$4^8$$$ configurations are reachable). We do that by using the operations to define transitions.
Each configuration can reach exactly 4 configurations in a single step (one per operation), as follows.
Using operation op, we define the transition as
For each digit corresponding to state $$$(a, b, m)$$$, say it is $$$(xy)_2$$$ we create new reachable state $$$(x'y')_2$$$ given by operation op (eg: in case of $$$op==1$$$ we have $$$x' = x\&y$$$, $$$y'=y$$$)
Remark: we change all the digits at once because a single operation on $$$(x,y)$$$ changes all of its bits, so all 8 groups must be updated to reflect that.
The configurations reachable from the initial configuration in this graph are exactly the $$$(x, y)$$$ configurations reachable in the original problem, with the number of steps preserved; this is kind of obvious, but the construction is quite meticulous.
Now that we have this graph we can do a BFS over it starting from initial configuration, and we can get all the states reachable from it along with minimum number of steps to reach that particular state.
Now, given $$$a,b,c,d,m$$$, we need the configuration where the digit for each $$$(a_i, b_i, m_i)$$$ holds the value $$$(c_i, d_i)$$$. We simply compute that configuration, say
config
and look up dist[config]. If we get an inconsistency while computing config, i.e. different bits of the input correspond to the same $$$(a, b, m)$$$ values but have different $$$(c, d)$$$ values, then the target is not reachable. Handling missing groups requires 5 digit values instead of 4 (so we encode using base-5 numbers, using the value 4 to say the group isn't set, i.e. 4 is "don't care").
What if dist[config] == INF? In this case we do not have any immediate inconsistencies, but the state is simply not reachable in the directed graph.
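To make the construction concrete, here is a compact sketch of the preprocessing (my own illustration, not the author's code), using the digit convention from the earlier comment: digit position $$$4a+2b+m$$$, digit value $$$2x+y$$$, and $$$4$$$ for a group that does not occur. One assumption to double-check against the statement: the four operations are taken here to be $$$x := x \& y$$$, $$$y := x | y$$$, $$$x := x \oplus y$$$, $$$y := y \oplus m$$$ (only the first one is confirmed by the example above); the structure of the BFS does not depend on the exact list.
#include<bits/stdc++.h>
using namespace std;
const int S5 = 390625;                     // 5^8 packed states
int pw5[9], dist_[S5];
// Apply operation op to every existing group of the packed state `code`.
int apply_op(int code, int op){
    int res = 0;
    for(int pos = 0; pos < 8; pos ++){
        int d = code / pw5[pos] % 5;       // digit of group (a, b, m), pos = 4a + 2b + m
        if(d != 4){                        // 4 = group absent, it stays 4 forever
            int x = d >> 1, y = d & 1, m = pos & 1;
            if(op == 0) x &= y;            // assumed: x := x & y
            if(op == 1) y |= x;            // assumed: y := x | y
            if(op == 2) x ^= y;            // assumed: x := x ^ y
            if(op == 3) y ^= m;            // assumed: y := y ^ m
            d = 2 * x + y;
        }
        res += d * pw5[pos];
    }
    return res;
}
int main(){
    pw5[0] = 1;
    for(int i = 1 ; i <= 8 ; i ++) pw5[i] = pw5[i - 1] * 5;
    memset(dist_, -1, sizeof(dist_));
    queue<int> q;
    // One BFS source per existence mask: an existing group (a, b, m) starts at digit 2a + b,
    // a missing group carries the digit 4.
    for(int mask = 0; mask < 256; mask ++){
        int code = 0;
        for(int pos = 0; pos < 8; pos ++){
            int a = pos >> 2 & 1, b = pos >> 1 & 1;
            code += ((mask >> pos & 1) ? 2 * a + b : 4) * pw5[pos];
        }
        if(dist_[code] == -1){ dist_[code] = 0; q.push(code); }
    }
    while(!q.empty()){
        int u = q.front(); q.pop();
        for(int op = 0; op < 4; op ++){
            int v = apply_op(u, op);
            if(dist_[v] == -1){ dist_[v] = dist_[u] + 1; q.push(v); }
        }
    }
    // At query time: pack the target as digit 2c + d at position 4a + 2b + m
    // (4 where the group is absent); the answer is dist_ of that code, or -1 if unreachable.
    return 0;
}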
Hope that was helpful
Thank you, it helped me a lot
"If m/gcd(n,m) is not an integral power of 2, there's no solution." Can someone explain this line of the editorial of div2 problem C to me?
The denominator on each slice needs to be a power of two. For example, you couldn't make $$$\frac{1}{3}$$$ of an apple with slices of $$$\frac{1}{2}, \frac{1}{4}, \frac{1}{8}\cdots$$$, thus if $$$n=1$$$ and $$$m=3$$$ you can immediately say it's not possible.
It's very interesting that 1C doesn't involve determining an actual optimal strategy, but instead we somehow just skip that and directly calculate the probability distribution of outcomes in an optimal strategy.
Not sure I followed how we ensure that the derived distribution corresponds to a valid strategy though. And how do you even come up with the recursion for $$$g_k$$$?
Why does div1 have div2's B but not div2's C and D?
.
Can someone explain why my method for calculating the expected number of steps to reach $$$n$$$ in Div.1 D is wrong? I have thought about this a few times but still can't find what goes wrong :/
My approach is as follows:
First, using linearity of expectation, we get
then obviously every edge will be traversed from left to right, and every time it is traversed from right to left, it will eventually be traversed from left to right again, so we have
which leads to the final answer
#include<bits/stdc++.h> // the three original #include lines lost their header names in the paste; assumed
using namespace std;
vector<long long> v;     // v is used below but its declaration was lost in the paste
void solve() {
    long long a, b, n;
    cin >> a >> b >> n;
    int x;
    for (int i = 0; i < n; ++i) { cin >> x; v.push_back(x); }
    // (the rest of solve() did not survive the paste)
}
int main() {
    int t; cin >> t;
    // (the loop calling solve() did not survive the paste)
}
This code gives me time limit exceeded on the second test; can anything be done in order to fix it?
Sick problems!