1688A - Cirno's Perfect Bitmasks Classroom
Idea: huangzirui. Solution: huangzirui. Preparation: huangzirui.
Good problem
Average problem
Bad problem
Did not solve
Consider $$$x=2^k$$$ and $$$x\ne 2^k$$$ separately.
Let $$$p_i$$$ be the $$$i$$$-th bit of $$$x$$$, $$$q_i$$$ be the $$$i$$$-th bit of $$$y$$$ (both indexed from $$$0$$$).
$$$x\ \texttt{and}\ y > 0\Leftrightarrow \exists i,\ p_i= q_i = 1$$$.
$$$x\ \texttt{xor}\ y > 0\Leftrightarrow \exists i,\ p_i\ne q_i$$$.
To satisfy the first condition, find the minimum integer $$$k$$$ satisfying $$$p_k=1$$$, and assign $$$1$$$ to $$$q_k$$$.
If $$$x\ne 2^k$$$, the second condition is satisfied now. Otherwise, find the minimum integer $$$j$$$ satisfying $$$p_j=0$$$, and assign $$$1$$$ to $$$q_j$$$.
The time complexity is $$$O(1)$$$.
#include <bits/stdc++.h>
#define ll long long
using namespace std;
int i,j,k,n,m,T;
int lowbit(int x){
    return x&(-x);
}
int main(){
    cin>>T;
    while(T--){
        int x;
        cin>>x;
        int w=lowbit(x);
        while(!(w^x) || !(w&x))w++;
        cout<<w<<endl;
    }
    return 0;
}
for T in range(int(input())):
    x=int(input())
    y=x&-x
    while (x==y or (x&y)==0):
        y+=1
    print(y)
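The greedy construction can be cross-checked against a brute force over small $$$x$$$ (a sanity-check sketch: `fast` mirrors the reference code above, `brute` simply scans $$$y$$$ from $$$1$$$):

```python
def lowbit(x):
    return x & (-x)

def fast(x):
    # editorial's construction: start from the lowest set bit of x,
    # then increase y while y == x (xor is 0) or x & y == 0
    y = lowbit(x)
    while y == x or (x & y) == 0:
        y += 1
    return y

def brute(x):
    # smallest y >= 1 with x & y > 0 and x ^ y > 0
    y = 1
    while (x & y) == 0 or (x ^ y) == 0:
        y += 1
    return y

for x in range(1, 1000):
    assert fast(x) == brute(x), x
```

Any $$$y$$$ below the lowest set bit of $$$x$$$ shares no bit with $$$x$$$, so starting from `lowbit(x)` loses nothing.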
We wrote a brute-force program, and it ran for more than 2 seconds on Polygon. However, many participants passed the pretests with it. We apologize for our mistake.
1688B - Patchouli's Magical Talisman
Idea: Yakumo_Ran. Solution: Yakumo_Ran. Preparation: SSerxhs.
What if there is at least one odd integer?
How to produce an odd integer?
Let $$$g(x)$$$ be the maximum integer satisfying $$$2^{g(x)}|x$$$.
A greedy strategy is to make one integer odd, then add it to each of the other even integers. Let $$$f(a)$$$ be the answer for a sequence $$$\{a_n\}$$$.
We can find that $$$f(a)=\sum\limits_i[g(a_i)>0]+\max\{0,\min\limits_i g(a_i)-1\}$$$.
It can be shown that this is the optimal strategy.
We can prove that $$$f(a)$$$ decreases by at most $$$1$$$ with one operation.
- For the first operation, assuming we choose $$$a_i$$$ and $$$a_j$$$, let $$$a_k=a_i+a_j$$$. Obviously $$$g(a_k)\geq \min\{g(a_i),g(a_j)\}$$$ holds, so $$$\sum[g(a_i)>0]$$$ decreases by at most $$$1$$$, and $$$\min\limits_i g(a_i)$$$ does not decrease. So $$$f(a)$$$ decreases by at most $$$1$$$.
- For the second operation, assuming we choose $$$a_j$$$: if $$$g(a_j)=\min\limits_i g(a_i)>1$$$, then $$$\max\{0,\min\limits_i g(a_i)-1\}$$$ decreases by $$$1$$$ and $$$\sum[g(a_i)>0]$$$ remains unchanged. Otherwise $$$\max\{0,\min\limits_i g(a_i)-1\}$$$ does not change and $$$\sum[g(a_i)>0]$$$ decreases by at most $$$1$$$. So $$$f(a)$$$ decreases by at most $$$1$$$.
Since $$$f(a)$$$ decreases by at most $$$1$$$ per operation and $$$f(a)=0\Leftrightarrow$$$ all $$$a_i$$$ are odd, the strategy is optimal.
The time complexity is $$$O(n)$$$.
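The formula for $$$f(a)$$$ can be cross-checked against a BFS over all operation sequences on tiny inputs (a sanity-check sketch, not part of the intended solution):

```python
from collections import deque

def g(x):
    # g(x) = exponent of 2 in x
    return (x & -x).bit_length() - 1

def formula(a):
    # f(a) = #{i : a_i even} + max(min_i g(a_i) - 1, 0)
    t = [g(x) for x in a]
    return sum(1 for v in t if v > 0) + max(min(t) - 1, 0)

def brute(a):
    # BFS over multisets of values; only feasible for tiny inputs
    start = tuple(sorted(a))
    dist = {start: 0}
    q = deque([start])
    while q:
        st = q.popleft()
        if all(x % 2 for x in st):
            return dist[st]
        nxt = []
        for i in range(len(st)):
            if st[i] % 2 == 0:                       # reduction: halve an even
                nxt.append(st[:i] + (st[i] // 2,) + st[i + 1:])
            for j in range(i + 1, len(st)):          # fusion: merge two numbers
                nxt.append(st[:i] + st[i + 1:j] + st[j + 1:] + (st[i] + st[j],))
        for t in nxt:
            t = tuple(sorted(t))
            if t not in dist:
                dist[t] = dist[st] + 1
                q.append(t)

for a in ([2, 4], [8], [3, 5], [4, 6, 7], [2, 2, 2]):
    assert brute(a) == formula(a), a
```

The state space stays finite because fusion never increases the total sum and reduction only shrinks values.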
// This time it only took 45 minutes to finish.
#include "bits/stdc++.h"
using namespace std;
#define all(x) (x).begin(),(x).end()
int main()
{
    ios::sync_with_stdio(0);cin.tie(0);
    int T;
    cin>>T;
    while (T--)
    {
        int n,r;
        cin>>n;
        vector<int> a(n);
        for (int &x:a) cin>>x,x=__builtin_ffs(x)-1;
        r=max(*min_element(all(a))-1,0);
        for (int x:a) r+=(x>0);
        cout<<r<<'\n';
    }
}
The constraint was $$$a_i\ge 0$$$ at first.
Idea: Yakumo_Ran. Solution: Yakumo_Ran, SSerxhs. Preparation: SSerxhs.
You do not need to know anything about string matching or other algorithms.
Why does the initial string consist of only one letter?
Why is the answer unique if there is at least one answer?
What if each string in the input data consisted of one letter?
Parity.
Let $$$t$$$ be the unshuffled operation sequence.
Consider a single letter $$$c$$$ that ever appears in $$$s$$$ (there are $$$1+\sum\limits_{i=1}^n|t_{2i}|$$$ such letters). There are two possible situations:
- $$$c$$$ is in the initial string. Whether $$$c$$$ is replaced or not, $$$c$$$ will appear in the input data exactly once (in the replaced strings or in the final string).
- $$$c$$$ is not in the initial string. Whether $$$c$$$ is replaced or not, $$$c$$$ will appear in the input data exactly twice.
So the answer is the unique letter appearing an odd number of times in the input data.
The time complexity is $$$O(\sum |s_i|+|t|)$$$.
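The counting argument can be sanity-checked by simulating random instances (a sketch; the replacement model is assumed to mirror the statement: a single-letter initial string, and each operation records both the replaced substring and its replacement):

```python
import random
from functools import reduce
from operator import xor

def simulate(n_ops, alphabet='abc', seed=0):
    # build a random instance: the initial string is a single letter,
    # each operation replaces a random substring with a random string,
    # and both strings of every operation pair are recorded
    rng = random.Random(seed)
    c = rng.choice(alphabet)
    s = c
    data = []
    for _ in range(n_ops):
        i = rng.randrange(len(s))
        j = rng.randrange(i, len(s))
        new = ''.join(rng.choice(alphabet) for _ in range(rng.randint(1, 3)))
        data += [s[i:j + 1], new]           # the recorded operation pair
        s = s[:i] + new + s[j + 1:]
    data.append(s)                          # the final string
    rng.shuffle(data)                       # the input data is shuffled
    return c, data

c, data = simulate(5)
# every letter born by an operation is recorded exactly twice,
# the initial letter exactly once, so XOR-ing everything recovers it
guess = chr(reduce(xor, (ord(ch) for st in data for ch in st)))
assert guess == c
```

The XOR trick is exactly what the reference solutions below do: characters with even total count cancel out pairwise.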
#include "bits/stdc++.h"
using namespace std;
int main()
{
    ios::sync_with_stdio(0);cin.tie(0);
    int T;cin>>T;
    while (T--)
    {
        int n;
        char c=0;
        cin>>n;
        n=n*2+1;
        while (n--)
        {
            string s;
            cin>>s;
            for (auto x:s) c^=x;
        }
        cout<<c<<'\n';
    }
}
_=int(input())
for __ in range(_):
    n=2*int(input())+1
    a=[0 for i in range(26)]
    for i in range(n):
        s=input()
        for c in s:
            a[ord(c)-ord('a')]+=1
    cnt=0
    for i in range(26):
        if (a[i]%2==1):
            print(chr(i+ord('a')))
            cnt+=1
    if cnt!=1:
        print("fake problem")
Idea: Yakumo_Ran, SSerxhs. Solution: Yakumo_Ran. Preparation: SSerxhs.
Consider $$$k\le n$$$ and $$$k>n$$$ separately.
Consider maximizing the initial mushrooms and the additional mushrooms separately.
Is there any common strategy?
If $$$k\le n$$$:
- Consider how to maximize the initial mushrooms she collects. Obviously she will not visit any position more than once, and the answer is $$$\max\limits_{k\le i\le n}\sum\limits_{j=i-k+1}^ia_j$$$.
- Consider how to maximize the additional mushrooms she collects. Obviously she will not visit any position more than once, and the answer is $$$\frac{k(k-1)}{2}$$$.
- We can find that maximizing the two parts shares the same strategy. So add up the answers of the two parts.
If $$$k>n$$$:
- Consider how to maximize the initial mushrooms she collects. Obviously she can collect all of them. The answer is $$$\sum\limits_{i=1}^n a_i$$$.
- Consider how to maximize the additional mushrooms she collects. Let $$$b_i$$$ be her position at minute $$$k-i$$$ ($$$0\le i< n$$$). After she collects the mushrooms on position $$$b_i$$$, a mushroom appears on each point, and she cannot collect more than $$$i$$$ of them afterwards. In other words, she leaves at least $$$\sum\limits_{i=0}^{n-1}(n-i)=\frac{n(n+1)}{2}$$$ mushrooms in the forest. If we let $$$b_i=i+1$$$, she leaves exactly $$$\frac{n(n+1)}{2}$$$ mushrooms in the forest.
- We can find that maximizing the two parts shares the same strategy. So add up the answers of the two parts.
The time complexity is $$$O(n)$$$.
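Both case formulas can be cross-checked against a brute force over all walks for tiny $$$n$$$ and $$$k$$$ (a sanity-check sketch; it assumes a cell visited at minute $$$t$$$ holds $$$a_c+(t-1)$$$ mushrooms if untouched, or $$$t-t_{prev}$$$ if it was last emptied at minute $$$t_{prev}$$$):

```python
from itertools import product

def brute(a, k):
    # enumerate every walk: a start cell plus k-1 moves in {-1, 0, +1}
    n, best = len(a), 0
    for start in range(n):
        for moves in product((-1, 0, 1), repeat=k - 1):
            pos, last = start, {start: 1}
            total, ok = a[start], True
            for t, d in enumerate(moves, start=2):
                pos += d
                if not 0 <= pos < n:
                    ok = False
                    break
                total += t - last[pos] if pos in last else a[pos] + t - 1
                last[pos] = t
            if ok:
                best = max(best, total)
    return best

def formula(a, k):
    n = len(a)
    if k > n:
        return sum(a) + (2 * k - n - 1) * n // 2   # sum + (k-1 + k-n) * n / 2
    pre = [0]
    for x in a:
        pre.append(pre[-1] + x)
    best = max(pre[i] - pre[i - k] for i in range(k, n + 1))
    return best + k * (k - 1) // 2

for a in ([3, 1, 2], [5, 0], [1, 1, 1, 4]):
    for k in range(1, 6):
        assert brute(a, k) == formula(a, k), (a, k)
```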
#include "bits/stdc++.h"
using namespace std;
typedef long long ll;
int main()
{
ios::sync_with_stdio(0);cin.tie(0);
int T;cin>>T;
while (T--)
{
int n,k,i;
cin>>n>>k;
vector<ll> s(n+1);
for (i=1;i<=n;i++) cin>>s[i],s[i]+=s[i-1];
if (k>=n)
{
cout<<s[n]+(k-1ll+k-n)*n/2<<'\n';
}
else
{
ll mx=s[k];
for (i=k+1;i<=n;i++) mx=max((ll)mx,s[i]-s[i-k]);
cout<<mx+k*(k-1ll)/2<<'\n';
}
}
}
T=int(input())
for t in range(T):
    n,m=map(int,input().split())
    a=[0]+list(map(int,input().split()))
    for i in range(1,n+1):
        a[i]+=a[i-1]
    if m>n:
        print(a[n]+(m-1+m-n)*n//2)
    else:
        ans=0
        for i in range(n+1):
            if i>=m:
                ans=max(ans,a[i]-a[i-m])
        print(ans+(1+m-1)*(m-1)//2)
Idea: Yakumo_Ran. Solution: Yakumo_Ran, huangzirui. Preparation: SSerxhs.
$$$2m=m+m$$$. What can we do with the first $$$m$$$ queries?
We can know the length of each edge with $$$m$$$ queries.
Kruskal.
We can get the length of each edge using $$$m$$$ queries by asking the maximum capacity of each edge separately.
Then, sort the edges in non-decreasing order of length, denoted $$$\{l\}$$$, and ask the maximum capacity of each prefix, denoted $$$\{s\}$$$, using the remaining $$$m$$$ queries.
Consider the process of Kruskal's algorithm. The $$$i$$$-th edge $$$(u_i,v_i)$$$ is in the minimum full spanning forest if and only if there is no path between $$$u_i$$$ and $$$v_i$$$ in the graph consisting of the preceding edges, which is equivalent to $$$s_i=s_{i-1}+l_i$$$.
Then we know whether each edge is in the minimum full spanning forest.
The time complexity is $$$O(m^2)$$$.
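The whole $$$2m$$$-query strategy can be simulated against a stand-in oracle (a sketch; it assumes, per the description above, that a query on an edge subset returns the total length of a minimum spanning forest of that subset):

```python
class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]   # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.p[ra] = rb
        return True

def msf_weight(n, edges):
    # total length of a minimum spanning forest (Kruskal)
    d, w = DSU(n), 0
    for l, u, v in sorted(edges):
        if d.union(u, v):
            w += l
    return w

def solve(n, edges):
    m = len(edges)
    ask = lambda idx: msf_weight(n, [edges[i] for i in idx])  # stand-in for "? s"
    # first m queries: each edge alone reveals its length
    lengths = sorted((ask([i]), i) for i in range(m))
    # next m queries: prefixes in non-decreasing order of length
    total, prev, chosen = 0, 0, []
    for l, i in lengths:
        chosen.append(i)
        s = ask(chosen)
        if s == prev + l:       # the edge joined two components
            total += l
        prev = s
    return total

edges = [(3, 0, 1), (1, 1, 2), (2, 0, 2), (5, 2, 3), (2, 1, 3)]  # (length, u, v)
assert solve(4, edges) == msf_weight(4, edges) == 5
```

Since each prefix edge is no shorter than the previous ones, the forest weight grows by exactly $$$l_i$$$ precisely when edge $$$i$$$ merges two components, even with tied lengths.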
#include "bits/stdc++.h"
using namespace std;
#define all(x) (x).begin(),(x).end()
int ask(const string &s) {cout<<"? "<<s<<endl;int r;cin>>r;return r;}
int main()
{
ios::sync_with_stdio(0);cin.tie(0);
int n,m,i,r=0;
cin>>n>>m;
string s(m,'0');
vector<pair<int,int>> v(m);
vector<int> sum(m+1);
for (i=0;i<m;i++)
{
s[i]='1';
v[i]={ask(s),i};
s[i]='0';
}
sort(all(v));
for (i=0;i<m;i++)
{
s[v[i].second]='1';
sum[i+1]=ask(s);
}
for (i=0;i<m;i++) r+=(sum[i+1]==sum[i]+v[i].first)*v[i].first;
cout<<"! "<<r<<endl;
}
n,m=map(int,input().split())
a=[]
for i in range(m):
    print('?','0'*i+'1'+'0'*(m-i-1),flush=1)
    a.append(int(input()))
cur=0
s=['0' for i in range(m)]
for i in range(m):
    x=0
    for j in range(m):
        if a[x]>a[j]:
            x=j
    s[x]='1'
    print('? ',*s,sep='',flush=1)
    c=int(input())
    if (cur+a[x]==c):
        cur+=a[x]
    else:
        s[x]='0'
    a[x]=2000000
print('!',cur,flush=1)
Idea: Yakumo_Ran. Solution: Yakumo_Ran. Preparation: huangzirui, SSerxhs.
Let $$$b_i=0$$$ for convenience.
The interval selected satisfies $$$\sum\limits_{i=l}^r a_i=0$$$. What does range sum remind you of?
Let $$$s_i=\sum\limits_{k=1}^i (a_k-b_k)$$$. The task can be described as:
Given an array $$$s$$$: for each given interval $$$[l,r]$$$, if $$$s_{l-1}=s_r$$$, we can assign $$$s_r$$$ to every $$$s_i$$$ ($$$l\le i< r$$$). The goal is to make $$$s_i=0$$$ for all $$$0\le i\le n$$$.
Obviously assigning a non-zero value to $$$s$$$ is useless, while assigning $$$0$$$ does no harm. Therefore, we can repeatedly choose any interval $$$[l,r]$$$ satisfying $$$s_{l-1}=s_r=0$$$ and assign $$$0$$$ to all non-zero $$$s_i$$$ ($$$l\le i< r$$$) until there is no such interval. We can use a `set` in C++, a disjoint set union, or a segment tree to find such $$$i$$$.
As each element can be assigned to $$$0$$$ at most once, the time complexity is $$$O((n+m)\log n)$$$.
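The reduction can be illustrated with a naive worklist version of the same process (a sketch with a hypothetical name `can_make_equal`; the efficient version uses a `set` as in the code below, but repeated scanning is enough to see the idea):

```python
def can_make_equal(a, b, segs):
    # s[i] = prefix sum of a - b; segments are 1-based [l, r]
    n = len(a)
    s = [0] * (n + 1)
    for i in range(n):
        s[i + 1] = s[i] + a[i] - b[i]
    zero = {i for i in range(n + 1) if s[i] == 0}
    changed = True
    while changed:                       # repeat until no interval applies
        changed = False
        for l, r in segs:
            if l - 1 in zero and r in zero:
                new = set(range(l - 1, r + 1)) - zero
                if new:
                    zero |= new          # assign 0 to everything inside
                    changed = True
    return len(zero) == n + 1

assert can_make_equal([1, 2], [2, 1], [(1, 2)]) is True
assert can_make_equal([1, 3], [2, 1], [(1, 2)]) is False
```

This naive loop is $$$O(nm)$$$ per pass; the editorial's data structures bring the amortized cost down to $$$O((n+m)\log n)$$$.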
#include "bits/stdc++.h"
using namespace std;
#define all(x) (x).begin(),(x).end()
typedef long long ll;
int main()
{
ios::sync_with_stdio(0);cin.tie(0);
int T;cin>>T;
while (T--)
{
int n,m,i;
cin>>n>>m;
vector<ll> a(n+1);
vector<int> deg(m,2),b(n+1),id(n+1);
vector<pair<int,int>> p(m);
vector<vector<int>> e(n+1);
iota(all(id),0);
set<int> s(all(id));
for (i=1;i<=n;i++) cin>>a[i];
for (i=1;i<=n;i++) cin>>b[i];
for (i=0;i<m;i++)
{
auto &[l,r]=p[i];
cin>>l>>r;
e[l-1].push_back(i);
e[r].push_back(i);
}
for (i=1;i<=n;i++) a[i]-=b[i];
for (i=1;i<=n;i++) a[i]+=a[i-1];
queue<int> q;
for (i=0;i<=n;i++) if (!a[i]) q.push(i),s.erase(i);
while (q.size())
{
int x=q.front();q.pop();
for (int y:e[x]) if (!--deg[y])
{
auto [l,r]=p[y];
auto lt=s.lower_bound(l),rt=s.upper_bound(r);
for (auto it=lt;it!=rt;++it) q.push(*it);
s.erase(lt,rt);
}
}
cout<<(s.size()?"NO\n":"YES\n");
}
}
It was Div.2 D at first.
Idea: Yakumo_Ran. Solution: huangzirui. Preparation: SSerxhs, huangzirui.
What is the range of the answer?
How to solve it in $$$O(na_n)$$$?
For any integer $$$x$$$, $$$x$$$ is beautiful if and only if we can find $$$w$$$ satisfying $$$x\in[w^2,w^2+w]$$$, since then $$$x-w^2 < (w+1)^2-x$$$. Define $$$f(x)=w$$$.
It is easy to find that $$$k\leq a_n^2$$$, and there are only $$$a_n$$$ useful values of $$$w$$$ because $$$w\le a_n$$$.
Enumerate $$$f(a_1+k)$$$ ($$$f(a_1+k)\le a_n$$$), and calculate the range of $$$a_i+k$$$ in order. It can be shown that the range is an interval for all $$$1\le i\le n$$$. So we can solve this problem in $$$O(n a_n)$$$.
We call $$$i$$$ a jump if $$$f(a_{i}+k)\ne f(a_{i-1}+k)$$$. Assuming $$$f(a_1+k)=w$$$, there are no more than $$$\frac{a_n}{w}$$$ jumps. We only need to enumerate jumps to calculate the ranges. We can use a linked list or a C++ `set` to maintain them.
The time complexity is $$$O\left(\sum\limits_{w=1}^{a_n} \frac {a_n}{w}\right)=O(a_n\log a_n)$$$.
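The harmonic bound behind this complexity can be checked numerically (a quick sanity check):

```python
import math

# total number of enumerated jumps over all starting values w is at most
# sum_{w=1}^{a} floor(a / w), which grows like a * log(a)
a = 10 ** 5
total = sum(a // w for w in range(1, a + 1))
assert a * math.log(a) < total < a * (math.log(a) + 1)
```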
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define all(x) (x).begin(),(x).end()
const int N=2e6+2;
vector<int> e[N];
struct Q
{
int id;
mutable int len,t;
bool operator<(const Q &o) const {return id<o.id;}
};
int main()
{
ios::sync_with_stdio(0);cin.tie(0);
cout<<setiosflags(ios::fixed)<<setprecision(15);
int n,i,j;
cin>>n;
vector<int> a(n);
for (int &x:a) cin>>x;
a.resize(unique(all(a))-a.begin());
n=a.size();
set<Q> s;
for (i=1;i<n;i++) s.insert({i,a[i]-a[i-1],0}),e[a[i]-a[i-1]].push_back(i);
for (i=1;;i++)
{
for (int x:e[i])
{
auto it=s.find({x,i,0});
assert(it!=s.end());
auto L=it==s.begin()?s.end():prev(it),R=next(it);
if (L!=s.end()&&L->t&&R!=s.end()&&R->t)
{
L->len+=i+R->len;
s.erase(it);
s.erase(R);
}
else if (L!=s.end()&&L->t)
{
L->len+=i;
s.erase(it);
}
else if (R!=s.end()&&R->t)
{
R->len+=i;
s.erase(it);
}
else it->t=1;
}
if (a[0]<=(ll)i*(i+1)) //[i*i,i*(i+1)]
{
ll L=max((ll)a[0],(ll)i*i),R=(ll)i*(i+1);
int step=i;
for (auto [id,D,t]:s)
{
L+=D;
if (!t)
{
step=ceil((sqrt(1+4*L)-1)/2);
//if (L>(ll)step*(step+1)) ++step;
L=max(L,(ll)step*step);
}
R=min(R+D,(ll)step*(step+1));
if (L>R) break;
}
if (L<=R)
{
cout<<L-a.back()<<endl;
return 0;
}
}
}
}
Idea: xiaoziyao. Solution: xiaoziyao, SSerxhs. Preparation: SSerxhs, xiaoziyao.
Consider the Inclusion-Exclusion Principle.
Let $$$f_p(x)$$$ be the maximum integer satisfying $$$p^{f_p(x)}|x$$$.
For each prime $$$p$$$, WLOG assume $$$f_p(a_i) \le f_p(a_{i+1})$$$ ($$$1\le i<n$$$); then $$$f_p(\gcd\{a_ia_j\})=f_p(a_1)+f_p(a_2)$$$.
Consider the Inclusion-Exclusion Principle:
$$$k\text{-th}\min{S}=\sum\limits_{\varnothing\ne T\subseteq S}(-1)^{|T|-k}\tbinom{|T|-1}{k-1}\max{T}$$$.
So
Then $$$\gcd\{a_ia_j\}=\prod\limits_{\varnothing\ne T\subseteq \{a\}}\operatorname{lcm}\{T\}^{(-1)^{|T|}(|T|-2)}$$$.
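This product formula can be verified on small arrays with exact rational arithmetic (a quick sanity check; exponents may be negative, hence `Fraction`):

```python
from fractions import Fraction
from functools import reduce
from itertools import combinations
from math import gcd, lcm

def pairwise_gcd(a):
    # gcd over all pairwise products a_i * a_j (i < j)
    return reduce(gcd, [x * y for i, x in enumerate(a) for y in a[i + 1:]])

def pairwise_gcd_via_lcm(a):
    # product over nonempty subsets T of lcm(T) ** ((-1)^|T| * (|T| - 2))
    result = Fraction(1)
    for r in range(1, len(a) + 1):
        for T in combinations(a, r):
            exp = (r - 2) if r % 2 == 0 else -(r - 2)
            result *= Fraction(lcm(*T)) ** exp
    return result

for a in ([2, 4], [2, 4, 8], [6, 10, 15], [12, 18, 30, 20]):
    assert pairwise_gcd_via_lcm(a) == pairwise_gcd(a)
```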
We can solve the task by choosing a short subsequence $$$c$$$ satisfying $$$\gcd\{a_ia_j\}=\gcd\{c_ic_j\}$$$ and enumerating its subsets. To fit in the constraint, the length of $$$c$$$ should be no more than $$$14$$$.
Think of an easier task: choosing a small subset $$$g(a)$$$ satisfying $$$\gcd\{a\}=\gcd g(a)$$$. If we can solve it, we can construct $$$c$$$ by choosing $$$g(a)\cup g(a\setminus g(a))$$$, provided $$$|g(a)|$$$ does not exceed $$$7$$$.
First, choose an arbitrary element $$$x$$$ of $$$\{a\}$$$ as the only element of $$$S$$$, and factorize $$$x$$$ into $$$\prod\limits_{i=1}^{\omega(x)} p_i^{k_i}$$$ ($$$p_i< p_{i+1}$$$). For each $$$i$$$, if $$$f_{p_i}(S)\ne\min_j f_{p_i}(a_j)$$$, add an arbitrary element $$$y_i$$$ of $$$\{a\}$$$ satisfying $$$f_{p_i}(y_i)=\min_j f_{p_i}(a_j)$$$ to $$$S$$$. Now obviously $$$\gcd S=\gcd \{a\}$$$ and $$$|S|\le\omega(x)+1\le 8$$$. We can prove that $$$|S|=8$$$ and $$$\gcd(S\setminus\{x\})\ne \gcd \{a\}$$$ cannot hold at the same time, so we can solve the task by choosing $$$S$$$ when $$$|S|\le 7$$$ and $$$S\setminus\{x\}$$$ otherwise.
Consider the necessary condition of $$$|S|=8\land\gcd(S\setminus\{x\})\ne \gcd \{a\}$$$:
$$$\exists d\in\text{Prime},\ f_{d}(x)<\min\limits_{y\in S\setminus\{x\}}f_{d}(y)$$$. According to how we choose $$$y_i$$$, $$$d\ne p_i$$$ and $$$d\prod\limits_{i=2}^{7}p_i\mid y_1$$$, so $$$d\prod\limits_{i=1}^{7}p_i\mid y_1p_1$$$. Since $$$2\times3\times5\times7\times11\times13\times17\times19=9699690$$$ and $$$y_1\le 10^6$$$, we must have $$$p_1\ge 11$$$. But then $$$x\ge 11\times 13\times 17\times 19\times 23\times 29\times 31>10^6$$$, a contradiction. So $$$|S|=8\land\gcd(S\setminus\{x\})\ne \gcd \{a\}$$$ does not hold.
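The numeric facts used in this contradiction can be checked directly:

```python
from math import prod

# the product of the 8 smallest primes already exceeds 10^6 ...
assert prod([2, 3, 5, 7, 11, 13, 17, 19]) == 9699690
# ... while the product of the 7 smallest stays below it,
# so any x <= 10^6 has at most 7 distinct prime factors
assert prod([2, 3, 5, 7, 11, 13, 17]) == 510510
# and 7 distinct primes that are all >= 11 cannot fit in 10^6 either
assert prod([11, 13, 17, 19, 23, 29, 31]) > 10 ** 6
```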
The time complexity is $$$O(n\log \max\{a_i\}+2^{\max\omega(a_i)}\max\omega(a_i)+n\max\omega(a_i))$$$.
It is worth mentioning that, with this conclusion (such a small set exists), we can solve it much more easily: just choose a set of size at most $$$14$$$ greedily and enumerate its subsets.
#pragma GCC target("popcnt")
#include "bits/stdc++.h"
using namespace std;
typedef unsigned int ui;
typedef long long ll;
#define all(x) (x).begin(),(x).end()
namespace Prime
{
typedef unsigned int ui;
typedef unsigned long long ll;
const int N=1e6+2;
ui pr[N],mn[N],phi[N],cnt;
int mu[N];
void init_prime()
{
ui i,j,k;
phi[1]=mu[1]=1;
for (i=2;i<N;i++)
{
if (!mn[i])
{
pr[cnt++]=i;
phi[i]=i-1;mu[i]=-1;
mn[i]=i;
}
for (j=0;(k=i*pr[j])<N;j++)
{
mn[k]=pr[j];
if (i%pr[j]==0)
{
phi[k]=phi[i]*pr[j];
break;
}
phi[k]=phi[i]*(pr[j]-1);
mu[k]=-mu[i];
}
}
//for (i=2;i<N;i++) if (mu[i]<0) mu[i]+=p;
}
vector<pair<ui,ui>> getw(ll x)
{
ui i;
assert((ll)(N-1)*(N-1)>=x);
vector<pair<ui,ui>> r;
for (i=0;i<cnt&&pr[i]*pr[i]<=x&&x>=N;i++) if (x%pr[i]==0)
{
ui y=pr[i],z=1,tmp;
x/=y;
while (x==(tmp=x/y)*y) x=tmp,++z;
r.push_back({y,z});
}
if (x>=N)
{
r.push_back({x,1});
return r;
}
while (x>1)
{
ui y=mn[x],z=1,tmp;
x/=y;
while (x==(tmp=x/y)*y) x=tmp,++z;
r.push_back({y,z});
}
return r;
}
}
using Prime::pr,Prime::phi,Prime::getw;
using Prime::mu,Prime::init_prime;
const int N=1e6+5;
int a[N];
bool ed[N];
int main()
{
ios::sync_with_stdio(0);cin.tie(0);
cout<<setiosflags(ios::fixed)<<setprecision(15);
int n,i,j;
init_prime();
cin>>n;
vector<vector<pair<ui,ui>>> b(n+1);
for (i=1;i<=n;i++)
{
cin>>a[i];
b[i]=getw(a[i]);
}
if (n==3&&a[1]==6&&a[2]==10&&a[3]==15)
{
cout<<"1\n0 3 1 2 3\n";
return 0;
}
if (n==4&&a[1]==2&&a[2]==4&&a[3]==8&&a[4]==16)
{
cout<<"2\n0 1 4\n1 1 1\n";
return 0;
}
vector<int> s;
auto getmin=[&]()
{
int i,j,m=0;
for (i=1;i<=n;i++) if (!ed[i]) break;
if (i>n) return;
int x=i;
vector<int> nm(N,1'000'000'000),id(N);
vector<vector<int>> occ(N);
vector<int> flg(n+1);
set<int> S;
for (i=1;i<=n;i++) if (!ed[i])
{
for (auto [p,t]:b[i])
{
occ[p].push_back(i);
if (nm[p]>t)
{
nm[p]=t;
id[p]=i;
}
}
++m;
S.insert(i);
}
for (i=2;i<N;i++) if (id[i]&&occ[i].size()!=m)
{
for (int x:occ[i]) S.erase(x);
nm[i]=0;id[i]=*S.begin();
for (int x:occ[i]) S.insert(x);
}
vector<int> r;
for (auto [p,t]:b[x]) if (t!=nm[p]) r.push_back(id[p]);
vector<ui> mn(N,1'000'000'000),cnt(N),toc(N);
for (auto [p,t]:b[x]) toc[p]=t;
for (int x:r) for (auto [p,t]:b[x]) mn[p]=min(mn[p],t),++cnt[p];
for (i=2;i<N;i++) if (cnt[i]==r.size()&&mn[i]>toc[i]) break;
if (i<N) r.push_back(x);
for (int x:r) ed[x]=1,s.push_back(x);
};
getmin();getmin();
sort(all(s));s.resize(unique(all(s))-s.begin());
n=s.size();assert(n<=14);
ll D=0;
for (i=0;i<n;i++) for(j=i+1;j<n;j++) D=gcd(D,(ll)a[s[i]]*a[s[j]]);
if (D==1) {cout<<"0\n";return 0;}
vector<pair<int,vector<int>>> ans;
for (i=1;i<1<<n;i++)
{
vector<int> v;
for (j=0;j<n;j++) if (i>>j&1) v.push_back(s[j]);
int pc=__builtin_popcount(i);
pc=pc&1?2-pc:pc-2;
for (j=1;j<=pc;j++) ans.push_back({0,v});
pc=-pc;
for (j=1;j<=pc;j++) ans.push_back({1,v});
}
int totsize=0;
cout<<ans.size()<<'\n';
for (auto &[x,v]:ans)
{
cout<<x<<' '<<v.size();
for (int x:v) cout<<' '<<x;
cout<<'\n';
assert((totsize+=v.size())<=1'000'000);
}
//cout<<totsize<<endl;
}
There were more than $$$500$$$ tests at first.
1687F - Koishi's Unconscious Permutation
Idea: huangzirui. Solution: Elegia. Preparation: huangzirui, SSerxhs.
How does Elegia's mind work?
We call a permutation $$$p$$$ of length $$$n-s$$$ good if $$$\forall i\in[1,n-s-1],\ p_i+1\ne p_{i+1}$$$.
If we can calculate
then we can get the answer easily by binomial inversion. So we only need to focus on how to calculate $$$ans_k$$$. For convenience, let $$$n\rightarrow n-s$$$. We have:
where
is the Eulerian number. As is well known, the exponential generating function of the Eulerian numbers is $$$\sum\limits_{n\ge 0}\sum\limits_{k\ge 0}\left\langle{n\atop k}\right\rangle x^k\dfrac{y^n}{n!}=\dfrac{x-1}{x-e^{(x-1)y}}$$$.
So we have:
Consider how to calculate $$$[y^{n-1}] \dfrac{(x-1)^2e^{-xy}}{(xe^{(1-x)y}-1)^2}$$$. Let $$$u=(1-x)y$$$ and we have:
And:
So we just need to focus on how to calculate $$$[u^{n}]\dfrac{e^{-\frac{xu}{1-x}}}{1-xe^u}$$$. Let $$$w=e^u-1$$$, we have:
Let
.
Try to calculate
.
We know $$$[u^n]\,(e^u-1)^m=\frac{m!}{n!}{n\brace m}$$$, where $$${n\brace m}$$$ is the Stirling number of the second kind. We can calculate it in $$$O(n\log n)$$$ or $$$O(n \log^2 n)$$$. Build a $$$2 \times 2$$$ matrix to get $$$\sum\limits_{i=0}^m \binom{-s}{i} s^{m-i}$$$. Let
And we have
So we can divide and conquer to calculate it in $$$O(n \log^2 n)$$$.
#include <cstdio>
#include <cstring>
#include <algorithm>
#include <iostream>
#include <chrono>
#include <random>
#include <functional>
#include <vector>
#define LOG(FMT...) fprintf(stderr, FMT)
using namespace std;
typedef long long ll;
typedef unsigned long long ull;
const int P = 998244353, R = 3;
const int BRUTE_N2_LIMIT = 50;
int mpow(int x, int k, int p = P) {
int ret = 1;
while (k) {
if (k & 1)
ret = ret * (ll) x % p;
x = x * (ll) x % p;
k >>= 1;
}
return ret;
}
int norm(int x) { return x >= P ? x - P : x; }
int reduce(int x) {
return x < 0 ? x + P : x;
}
void add(int& x, int y) {
if ((x += y) >= P)
x -= P;
}
void sub(int& x, int y) {
if ((x -= y) < 0)
x += P;
}
struct Simple {
int n;
vector<int> fac, ifac, inv;
void build(int n) {
this->n = n;
fac.resize(n + 1);
ifac.resize(n + 1);
inv.resize(n + 1);
fac[0] = 1;
for (int x = 1; x <= n; ++x)
fac[x] = fac[x - 1] * (ll) x % P;
inv[1] = 1;
for (int x = 2; x <= n; ++x)
inv[x] = -(P / x) * (ll) inv[P % x] % P + P;
ifac[0] = 1;
for (int x = 1; x <= n; ++x)
ifac[x] = ifac[x - 1] * (ll) inv[x] % P;
}
Simple() {
build(1);
}
void check(int k) {
int nn = n;
if (k > nn) {
while (k > nn)
nn <<= 1;
build(nn);
}
}
int gfac(int k) {
check(k);
return fac[k];
}
int gifac(int k) {
check(k);
return ifac[k];
}
int ginv(int k) {
check(k);
return inv[k];
}
int binom(int n, int m) {
if (m < 0 || m > n)
return 0;
return gfac(n) * (ll) gifac(m) % P * gifac(n - m) % P;
}
} simp;
const int L2 = 11;
struct NTT {
int L;
vector<int> root;
NTT() : L(-1) {}
void prepRoot(int l) {
L = l;
root.resize((1 << L) + 1);
int i, n = 1 << L;
int *w2 = root.data();
*w2 = 1;
w2[1 << l] = mpow(31, 1 << (21 - l));
for (i = l; i; --i)
w2[1 << (i - 1)] = (ull) w2[1 << i] * w2[1 << i] % P;
for (i = 1; i < n; ++i)
w2[i] = (ull) w2[i & (i - 1)] * w2[i & -i] % P;
}
void DIF(int *a, int l) {
int *j, *k, n = 1 << l, len = n >> 1, r, *o;
for (; len; len >>= 1)
for (j = a, o = root.data(); j != a + n; j += len << 1, ++o)
for (k = j; k != j + len; ++k) {
r = (ull) *o * k[len] % P;
k[len] = reduce(*k - r);
add(*k, r);
}
}
void DIT(int *a, int l) {
int *j, *k, n = 1 << l, len = 1, r, *o;
for (; len != n; len <<= 1)
for (j = a, o = root.data(); j != a + n; j += len << 1, ++o)
for (k = j; k != j + len; ++k) {
r = reduce(*k + k[len] - P);
k[len] = ull(*k - k[len] + P) * *o % P;
*k = r;
}
}
void fft(int *a, int lgn, int d = 1) {
if (L < lgn) prepRoot(lgn);
int n = 1 << lgn;
if (d == 1) DIF(a, lgn);
else {
DIT(a, lgn);
reverse(a + 1, a + n);
ull nv = P - (P - 1) / n;
for (int i = 0; i < n; ++i) a[i] = a[i] * nv % P;
}
}
} ntt;
struct Poly {
vector<int> a;
Poly(int v = 0) : a(1) {
if ((v %= P) < 0)
v += P;
a[0] = v;
}
Poly(const vector<int> &a) : a(a) {}
Poly(initializer_list<int> init) : a(init) {}
// Helps
int operator[](int k) const { return k < a.size() ? a[k] : 0; }
int &operator[](int k) {
if (k >= a.size())
a.resize(k + 1);
return a[k];
}
int deg() const { return a.size() - 1; }
void redeg(int d) { a.resize(d + 1); }
Poly monic() const;
Poly sunic() const;
Poly slice(int d) const {
if (d < a.size())
return vector<int>(a.begin(), a.begin() + d + 1);
vector<int> res(a);
res.resize(d + 1);
return res;
}
int *base() { return a.data(); }
const int *base() const { return a.data(); }
Poly println(FILE *fp) const {
fprintf(fp, "%d", a[0]);
for (int i = 1; i < a.size(); ++i)
fprintf(fp, " %d", a[i]);
fputc('\n', fp);
return *this;
}
// Calculations
Poly operator+(const Poly &rhs) const {
vector<int> res(max(a.size(), rhs.a.size()));
for (int i = 0; i < res.size(); ++i)
if ((res[i] = operator[](i) + rhs[i]) >= P)
res[i] -= P;
return res;
}
Poly operator-() const {
Poly ret(a);
for (int i = 0; i < a.size(); ++i)
if (ret[i])
ret[i] = P - ret[i];
return ret;
}
Poly operator-(const Poly &rhs) const { return operator+(-rhs); }
Poly operator*(const Poly &rhs) const;
Poly taylor(int k) const;
};
Poly zeroes(int deg) { return vector<int>(deg + 1); }
Poly operator "" _z(unsigned long long a) { return {0, (int) a}; }
Poly operator+(int v, const Poly &rhs) { return Poly(v) + rhs; }
Poly Poly::operator*(const Poly &rhs) const {
int n = deg(), m = rhs.deg();
if (n <= 10 || m <= 10 || n + m <= BRUTE_N2_LIMIT) {
Poly ret = zeroes(n + m);
for (int i = 0; i <= n; ++i)
for (int j = 0; j <= m; ++j)
ret[i + j] = (ret[i + j] + a[i] * (ll) rhs[j]) % P;
return ret;
}
n += m;
int l = 0;
while ((1 << l) <= n)
++l;
vector<int> res(1 << l), tmp(1 << l);
memcpy(res.data(), base(), a.size() * sizeof(int));
ntt.fft(res.data(), l, 1);
memcpy(tmp.data(), rhs.base(), rhs.a.size() * sizeof(int));
ntt.fft(tmp.data(), l, 1);
for (int i = 0; i < (1 << l); ++i)
res[i] = res[i] * (ll) tmp[i] % P;
ntt.fft(res.data(), l, -1);
res.resize(n + 1);
return res;
}
Poly Poly::taylor(int k) const {
int n = deg();
Poly t = zeroes(n);
simp.check(n);
for (int i = 0; i <= n; ++i)
t[n - i] = a[i] * (ll) simp.fac[i] % P;
int pw = 1;
Poly help = vector<int>(simp.ifac.begin(), simp.ifac.begin() + n + 1);
for (int i = 0; i <= n; ++i) {
help[i] = help[i] * (ll) pw % P;
pw = pw * (ll) k % P;
}
t = t * help;
for (int i = 0; i <= n; ++i)
help[i] = t[n - i] * (ll) simp.ifac[i] % P;
return help;
}
Poly stirling2(int n) {
Poly p = zeroes(n), ne = zeroes(n);
for (int i = 0; i <= n; ++i) p[i] = mpow(i, n) * (ll)simp.gifac(i) % P;
for (int i = 0; i <= n; ++i) ne[i] = simp.gifac(i);
for (int i = 1; i <= n; i += 2) ne[i] = P - ne[i];
p = p * ne;
vector<int> ans(n + 1);
for (int i = 0; i <= n; ++i) ans[i] = p[i] * (ll)simp.gfac(i) % P;
return ans;
}
namespace DC {
int N;
vector<Poly> prd, sum;
Poly lift(Poly a, int k) {
a.a.insert(a.a.begin(), k, 0);
return a;
}
void build(int o, int l, int r) {
if (l == r - 1) {
prd[o].redeg(1);
prd[o][1] = P - simp.ginv(r);
prd[o][0] = (P - l) * (ll)simp.ginv(r) % P;
sum[o] = prd[o];
return;
}
int mid = (l + r + 1) / 2;
build(o << 1, l, mid);
build(o << 1 | 1, mid, r);
prd[o] = prd[o << 1] * prd[o << 1 | 1];
sum[o] = prd[o << 1] * sum[o << 1 | 1] + lift(sum[o << 1], r - mid);
}
void pre(int n) {
N = n;
sum.resize(n * 4); prd.resize(n * 4);
build(1, 0, n);
}
Poly input;
pair<Poly, Poly> solve(int o, int l, int r) {
if (l == r - 1) {
Poly r1 = input[r];
return make_pair(r1 * prd[o], lift(r1, 1));
}
int mid = (l + r + 1) / 2;
auto ls = solve(o << 1, l, mid), rs = solve(o << 1 | 1, mid, r);
ls.first = ls.first + prd[o << 1] * rs.first + sum[o << 1] * rs.second;
ls.second = ls.second + lift(rs.second, mid - l);
return ls;
}
Poly solve(Poly in) {
input = in; input.redeg(N);
auto pr = solve(1, 0, N);
auto ret = pr.first + pr.second;
ret[0] = (ret[0] + input[0]) % P;
return ret;
}
}
Poly compute(Poly coeff) {
int n = coeff.deg();
Poly ret = DC::solve(coeff);
ret.redeg(n);
reverse(ret.a.begin(), ret.a.end());
ret = ret.taylor(P - 1);
reverse(ret.a.begin(), ret.a.end());
return ret;
}
Poly solve(int n) {
DC::pre(n);
auto v0 = stirling2(n), v1 = stirling2(n - 1);
return compute(v0) + compute(v1);
}
int main() {
ios::sync_with_stdio(false); cin.tie(nullptr);
int n, s; cin >> n >> s;
auto ans = solve(n - s);
for (int i = 0; i < s; ++i) cout << "0 ";
for (int i = n - s - 1; i >= 0; --i)
cout << ans[i] * (ll)simp.binom(n - 1, s) % P
<< " \n"[i == 0];
return 0;
}
div2 C, terrible!
I don't even feel like upsolving it lol
Yes, their solution to problem C in Python is not accepted by PyPy3 or PyPy2, even though PyPy is said to be generally faster. I lost many attempts searching for a bug in my solution, but it was correct; it was accepted as soon as I submitted it with Python 2 or Python 3. Admins, please reconsider solutions, because I solved this problem earlier than the moment it was finally accepted.
In Div.2:
Because testers with different ratings solved C easily, spending less time on it than on D.
We didn't mean to make C hard; it's our fault. On the contrary, we added C to fill the gap between B and D (at first, D was C).
The idea to solve C felt so random and guess-based; I thought there would be a way to construct it with some advanced algorithm =((
Thanks for the fast editorial, though there had been nightmares in 2C.
Weak pretests in Div2A, seriously?
Can anyone please explain Div 2 B less mathematically? I am having a hard time understanding the solution.
If you have at least one odd number, you can sum this number with all the evens, because even + odd = odd; so in this case ans = the number of even numbers. Otherwise, the optimal strategy is to divide a number until it becomes odd, and then apply the strategy for the first case. You can do it by iterating over all the numbers and looking for the one that requires the fewest divisions by 2 to become odd.
How does this work in this case: n = 2, a = (1024, 2048)? Both are even numbers. The correct solution here should be: 1) operation 1, yielding 3072; 2) operation 2, 5 times; solution: 6. If we follow the strategy of searching for the number that requires the fewest divisions by 2 to make it odd, then we divide 1024 ten times to make it odd, so the solution becomes 11.
The optimal solution is to use fusion between an odd and an even number as often as possible (this always gets rid of one even number per operation, whereas reduction can leave you with an even number; it's impossible to get rid of more than one even number per operation).
The problem is if you don't have any odd numbers to start with, in which case you need to calculate which number can be made odd quickest using reduction, and then use that odd number with fusion to get rid of all the remaining evens.
If there is at least one odd number in the array, then you can use this number with operation 1 to make all the even numbers odd (odd + even = odd). The total number of operations would be the number of even numbers.
If no odd numbers are present, then find the even number that takes the minimum number of operations (operation 2, divide by 2) to make it odd, then do step 1.
Damn.. how did I miss that!!! Thank you for the nice explanation.
Please explain D in more detail if anyone can. I am not getting the $$$k\ge n$$$ part.
The optimal strategy is to wait while more than n seconds remain and let the mushrooms grow as much as they can. Finally, when we have n seconds left, we traverse the array and collect all the mushrooms.
Omg!! Yeah, I didn't see |x-y| <= 1. So just wait for k-n seconds, then loop through the array for the remaining n minutes. So the answer will be sumofarray + (k-n)*n + n*(n-1)/2. Wrote the same formula and got accepted. Thanks a lot, buddy!!
Let me explain how I proved it.
The main observation is: if a point is visited at times (t1, t2, t3, ..., tp), then the "extra" mushrooms gained from this point equal tp - 1. Why? Because first I gain t1-1 mushrooms, then t2-t1 mushrooms, then t3-t2 mushrooms, and so on. So the extra mushrooms gained at a point simply equal the last time I visited it minus 1.
Let's say that I visit exactly r points and I have k seconds (I may visit some points multiple times). How do I maximize the number of "extra" mushrooms gained for a fixed r? Observe that with k seconds, the extra mushrooms gained will be a sum of r different numbers from [0, k-1] (each number denotes the extra mushrooms gained from some point; no number is repeated in the sum, because every point's last visit time must be different).
An upper bound is the sum of the last r numbers, i.e. (k-r) + (k-r+1) + ... + (k-1). This is achievable too! Start from the 0th second, keep waiting and collecting mushrooms at the first point until the (k-r+1)-th second, so that I get k-r extra mushrooms from it, then jump to the remaining r-1 points, giving me (k-r+1) + (k-r+2) + ... + (k-1) extra mushrooms!
Now, for a fixed r we know the strategy. Observe that increasing r is always profitable (it's really easy to prove; just compare the total mushrooms gained using the optimal strategy for r and r+1). So we should keep r as large as possible.
Video Solution for Div2D
Beware of Luogu's problem group invading Codeforces.
It should be "Beware of TouhouProject's problem group invading Codeforces."
Can anybody explain why we should do that? I'm just curious about it.
Because this contest was prepared by one of Luogu's (a Chinese competitive programming website) teams of problem setters (called WdOI). Of course, this comment is just a joke; please don't take it seriously. (However, for me, Chinese rounds are terrible; I always get negative rating changes :(
C, if some letter is in the initial string, it will appear in the input data exactly once. Why?
I made some observations elsewhere: "I guessed the answer during the contest.
Let's consider both operations and results as strings.
Consider the character 'x' at the beginning. If there is only one operation, say changing 'x' into 'yyy', then the input data consists of the 'x' and 'yyy' of the operation plus the 'yyy' of the result. If we perform another operation, for example 'yy' -> 'zzz', then that operation deletes 'yy' from the result string and adds it to the operation strings, then puts 'zzz' into the result string and also adds 'zzz' to the operation strings. Notice that 'zzz' now appears twice more, while 'yy' has only changed location, not count. That is to say, the parity of each letter's number of occurrences stays the same. Therefore, only the initial letter appears an odd number of times, and all the others appear an even number of times."
So I think when the editorial says 'appears exactly once', it means the 'x' above appears only once. If some operation 'yy' -> 'zzz' contains an 'x' or equals 'x', we don't regard that as 'x' appearing again. The description may not be accurate, but I think my solution should be correct.
Those quotes at the start of the questions were quite helpful for solving the problems... LOL!
how?
I think I cheesed problem D.
A "brute force" solution is to iterate over the array, increase the offset to whatever it has to be to make the current element cute, and repeat until the offset doesn't change.
This was, as I expected, too slow, but then I tried shuffling the input list. That was still too slow. Then I tried online-shuffling the prefix of the array that is looked at until finding something not cute, and breaking when it's found. My intuition for doing this is that, this way, problematic elements probably end up at the start of the array, so they're found faster.
Online shuffle is a nice trick here!
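For concreteness, here is a minimal Python sketch of the brute force with the online-shuffle trick described in this thread (my own illustration, not the original submission): whenever an element is not cute, bump the offset to the next perfect square for that element, swap the offender into a random earlier position, and restart the scan.

```python
import math
import random

def is_cute(m):
    # m is cute iff u*u <= m <= u*u + u for u = isqrt(m)
    u = math.isqrt(m)
    return m - u * u <= u

def smallest_shift(a):
    a = a[:]
    k = 0
    changed = True
    while changed:
        changed = False
        for i in range(len(a)):
            if not is_cute(a[i] + k):
                v = math.isqrt(a[i] + k)
                k = (v + 1) * (v + 1) - a[i]  # jump to the next perfect square
                j = random.randrange(i + 1)
                a[i], a[j] = a[j], a[i]       # shuffle the offender forward
                changed = True
                break
    return k
```

Each jump moves k to the smallest value making the offending element cute, so every skipped k is invalid and the final k is the minimal valid shift regardless of the shuffling.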
Can anybody please tell me what's wrong with my div2D submission? Cf-stress was not able to find anything wrong...
When k>n, the answer is sum(a) + k*n - (n+1)*n/2. In your code, this part is wrong.
This part is correct actually...
sry,i made a mistake...my math is terrible
And when k<n, your use of the array pref may be wrong. The sum over [i, i+k-1] is pref[i+k-1] - pref[i-1]. Attention please: the -1 is important here.
Nvm, my stupid ass thought that the positions are cyclically arranged.
I have stopped reading novels and stories since Codeforces has many stories to tell in every question!
div 2 c a nightmare
Yes, their solution in Python is not accepted by PyPy3 and PyPy2, even though it is said that PyPy is generally faster. I lost many attempts searching for a bug in my solution, but it was correct. It was accepted as soon as I submitted it with Python 2 or Python 3.
String operations are often slow in PyPy, so try submitting string-related problems with Python...
When I saw Elegia in the writer's list, I immediately knew that Div.1 F won't be solvable.
Harder div1D: You are given $$$100\,000$$$ values of $$$k$$$; for each one say whether $$$a_i+k$$$ is beautiful for all $$$1\le i\le n$$$.
I also reduce problem D to this problem, but then I cannot solve this. How could you solve the problem by this approach?
Given $$$p\ge 0$$$, $$$a>0$$$, $$$b>0$$$; let $$$S(p, x, y) = [p, p+x-1]\cup S(p+x+y, x+1, y+1)$$$ (an infinite union of disjoint intervals).
A subset of the nonnegative integers is nice if it is the union of $$$k$$$ disjoint intervals and $$$S(p, x, y)$$$ for some $$$p, x, y$$$. Nice subsets have the following properties:
We want to compute the set of $$$k$$$ such that $$$a_i+k$$$ is beautiful for all $$$i$$$. Such a set is nice thanks to the observation above. What is less clear is that such a set has only $$$O(a_n-a_1)$$$ disjoint intervals plus $$$S(p, x, y)$$$ for some $$$p, x, y$$$. Once one is convinced of this, it is easy to conclude with a divide-and-conquer technique (we compute the answer for $$$[1, n/2]$$$ and $$$[n/2+1, n]$$$ and then deduce the answer for $$$[1,n]$$$ by intersecting).
After having computed the set of $$$k$$$ such that $$$a_i+k$$$ is beautiful for all $$$i$$$, answering the queries is straightforward in $$$O(\log(a_n-a_1))$$$.
Implementing the intersection of two nice sets is not particularly nice...
How is it harder? Just like in the regular solution, enumerate the floor of the square root of $$$a_1 + k$$$ and do the jumps; you will find the interval of possible $$$k$$$. For $$$k > a_n^2$$$ all the numbers should be in one segment, so you can check one such $$$k$$$ in $$$O(1)$$$. I believe that my submission 159385709 from the contest will solve this harder version if I remove the early return 0, change the IO, and add this special case for big $$$k$$$.
For your solution (and possibly for the official solution?) they might be the same problem, but the majority of accepted solutions are not good enough to solve the harder version.
I had an alternate solution for problem D that is hard to analyze but runs quite fast on the given tests (maybe hackable?). The basic idea is to combine the naive approach of repeatedly finding a lower bound for $$$k$$$ with grouping elements of the array. The grouping will compress numbers together if we can be certain that $$$f(x+k)=f(y+k)$$$ in the final answer. This is helpful because e.g. if we can group together elements that are $$$10^6$$$ apart, we can skip to a value of $$$k$$$ that will make $$$f(x+k)\geq 10^6$$$.
Div1D: Please hack me, this is a simple randomization (with some optimizations).
159418821 upd: hacked, thanks
This is how I did Div2D.
Case 1: $$$k\le n$$$. Take the maximum-sum contiguous subarray of length $$$k$$$ and add $$$k(k+1)/2$$$ to it.
Case 2: $$$k>n$$$. At first I thought of sweeping left to right, then right to left, then left to right, and so on until $$$k$$$ becomes $$$0$$$, but that did not maximize the answer. So I take the 0th element $$$k-n+1$$$ times (meanwhile the other $$$n-1$$$ mushrooms grow, and this leads to the optimal answer), then take every element from index 1 onward a single time, adding its initial height plus the extra height it grew while I was picking the mushrooms before it. Here is my submission: https://codeforces.net/contest/1688/submission/159403438
"Worth mentioning, with this conclusion (such small set exists), we can solve it much more easier. Just choose a small set by greedy, and enumerate its subset of size 14."
:o
A, B and D are nice. C not much. Nice contest though
Hey, Please tell why my submission on A failed? My submission
It says "wrong output format Expected integer, but "2.68435e+008" found" on test 2
The raw pow function returns a float or a double value. You have to convert it or write a power function for integers only.
@TWNWAKing, @quochung-cyou Thank you so much! Though it got accepted, I am not able to understand why the conversion was needed, as the inputs are in int range. Could you please tell me any specific test case where it was failing?
It's just that the pow function returns the double type, and C++ prints large doubles in scientific notation, for example 1.531e3, which is not acceptable, so you have to convert the result into int or long long so that it prints in integer form.
Div 2 problem C — the statement was very confusing and hard to understand!
why it's not possible to hack Div2C?
Probably because it's difficult to check whether the input is valid or not.
Can a high-rated person give their view on problems like Div 2C and how to get better at them? They always seem to appear in Div 1 + Div 2 rounds, and we need to make a claim based on observations and hope that it's correct. I think all of us could benefit from it.
Yes, I struggle with this kind of problem too.
God I absolutely hate div2C kind of problem where you're just trying to find this tiny obscure property that doesn't even look too relevant but turns out to be literally the whole answer
Problem C is not explained correctly, and that ruined the whole contest.
Solution: Elegia.
**40 lines of mathematical derivation using bivariate generating functions**
Div 2 C got TLE in Python 3, but when the same code is submitted in PyPy3 it is accepted. Why does this happen? Please look into it and re-evaluate it.
Codeforces always says that if you submit it in pypy, it almost always works faster
If you are/were getting a WA/RE verdict on problems from this contest, you can get the smallest possible counter example for your submission on cfstress.com. To do that, click on the relevant problem's link below, add your submission ID, and edit the table (or edit compressed parameters) to increase/decrease the constraints.
Divison 2
Divison 1
If you are not able to find a counter example even after changing the parameters, reply to this thread (only till the next 7 days), with links to your submission and ticket(s).
I have a question and I will be thankful if someone helps out. It is about Div 2 C. The statement says the answer is unique, but I found this sequence for the second test case and it seems right, so I do not see where I went wrong; can anyone point it out to me?
The initial string is "a". Pick substring "a" and replace it with "z", so now it is "z". Pick substring "z" and replace it with "aa", so now it is "aa". Pick the first substring "a" and replace it with "yakumo", so now it is "yakumoa". Pick the last substring "a" and replace it with "ran", so now it is "yakumoran".
And then t, before being shuffled, becomes: t = ["a", "z", "aa", "yakumo", "a", "ran"]
So can anyone point out where I went wrong and explain the question more for me? Thanks in advance.
This is not possible. Imagine that the initial string was "a" as you told. and t = ["a" , "z" , "aa" , "yakumo" , "a" , "ran"].
operation 1: s = "a". choose a substring of s that equals to t[1] which is "a" and change it to t[2] which is "z".
operation 2: s = "z". choose a substring of s that equals to t[3] which is "aa" and change it to t[4]. but we don't have any substring "aa" in "z".
Oh now i get it
Thanks a lot
Div2 C, Hint2, Why the initial string consists of only one letter?
Well, it's obvious, as the authors wrote in the statement: The history of Gensokyo is a string s of length 1 initially.
Anyway, Div2 C and D should've been swapped.
I remember our math prof sometimes saying: "Hint: Look at the definitions"
I have thought about problem D for 1 hour and I have also implemented a solution different from the official one. Nonetheless, I cannot understand anything from the editorial. Can someone give a vague idea of what is the official solution?
This may be slightly different, I also cannot fully parse the editorial but I think it's similar.
Good numbers look like this, starting from 1: 1 good, 0 bad, 2 good, 1 bad, ..., x good, x - 1 bad, ...
Let's fix that, after shifting, a[1] will be in the good range of size x. It gives us a range [lo, hi] of candidates (the numbers which put a[1] at the beginning and at the end of the good range of size x, respectively).
Consider all elements a[1] ... a[n] shifted to a[1] + lo, ..., a[n] + lo at first. You can see that only the next ~a[n] / (x - 1) ranges can have any numbers in them (since the range sizes are all at least x - 1).
For each relevant good range, you need the largest element in this range (this upper-bounds hi).
For each relevant bad range, you need the smallest element in this range (this lower-bounds lo).
If lo <= hi after looking at all ranges, you have a valid answer. The smallest / largest element in any range can be found by precomputing + shifting.
Since there are ~a[n] / (x - 1) relevant ranges for each x, the total cost becomes ~a[n] log(a[n]). (Once you hit the good range of size a[n] you're guaranteed to find an answer.)
Yet another randomized approach for Div1D, which I tweaked into being accepted after the contest. It is similar to some approaches already mentioned, but still feels different enough for me to write it up.
First, the base solution.
The cuteness means that every number has to be from $$$u^2$$$ to $$$u^2 + u$$$ for some integer $$$u$$$. And some $$$u$$$ on the order of a few million would make each number cute, so there are not too many $$$u$$$ to check.
Thus we iterate over $$$u$$$ for the first given number, $$$a_1$$$. Inside, we have the answer between two borders, $$$x$$$ and $$$y$$$, where $$$x = u^2 - a_1$$$ and $$$y = u^2 + u - a_1$$$.
Check all other numbers $$$a_i$$$:
if $$$a_i + x$$$ is between some $$$v^2$$$ and $$$v^2 + v$$$, the upper bound $$$y$$$ should be lowered so that $$$a_i + y = v^2 + v$$$;
if $$$a_i + x$$$ is larger than $$$v^2 + v$$$, the lower bound $$$x$$$ should be upped so that $$$a_i + x = (v + 1)^2$$$;
as soon as $$$x > y$$$, the whole check fails.
Note that, on step 2, we can not possibly go to $$$(v + 2)^2$$$ or more, because the initial bounds are for the lowest of our numbers $$$a_1$$$.
If all the checks pass, we have the lower bound $$$x$$$ as our answer.
Now, the base solution is slow.
But if we shuffle $$$a_2, \ldots, a_n$$$ randomly, the intuition is that, when there are $$$p$$$ problematic values and they are distributed randomly, we break after $$$n / p$$$ steps. And if we had, for example, $$$n, n - 1, \ldots, 3, 2, 1$$$ problems in our checks, the sum would be just $$$n \log n$$$.
This is still slow, for example, when we have many small numbers and one or two large ones, which are not lucky enough to appear at the front of our shuffled array but still break most of the checks rather late.
A neat solution to this seems to be the following: as soon as $$$a_i$$$ broke the check, shuffle it into a random position of $$$a_2, \ldots, a_i$$$. This way, the values which are usually problematic tend to go to the front of the array. Much like in a splay tree, useful checks gather at the front. This is still far from a formal proof.
Why didn't I get a rating change after the contest? It shows no ranking in the contests page in my profile. (don't know why but it is listed in unrated contests)
Something is wrong with the system. I have become a pupil, but it still shows my name in gray, and the contests section of my profile says I became a Newbie.
Div. 2 Problem C is really difficult for me to understand. The editorial does not make much sense to me. If someone could explain in better terms it would be much appreciated. I am trying to upsolve it.
I guessed the answer at the contest.
Let's consider both operations and results as strings.
Consider the character 'x' at the beginning. If there is only one operation, changing the 'x' into 'yyy', then the input data consists of the 'x' and 'yyy' of the operation plus the 'yyy' of the result. If we perform another operation, say 'yy' → 'zzz', then that operation deletes 'yy' from the result string and adds it to the operation strings, then places 'zzz' in the result string and adds 'zzz' to the operation strings. Notice that 'zzz' appears twice more, while 'yy' only changes location and its count stays the same. That is to say, the parity of each letter's number of occurrences remains the same. Therefore, only the initial letter appears an odd number of times, and the others appear an even number of times.
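As a concrete sketch of this parity argument (my own illustration, using a made-up valid history: initial string "a", operations "a" → "ra" and "a" → "n", final string "rn"): count every letter's occurrences across all operation strings plus the result; the unique letter with an odd count is the initial string.

```python
from collections import Counter

def initial_letter(strings):
    # strings: all operation strings plus the final result, in any order
    cnt = Counter()
    for s in strings:
        cnt.update(s)
    # by the parity argument, exactly one letter occurs an odd number of times
    odd = [ch for ch, c in cnt.items() if c % 2 == 1]
    assert len(odd) == 1
    return odd[0]
```

For the history above the input data is ["a", "ra", "a", "n", "rn"]: 'a' occurs 3 times while 'r' and 'n' occur twice each, so the function returns "a".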
I think I understand it better, thank you so much. A very strange problem indeed.
So, how Elegia's mind works?
In problem B (https://codeforces.net/contest/1688/problem/B), if there are only even numbers in the array, why are we not using the Fusion operation (Fusion: Patchouli chooses two tokens, removes them, and creates a new token with magical power equal to the sum of the two chosen tokens)?
ex:array[]={26,30,24,50}
26+30=56,now array[]={56,24,50}
56+24=80,now array[]={80,50}
80+50=130,now array={130}
Applying the Reduction operation, array = {65}. Here also the minimum number of operations is 4.
Just like the last case in the sample, it's a special case, but that doesn't mean the best choice is always the Fusion operation. You can find that if $$$A = 2^i \times a, B = 2^j \times b$$$ where $$$a, b$$$ are odd, then if we add first and try to divide the sum by $$$2$$$, we need at least $$$\min(i, j) + 1$$$ operations, while if we make one of them odd first, we need exactly $$$\min(i, j) + 1$$$ operations. When $$$n$$$ becomes larger, the total number of operations required by the second method stays stable while the first doesn't.
can you please take a example and explain?
For example, $$$A = 4, B = 12$$$, if we use Fusion first then we need to perform $$$5$$$ operations, while if we first turn $$$A$$$ into $$$1$$$, we only need $$$3$$$ operations in total.
The point is to consider $$$\texttt{lowbit}(A)$$$ and $$$\texttt{lowbit}(B)$$$: if they are the same, then $$$A + B$$$ will have more zeros at the end of its binary representation than both $$$A$$$ and $$$B$$$, so we need to perform more operations to divide it by $$$2$$$ until it becomes odd.
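The greedy from the hints can be sketched in a few lines (my own toy code, not the official solution, assuming the editorial's strategy: when an odd token exists the answer is the number of even tokens, otherwise make the token with the fewest trailing zeros odd first and then fuse every other token with it):

```python
def g(x):
    # largest e with 2^e dividing x, i.e. the number of trailing zero bits
    return (x & -x).bit_length() - 1

def min_ops(a):
    evens = sum(1 for x in a if x % 2 == 0)
    if evens < len(a):
        # some token is already odd: fuse each even token with an odd one
        return evens
    # all tokens even: g(min) reductions make one token odd,
    # then one fusion per remaining token
    return min(g(x) for x in a) + len(a) - 1
```

On the examples in this thread, min_ops([26, 30, 24, 50]) gives 4 and min_ops([4, 12]) gives 3, matching the counts discussed above.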
Very weak sample in 1C, as a consequence I had been debugging a fake algorithm for an hour.
Does anyone know of any issue with using map<char,int>? I wrote some code and it gave me a wrong answer; then I changed it to map<int,int> and it worked, and when I switched back to map<char,int> I was getting correct results again. This has happened to me before too.
Please explain 2A in a bit more detailed and simplified way.
If $$$x = 2^k$$$, then the binary representation of $$$x$$$ contains only one $$$1$$$. To make sure $$$x \texttt{ and } y \neq 0$$$, $$$y$$$ must contain a $$$1$$$ at that bit. Likewise, to make sure $$$x \texttt{ xor } y \neq 0$$$ while keeping $$$y$$$ smallest, you will see we can just turn one more bit of $$$y$$$ from $$$0$$$ to $$$1$$$, and it should be the lowest bit available. So, if $$$y = 1$$$ then $$$y \leftarrow y + 2$$$, otherwise $$$y \leftarrow y + 1$$$.
If $$$x \neq 2^k$$$, then just make $$$y = \texttt{lowbit}(x)$$$.
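The whole case analysis fits in a few lines; here is a sketch mirroring the editorial's lowbit solution (my own rewrite, not the official code):

```python
def min_y(x):
    y = x & -x            # lowbit: share the lowest set bit of x, so x and y > 0
    if y == x:            # x is a power of two: x xor y would be 0,
        # so set one extra bit; the cheapest free bit is 2 when x == 1, else 1
        y += 2 if x == 1 else 1
    return y
```

For instance, min_y(1) gives 3, min_y(2) gives 3, and min_y(5) gives 1, consistent with the reasoning above.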
1688C = https://codeforces.net/contest/1634/problem/B all over again
I solved E with a simple greedy. Can anyone hack me or prove it?
Our goal is to choose a subset that does not change the answer.
Let $$$g$$$ be the final answer. For each prime factor $$$p$$$ of $$$g$$$, it suffices to choose two elements $$$x,y$$$ that $$$f_p(x),f_p(y)$$$ are the smallest and second smallest. (definition of $$$f_p$$$ is same as editorial)
Now we have a set $$$S$$$, but the gcd of pairwise products in $$$S$$$ still might not be $$$g$$$. Let the redundant prime factors of $$$g$$$ be $$$[p_i]$$$. For each $$$p\in [p_i]$$$, we need to choose two more elements $$$x,y$$$ such that $$$f_p(x)=f_p(y)=0$$$.
This directly passes. However I can't prove that it will never take more than $$$14$$$ elements. https://codeforces.net/contest/1687/submission/159458664
upd: hacked
I can not prove it either, but it seems correct.
Actually there are many strange but correct ways to select the subset. The method in editorial is not the easiest one, but it is easier to prove its correctness.
Even though many people may not agree with me I think div2 C is one of the best problems I have ever seen. Congrats to the author of it Yakumo_Ran!
(in the code of Div.2 B)
To salute Feng Jing Olympic of Imagination.
can someone explain me Div2E problem statement
How come Div2E is rated only 1700 even though it has only about 700 ACs? I am new to MST; apart from Kruskal, what other algorithms are used to build an MST from a graph?
The problem C tutorial is really dedicated only to people who have already solved it! Its hints just restate the problem and don't make any sense.
So the answer is the only letter appearing odd times in the input data.
Why is the above statement correct? What is its proof? And what is the explanation of the two situations of the letter $$$c$$$?
Thanks for the editorial and the round, div2 C is very interesting
Thanks for the bad editorial and the round; div2 C is the worst problem I have ever seen.
Idk about y'all. But from a practice perspective, doing div2 C was the perfect problem to come by and solve quickly after not making progress on a different problem for an hour.
Can anyone explain how to tackle observation-based problems, such as bit-related or number-theory-related ones, in which there are patterns in the output? I usually take lots of examples but am not able to come up with a solution.
Problem E
Please tell why my solution is failing: My Submission
My approach is very similar to the one given in the editorial, but I used a priority queue instead of sorting the edges by weight. Although it gives correct output for the sample test case, it gets a Wrong Answer verdict on submission.
Perhaps a slightly different solution to Div1D.
We know an $$$y$$$ is bad iff there exists an $$$x$$$ and $$$i$$$ where $$$y \in (x^2 + x - a_i, (x + 1)^2 - a_i) = \mathcal{I}(x, i)$$$.
Fix $$$x$$$. If $$$a_i - a_{i - 1} \leq x$$$, $$$\mathcal{I}(x, i) \cup \mathcal{I}(x, i - 1)$$$ is an interval. Thus, the number of bad intervals (for a fixed $$$x$$$) equals the number of $$$i$$$ where $$$a_{i} - a_{i - 1} > x$$$, plus 1.
Said another way, for a fixed $$$i$$$, it contributes to a $$$x$$$ if $$$x \leq a_{i} - a_{i - 1}$$$. Therefore, the total number of bad intervals is $$$\sum_{i} (a_i - a_{i - 1}) = a_n$$$.
After finding all bad intervals, the rest can be solved in any way you like.
I note that the solution is different from the official editorial as the harmonic series appears nowhere.
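A toy version of this interval idea (my own sketch, not the commenter's code; x_max is an ad-hoc enumeration bound that is only large enough for tiny inputs): collect the open bad intervals $$$(x^2 + x - a_i, (x+1)^2 - a_i)$$$ and then sweep $$$k$$$ upward past every interval that covers it.

```python
import math

def is_cute(m):
    u = math.isqrt(m)
    return m - u * u <= u

def smallest_shift_via_intervals(a, x_max=100):
    # bad shifts k lie in the open intervals (x*x + x - a_i, (x+1)**2 - a_i)
    bad = []
    for x in range(1, x_max + 1):
        for v in a:
            lo, hi = x * x + x - v, (x + 1) ** 2 - v
            if hi - lo > 1:                 # interval contains an integer
                bad.append((lo + 1, hi - 1))
    k = 0
    for lo, hi in sorted(bad):              # sweep k past covering intervals
        if lo <= k <= hi:
            k = hi + 1
    # sanity check: fails if x_max was too small for this input
    assert all(is_cute(v + k) for v in a)
    return k
```

For example, smallest_shift_via_intervals([2, 3, 6]) returns 3: shifting by 3 gives 5, 6, 9, all cute.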
Brilliant solution! And we can use radix sorting to optimize the time complexity to $$$O(n+a_n)$$$ since $$$l\le a_n^2$$$ holds for each interval $$$[l,r]$$$.
However, assuming only the comparison model, I can only achieve $$$O(a_n \log a_n)$$$ as I have no way to avoid sorting.
If we take a stronger model like the RAM model, sorting $$$n$$$ integers of $$$w$$$ words each still takes $$$O(n \frac{w}{\log n})$$$ time. Though faster integer sorting algorithms exist, it's not linear after all.
Can someone explain for problem B what's the solution for this test case: 2 1 1024 2048
Can somebody explain why my submission 159609287 to div2 F fails? I tried stress testing it on cfstress.com with a range of constraints but couldn't find any counterexample.
My solution idea is based on the editorial's idea of making all elements of s 0 by repeatedly choosing segments with both ends 0 and making the whole segment 0. To implement this, I keep a segment tree maintaining the number of 0's in any segment of the array s. I sort all the given m segments according to their l value (stored in a vector of pairs, v) and keep 4 sets: offl (segments traversed so far whose both ends are non-zero, sorted by l value), offr (same as offl but sorted by r value), onl (segments whose r end has a 0 but not the l end, sorted by l value) and onr (segments whose l end has a 0 but not the r end, sorted by r value).
Then I iterate through v. If the current segment is non-zero at both ends, I insert it into offl and offr; if exactly one end is 0, I insert it into onl or onr depending on which end. If both ends are 0, I update that segment to all zeros and then repeatedly update offl, offr, onl and onr according to the new state of the array, also moving segments that were initially in onl or onr but became 0 at both ends after the latest update. This step repeats until no more sets or segments can be updated. You can have a look at my code for a better understanding of the approach.
Sure, take a look at Ticket 10982 on CF Stress.
Remember that, in the free plan, you only get 60 seconds of stress testing time on the server. To best utilize it, you should choose the constraints a bit smartly, for example, instead of keeping $$$t\_low$$$ and $$$t\_high$$$ as 1, you can increase both of them to infinity (while keeping the product less than the limit). This would generate, say, $$$2000$$$ test cases per input, which has a higher chance of finding a counter example. If you get "Data too large" verdict, you can gradually decrease the constraints on $$$t$$$ using binary search, but now you at least know that a counter example does exist for smaller $$$n$$$.
But if you don't want to decrease the constraints on $$$t$$$, just copy the large testcase, and in your code, print the failing testcase on the error stream. You can get this TC number from the difference highlighter at the bottom.
For example, I got this TC
Thanks a lot!! Will keep this in mind,the next time.
I understand the solution now. Deleted my comments.
Maybe there is somebody like me unable to understand the official editorial as a g.f. dummy, I would like to share my personal note here.
1687A — The Enchanted Forest. I don't understand why, for k > n, the number of additional mushrooms is (2 * k - 1 - n) * n / 2. Please help me if you know. Thanks. And please don't downvote, because I really don't understand...
I have done it this way:
if (k > n):
We know ans = sum(a[i] for all 0 <= i < n) + something (let's call it y).
To calculate this y: we remain at a[0] for (k - n) minutes, so the total increment in every a[i] is (k - n). Now we are left with n steps; in each step we move from i = 0 to i = n-1 (assuming 0-based indexing). So y becomes
y = sum of all (k - n + j), where 0 <= j <= n-1.
e.g. for n = 5, k = 700: y = (700-5) + (700-4) + (700-3) + (700-2) + (700-1)
Therefore, y = k * n - n * (n + 1) / 2.
Simplifying this, we get
y = (2 * k - 1 - n) * n / 2
my solution
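The closed form above can be checked against the sum it came from; a quick self-contained check (my own addition, not part of the original comment):

```python
def extra_direct(n, k):
    # sum of (k - n + j) over j = 0 .. n-1: what each pick adds when k > n
    return sum(k - n + j for j in range(n))

def extra_closed(n, k):
    # the simplified formula y = (2k - 1 - n) * n / 2
    return (2 * k - 1 - n) * n // 2

# the two expressions agree for every k > n
for n in range(1, 30):
    for k in range(n + 1, n + 40):
        assert extra_direct(n, k) == extra_closed(n, k)
```

For the example in the comment (n = 5, k = 700), both give 695 + 696 + 697 + 698 + 699 = 3485.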
Thank you very much!
Auto comment: topic has been updated by huangzirui (previous revision, new revision, compare).
In 1687A — The Enchanted Forest, I can't understand the solution. For k < n, shouldn't it fail on the test case n = 10, k = 3, a = [1000000, 1, 1, 1, 1, 1, 1, 1, 1, 100000000]? Shouldn't the answer be 2000001, while according to the given algorithm the answer is 1000005?