How many odd integers are there in a triple $$$(a, b, c)$$$?
At least 2.
It's always better to put exactly $$$2$$$ odd integers and exactly $$$1$$$ even integer in one triple $$$(a, b, c)$$$. Is there a way to always choose an optimal $$$(a, b, c)$$$?
Try consecutive integers.
T = int(input())
for _ in range(T) :
    l, r = map(int, input().split())
    # the number of odd integers in [l, r], halved (rounding down)
    print(((r + 1) // 2 - l // 2) // 2)
2007B - Index and Maximum Value
Consider the position of the maximum value in one array. If there's multiple maximum values, choose any. What can you observe?
The position of the maximum value never changes. That is, if one element is the maximum value in the beginning, it will always be.
T = int(input())
for _ in range(T) :
    n, m = map(int, input().split())
    # only the maximum matters: its position never changes, so track the value alone
    v = max(map(int, input().split()))
    for __ in range(m) :
        c, l, r = input().split()
        l = int(l)
        r = int(r)
        if (l <= v <= r) :
            if (c == '+') :
                v = v + 1
            else :
                v = v - 1
        print(v, end = ' ' if __ != m - 1 else '\n')
If you could also perform an operation that decreases an element by $$$a$$$ or $$$b$$$, would anything be different?
No. Decreasing an element by $$$a$$$ is the same as increasing all other elements by $$$a$$$, as it does not affect the range.
If $$$a = b$$$, can you solve the problem?
import math

T = int(input())
for _ in range(T) :
    n, a, b = map(int, input().split())
    d = math.gcd(a, b)
    a = list(map(int, input().split()))
    # only the residues modulo gcd(a, b) matter
    a = [x % d for x in a]
    a.sort()
    res = a[n - 1] - a[0]
    for i in range(1, n) :
        # let a[i] become the minimum: everything before it is lifted by d
        res = min(res, a[i - 1] + d - a[i])
    print(res)
2006A - Iris and Game on the Tree
How to calculate the weight of a leaf quickly?
Consider a formed string. Let's delete the useless parts that won't contribute to the count of $$$\tt{01}$$$ and $$$\tt{10}$$$ substrings. What will be left?
A string with each pair of adjacent bits different. For example, $$$\tt{110001}\rightarrow\tt{101}$$$. Then what decides whether the weight is zero or not?
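Here is a small brute-force check, not part of the editorial, of why only the two endpoints of the collapsed string matter; it takes the weight of a string to be the difference between its numbers of $$$\tt{10}$$$ and $$$\tt{01}$$$ substrings.

from itertools import product

def weight(s):
    # difference between the numbers of "10" and "01" substrings
    return sum(s[i:i + 2] == "10" for i in range(len(s) - 1)) \
         - sum(s[i:i + 2] == "01" for i in range(len(s) - 1))

for n in range(1, 13):
    for bits in product("01", repeat=n):
        s = "".join(bits)
        assert (weight(s) != 0) == (s[0] != s[-1])
print("the weight is non-zero exactly when the first and last characters differ")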
The root is somehow important. When to fill $$$s_1$$$?
T = int(input())
for _ in range(T) :
    n = int(input())
    # x / y: leaves already set to 0 / 1; z: leaves with '?'; w: non-leaf '?' (unimportant) nodes
    x, y, z, w = 0, 0, 0, 0
    deg = [0] * (n + 1)
    for __ in range(n - 1) :
        u, v = map(int, input().split())
        deg[u] += 1
        deg[v] += 1
    s = " " + input()
    for i in range(2, n + 1) :
        if deg[i] == 1 :
            if s[i] == '?' :
                z += 1
            elif s[i] == '0' :
                x += 1
            else :
                y += 1
        elif s[i] == '?' :
            w += 1
    if s[1] == '0' :
        print(y + (z + 1) // 2)
    elif s[1] == '1' :
        print(x + (z + 1) // 2)
    else :
        print(max(x, y) + (z + (w % 2 if x == y else 0)) // 2)
The previous statement of the problem did not directly tell you that the vertices are numbered in dfs order. It was changed to make the statement more readable. Can you prove that the following property is equivalent to saying that the vertices are numbered in dfs order?
For any three nodes $$$i,j,k$$$ that $$$1\le i<j<k\le n$$$, if $$$i$$$ is not an ancestor of $$$j$$$, then $$$i$$$ won't be an ancestor of $$$k$$$.
Let's still focus on "the vertex numbers in one subtree are consecutive". Suppose they are not consecutive: let the minimum vertex number be $$$i$$$ (the root, since $$$p_i<i$$$), the maximum vertex number be $$$k$$$, and one of the missing vertex numbers be $$$j$$$. You can see that $$$i$$$ is an ancestor of $$$k$$$ but not an ancestor of $$$j$$$, so the conclusion is proved.
On a tree with nodes numbered with dfs order, we have to find the distance between two consecutive nodes. What can you think of?
Each edge is passed by exactly two of the paths.
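Below is a small brute-force verification of this observation, not part of the editorial: it builds random trees, relabels them in dfs order, and checks that every edge is covered by exactly two of the $$$n$$$ paths $$$i\rightarrow(i\bmod n)+1$$$.

import random

def dfs_relabel(par):
    # par[v] is the parent of v in a tree rooted at 1 (par[1] = 0);
    # returns the parent array of the same tree relabelled in dfs order
    n = len(par) - 1
    ch = [[] for _ in range(n + 1)]
    for v in range(2, n + 1):
        ch[par[v]].append(v)
    label, new_par, stack = {}, [0] * (n + 1), [1]
    while stack:
        v = stack.pop()
        label[v] = len(label) + 1
        if par[v]:
            new_par[label[v]] = label[par[v]]
        stack.extend(reversed(ch[v]))
    return new_par

def check(n, seed):
    random.seed(seed)
    par = [0, 0] + [random.randint(1, v - 1) for v in range(2, n + 1)]
    par = dfs_relabel(par)
    depth = [0] * (n + 1)
    for v in range(2, n + 1):
        depth[v] = depth[par[v]] + 1
    cover = [0] * (n + 1)      # cover[v] = paths passing the edge (v, par[v])
    for i in range(1, n + 1):
        x, y = i, i % n + 1
        while x != y:          # walk the deeper endpoint up until they meet
            if depth[x] < depth[y]:
                x, y = y, x
            cover[x] += 1
            x = par[x]
    assert all(cover[v] == 2 for v in range(2, n + 1))

for seed in range(100):
    check(12, seed)
print("every edge is passed by exactly two paths on all sampled trees")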
How to calculate the maximum value of $$$\text{dist}(i,(i\bmod n)+1)$$$? Is it different when the weights of all of its edges are known?
#include <bits/stdc++.h>
namespace FastIO {
template <typename T> inline T read() { T x = 0, w = 0; char ch = getchar(); while (ch < '0' || ch > '9') w |= (ch == '-'), ch = getchar(); while ('0' <= ch && ch <= '9') x = x * 10 + (ch ^ '0'), ch = getchar(); return w ? -x : x; }
template <typename T> inline void write(T x) { if (!x) return; write<T>(x / 10), putchar(x % 10 ^ '0'); }
template <typename T> inline void print(T x) { if (x < 0) putchar('-'), x = -x; else if (x == 0) putchar('0'); write<T>(x); }
template <typename T> inline void print(T x, char en) { if (x < 0) putchar('-'), x = -x; else if (x == 0) putchar('0'); write<T>(x), putchar(en); }
}; using namespace FastIO;
#define MAXN 200001
int fa[MAXN], dep[MAXN];
int c1[MAXN], c2[MAXN], len[MAXN];
void solve() {
int N = read<int>(); long long w = read<long long>();
for (int i = 2; i <= N; ++i) fa[i] = read<int>();
for (int i = 2; i <= N; ++i) dep[i] = dep[fa[i]] + 1;
for (int i = 1; i <= N; ++i) len[i] = c1[i] = 0;
for (int i = 1, x, y; i <= N; ++i) {
x = i, y = (i == N ? 1 : i + 1);
while (x != y) {
if (dep[x] < dep[y]) std::swap(x, y);
(c1[x] ? c2[x] : c1[x]) = i, x = fa[x], ++len[i];
}
}
long long sum = 0, sur = N;
for (int i = 1, x; i < N; ++i) {
x = read<int>(), sum += read<long long>();
if ((--len[c1[x]]) == 0) --sur;
if ((--len[c2[x]]) == 0) --sur;
print<long long>(sum * 2 + sur * (w - sum), " \n"[i == N - 1]);
}
}
int main() { int T = read<int>(); while (T--) solve(); return 0; }
Can you solve the problem if the nodes are not numbered by dfs order?
Let's try to judge whether an array is brilliant, and try to sort it first. What happens if we subtract $$$1$$$ from each element?
Nothing will change.
We only care about the relative positions between the elements. Let's consider the difference array $$$b$$$ of array $$$a$$$: $$$b_i=a_i-a_{i-1}(i\ge 2)$$$.
What will the difference array look like after no more operations can be done?
#include<bits/stdc++.h>
using namespace std;
#define int long long
#define pii pair<int,int>
#define all(v) v.begin(),v.end()
#define pb push_back
#define REP(i,b,e) for(int i=(b);i<(int)(e);++i)
#define over(x) {cout<<x<<endl;return;}
#define cntbit(x) __builtin_popcount(x)
int n;
int a[400005],lsteq[400005];
struct two_pointers{
int l,r,mid;
int val[400005],rt;
void init(){
l=r=mid=rt=0;
val[0]=a[0];
}
inline void addr(){++r;rt=__gcd(rt,a[r]);}
inline void addl(){if(l+1>r)addr();++l;if(l>mid){rt=0,mid=r;val[mid]=a[mid];
for(int i=mid-1;i>=l;--i)val[i]=__gcd(val[i+1],a[i]);}}
inline int query(){return __gcd(val[l],rt);}
};
void Main() {
cin>>n;
REP(i,0,n)cin>>a[i];
REP(i,1,n)a[n-i]-=a[n-i-1];
REP(i,1,n)a[i-1]=llabs(a[i]);
--n;
lsteq[n]=n;
for(int i=n-1;i>=0;--i)lsteq[i]=(a[i]? i:lsteq[i+1]);
two_pointers p;p.init();
int ans=1,cnt=1;
REP(i,0,n)if(lsteq[i]==i){
while(p.l<i)p.addl();
while(p.r<n&&cntbit(p.query())>1)p.addr();
ans+=(n-p.r)*cnt+1;cnt=1;
}else ans+=lsteq[i]-i+1,++cnt;
cout<<ans<<endl;
}
void TC() {
int tc=1;
cin>>tc;
while(tc--){
Main();
cout.flush();
}
}
signed main() {
return cin.tie(0),cout.tie(0),ios::sync_with_stdio(0),TC(),0;
}
#include<bits/stdc++.h>
using namespace std;
#define int long long
#define pii pair<int,int>
#define all(v) v.begin(),v.end()
#define pb push_back
#define REP(i,b,e) for(int i=(b);i<(int)(e);++i)
#define over(x) {cout<<x<<endl;return;}
#define cntbit(x) __builtin_popcount(x)
int n;
int a[400005],res[400005],lsteq[400005];
int st[400005][20];
int query(int l,int r){
int s=__lg(r-l+1);
return __gcd(st[l][s],st[r-(1<<s)+1][s]);
}
void Main() {
cin>>n;
REP(i,0,n)cin>>a[i];
--n;
if(!n)over(1)
REP(i,0,n)st[i][0]=llabs(a[i]-a[i+1]);
REP(j,0,__lg(n)){
REP(i,0,n-(1<<(j+1))+1){
st[i][j+1]=__gcd(st[i][j],st[i+(1<<j)][j]);
}
}
lsteq[n]=n;
for(int i=n-1;i>=0;--i)lsteq[i]=(a[i]==a[i+1]? lsteq[i+1]:i);
int ans=1;
int l=0,r=0;
REP(i,0,n){
l=max(l,lsteq[i]);r=max(r,l);
while(r<n&&cntbit(query(l,r))>1)++r;
ans+=n-r;ans+=lsteq[i]-i+1;
}
cout<<ans<<endl;
}
void TC() {
int tc=1;
cin>>tc;
while(tc--){
Main();
cout.flush();
}
}
signed main() {
return cin.tie(0),cout.tie(0),ios::sync_with_stdio(0),TC(),0;
}
There's a harder but similar version of the problem. Can you solve it?
There're $$$q$$$ queries: $$$L,R,k$$$. Find the number of pairs $$$(l,r)$$$ such that $$$L\le l\le r\le R$$$ and $$$(\lvert Q({a_l\dots a_r}) \rvert-1) \cdot k\ge \max(a_l\dots a_r) - \min(a_l\dots a_r) $$$, where $$$Q(\text{array})$$$ means the result of putting the array into the set.
2006D - Iris and Adjacent Products
Let's use another way to describe the problem. Call an array good if it can be reordered to satisfy the condition. How many numbers do we have to change to make the array good?
To handle this, we need to know what is the optimal way to reorder an array.
Sort the array. Suppose $$$a_1$$$ is the minimum and $$$a_n$$$ is the maximum. Where to place the two elements? Can you prove this is optimal?
How to maintain the answer better using the properties?
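Before the solution, here is a small brute-force sanity check, not part of the editorial, of one natural candidate for the optimal order hinted at above: sort the array and interleave it as largest, smallest, second largest, second smallest, and so on. This particular interleaving is my reading of the hint (an assumption), not something stated verbatim; the check compares it against all permutations on tiny inputs.

from itertools import permutations
import random

def ok(order, k):
    return all(x * y <= k for x, y in zip(order, order[1:]))

def interleaved(a):
    # largest, smallest, second largest, second smallest, ...
    a = sorted(a)
    out, lo, hi = [], 0, len(a) - 1
    while lo <= hi:
        out.append(a[hi]); hi -= 1
        if lo <= hi:
            out.append(a[lo]); lo += 1
    return out

random.seed(1)
for _ in range(2000):
    n, k = random.randint(2, 6), random.randint(1, 30)
    a = [random.randint(1, 8) for _ in range(n)]
    exists = any(ok(p, k) for p in permutations(a))
    assert exists == ok(interleaved(a), k)
print("the interleaved order works whenever any order works")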
#include<bits/stdc++.h>
using namespace std;
#define all(v) v.begin(),v.end()
#define pb push_back
#define REP(i,b,e) for(int i=(b);i<(int)(e);++i)
#define over(x) {cout<<(x)<<endl;return;}
int n,q,r;
int a[300005];
int sum[300005];
int ql[300005],qr[300005];
int ans[300005];
int cnt[300005],cnt2[300005];
void Main() {
cin>>n>>q>>r;
REP(i,0,n)cin>>a[i];
sum[0]=0;
int B=sqrt(r);
REP(i,0,n)sum[i+1]=sum[i]+(a[i]<=B);
REP(i,0,q){
int x,y;
cin>>x>>y;
--x,--y;
ql[i]=x;qr[i]=y;
ans[i]=0;
if(sum[y+1]-sum[x]<(y-x+1)/2)ans[i]=(y-x+1)/2-sum[y+1]+sum[x];
}
REP(i,0,B){
REP(j,0,n+1)cnt[j]=cnt2[j]=0;
REP(j,0,n){
cnt[j+1]=cnt[j]+(a[j]<=i);
cnt2[j+1]=cnt2[j]+(a[j]<=r/(i+1));
}
REP(j,0,q){
int x=ql[j],y=qr[j],l=y-x+1;
int c1=cnt2[y+1]-cnt2[x],c2=cnt[y+1]-cnt[x];
ans[j]=max(ans[j],min((l-c1-c2+1)/2,l/2-c2));
}
}
REP(i,0,q)cout<<ans[i]<<' ';
cout<<endl;
}
void TC() {
int tc=1;
cin>>tc;
while(tc--){
Main();
cout.flush();
}
}
signed main() {
return cin.tie(0),cout.tie(0),ios::sync_with_stdio(0),TC(),0;
}
/*
1. CLEAR the arrays (ESPECIALLY multitests)
2. DELETE useless output
*/
2006E - Iris's Full Binary Tree
Consider the degrees of vertices in a binary tree. What can you observe?
The root has degree $$$2$$$, the non-leaf nodes different from the root have degree $$$3$$$, the leaf nodes have degree $$$1$$$. The degree of every node is not greater than $$$3$$$.
Choose a root. It should have degree $$$\le 2$$$. Is it always possible to build a binary tree on the initial tree if all nodes have degree $$$\le 3$$$? What will be the depth of the binary tree with the given root?
It's always possible to build the binary tree, and the depth of the tree is the maximum distance from the root to any node in the tree.
Consider the binary depth of the full tree. We have to go over every possible root, but how do we find the maximum distance from the root to the other nodes? Are there any special nodes?
Diameter.
Is it possible to maintain the answer for all the roots when adding a leaf to the tree?
#include<bits/stdc++.h>
using namespace std;
#define int long long
#define pii pair<int,int>
#define all(v) v.begin(),v.end()
#define pb push_back
#define REP(i,b,e) for(int i=(b);i<(int)(e);++i)
#define over(x) {cout<<(x)<<endl;return;}
struct ds{
int seg[2000005],tag[2000005];
void build(int l,int r,int p){
seg[p]=(l==0? 0:1e18),tag[p]=0;
if(l==r)return;
int m=(l+r)>>1;
build(l,m,p*2+1);build(m+1,r,p*2+2);
}
void pushdown(int p){
if(!tag[p])return;
tag[p*2+1]+=tag[p];tag[p*2+2]+=tag[p];
seg[p*2+1]+=tag[p];seg[p*2+2]+=tag[p];
tag[p]=0;
}
void add(int l,int r,int s,int t,int p,int val){
if(l<=s&&t<=r){
seg[p]+=val;tag[p]+=val;
return;
}
int m=(s+t)>>1;pushdown(p);
if(m>=l)add(l,r,s,m,p*2+1,val);
if(m<r)add(l,r,m+1,t,p*2+2,val);
seg[p]=min(seg[p*2+1],seg[p*2+2]);
}
void update(int pos,int l,int r,int p,int val){
if(l==r){
seg[p]=val;
return;
}
int m=(l+r)>>1;pushdown(p);
if(m>=pos)update(pos,l,m,p*2+1,val);
else update(pos,m+1,r,p*2+2,val);
seg[p]=min(seg[p*2+1],seg[p*2+2]);
}
}seg;
int n;
vector<int>v[500005];
int fa[500005],an[500005][21];
int dep[500005],dfn[500005],rev[500005];
int tot,deg[500005],ls[500005];
void dfs(int x,int pre,int d){
dep[x]=d;fa[x]=pre;an[x][0]=fa[x];ls[x]=1;
REP(i,0,__lg(n+1))if(an[x][i]==-1)an[x][i+1]=-1;else an[x][i+1]=an[an[x][i]][i];
rev[tot]=x;dfn[x]=tot++;
for(auto i:v[x])dfs(i,x,d+1),ls[x]+=ls[i];
}
int getlca(int x,int y){
if(dep[x]<dep[y])swap(x,y);
int d=dep[x]-dep[y];
for(int i=__lg(d);i>=0;--i)if((d>>i)&1)x=an[x][i];
if(x==y)return x;
d=dep[x];
for(int i=__lg(d);i>=0;--i)if((1<<i)<=dep[x]&&an[x][i]!=an[y][i])x=an[x][i],y=an[y][i];
return fa[x];
}
int getan(int x,int y){
if(dfn[y]>=dfn[x]&&dfn[y]<=ls[x]){
int d=dep[y]-dep[x]-1;
for(int i=0;(1<<i)<=d;++i)if((d>>i)&1)y=an[y][i];
return y;
}else return fa[x];
}
void updside(int x,int y){
if(dfn[y]>=dfn[x]&&dfn[y]<=ls[x]){
seg.add(0,n-1,0,n-1,0,1);
x=getan(x,y);
seg.add(dfn[x],ls[x],0,n-1,0,-1);
}else seg.add(dfn[x],ls[x],0,n-1,0,1);
}
int getdist(int x,int y){return dep[x]+dep[y]-2*dep[getlca(x,y)];}
struct diameter{
int x,y,len;
int update(int z){
if(x==y){
int d=getdist(x,z);
if(d<=len)return d+len;
updside(y,z);
y=getan(y,z);
return d+len;
}else{
int d1=getdist(x,z),d2=getdist(y,z),d3=min(d1,d2);
if(d3<=len)return d3+len+1;
if(d1>d2)swap(x,y);
updside(y,z);
y=x;++len;return len*2;
}
}
int query(int z){
if(x==y)return len+getdist(x,z);
else return len+min(getdist(x,z),getdist(y,z))+1;
}
};
void Main() {
cin>>n;
REP(i,0,n)v[i].clear();
REP(i,1,n){
cin>>fa[i];--fa[i];
v[fa[i]].pb(i);
}
tot=0;dfs(0,-1,0);seg.build(0,n-1,0);
REP(i,0,n)deg[i]=0,ls[i]+=dfn[i]-1;
diameter d={0,0,0};
cout<<"1 ";
REP(i,1,n){
++deg[fa[i]];++deg[i];
if(deg[fa[i]]==4){
REP(j,i,n)cout<<-1<<' ';
cout<<endl;return;
}
if(deg[fa[i]]==3)seg.update(dfn[fa[i]],0,n-1,0,1e18);
seg.update(dfn[i],0,n-1,0,d.update(i));
cout<<seg.seg[0]+1<<' ';
}
cout<<endl;
}
void TC() {
int tc=1;
cin>>tc;
while(tc--){
Main();
cout.flush();
}
}
signed main() {
return cin.tie(0),cout.tie(0),ios::sync_with_stdio(0),TC(),0;
}
/*
1. CLEAR the arrays (ESPECIALLY multitests)
2. DELETE useless output
*/
Here're two methods. Method $$$1$$$ is my method, and method $$$2$$$ is some testers' method.
Method 1:
Each column or row should be painted at most once. Suppose we paint each column and row exactly once. Then what does the number on one position tell us?
It tells us whether the row was painted first or the column was painted first. Try to build a graph.
We can build a directed bipartite graph. How to judge whether it is possible to paint the graph? How to calculate the beauty of one single grid?
Do a topological sort on the graph. A cycle means it is impossible. Otherwise, the topological sort splits the graph into several layers. The beauty is the product of the factorials of the numbers of nodes in the layers (except the first layer, since we needn't perform those operations).
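A minimal sketch, not the author's code, of the layered computation just described: it takes the directed graph as an edge list (rows and columns are assumed to be already encoded as node indices; building the graph from the grid is done elsewhere), runs a layered topological sort, and returns $$$0$$$ on a cycle, otherwise the product of the factorials of the layer sizes, skipping the first layer.

from math import factorial

MOD = 998244353

def beauty(n_nodes, edges):
    adj = [[] for _ in range(n_nodes)]
    indeg = [0] * n_nodes
    for u, v in edges:            # an edge u -> v: u must be painted before v
        adj[u].append(v)
        indeg[v] += 1
    layer = [u for u in range(n_nodes) if indeg[u] == 0]
    seen, ans, first = 0, 1, True
    while layer:
        seen += len(layer)
        if not first:             # operations inside one layer commute
            ans = ans * factorial(len(layer)) % MOD
        first = False
        nxt = []
        for u in layer:
            for v in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    nxt.append(v)
        layer = nxt
    return ans if seen == n_nodes else 0   # leftover nodes mean a cycle

# toy call: a chain 0 -> 2 -> 1 -> 3 gives four singleton layers, beauty 1
print(beauty(4, [(0, 2), (2, 1), (1, 3)]))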
How to solve the original problem when the initial grid has positive beauty? How to optimize? Consider adjacent layers.
How to solve the original problem when the initial grid is impossible to paint? Are there too many ways to flip a block?
How to optimize the process of building the graph and doing topological sort?
#include<bits/stdc++.h>
using namespace std;
#define MOD 998244353
#define speMOD 2933256077ll
#define int long long
#define pii pair<int,int>
#define all(v) v.begin(),v.end()
#define pb push_back
#define REP(i,b,e) for(int i=(b);i<(e);++i)
#define over(x) {cout<<x<<endl;return;}
#define lowbit(x) ((x)&(-(x)))
#define cntbit(x) __builtin_popcount(x)
#define rf(v) shuffle(all(v),sd);
mt19937 sd(std::random_device{}());
int qpow (int a,int b,int m=MOD,int res=1){
a%=m;
while(b>0)res=(b&1)?(res*a%m):(res),a=a*a%m,b>>=1;
return res;
}
int fac[2000005],inv[2000005],si[2000005];
void init(int n){
fac[0]=inv[0]=si[0]=1;
REP(i,1,n+1)fac[i]=fac[i-1]*i%MOD;
inv[n]=qpow(fac[n],MOD-2);
for(int i=n-1;i>=1;--i)inv[i]=inv[i+1]*(i+1)%MOD;
REP(i,1,n+1)si[i]=inv[i]*fac[i-1]%MOD;
}
int binom(int x,int y){
return x<y||y<0? 0:fac[x]*inv[y]%MOD*inv[x-y]%MOD;
}
namespace fastIO {
#define BUF_SIZE 6000010
bool IOerror = 0;
inline char nc() {
static char buf[BUF_SIZE], *p1 = buf + BUF_SIZE, *pend = buf + BUF_SIZE;
if(p1 == pend) {
p1 = buf;
pend = buf + fread(buf, 1, BUF_SIZE, stdin);
if(pend == p1) {
IOerror = 1;
return -1;
}
}
return *p1++;
}
inline bool blank(char ch) {
return ch == ' ' || ch == '\n' || ch == '\r' || ch == '\t';
}
inline void read(int &x) {
char ch;
while(blank(ch = nc()));
if(IOerror)
return;
for(x = ch - '0'; (ch = nc()) >= '0' && ch <= '9'; x = x * 10 + ch - '0');
}
#undef BUF_SIZE
};
using namespace fastIO;
int n,m;
int a[200005],b[200005];
vector<pii>c;
int d1[200005],d2[200005];
int visa[200005],visb[200005];
vector<int>h[200005],nodes[200005],buc[200005];
int X,Y;
int id1[200005],id2[200005],cnt[400005];
int solve(){
REP(i,0,n+1)buc[i].clear();
REP(i,0,n)buc[a[i]].pb(i);
int num=0;
REP(i,0,n+1)for(auto j:buc[i])id1[num++]=j;
REP(i,0,n+1)buc[i].clear();
REP(i,0,n)buc[b[i]].pb(i);
num=0;
REP(i,0,n+1)for(auto j:buc[i])id2[num++]=j;
REP(i,0,n)d1[i]=d2[i]=-1;
int ans=1,x=0,y=0,cur=0;
while(x<n||y<n){
if(x<n&&a[id1[x]]<=y){
int sum=0;
while(x<n&&a[id1[x]]<=y)++sum,d1[id1[x++]]=cur;
if(cur)(ans*=fac[sum])%=MOD;
}else if(y<n&&b[id2[y]]<=x){
int sum=0;
while(y<n&&b[id2[y]]<=x)++sum,d2[id2[y++]]=cur;
if(cur)(ans*=fac[sum])%=MOD;
}else return X=x,Y=y,0;
++cur;
}
return ans;
}
int updater(int x,int y){
int f=0,ret;
for(auto i:c)if(i==make_pair(x,y))f=1;
if(f)--b[y],++a[x],ret=solve(),++b[y],--a[x];
else ++b[y],--a[x],ret=solve(),--b[y],++a[x];
return ret;
}
void Main() {
fastIO::read(n);fastIO::read(m);
c.clear();
REP(i,0,n)a[i]=b[i]=0;
REP(i,0,n*2)cnt[i]=0;
REP(i,0,n)visa[i]=visb[i]=0,h[i].clear(),nodes[i].clear();
REP(i,0,m){
int x,y;
fastIO::read(x);fastIO::read(y);
--x,--y;
c.pb({x,y});
++b[y];--a[x];
}
REP(i,0,n)a[i]+=n;
int ans=solve();
if(!ans){
if(X==n-1||Y==n-1)over(0)
int x=id1[X],y=id2[Y],f=0;
for(auto i:c)if(i==make_pair(x,y)){f=1;break;}
if(f){
for(auto i:c)if(i.first==x)visb[i.second]=1;
for(auto i:c)if(i.second==y)visa[i.first]=1;
X=Y=-1;
for(auto i:c)if(i.first!=x&&i.second!=y){
if(!visa[i.first]&&!visb[i.second]){X=i.first,Y=i.second;break;}
}
if(X==-1)over(0);
}else{
X=Y=-1;
for(auto i:c)h[i.first].pb(i.second);
int sum=0;
for(auto i:h[x])visa[i]=1,++sum;
REP(i,0,n){
int found=0;
for(auto j:h[i])if(j==y){found=1;break;}
if(!found)continue;
vector<int>z;
for(auto j:h[i])if(visa[j])visa[j]=0,--sum,z.pb(j);
if(sum){
X=i;
REP(j,0,n)if(visa[j])Y=j;
break;
}
for(auto j:z)++sum,visa[j]=1;
}
if(X==-1)over(0);
}
int x2=X,y2=Y;
int ans=updater(x,y)+updater(x,y2)+updater(x2,y)+updater(x2,y2);
cout<<(ans%MOD*qpow(n*n%MOD,MOD-2))%MOD<<endl;
return;
}
REP(i,0,n)++cnt[d1[i]],++cnt[d2[i]];
int res=0;
(res+=ans*cnt[0]%MOD*(cnt[1]==1? cnt[2]+1:1))%=MOD;
if(cnt[2]&&cnt[1])(res+=ans*(cnt[2]==1? cnt[3]+1:1))%=MOD;
REP(i,2,2*n-1)if(cnt[i]&&cnt[i+1]){
(res+=ans*(cnt[i+1]==1? cnt[i+2]+1:1)%MOD*(cnt[i]==1? cnt[i-1]+1:1))%=MOD;
}
cout<<res*qpow(n*n%MOD,MOD-2)%MOD<<endl;
}
void TC() {
init(1000000);
int tc=1;
fastIO::read(tc);
while(tc--) {
Main();
cout.flush();
}
}
signed main() {
return cin.tie(0),cout.tie(0),ios::sync_with_stdio(0),TC(),0;
}
/*
1. CLEAR the arrays (ESPECIALLY multitests)
2. DELETE useless output
*/
Method 2:
Try to find some way to "sort" the grid, for swapping different rows and different columns doesn't matter. How?
Sort by the number of $$$1$$$.
How to calculate the beauty?
The beauty is the product of the factorials of the numbers of equal rows and equal columns.
How to solve the original problem when the initial grid has positive beauty? How to optimize?
How to solve the original problem when the initial grid is impossible to paint? Are there too many ways to flip a block?
Read the hints.
Let's sort the rows and columns in some way, which means you should swap two rows or two columns each time to make them ordered. Let's sort by the number of $$$1$$$ in each row and column. For example, the result of sorting the left part is shown in the right part:
We claim that the grid is solvable if and only if the later rows are "included" in the previous rows, and the same holds for the columns. For example, if a position in some row is filled with $$$1$$$, then the same position in any previous row must also be $$$1$$$. This is correct because you'll notice that neither of the following two types of submatrices can appear in a solvable grid. To check that, you only need to check adjacent rows and columns.
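As a small illustration, not the testers' code, of the check described above: represent each row by the set of columns holding a $$$1$$$, sort the rows by the number of $$$1$$$s in non-increasing order, and require every row to contain the next one; the columns are checked the same way on the transposed grid.

def chain_ok(sets):
    sets = sorted(sets, key=len, reverse=True)
    return all(sets[i] >= sets[i + 1] for i in range(len(sets) - 1))

def solvable(grid):                       # grid[i][j] is 1 or 2
    n, m = len(grid), len(grid[0])
    rows = [{j for j in range(m) if grid[i][j] == 1} for i in range(n)]
    cols = [{i for i in range(n) if grid[i][j] == 1} for j in range(m)]
    return chain_ok(rows) and chain_ok(cols)

# e.g. [[1, 1], [1, 2]] forms a chain, while [[1, 2], [2, 1]] does not
print(solvable([[1, 1], [1, 2]]), solvable([[1, 2], [2, 1]]))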
After you sort all of that, the number of ways to paint it is easy to count. It's essentially the product of the factorials of the numbers of each kind of row (for example, the last two rows in the right part above). The only thing to notice is that the first step of painting shouldn't be counted.
Consider modifying one block. If the initial grid is solvable, there are only $$$O(n)$$$ types of changes (in each row, all the $$$1$$$s can be considered the same, and likewise all the $$$2$$$s). Each type of change can be handled in $$$O(1)$$$ with a simple case analysis.
If the initial grid is unsolvable, we should find the two adjacent rows and the two adjacent columns where the problem starts. We claim that the block to change must be one of the four blocks at the intersection of these rows and columns. Just try every possible situation and brute force it.
For example, consider the following matrix: The first two columns and the last two rows went wrong. So let's try to change blocks $$$(3,1),(3,2),(4,1),(4,2)$$$ to judge whether it is possible.
The time complexity is $$$O((n+m)\log n)$$$, and if you implement carefully enough you can solve it in $$$O(n+m)$$$.
#if defined(LOCAL) or not defined(LUOGU)
#pragma GCC optimize(3)
#pragma GCC optimize("Ofast,unroll-loops")
#endif
#include<bits/stdc++.h>
using namespace std;
struct time_helper
{
#ifdef LOCAL
clock_t time_last;
#endif
time_helper()
{
#ifdef LOCAL
time_last=clock();
#endif
}
void test()
{
#ifdef LOCAL
auto time_now=clock();
std::cerr<<"time:"<<1.*(time_now-time_last)/CLOCKS_PER_SEC<<";all_time:"<<1.*time_now/CLOCKS_PER_SEC<<std::endl;
time_last=time_now;
#endif
}
~time_helper()
{
test();
}
}time_helper;
#ifdef LOCAL
#include"dbg.h"
#else
#define dbg(...) (__VA_ARGS__)
#endif
namespace Fread{const int SIZE=1<<16;char buf[SIZE],*S,*T;inline char getchar(){if(S==T){T=(S=buf)+fread(buf,1,SIZE,stdin);if(S==T)return'\n';}return *S++;}}namespace Fwrite{const int SIZE=1<<16;char buf[SIZE],*S=buf,*T=buf+SIZE;inline void flush(){fwrite(buf,1,S-buf,stdout);S=buf;}inline void putchar(char c){*S++=c;if(S==T)flush();}struct NTR{~NTR(){flush();}}ztr;}
#define getchar Fread::getchar
#define putchar Fwrite::putchar
#define Setprecision 10
#define between '\n'
#define __int128 long long
template<typename T>struct is_char{static constexpr bool value=(std::is_same<T,char>::value||std::is_same<T,signed char>::value||std::is_same<T,unsigned char>::value);};template<typename T>struct is_integral_ex{static constexpr bool value=(std::is_integral<T>::value||std::is_same<T,__int128>::value)&&!is_char<T>::value;};template<typename T>struct is_floating_point_ex{static constexpr bool value=std::is_floating_point<T>::value||std::is_same<T,__float128>::value;};namespace Fastio{struct Reader{template<typename T>typename std::enable_if_t<std::is_class<T>::value,Reader&>operator>>(T&x){for(auto &y:x)*this>>y;return *this;}template<typename T>typename std::enable_if_t<is_integral_ex<T>::value,Reader&>operator>>(T&x){char c=getchar();short f=1;while(c<'0'||c>'9'){if(c=='-')f*=-1;c=getchar();}x=0;while(c>='0'&&c<='9'){x=(x<<1)+(x<<3)+(c^48);c=getchar();}x*=f;return *this;}template<typename T>typename std::enable_if_t<is_floating_point_ex<T>::value,Reader&>operator>>(T&x){char c=getchar();short f=1,s=0;x=0;T t=0;while((c<'0'||c>'9')&&c!='.'){if(c=='-')f*=-1;c=getchar();}while(c>='0'&&c<='9'&&c!='.')x=x*10+(c^48),c=getchar();if(c=='.')c=getchar();else return x*=f,*this;while(c>='0'&&c<='9')t=t*10+(c^48),s++,c=getchar();while(s--)t/=10.0;x=(x+t)*f;return*this;}template<typename T>typename std::enable_if_t<is_char<T>::value,Reader&>operator>>(T&c){c=getchar();while(c=='\n'||c==' '||c=='\r')c=getchar();return *this;}Reader&operator>>(char*str){int len=0;char c=getchar();while(c=='\n'||c==' '||c=='\r')c=getchar();while(c!='\n'&&c!=' '&&c!='\r')str[len++]=c,c=getchar();str[len]='\0';return*this;}Reader&operator>>(std::string&str){str.clear();char c=getchar();while(c=='\n'||c==' '||c=='\r')c=getchar();while(c!='\n'&&c!=' '&&c!='\r')str.push_back(c),c=getchar();return*this;}Reader(){}}cin;const char endl='\n';struct Writer{typedef __int128 mxdouble;template<typename T>typename std::enable_if_t<std::is_class<T>::value,Writer&>operator<<(T x){for(auto &y:x)*this<<y<<between;*this<<'\n';return *this;}template<typename T>typename std::enable_if_t<is_integral_ex<T>::value,Writer&>operator<<(T x){if(x==0)return putchar('0'),*this;if(x<0)putchar('-'),x=-x;static int sta[45];int top=0;while(x)sta[++top]=x%10,x/=10;while(top)putchar(sta[top]+'0'),--top;return*this;}template<typename T>typename std::enable_if_t<is_floating_point_ex<T>::value,Writer&>operator<<(T x){if(x<0)putchar('-'),x=-x;x+=pow(10,-Setprecision)/2;mxdouble _=x;x-=(T)_;static int sta[45];int top=0;while(_)sta[++top]=_%10,_/=10;if(!top)putchar('0');while(top)putchar(sta[top]+'0'),--top;putchar('.');for(int i=0;i<Setprecision;i++)x*=10;_=x;while(_)sta[++top]=_%10,_/=10;for(int i=0;i<Setprecision-top;i++)putchar('0');while(top)putchar(sta[top]+'0'),--top;return*this;}template<typename T>typename std::enable_if_t<is_char<T>::value,Writer&>operator<<(T c){putchar(c);return*this;}Writer&operator<<(char*str){int cur=0;while(str[cur])putchar(str[cur++]);return *this;}Writer&operator<<(const char*str){int cur=0;while(str[cur])putchar(str[cur++]);return*this;}Writer&operator<<(std::string str){int st=0,ed=str.size();while(st<ed)putchar(str[st++]);return*this;}Writer(){}}cout;}
#define cin Fastio::cin
#define cout Fastio::cout
#define endl Fastio::endl
void solve();
main()
{
int t=1;
cin>>t;
while(t--)solve();
}
template <uint32_t mod>
struct LazyMontgomeryModInt {
using mint = LazyMontgomeryModInt;
using i32 = int32_t;
using u32 = uint32_t;
using u64 = uint64_t;
static constexpr u32 get_r() {
u32 ret = mod;
for (i32 i = 0; i < 4; ++i) ret *= 2 - mod * ret;
return ret;
}
static constexpr u32 r = get_r();
static constexpr u32 n2 = -u64(mod) % mod;
static_assert(r * mod == 1, "invalid, r * mod != 1");
static_assert(mod < (1 << 30), "invalid, mod >= 2 ^ 30");
static_assert((mod & 1) == 1, "invalid, mod % 2 == 0");
u32 a;
constexpr LazyMontgomeryModInt() : a(0) {}
constexpr LazyMontgomeryModInt(const int64_t &b)
: a(reduce(u64(b % mod + mod) * n2)){};
static constexpr u32 reduce(const u64 &b) {
return (b + u64(u32(b) * u32(-r)) * mod) >> 32;
}
constexpr mint &operator+=(const mint &b) {
if (i32(a += b.a - 2 * mod) < 0) a += 2 * mod;
return *this;
}
constexpr mint &operator-=(const mint &b) {
if (i32(a -= b.a) < 0) a += 2 * mod;
return *this;
}
constexpr mint &operator*=(const mint &b) {
a = reduce(u64(a) * b.a);
return *this;
}
constexpr mint &operator/=(const mint &b) {
*this *= b.inverse();
return *this;
}
constexpr mint operator+(const mint &b) const { return mint(*this) += b; }
constexpr mint operator-(const mint &b) const { return mint(*this) -= b; }
constexpr mint operator*(const mint &b) const { return mint(*this) *= b; }
constexpr mint operator/(const mint &b) const { return mint(*this) /= b; }
constexpr bool operator==(const mint &b) const {
return (a >= mod ? a - mod : a) == (b.a >= mod ? b.a - mod : b.a);
}
constexpr bool operator!=(const mint &b) const {
return (a >= mod ? a - mod : a) != (b.a >= mod ? b.a - mod : b.a);
}
constexpr mint operator-() const { return mint() - mint(*this); }
constexpr mint pow(u64 n) const {
mint ret(1), mul(*this);
while (n > 0) {
if (n & 1) ret *= mul;
mul *= mul;
n >>= 1;
}
return ret;
}
constexpr mint inverse() const { return pow(mod - 2); }
friend ostream &operator<<(ostream &os, const mint &b) {
return os << b.get();
}
friend istream &operator>>(istream &is, mint &b) {
int64_t t;
is >> t;
b = LazyMontgomeryModInt<mod>(t);
return (is);
}
constexpr u32 get() const {
u32 ret = reduce(a);
return ret >= mod ? ret - mod : ret;
}
static constexpr u32 get_mod() { return mod; }
explicit operator u32() const {return get();}
};
#define modint LazyMontgomeryModInt
#define mint modint<p>
constexpr int p=998244353;
std::pmr::monotonic_buffer_resource pool(100000000);
void solve()
{
int n,m;
cin>>n>>m;
vector<std::pmr::vector<int>>a(n,std::pmr::vector<int>{&pool});
time_helper.test();
while(m--)
{
int u,v;
cin>>u>>v;
u--,v--;
a[u].emplace_back(v);
}
time_helper.test();
std::pmr::vector<mint>inv{&pool},fac{&pool},ifac{&pool};
inv.resize(n+1),fac.resize(n+1),ifac.resize(n+1);
std::pmr::vector<int>arr_cnt{&pool};
arr_cnt.resize(n+2);
auto arr_sort=[&](const auto&a)
{
for(int i=0;i<n;i++)
arr_cnt[a[i].size()+1]++;
for(int x=1;x<=n;x++)
arr_cnt[x]+=arr_cnt[x-1];
vector<std::pmr::vector<int>>b(n,std::pmr::vector<int>{&pool});
for(int i=0;i<n;i++)
b[arr_cnt[a[i].size()]++]=move(a[i]);
for(int x=0;x<=n+1;x++)
arr_cnt[x]=0;
return b;
};
a=move(arr_sort(a));
auto arr_eq=[&](const auto&a,const auto&b)
{
if(a.size()!=b.size())return false;
bool res=1;
for(auto q:a)arr_cnt[q]=1;
for(auto q:b)if(!arr_cnt[q]){res=0;break;}
for(auto q:a)arr_cnt[q]=0;
return res;
};
auto arr_includes=[&](const auto&a,const auto&b)
{
bool res=1;
for(auto q:a)arr_cnt[q]=1;
for(auto q:b)if(!arr_cnt[q]){res=0;break;}
for(auto q:a)arr_cnt[q]=0;
return res;
};
auto arr_set_difference=[&](const auto&a,const auto&b,auto &diff)
{
for(auto q:b)arr_cnt[q]=1;
for(auto q:a)
if(!arr_cnt[q])
{
diff.emplace_back(q);
if(diff.size()>2)break;
}
for(auto q:b)arr_cnt[q]=0;
};
{
fac[0]=1;
for(int x=1;x<=n;x++)
fac[x]=fac[x-1]*x;
ifac[n]=fac[n].inverse();
for(int x=n;x>=1;x--)
ifac[x-1]=ifac[x]*x,inv[x]=ifac[x]*fac[x-1];
}
int failcnt=0;
for(int x=1;x<n;x++)
failcnt+=!(arr_includes(a[x],a[x-1]));
if(failcnt==0)
{
int nc=0;
mint ansbas=fac[a[0].size()],ans=0;
std::pmr::vector<int>tnc{&pool},tmc{&pool};
tmc.emplace_back(a[0].size());
int ct=0;
for(int x=0;x<n;x++)
if(x==n-1||!arr_eq(a[x],a[x+1]))
{
nc++;
if(a[x].size()!=n)
ansbas*=fac[nc];
int mc=0;
if(x==n-1)mc=n-a[x].size();
else mc=a[x+1].size()-a[x].size();
if(x!=n-1)
ansbas*=fac[mc];
tnc.emplace_back(nc);
tmc.emplace_back(mc);
nc=0;
ct++;
}
else nc++;
for(int x=0;x<ct;x++)
{
int nc=tnc[x],mc=tmc[x+1];
if(mc==0)continue;
mint fix=ansbas;
if(x==ct-1)
{
fix*=mc;
}
if(x==ct-2&&mc==1&&tmc[x+2]==0)
{
fix*=inv[tnc[x+1]+1];
}
if(x==ct-1||mc!=1)
{
if(mc)
{
if(nc>1)ans+=fix;
else ans+=fix*(tmc[x]+1);
}
}
else if(nc>1)ans+=fix*(tnc[x+1]+1);
else ans+=fix*(tnc[x+1]+1)*(tmc[x]+1);
}
for(int x=0;x<ct;x++)
{
int nc=tnc[x],mc=tmc[x];
if(mc==0)continue;
mint fix=ansbas;
if(x==ct-1)
{
if(tmc[x+1]==0)fix*=nc;
if(nc==1)fix*=inv[tmc[x+1]+1];
}
if(x==0||mc!=1)
{
if(mc)
{
if(nc>1)ans+=fix;
else ans+=fix*(tmc[x+1]+1);
}
}
else if(nc>1)ans+=fix*(tnc[x-1]+1);
else ans+=fix*(tnc[x-1]+1)*(tmc[x+1]+1);
}
cout<<(ans*inv[n]*inv[n]).get()<<endl;
return;
}
if(failcnt>2){cout<<0<<endl;return;}
mint ans=0;
auto calc=[&](vector<std::pmr::vector<int>>&a)->mint
{
a=move(arr_sort(a));
for(int x=1;x<n;x++)
if(!arr_includes(a[x],a[x-1]))return 0;
int nc=0;
mint ansbas=fac[a[0].size()];
for(int x=0;x<n;x++)
if(x==n-1||!arr_eq(a[x],a[x+1]))
{
nc++;
if(a[x].size()!=n)
ansbas=ansbas*fac[nc];
int mc=0;
if(x==n-1)mc=n-a[x].size();
else mc=a[x+1].size()-a[x].size();
if(x!=n-1)
ansbas=ansbas*fac[mc];
nc=0;
}
else nc++;
return ansbas;
};
std::pmr::set<pair<int,int>>st{&pool};
std::pmr::vector<int>diff{&pool};
vector<std::pmr::vector<int>>b(n,std::pmr::vector<int>{&pool});
time_helper.test();
for(int x=1;x<n;x++)
if(!arr_includes(a[x],a[x-1]))
{
for(int y=x+2;y<n;y++)
if(!arr_includes(a[y],a[y-1]))goto fail;
{
diff.clear();
arr_set_difference(a[x],a[x-1],diff);
if(diff.size()<=1)
{
if(!st.count({x-1,diff[0]}))
{
b=a;
b[x-1].emplace_back(diff[0]);
ans+=calc(b),st.insert({x-1,diff[0]});
}
if(!st.count({x,diff[0]}))
{
b=a;
b[x].erase(find(b[x].begin(),b[x].end(),diff[0]));
ans+=calc(b),st.insert({x,diff[0]});
}
}
}
{
diff.clear();
arr_set_difference(a[x-1],a[x],diff);
if(diff.size()<=1)
{
if(!st.count({x,diff[0]}))
{
b=a;
b[x].emplace_back(diff[0]);
ans+=calc(b),st.insert({x,diff[0]});
}
if(!st.count({x-1,diff[0]}))
{
b=a;
b[x-1].erase(find(b[x-1].begin(),b[x-1].end(),diff[0]));
ans+=calc(b),st.insert({x-1,diff[0]});
}
}
}
fail:
break;
}
cout<<(ans*inv[n]*inv[n]).get()<<endl;
}
Congratulations Tourist for crossing 4000 rating!
I created a video editorial for Div2D/Div1A: Iris and Game on the Tree
I created a hybrid video editorial and blog for Div2E/Div1B: Iris and the Tree.
Thanks for the blog.. it helped me a lot in understanding some issues in my implementation.
Congratulations to tourist for reaching $$$4000$$$ rating, which is the highest rating on Codeforces ever! I'm proud to see that in my contest!
Hi LMydd0225, can you define the function cntbit(int) used in the sparse-table solution of Round 969 Div1 problem C?
There should be a definition for this at the beginning of the code.
It means the number of 1 bits in a number's binary form. For example, cntbit(10) = __builtin_popcount(10) = 2, because $$$10_{10}=1010_2$$$.
Thanks for the quick editorial, and congratulations to tourist for reaching $$$4000$$$ rating! I can't believe that, but it's true!
It's really amazing!
Can someone suggest what minor changes should be made in this submission (278848822) so that it passes the 5th test case without TLE? Thanks
Your solution's time complexity is O(m*n), which exceeds 10^8. Instead of updating the entire array, update only the maximum value as explained in the editorial.
You are basically brute-forcing over the entire array for each operation, so your time complexity is n*m. The trick is that you don't need to update the whole array for each of the m operations: just maintain the maximum element's value and update it each time.
Is brute-forcing every question bad in the long run?? Should I stop brute-forcing and focus more on optimizing the solution before writing it?! Neerav03
Yes, try to find the most optimal solution. On Codeforces, except for the initial problem A, you are expected to write the most optimal code.
you should concentrate on the maximum element in the array
hashir
Instead of solving div1B I solved the bonus mid-contest O_O
Same
Yeah same, only after printing the array, i realised each edge comes only twice over all paths.
Problem A
mine was different https://codeforces.net/contest/2007/submission/278777439
I can't believe i solved B with relative ease but took 4-5 attempts for A, am I the only one who found A that hard? :(
If i was able to solve A rather quickly I would have probably solved Div. 2 C for the first time :(
I think A was harder too.
I agree with you because I have spent 30 minutes on A but only 5 minutes on B (actually I do B before I do A, so much time penalty :( )
same
Trying to solve problem A quickly, I made several trivial logical mistakes. Once I had the key idea that the gcd of consecutive numbers is 1, I started coding without careful thinking. It should have taken me only a couple of minutes to reason out the rest, but instead I coded a solution, tested it, got it wrong -> fixed it again quickly -> compiled and tested -> wrong, and so on for about 5 cycles.
I think the best to do for me is to stop caring about rating and about solving a problem and just focus on learning something new from the process of trying to solve each problem.
wait, then what were you doing before
I found B to be harder lol. Got TLE in it, while I solved both A,C correctly. EDIT: Oh wait I retried the question and realised I messed up lol...B is much easier, I agree.
I have a very stupid question in problem C
but what if this combination has a negative number, like X being negative? Then what?
Read Hint 1. As it is mentioned, if you add $$$a$$$ or $$$b$$$ to all the elements except $$$c_i$$$, it is the same as decreasing $$$a$$$ or $$$b$$$ from $$$c_i$$$. So it doesn't matter if $$$X,Y$$$ are negative.
Can you please explain where the idea of the GCD came from?
I solved it using GCD but it was only a hunch and I can not prove anything.
Thank you.
nice set, liked Div1 A,B,C
LMydd0225 In Div1 D, when you rewrite the condition as $$$cnt(1,i)\geq cnt(\dots,n)$$$, shouldn't it be $$$k$$$ instead? And second, the array $$$[k,1,k]$$$ doesn't fulfill $$$cnt(1,1)\geq cnt(\frac{k}{2}+1,k)$$$
Sorry, that's my fault. Updated.
About the second question, I forgot that you should checkmin the two cnt values with $$$\lfloor\frac n2\rfloor$$$. It only goes wrong when $$$n$$$ is odd, $$$cnt(1,i)=\frac{n-1}2$$$, $$$cnt(\frac{k}{i+1}+1,k)=\frac{n+1}2$$$.
Wow, I had a way different solution for E.
What I wrote was: I calculated the distance from each $$$i$$$ to $$$(i \bmod n) + 1$$$ and stored it in an array to[i], with all edges unassigned at first (we need to know the number of unassigned edges on every path).
Then each time I would assign a value to an edge, I would remove 1 from to[x-1] and remove 1 from to[mx[x]] where mx[x] is the max number the subtree at x reached.
Now the answer is: sum of assigned edges * 2 + (W - sum of assigned edges) * number of paths with unassigned edges.
I kept and updated the number of unassigned edges of all paths in a segment tree.
Did u use binary lifting ?
Can someone please point out, in problem C, why we need to try $$$c_2,c_3,\dots,c_n$$$ as the minimum in turn?
I can't get it intuitively
assume we have c1, c2, c3, c4, c5. After taking modulo of each, each number is <= gcd(a, b). Hence, adding d (i.e gcd(a, b)) to the current element makes it the largest. The next number therefore becomes the smallest in the list since we assumed that d was added to all previous numbers.
I understand you, but I can't see why we are doing this brute-force thing. I know it will lead to the best answer, but I can't visualize it in my mind.
We want to eliminate the largest gap between the numbers. If we have g = 8 and c = 1 2 6 7, the largest gap is 4, between 2 and 6. By increasing 1 and 2 we eliminate this gap and get 6 7 9 10, which has an answer of 4, which is the best answer.
In the problem Div1 D, where it says cnt(k/(i+1)+1,n), shouldn't it be cnt(k/(i+1)+1,k)?
How do you solve the Div 1 C bonus? Is $$$k$$$ allowed to be non-integer?
There're only $$$n\log V$$$ possible $$$\gcd$$$ over all $$$[l,r]$$$. Go over all the $$$\gcd$$$ from small to large, maintain the array $$$b_i=x$$$: the maximum $$$x$$$ that $$$\gcd_{i\dots x}\ge g$$$, where $$$g$$$ is the current gcd. There're only $$$n\log V$$$ updates. Use range sum queries to answer the queries.
But $$$\frac{\max - \min + 1}{|Q(a_l\dots a_r)|}$$$ is not just a function of the gcd, how do you deal with that?
Really sorry for that, it should be a typo: should be $$$(|Q|-1)k\ge \max-\min$$$.
But I thought a while for the mistaken one: let $$$n=|Q|,d=\gcd$$$, then consider $$$nk<(n-1)d+1$$$, $$$n>\frac{d-1}{d-k}$$$. Let's take sth $$$B=\sqrt V$$$. Consider $$$\frac{d-1}{d-k}\le B$$$: the lower range for n is not high, so brute force it; otherwise, $$$d-k$$$ is not large, so brute force it. I can only solve it in $$$O(n\sqrt V\log)$$$ (or maybe $$$O(n\sqrt V\log^2)$$$ ?). There also seem to be something $$$O(V\log^{sth})$$$, but neither can fit the original constraints. Maybe it can be a not bad problem that $$$n,V\le10^5$$$. Anyway super sorry for the inconvenience ):
Something about Div1 B; I'd feel sick if I didn't pour it out.
"For any vertex $$$i$$$ in the tree, there is an edge connecting..., with a weight equal to $$$t_i$$$."
This reads like "for any vertex there is (it has) a weight $$$t_i$$$", and is easy to misread
(few read the statement word by word to see that the "with" refers to the "edge", especially when a formula makes the sentence long; many just notice "an edge", but I overlooked it and wrongly focused on "any vertex"),
which is exactly what I did in the contest, and it wasted half of my time for solving the problems,
and of course it annoyed me a lot, not because of my rating (which dropped) but because of the unsolved problems, including C, D and maybe E.
To make it worse, there ISN'T ANY DEFINITION of the length of a simple path, not even in the Note, so participants like me may find it hard to spot while coding, only to find themselves not passing the examples (given that problems B, C, D are all easy).
As a CN CPer (as you mentioned in the announcement), I hate this badly. I don't intend to damage CN authors' reputation, but I'd still say that this is what always happens in CN rounds, so I never intend to compete in any of them, like some others do.
As you can see, my English is terrible and my wording ambiguous, and I don't mean this round is complete trash. After all, it really hurt me and deepened the awful impression of CN rounds.
for problem C
"However, there's a better way to solve it. Similar to Euclidean Algorithm, the tolerance equals to the maximum odd divisor of the gcd of the difference array of a"
this came out of nowhere, could you add more context please?
also, what is d in (c, d, len)?
$$$d$$$ should be the tolerance.
About the gcd thing, you can consider merging two adjacent tolerances together: $$$a,b$$$ becomes $$$a,\frac{b-a}{2},\frac{a+b}{2}$$$. Ignore the $$$\frac 12$$$ stuff, and it's just the same reason as $$$\gcd(a,b)=\gcd(a,b-a)$$$. The whole process is similar to this.
how can you ignore the 1/2 part ?
Do you mind explaining intuitively why we are able to ignore the 1/2
Refer to this. Actually I'm not good at explaining things clearly, sorry for that.
Axial-Tilted This is the way I thought about it.
Hope this clarifies your question.
Other solution for 1E:
I did not notice how the middle point of the diameter moves, so I pre-calculated the answer for each point. It can easily be shown that the optimal root is at most $$$O(\log n)$$$ distance away from the middle point of the diameter. Do a bfs starting at each point, and when you find a point of degree $$$\le 2$$$, exit. Maintain such $$$O(\log n)$$$ values during the bfs: the maximum time at which the answer is smaller than $$$k$$$, for each $$$k$$$. Time should be $$$O(n\log n)$$$, though I'm not so sure about the complexity of the bfs part. Submission: 278872280 (maybe less data structure stuff?)
The bfs part can be replaced by loops: see 278873149
The time complexity of the pre-calculation is O(sum of distances from each point to the nearest point with degree no more than 2). Can it be linear? The case of a perfect binary tree is $$$O(n)$$$. If so, 1E can be solved in $$$O(n)$$$, using $$$O(n)$$$~$$$O(1)$$$ lca.
I made the same observation about the O(logn) distance from the centre (it should be the centre of the diameter, not the centroid, right?)
You can then keep an iterative segment tree which stores the minimum depth of an "activated" node in your subtree. This solves it in nlog^2, as you can query this data structure for centre, p[centre], p[p[centre]] and so on, log times.
This can be improved to nlogn by storing a frequency table f[i][j] = number of activated nodes in subtree of i at a distance j from it
278842295 is the nlog^2, 278876918 is the nlog
Funnily enough, the nlog^2 is faster
Also, the author passed off the O(log) observation as trivial, but I would say it was the only non-trivial part of this problem for me.
I thought I understood how GCD works until I saw problem C. btw Chinese round = mathforce
THANK YOU note to organiser...
After reading the TITBITS section of problem 2006B, I would like to admit that I wouldn't have been able to solve 2006B if you had used the "PREVIOUS VERSION" mentioned in the editorial.
This is a prime example of how one statement can change the difficulty of a problem drastically.
cc : LMydd0225
There is another way (simpler, no need to discuss anything) to solve the case where the original state is already legal (based on method 2 in the editorial).
It's easy to observe that in this case all columns with the same number of $$$1$$$s in it are completely equivalent. Since the sum of $$$1$$$s is $$$m$$$, the number of essentially different columns is $$$O(\sqrt m)$$$. Therefore there are only $$$O(\sqrt m)$$$ essentially different operations to do.
For each operation, we can directly brute force and recalculate the contribution from scratch. Note that we only need the set of "the numbers of $$$1$$$s in a column", and how many times each element of the set appears in the original array. The size of the set is also $$$O(\sqrt m)$$$, and each operation modifies this set by at most $$$O(1)$$$ elements, so each time we calculate the answer the complexity is $$$O(\sqrt m)$$$, with a total complexity of $$$O(m)$$$. (If you are lazy, you can also do some sorting every time to avoid some case analysis, with an additional $$$\log m$$$ factor in the complexity.)
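As a side note not in the original comment, the $$$O(\sqrt m)$$$ bound comes from the fact that $$$t$$$ distinct positive column sums add up to at least $$$1+2+\dots+t=\frac{t(t+1)}{2}\le m$$$, hence $$$t=O(\sqrt m)$$$.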
did anyone do B with segment tree + lazy propagation?
You are not the only one brother. I also did it.
https://codeforces.net/contest/2007/submission/278797086
I wasted more than 25 minutes on this. I was surprised that SEG-with-LAZY is asked as B question.
In the first half hour of the contest my rank was above 3500. After I solved D and E, I could recover. Also, I am grateful that the contest was 2.5 hours long instead of 2; otherwise, I would not have been able to solve problem E :P
You used the intended solution and not segment tree. This means that you have no idea how you solved the problem, which points towards cheating. Please clarify below.
I do not think this is cheating. It looks like he has some segment tree solution commented out. Perhaps he found the observation right as he was about to finish the segment tree code.
Fair enough, but my concern is still valid though. Let's give him the benefit of the doubt then.
An IT (interval) tree is the correct solution for me. I think the tutorial is not concrete: somehow it only considers the case where the initial max value is inside [l, r]. There is a test case where it repeatedly updates with l, r not containing the initial max many times, and after that a new max appears. I would rate B as terrible.
but to go from max to max + 1, you must update the initial max
Do you mean the initial max or the current max? For example, I have a 2-element array [10, 9]; I update with l = 2, r = 2 twice to make 9 become 11, so the initial max 10 is not considered.
$$$l=2,r=2$$$ changes nothing. It limits the value, not the position.
Consider that in the operation, when you assign a(i) := a(i)+1, the condition is not on the index (it doesn't say l <= i <= r) but on the value itself: l <= a(i) <= r. If maximum < l, then for all i, a(i) <= maximum < l, so the operation doesn't change anything. If l <= maximum <= r, then you print the updated maximum. If maximum > r, then for some i we may have l <= a(i) <= r; if a(i) < r, then a(i)+1 <= r < maximum, so the maximum doesn't change, and if a(i) = r, then maximum > r = a(i), so maximum >= a(i)+1, and the maximum still doesn't change.
tourist orz
To solve Problem B in Division 1, my approach for addressing the bonus task, which involves counting adjacent pairs (x, x%n+1) where the nodes are on different sides of a given edge, is to use a persistent segment tree. The method involves the following steps:
First, run a depth-first search (DFS) to obtain the DFS order of the nodes.
For each node x in the DFS order, count how many nodes in its subtree have their LCA with x%n+1 positioned above the subtree of x.
I messed up this problem during the contest because I forgot the dfs-order condition that the problem gives :(
BTW congrats Tourist! My goat
Problem B : Index and Maximum value
Video Editorial link: https://youtu.be/-vp9CdHeF6c?si=kuIEU6hnFIOSAyiN
Can someone please tell me from where should I study number theory for competitive programming ? I got stuck in question C of this contest.
For the problem div2D, can someone explain test case 4 of the example test cases? I did not quite understand this part of the editorial: However, this can go wrong when there are equal numbers of 0 and 1 in the leaf nodes.
How is the answer 2 here?
The first one to color the root or the leaf is at a disadvantage, so everyone tries to color the unimportant nodes first. So the process is: Iris sets $$$s_3=0/1$$$, Dora sets $$$s_1=0/1$$$, Iris sets $$$s_4=1-s_1$$$, Dora sets $$$s_5=s_1$$$, Iris sets $$$s_6=1-s_1$$$.
Can someone let me know why my submission is wrong? 278916361
What I am trying to achieve: In the line where I am printing the answer, I assume that (A.)the tree has the root node marked as 0 or 1 already and (B.)it is Iris's turn. So, the if statement before that tries to make some moves so that A and B conditions are satisfied.
However, this does not work. I replaced this block of code and it works now: 278980307. I do not know why the previous submission failed.
In your previous submission, you forgot the following case: if the only question mark is at the position of the root, your code processes the second step (made by Dora) even though it doesn't exist.
correct: 1
I see my mistake now. Thanks a lot!
Need help in Div.2 D
What is the proof of this part from the tutorial: "If there is an odd number of ? in the unimportant nodes, then Dora will have to colour the root (after filling the ? in the unimportant nodes one by one)"
Why does Iris need an odd number of ? in the unimportant nodes (not just one ? in an unimportant node), and why does Dora have to fill the unimportant nodes one by one?
The idea is basically that they mirror each other. They both don't want to color the root first, but if there's an odd number of ? then Dora has to. You can watch Adaptatron's video for details.
Thank you.
Can anyone elaborate on the approach using Mo's algorithm for Div1 D (as the editorial mentioned)?
We need to maintain all the $$$cnt(1,i)$$$ and $$$cnt(\frac{k}{i+1}+1,k)$$$ stuff, $$$2\sqrt k$$$ elements in total, for each query. Using Mo's algorithm, after moving one of the pointers, you only need to maintain the number of $$$x$$$ such that $$$x=i$$$ or $$$\frac{k}{i+1}<x\le \frac{k}{i}$$$ in the range. When you solve a query, use prefix sums to find the $$$2\sqrt k$$$ elements.
Oh, I understand. I mistakenly thought that I needed to make a range modification and a range query when moving the pointer, and I could not figure out how to perform this in $$$O(1)$$$.
So we need to make a single point update when moving the pointer, and we can brute-force the prefix sum every time, which gives a time complexity of $$$O\left(n\sqrt q + q\sqrt k\right)$$$, which can be further improved with decomposition, giving an $$$O\left(n\sqrt q + q\left(k^{1/4}+k^{1/2}\right)\right)$$$ complexity. (Not worth it though.)
Thank you for your elaboration!
You could use the Wikipedia link with the normal 'e' character. It loads the same page. https://en.wikipedia.org/wiki/Bezout's_identity
For Div2E, my approach, which I feel is easy to implement:
Collect all edge weights and get the answer for this.
Now, one by one, try to remove edge weights.
See the effect of this on the distances (whose sum is needed).
Store the distances which pass through this edge weight (in the future, if we remove some other weight, that weight can easily be added to these stored distances).
Say you are at some edge and remove it: now think of the distances which pass through it, and make sure the distance does not belong to the set (those are considered separately). So say two, or at least one, of the distances don't belong to the set. For the pairs in the set, the delta in our answer is count of pairs * weight of this removed edge, and the delta for the distances passing through this edge but not in the set is
count of the same * (- this weight + sum of all removed weights, including this one). Note that the count is the number of distances (1 or 2) which are tightly packed, having no removed weights.
Is this not a simple edgecase for problem B:
??
Just a crazy implementation of Div2E
https://codeforces.net/contest/2007/submission/282876456
I have an idea for problem B bonus:
Similar to the original problem, every time we determine an edge weight, we want to know (1) the number of (i -> i%n + 1) paths that cover this edge, and (2) the number of (i -> i%n + 1) paths all of whose edge weights are determined right after this edge is determined.
(1) can be solved offline, using prefix sum: for a path (x, y), add 1 to pre[x] and pre[y], subtract 2 from pre[lca(x, y)]; then from bottom to top, add pre[x] to pre[parent(x)]; finally, pre[x] is the number of paths that covers the edge (x, parent(x)).
(2) can be solved offline with path queries on the tree (doubling, HLD, etc.). "All edge weights of a path are determined" is equivalent to "the maximum determination time of an edge on this path is no greater than the current query time". Thus, we can first read all queries and set the query time of each edge on the tree; then, for each path, find the maximum query time on the path.
(1) can be solved in O(n) and (2) in O(nlog(n)) (using the doubling technique).
My submission passes the original problem: 285171751. (Not sure if it will pass the bonus one)
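For reference, here is a minimal sketch (hypothetical helper names, not the commenter's code) of the prefix-sum trick from step (1), with a naive LCA; it assumes the parent of every non-root vertex has a smaller label, as in this problem.

def lca(u, v, par, depth):
    while depth[u] > depth[v]:
        u = par[u]
    while depth[v] > depth[u]:
        v = par[v]
    while u != v:
        u, v = par[u], par[v]
    return u

def edge_coverage(par, paths):
    # par[v] < v for every v > 1 and par[1] = 0; returns pre, where pre[v] is
    # the number of given paths covering the edge (v, par[v])
    n = len(par) - 1
    depth = [0] * (n + 1)
    for v in range(2, n + 1):
        depth[v] = depth[par[v]] + 1
    pre = [0] * (n + 1)
    for x, y in paths:
        pre[x] += 1
        pre[y] += 1
        pre[lca(x, y, par, depth)] -= 2
    for v in range(n, 1, -1):      # children before parents since par[v] < v
        pre[par[v]] += pre[v]
    return pre

# path graph 1-2-3-4 with the paths (1,3) and (2,4): edge coverages 1, 2, 1
print(edge_coverage([0, 0, 1, 2, 3], [(1, 3), (2, 4)])[2:])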
How to solve it online?
Here's an online method (mainly about the (2) part in your idea):
Let's consider $$$w(i,j)$$$: the number of not-covered edges on the path between node $$$i$$$ and the $$$2^j$$$-th ancestor of node $$$i$$$. $$$F(path)$$$ represents the number of not-covered edges on a path. It can be seen that $$$F(path)$$$ can be divided into a sum of $$$O(\log n)$$$ different $$$w(i,j)$$$, and for each $$$j>1$$$, $$$w(i,j)$$$ can be divided into the sum of two $$$w(x,j-1)$$$. We want to know all the paths $$$x$$$ such that $$$F(x)$$$ becomes $$$0$$$ after this update.
$$$F(x)$$$ becomes $$$0$$$ if and only if the $$$w(i,j)$$$ it is divided into are all $$$0$$$. After we cover an edge ($$$x,fa_x$$$), first $$$w(x,0)$$$ becomes $$$0$$$, then it will affect some $$$w(y,1)$$$, and then some $$$w(y,2)\dots$$$. However, it is only meaningful to update the upper $$$w(x,y)$$$ values if $$$w(i,j)$$$ becomes $$$0$$$. Thus there are only $$$O(n\log n)$$$ updates (let $$$S_{x,y}$$$ represent the set of pairs $$$(i,j)$$$ such that $$$w(i,j)$$$ can be divided into $$$w(x,y)$$$ and something else; when $$$w(x,y)=0$$$, update all elements in $$$S_{x,y}$$$). Then we can know all the paths whose value becomes $$$0$$$ after the update.
It seems a bit difficult to implement, so I didn't do that. I took the idea from NOI2024 D1T3, which is really similar and much more difficult; you can refer to it for further details.
Btw, this solution can solve, online, the version where the paths are not the $$$n$$$ paths $$$(i,(i\bmod n)+1)$$$ but $$$m$$$ given special paths.