UPD: Code links are working now
Idea: s_jaskaran_s
Hint
Solution
Code
Idea: s_jaskaran_s
Hint
Solution
Code
Idea: ka_tri
Hint 1
Hint 2
Hint 3
Solution
Code(C++)
Code(Python)
Idea: s_jaskaran_s
Hint 1
Hint 2
Hint 3
Solution
Code(Prefix Sum)
Code(Sparse Table)
Idea: s_jaskaran_s
Hint 1
Hint 2
Solution
Code
Idea: nishkarsh and s_jaskaran_s
Solution
Code
Thanks for fast editorial.
What did I say wrong?
SuperFast Editorial
Multiplying by 2022 is just a distraction.
Multiplying by 2022 saves you from getting stuck with modular-division/overflow problems in Problem B, because you can just divide 2022/6 = 337 instead of calculating the modular inverse of 6.
LOL, I didn't notice that 2022 is divisible by 6 and just copy-pasted the modular inverse code.
Shit, never really thought about that.
OMG I thought 2022 % 6 != 0 so I just multiplied by 1011 / 3, still worked though :)
lol
Please provide proof that the greedy strategy works in task E
See it like this: we have some number of grapes of each type from 1 to n, but the i-th type of grapes comes in bunches of i-1, and buying a bunch of the i-th type costs i. How can you buy the maximum number of grapes with limited money? Buying a bunch of a higher-numbered type costs less per grape, and that's why we buy it first. [i/(i-1) is smaller for higher values of i]
that's not how proofs work.
Um_nik gives a pretty good explanation of this problem https://youtu.be/HrJhgj5pmdE?t=1981
Isn't it kinda obvious? Suppose you have an optimal answer, and in that answer you have x edges with gcd x+1 and y edges with gcd y+1. If instead you can get x+y edges with gcd x+y+1, you end up with the same number of edges, x+y, at cost x+y+1 < (x+1)+(y+1). Therefore, the greedy leads to an answer at least as good as the optimal one. (Notice that you can always add one edge at the end, choosing 2 numbers with gcd 2.)
Yeah, feel free to down vote me cuz ur bad and don't understand simple math :(
I will show a very simple proof, simulating a $$$(\text{number of groups}) \cdot m$$$ knapsack solution and using some properties of the knapsack array which imply that taking the biggest group every time is optimal.
Let the array we're working on be $$$a[] = \{1, 2, 3, 3, 4, 4, 4, \dots, n\}$$$, where every element denotes an available group of size $$$a[i]$$$.
Our knapsack array will be $$$dp[i][k]$$$, denoting the minimum size of a subset of the prefix $$$i$$$ with $$$sum = k$$$.
Property 1: at any step, $$$dp[i]$$$ is a non-decreasing sequence ($$$dp[i][k-1] \le dp[i][k]$$$).
Obviously, this property holds for $$$dp[0]$$$. At any step it will hold for $$$k > a[i]$$$ because those entries are minimised by a non-decreasing sequence, assuming $$$dp[i-1]$$$ was non-decreasing.
Now we only need to show $$$dp[i-1][a[i]-1] \le dp[i][a[i]]$$$. This is true because we know at some step $$$j < i$$$ we minimised $$$dp[i][a[i]-1]$$$ with $$$dp[i][0]$$$ (we visited $$$a[i]-1$$$ at some point), and now we are minimising $$$dp[i][a[i]]$$$ with $$$dp[i][0]$$$.
Property 2: we don't need the minimise operation; we can directly assign $$$dp[i][k] = dp[i-1][k-a[i]]+1$$$.
We know the array $$$dp[i-1]$$$ is sorted, where $$$a[i-1]$$$ was either $$$a[i]$$$ or $$$a[i]-1$$$. So we are going from minimizing $$$dp[i][k]$$$ with $$$dp[i-1][k-a[i-1]]+1$$$ to minimizing it with either $$$dp[i-1][k-a[i-1]-1]+1$$$ or $$$dp[i-1][k-a[i-1]]+1$$$, both of which are less than or equal to $$$dp[i-1][k-a[i-1]]+1$$$ because $$$dp[i-1]$$$ is sorted, so we can directly assign.
As you can see from the second property, the transitions show it's optimal to take the greatest element whenever we can.
Proof: Let's first note that we can find at least one pair of vertices with GCD(u, v) = G for every G <= n / 2, and we cannot if G > n / 2. This is easy because the pair (G, 2G) exists for every G <= n / 2, and there is nothing else. We need to minimize the number of moves in the task. Let's greedily take pairs with the largest GCD first, moving from n / 2 down to 1. If at some step we took more (current > m), then notice that the last step (let's say k edges in this move) can be replaced by k - (current - m) edges so that the total is exactly m. This is always possible because k - (current - m) < k and there is at least one such pair.

Tight constraints for C; unordered_map gave TLE.
using a frequency array will work
Can you point out the mistake? I have used a map and am getting TLE. 187515966 Edit: It works with an array.
Maps take O(logN) for searching a key.
But an array takes O(1) for retrieval of data.
Yes, I faced it too. In an attempt to remove tle I ended up with a bunch of WA and RE T_T
You should read neal's blog: Blowing up unordered_map, and how to stop getting hacked on it
Actually, it won't help much in this problem. Even std::map gives TLE (due to the extra $$$\log(n)$$$ factor), and here is the std::unordered_map (with the splitmix64 custom hash) solution which gives TLE.

You are right! I didn't try it and thought that it would pass with the amortized $$$\mathrm{O}(1)$$$ operations of the unordered map.
I have a doubt: how is $$$O(n\sqrt{n})$$$ working? It should give TLE, as the overall complexity goes to around $$$10^{11.5}$$$ if we include t (the number of test cases).
Can someone explain how you end up deriving $$$\sum_{i=1}^{n} (i \cdot i) + \sum_{i=1}^{n-1} (i(i+1)) = \frac{n(n+1)(4n-1)}{6}$$$
from problem B.
The answer of B is 1*1 + 1*2 + 2*2 + ... + (n-1)*n + n*n = (1*1 + 2*2 + ... + n*n) + (1*2 + 2*3 + ... + (n-1)*n) = n*(n+1)*(2n+1)/6 + (n-1)*n*(n+1)/3.
Why does the sequence 1*2 + 2*3 + ... + (n-1)*n not give the formula n*(n+1)*(n+2)/3??
just let n=2
OK, I couldn't get this at first sight.
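Spelling out the algebra from the decomposition a few comments above to the closed form in the editorial (nothing new, just putting both terms over 6 and factoring out $$$n(n+1)$$$):
$$$\frac{n(n+1)(2n+1)}{6} + \frac{(n-1)n(n+1)}{3} = \frac{n(n+1)\left[(2n+1) + 2(n-1)\right]}{6} = \frac{n(n+1)(4n-1)}{6}.$$$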
I think problem B was so confusing
Mostly used OEIS, but a zig-zag path from the start cell to the end cell will be optimal if you observe a few examples, so a(n) = a(n-1) + n*(n-1) + n*n; from here you can derive it.
Why is my code not working?
ll n; cin >> n;
ll ans = ((n*(n+1))%mod*(4*n - 1))%mod;
ans = (ans/6);
cout << (ans*2022)%mod << endl;
You need to take mod of (4n-1), and also use the inverse of 6 in the 3rd and 4th lines.
can u please give a working code based on my solution. :)
186997927
In this solution I have not used the concept of modular multiplicative inverse. I am not able to use MMI in this problem. can u write the code for MMI?
(1/a) mod m=(a^(m-2)) mod m if m is prime.
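A minimal sketch of that identity in code (my own illustration; assumes the modulus m is prime and a is not a multiple of m):

#include <iostream>

// binary exponentiation: computes b^e mod m
long long power(long long b, long long e, long long m) {
    long long r = 1 % m;
    b %= m;
    while (e > 0) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

// modular inverse by Fermat's little theorem: (1/a) mod m = a^(m-2) mod m, valid for prime m
long long modinv(long long a, long long m) {
    return power(a, m - 2, m);
}

int main() {
    std::cout << modinv(6, 1000000007LL) << "\n";   // prints 166666668, the constant mentioned further down this thread
}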
add them
wolfram alpha
I just thought about the squares as actual squares. If you line up the squares to create a side, one side will be n(n+1)/2. Now a trick I knew was that adding the next odd number will give you the next perfect square. So if we want another n perfect squares, we could add n ones, n-1 threes, etc… This can be arranged where the nxn square gains an additional length of one and the n-1 gains three, etc. We can then add an additional square with side lengths 1 to n to make the side length of each square 2n+1 (basically the side of n gains n+1, n-1 gains n-1 + 3, etc). Therefore, it can be computed as (n(n+1)/2 * (2n+1))/3. Hopefully my explanation is not horrible. My submission with the formula: https://codeforces.net/contest/1731/submission/186917982
Maybe it's just me, but it seemed like the constraints for C were very tight
It's not just you, but at least they did a great job at preventing $$$O(n\sqrt{n}\log{n})$$$.
Yes I had 10 wrong submissions :")
haaa!!! Simple Minded People. I had 16 wrong submissions
can someone please tell me how to solve modulo problems in c++ T-T
Usually you calculate the answer and, after each operation, you do ans %= modulo. If you have a division, you can use the modular inverse and multiply by it.
C just taking the piss bro with the tle
On problem D, there is a 2D segment tree solution (I know it is worse than binary search with 2D prefix sums) which takes $$$O(n \cdot m \cdot \log n \cdot \log m)$$$. For every (i, j), you only check if you can create a square with size bigger than the current maximum value, and increase the maximum while possible. There are at most min(n, m) increases (1000 at max), so it is negligible.
Like seriously? It was just a matter of using an array instead of a map, and I could not solve it till the end because I thought it needed to be more optimized than n*sqrt(n).
India top ❤❤❤
Why is your rating <1500 and it's showing that you are a legendary grandmaster? Is it some glitch of cf??
It is the Magic of Santa Claus.
magic didn't happen with me XD
cuz u stupid
It's happening to all, bro. Go to your profile, you'll be able to see the magic to the right of the submission button.
In problem D, instead of converting it into 0s and 1s, can we find prefix minima and check them against each binary-search value of m?
I mean, did anyone use a set to create the table in problem D?
got TLE in C because of using map instead of array for storing prefix xor
another sad day..
same bro
There's another method for D that uses the largest square submatrix dynamic programming solution and binary search over $$$n$$$.
Method
Solution (using ranges for the binary search part) 186935611
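If it helps, here is a rough self-contained sketch of that method (binary search on the side length k with the classic largest-square-of-ones DP as the check); this is my own single-testcase illustration assuming all heights are at least 1, not the linked submission:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<vector<long long>> a(n + 1, vector<long long>(m + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++) cin >> a[i][j];

    // check: does some k x k square exist whose values are all >= k?
    auto ok = [&](int k) {
        // d[i][j] = side of the largest square of "good" cells (a >= k) with bottom-right corner (i, j)
        vector<vector<int>> d(n + 1, vector<int>(m + 1, 0));
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= m; j++) {
                if (a[i][j] >= k)
                    d[i][j] = min({d[i - 1][j], d[i][j - 1], d[i - 1][j - 1]}) + 1;
                if (d[i][j] >= k) return true;
            }
        return false;
    };

    int lo = 1, hi = min(n, m);
    while (lo < hi) {                    // find the largest k with ok(k) true (monotone predicate)
        int mid = (lo + hi + 1) / 2;
        if (ok(mid)) lo = mid;
        else hi = mid - 1;
    }
    cout << lo << "\n";
}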
Great solution..
For problem B
can anyone point out the shortcomings in my code? it was not returning the correct answer for n=1000000000
The product n*(n+1)*(2*n+1) is bigger than n^3, and n^3 does not fit into a long long, so it overflows and therefore gives a wrong answer.
How to come up with the right side of this equation after having found the left side?
$$$\sum_{i = 1}^n{i \cdot i} + \sum_{i = 1}^{n-1}{i (i + 1)} = \frac{n(n + 1)(4n - 1)}{6}$$$
Or how to google it?
Rearrange the left side to the form 2*i*i + i.
Now you only need to know the formula of sum of squares of 1 to n and sum of numbers of 1 to n which are pretty well known.
How to rearrange the left side to anything?
How? What is the formula for $$$\sum_{i = 1}^n{i \cdot i}$$$?
(n*(n+1)*(2n+1))/6
Here is one proof: Your text to link here...
Try looking up in OEIS. https://oeis.org/A002412
My intuition here was to look at a ratio between your desired sum and $$$1+2+...+n$$$. Then notice a constant increment in this ratio and derive both $$$\sum{i*i}$$$ and $$$\sum{i*(i+1)}$$$ from there
You played a very dirty game with 2022 in problem B; idiot me just couldn't see it. I went for the modular inverse of 6 rather than doing 2022/6 :(
D was a lot easier than C this contest.
I disagree with that, honestly I tried to write up a DP solution, failed, and looked at it for 1 hour and a half to no success (I knew I might have needed some data structure but I don't have a template for 2d segtrees)
Well to be fair I’ve done a similar problem before https://atcoder.jp/contests/abc203/tasks/abc203_d?lang=en. But I also found this problem to be easy so I don’t know.
Yes, very tight constraints on C. I usually don't think that much about implementation in ABC, but this has taught me a lesson that you should use vectors and arrays instead of maps and hashmaps
Problem C: "Number of subarrays with a given XOR sum can be calculated in O(n)". How this can be solved ?? This line is just put in tutorial without any explanation
That's a problem. It can be solved using prefix XORs. I can give an easier explanation of this technique. Let's forget about XOR and think about sums. How many subarrays are there with a given sum? You just count prefix sums, and for each prefix i you need to find the number of prefixes j (j <= i) such that pr[i] - pr[j] == sum. With XORs it's done in a similar way.
Oho.. Great. Thank you.. Finally got it
I made video Solutions for A-E.
Congratulations on becoming master.
Congratulations for orange man!
How to solve problem D with a sparse table? From custom invocation, the maximum size of a vector with the $$$512$$$ MB memory limit is just $$$10^7$$$, meanwhile the size of the sparse table in this problem can even reach $$$10^8$$$ (when $$$n = m = 10^3$$$, the size is approximately $$$n \cdot m \cdot \log(n) \cdot \log(m) \approx 10^8$$$). Even when I flattened the input array to 1 dimension and used a 1D sparse table on it, it still got MLE ($$$>512$$$ MB) in custom invocation.
You can get rid of one $$$\log$$$ using the fact that we are interested only in squares. Just let $$$m[i][j][k]$$$ be the minimum in square $$$(i, j) - (i + 2^k, j + 2^k)$$$.
That's neat! Forgot that we only consider squares.
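A minimal 0-indexed sketch of that square-only sparse table (my own illustration, not anyone's actual submission): sp[k][i][j] keeps the minimum of the 2^k x 2^k square with top-left corner (i, j), so only one log factor of memory is needed.

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    int LOG = 1;
    while ((1 << LOG) < max(n, m)) LOG++;

    // sp[k][i][j] = minimum over the 2^k x 2^k square with top-left corner (i, j)
    vector<vector<vector<int>>> sp(LOG + 1, vector<vector<int>>(n, vector<int>(m)));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) cin >> sp[0][i][j];

    for (int k = 1; k <= LOG; k++)
        for (int i = 0; i + (1 << k) <= n; i++)
            for (int j = 0; j + (1 << k) <= m; j++) {
                int h = 1 << (k - 1);
                sp[k][i][j] = min({sp[k - 1][i][j], sp[k - 1][i + h][j],
                                   sp[k - 1][i][j + h], sp[k - 1][i + h][j + h]});
            }

    // minimum over the s x s square with top-left corner (i, j), in O(1)
    auto querySquare = [&](int i, int j, int s) {
        int k = 31 - __builtin_clz(s);   // largest power of two not exceeding s
        int h = s - (1 << k);            // shift so the four 2^k squares cover the s x s square
        return min({sp[k][i][j], sp[k][i + h][j],
                    sp[k][i][j + h], sp[k][i + h][j + h]});
    };

    cout << querySquare(0, 0, min(n, m)) << "\n";   // e.g. min over the largest top-left square
}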
I managed to squeeze such a solution after this contest by using short instead of int (any values bigger than 1000 can be changed to 1000) and remembering only every other level of the sparse table (so you have to look up 4 instead of 2 values each time), but that barely fits and feels not like what was intended.

Can someone help me figure out what's wrong in this code..? For question B, I have used the same formula from which the given formula is derived.. https://codeforces.net/contest/1731/submission/186959618
You can't use normal division for modular arithmetic operations. You need to use the modular multiplicative inverse for any division operation when you are using MOD.
Instead of using the modular inverse, you can just find where to divide out the number first. The easiest place is in 2022, but I didn't find this observation in contest.
Can someone explain how we can check for arbitrary side length s if a required square exists?
I think that I solved D with complexity O(nm), am I wrong?.. Solution: 186936631
The idea is to solve the 1D version of the problem: for every element of a row/column, calculate the maximum length K of the segment starting at it such that all numbers inside are >= K.
And you just apply that for rows, then apply for columns in a table of results.
And this subproblem can be solved with linear min in moving window (with deque).
Is it me, or can we not open the codes?
Hm, I can't see them either.
Another solution (with motivation) for B (the solution is not recommended, but the technique used in it may be very useful in other instances):
When I looked at the problem, the first thing that popped into my mind is that the solution would be some formula in terms of $$$n$$$, because of the constraints. I was too lazy to think.
The first thing I tried was some brute force to collect some values. My brute force was classical dynamic programming to find the maximum-sum path in a grid. The values I got for $$$n = 2, 3, 4, \dots$$$, ascendingly, were:
$$$7, 22, 50, 95, 161, 252, \dots$$$
Now, take the difference between each two adjacent values:
$$$15, 28, 45, 66, 91, \dots$$$
Take the difference between each two adjacent values one more time:
$$$13, 17, 21, 25, \dots$$$
I'm sorry, do that one more time :D
$$$4, 4, 4, \dots$$$
We see that the difference is constant, and this happened the third time we took a difference. From here, we can note that our solution is a polynomial of the third degree. This method is called the method of differences.
Now, what I did in-contest was declare that my answer is
$$$a n^3 + b n^2 + c n + d$$$
and plugged in 4 values that I know to construct a system of 4 equations in 4 variables,
and then dumbed down the equations on Wolfram Alpha, got the values for $$$a$$$, $$$b$$$, $$$c$$$ and $$$d$$$, coded it and got AC. But that was too slow.
Note that for any $$$k + 1$$$ values $$$(x_1, y_1), (x_2, y_2), \dots, (x_{k + 1}, y_{k + 1})$$$ there exists a unique polynomial of degree $$$k$$$ satisfying these values, so from the values above we can know for sure that
$$$p(x) = 7 \cdot \frac{(x-3)(x-4)(x-5)}{(2-3)(2-4)(2-5)} + 22 \cdot \frac{(x-2)(x-4)(x-5)}{(3-2)(3-4)(3-5)} + 50 \cdot \frac{(x-2)(x-3)(x-5)}{(4-2)(4-3)(4-5)} + 95 \cdot \frac{(x-2)(x-3)(x-4)}{(5-2)(5-3)(5-4)}$$$
Why? First, observe that this expression agrees with the known values (for example, for $$$x = 2$$$, we can note that all fractions except the first one become $$$0$$$, and the first fraction becomes $$$1 \cdot 7$$$, and so the answer is $$$7$$$). Second, observe that $$$p$$$ is a polynomial of the third degree, and there can only be one such polynomial satisfying these four values, so it is the polynomial we are looking for :D.
This method is called Lagrange Interpolation. And this was very useful in a problem like this, since we can hard-code about 10 values (a guess for a sufficient number of values) for the polynomial, and the code will automatically evaluate the polynomial for you using the same method (just change the global vector of pairs of $$$x$$$ and $$$y$$$ values, and everything will be fine). Note that more correct values will not at all harm or corrupt the polynomial.
Note that if you plug in $$$k$$$ values, both precomputation and evaluation are done in $$$O(k^2)$$$, so if $$$k=10$$$, we do 100 operations per test case, which is not much.
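For reference, a rough O(k^2) sketch of such an interpolation routine (my own illustration, not the commenter's actual code; the helper name lagrange is made up, and the example points are the ones discussed above for problem B):

#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1e9 + 7;

long long power(long long b, long long e) {
    long long r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// evaluate at x the unique polynomial through (xs[i], ys[i]), modulo MOD;
// O(k^2) for k points, the xs must be distinct but need not be equidistant
long long lagrange(const vector<long long>& xs, const vector<long long>& ys, long long x) {
    int k = xs.size();
    long long res = 0;
    for (int i = 0; i < k; i++) {
        long long num = ys[i] % MOD, den = 1;
        for (int j = 0; j < k; j++) {
            if (j == i) continue;
            num = num * (((x - xs[j]) % MOD + MOD) % MOD) % MOD;
            den = den * (((xs[i] - xs[j]) % MOD + MOD) % MOD) % MOD;
        }
        res = (res + num * power(den, MOD - 2)) % MOD;
    }
    return res;
}

int main() {
    // the four points discussed above for problem B: p(2)=7, p(3)=22, p(4)=50, p(5)=95
    vector<long long> xs = {2, 3, 4, 5}, ys = {7, 22, 50, 95};
    long long n;
    cin >> n;
    cout << lagrange(xs, ys, n) * 2022 % MOD << "\n";   // the thread multiplies the path sum by 2022
}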
We can also extract the polynomial itself from the method of differences.
Using the 0-indexed sequence {7, 22, 50, 95, ...}
p(X) = 7 + 15 (X choose 1) + 13 (X choose 2) + 4 (X choose 3)
Consider X choose 1 = X, X choose 2 = X * (X - 1) / 2 and so on. This might be some abuse of that naming in order to make it work for negative values, but it works.
This method of differences is something that I (re)discovered by myself when playing around with sums of powers during high school classes. I wouldn't expect it to be mentioned in codeforces lol. My current opinion on it is that it's fun but not practical since lagrange interpolation seems way more practical when solving problems.
Also, you can get one evaluation using lagrange interpolation in O(N) given that the points you took to interpolate are equidistant (as in for x in [0, 1, 2, 3, 4, 5, ...]). That's the whole idea of the following problem: https://codeforces.net/problemset/problem/622/F
I do not quite understand how the method of differences directly concluded the polynomial you have.
I mean, I do understand where the coefficients $$$7, 15, 13,$$$ and $$$4$$$ come from, but I do not understand where $$$\binom{x}{1}$$$ and $$$\binom{x}{2}$$$ and so on came from.
With regard to the linear Lagrange Interpolation. I have never seen it that way, and it is a great idea. Thanks!
If we force the differences to be a sequence like [0, 0, 0, 1] we have this:
You can prove that each row is a row of Pascal's triangle, and we take (X choose difference of columns) as the column is fixed. This works because the resulting sequence depends on all the orders of differences using only the + operation, so it's a sort of linear system and we can isolate the contribution from each of these positions.
it says, "you're not allowed to view the requested page" for codes
mathforces :))
In F, one of the major parts is calculating $$$\sum_{i=1}^{k} i^p$$$ for some $$$p$$$. Note that here $$$k$$$ is fixed.
As $$$p$$$ is quite less in the problem statement, we can avoid interpolation.
So suppose $$$S(k,p)=\sum_{i=1}^{k} i^p$$$.
Now let's try to expand $$$(x+1)^{p+1}$$$.
We know that $$$(x+1)^{p+1} = \sum_{i=0}^{p+1} {{p+1} \choose i} \cdot x^i $$$.
Now it's not hard to observe that $$$S(k+1,p+1)-1=\sum_{i=0}^{p+1} {{p+1} \choose i} \cdot S(k,i)$$$
$$$S(k+1,p+1)-1-S(k,p+1)=\sum_{i=0}^{p} {{p+1} \choose i} \cdot S(k,i)$$$
So we get $$$ {{p+1} \choose p} \cdot S(k,p) = (k+1)^{p+1}-1-\sum_{i=0}^{p-1} {{p+1} \choose i} \cdot S(k,i)$$$
Now we know that $$$S(k,0)=k$$$.
So if we move in increasing order of $$$p$$$(from $$$p=1$$$ to $$$n$$$), we can find $$$S(k,p)$$$ for all $$$p(0 \leq p \leq n)$$$.
Do note that $$$k$$$ is fixed here.
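A minimal sketch of this recurrence in code (my own illustration; powerSums is a made-up helper returning S(k, p) for all p <= n modulo a prime):

#include <bits/stdc++.h>
using namespace std;
const long long MOD = 1e9 + 7;

long long power(long long b, long long e) {
    long long r = 1;
    for (b %= MOD; e; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// returns S(k, p) = sum_{i=1..k} i^p (mod MOD) for all p = 0..n, with k fixed
vector<long long> powerSums(long long k, int n) {
    // Pascal's triangle of binomial coefficients up to n+1
    vector<vector<long long>> C(n + 2, vector<long long>(n + 2, 0));
    for (int p = 0; p <= n + 1; p++) {
        C[p][0] = 1;
        for (int i = 1; i <= p; i++) C[p][i] = (C[p - 1][i - 1] + C[p - 1][i]) % MOD;
    }
    vector<long long> S(n + 1);
    S[0] = k % MOD;                    // S(k, 0) = k
    for (int p = 1; p <= n; p++) {
        // (p+1 choose p) * S(k, p) = (k+1)^(p+1) - 1 - sum_{i=0}^{p-1} (p+1 choose i) * S(k, i)
        long long rhs = (power((k + 1) % MOD, p + 1) - 1 + MOD) % MOD;
        for (int i = 0; i < p; i++)
            rhs = (rhs - C[p + 1][i] * S[i] % MOD + MOD) % MOD;
        S[p] = rhs * power(C[p + 1][p], MOD - 2) % MOD;   // divide by p+1 via modular inverse
    }
    return S;
}

int main() {
    long long k; int n;
    cin >> k >> n;
    vector<long long> S = powerSums(k, n);
    for (int p = 0; p <= n; p++) cout << "S(k, " << p << ") = " << S[p] << "\n";
}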
Actually, apparently P(x) always has degree 2. I just don't know how! So we just have to calculate it for p=1 and p=2.
There is a small typo here. In the last line of the formula, it should be $$$ \sum_{i = 0}^{p - 1} \binom{p + 1}{i} $$$ instead of $$$ \sum_{i = 0}^{p} \binom{p + 1}{i} $$$
Fixed, thanks
Could you explain further how I can use this formula to find the sum of the multiplication of some power terms?
Suppose you need to evaluate something like $$$\sum_{x=1}^{k} (x-c_1)^{p_1} \cdot (x-c_2)^{p_2} \ldots (x-c_n) ^{p_n}$$$
Suppose $$$T=\sum_{x=1}^{k} (x-c_1)^{p_1} \cdot (x-c_2)^{p_2} \ldots (x-c_n) ^{p_n}$$$
You can expand all $$$(x-c_i)^{p_i}$$$ and multiply them altogether.
You can represent $$$(x-c_i)^{p_i}$$$ by a vector (say $$$vec$$$) of size $$$p_i+1$$$ such that $$$vec[j]={p_i \choose j} \cdot (-c_i)^{p_i-j}$$$. Basically $$$vec[j]$$$ denotes the coefficient of $$$x^j$$$.
Now suppose $$$poly$$$ is the final vector which you get after multiplying all vectors.
So your answer is just $$$\sum_{i=0}^{len} poly[i] \cdot track[i]$$$, where $$$len+1$$$ is the size of vector $$$poly$$$. Here $$$poly[i]$$$ denotes the coefficient of $$$x^i$$$ in $$$T$$$.
Note that $$$track$$$ is same as the one used in my original comment.
You can refer to this submission for implementation details.
Constraints on C were too tight :/
Are you saying that n*m*log(min(n,m)) does not work in D? Well, in principle, yes, but then how does n*m*log(max element of the entire table) work? Isn't it the same thing in the worst case? I'm sorry if I don't understand something, maybe I'm stupid, correct me. Here are 2 of my codes. Sorry for the template :) 186931737 186933967
Panvel, in problem D, is not part of Mumbai.
ModuloForces
Given that it is a cornerstone of the whole solution and not some widely known fact it is worth providing a proof...
You can find many proofs on the internet.
It's pretty well known imo. You can pair up divisors d and n/d. The only way a divisor would be paired with itself is when n is a perfect square.
An alternative proof: A number can be represented by its prime factorisation $$$ x = {p_1}^{a_1} {p_2}^{a_2} ... {p_n}^{a_n} $$$. Then the number of factors are $$$(a_1 + 1)(a_2 + 1)...(a_n + 1)$$$. This product is odd only in the case when all the terms are odd. That happens only when for all $$$i$$$ it is the case that $$$a_i$$$ is even i.e it is of the form $$$a_i = 2 k_i$$$ . So we get $$$ x = {p_1}^{2k_1} {p_2}^{2k_2} ... {p_n}^{2k_n} $$$. There you have your number to be a square.
Why is the proof in the editorial of B so long and complicated?
Here is a simpler proof.
Label each cell $$$(i,j)$$$ with the number $$$i+j$$$. We will walk on exactly one cell of each label from $$$2$$$ to $$$2n$$$. For a fixed label $$$L$$$, the value of a cell is $$$(L-x)(x)$$$ for some $$$x$$$, and this value is maximized when $$$x$$$ is closest to $$$\frac{L}{2}$$$. This gives us an upper bound on our answer of $$$\sum\limits_{L=2}^{2n} (L-\lfloor \frac{L}{2} \rfloor)(\lfloor \frac{L}{2} \rfloor)$$$. This upper bound is also achieved by our construction.
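For reference, evaluating that upper bound by splitting the labels into even ($$$L = 2k$$$) and odd ($$$L = 2k+1$$$) recovers the closed form used elsewhere in this thread:
$$$\sum_{L=2}^{2n} \left(L - \left\lfloor \tfrac{L}{2} \right\rfloor\right) \left\lfloor \tfrac{L}{2} \right\rfloor = \sum_{k=1}^{n} k \cdot k + \sum_{k=1}^{n-1} k(k+1) = \frac{n(n+1)(4n-1)}{6}.$$$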
omg errorgorn proof
For some reason, when I click on editorial submission link, it says
"You are not allowed to view the requested page". Is that a bug or something?
Edit: Its fixed now, Thanks adedalic for fixing it
#include <bits/stdc++.h>
using namespace std;

long long fun(int n) {
    long long ans = 337;
    int temp = 1e9 + 7;
    ans = (ans * (n * (n + 1) % temp)) % temp;
    ans = (ans * ((4 * n - 1) % temp)) % temp;
    return ans;
}

int main() {
    int n, t;
    cin >> t;
    while (t--) {
        cin >> n;
        cout << fun(n) << endl;
    }
}
This is my code for B. It gives incorrect answer only for n = 1e9. What's wrong?
Stupid problem F.
Auto comment: topic has been updated by adedalic (previous revision, new revision, compare).
Found a closed formula for F but I didn’t prove it: $$$\text{answer} = \frac{\left((N-1)K^N - NK^{N-1} + 1\right) K(K+1)}{6(K-1)}$$$. Here's an AC using that: https://codeforces.net/contest/1731/submission/186979120
Besides proof, the other important question is, how did you find it?
The point is that for some reason the polynomial found is always of degree 2, and once you assume it's of degree 2 it's easy to find, because you know it needs to have roots 0 and k. So you're left to find "a" in ax(x-k), and you can find it with an easy case like x=1.
Oh... You mean if we look at it as a polynomial of N instead of K.
Hopefully someone shows up with a proof.
Why is my hash table so slow?
(https://codeforces.net/contest/1731/submission/186902410)
Using this code will lead to TLE on problem C, but I think this code should pass.
187084936
187085094
Thank you!
For Question C, the solution states this:
For the given constraints for elements in the array, the maximum possible XOR sum of any subarray will be less than 2n
How do we know that the maximum possible XOR sum of any subarray is less than 2n?
Suppose the binary representation of $$$n$$$ has $$$k$$$ bits, then the max possible xor sum has $$$k$$$ bits, whereas $$$2\cdot n$$$ has $$$k + 1$$$ bits.
So,in problem C,why could we calculate the number of subarrays with a given XOR sum with o(n)?I don't know how I can figure it out in such a low complexity...
I also struggled a bit to get it, so here's what I understood.
You can iteratively compute the prefix XOR for the array and keep track of how many times each value came up before in a table (set t[0] = 1 to account for the empty prefix). For each of the n prefixes you XOR the current target value and check the table for previous prefixes. That works because the XOR between a prefix up to position x and a prefix up to position y represents the XOR on the [x+1, y] subarray, so you end up checking every subarray in O(n).
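A minimal sketch of that counting idea (my own illustration; countSubarraysWithXor is a made-up helper, and the frequency array is sized by the bound discussed in this thread, i.e. all prefix XORs are assumed to be below maxVal):

#include <bits/stdc++.h>
using namespace std;

// count subarrays of a whose XOR equals target, in O(n)
long long countSubarraysWithXor(const vector<int>& a, int target, int maxVal) {
    vector<long long> cnt(maxVal, 0);   // cnt[v] = how many prefixes had XOR v so far
    cnt[0] = 1;                         // the empty prefix
    long long ans = 0;
    int pref = 0;
    for (int x : a) {
        pref ^= x;                      // prefix XOR up to the current position
        int need = pref ^ target;       // previous prefix value we need
        if (need < maxVal) ans += cnt[need];
        cnt[pref]++;
    }
    return ans;
}

int main() {
    int n, target;
    cin >> n >> target;
    vector<int> a(n);
    for (auto& x : a) cin >> x;
    // with 1 <= a[i] <= n, every prefix XOR is below 2n, as discussed in this thread
    cout << countSubarraysWithXor(a, target, 2 * n) << "\n";
}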
Good contest, but boring problem C.
In problem C, why will the maximum possible XOR sum of any subarray be less than 2n?
To prove this case, suppose the number is n = 16; the binary representation of n is 10000. Now, combining some numbers that are all less than or equal to n (including n itself), at most we can get all bits set, that is 11111, which is 2*n - 1.
In E, there is one more way to calculate the dp, using the Euler totient (phi) function. First calculate the normal Euler totient function in a sieve manner for the given n, storing it in the array phi. Now observe that, by definition, the prefix sum of phi[i] stores the number of pairs (x,y) not exceeding i such that gcd(x,y) = 1. Now if gcd(x,y) = k, then gcd(x/k, y/k) = 1, so the number of pairs not exceeding n whose gcd is k is simply phi_prefix[n/k], and since the prefix array is non-decreasing, we can observe that this value is non-increasing in k. The rest of the approach is the same as the solution, following the greedy approach due to the monotone nature of the packets array (s[i] >= s[i+1]).
Implementation
I thought of the same solution. But note that the sieve works in $$$O(n\log\log n)$$$ time complexity, instead of the $$$O(n\log n)$$$ mentioned in your code. Since the rest of the code is $$$O(n)$$$, this solution is actually asymptotically better than the one in the editorial.
In problem C, I don't understand why it is enough to calculate only prefix XORs and pair them with all perfect squares to check if their XOR is less than 2n.
Can anybody tell me how O(n^(3/2) * T) works in problem C? I am always confused about to what extent the time is in the acceptable range; it is pretty understandable that O(N log N) or O(N (log N)^2) would work, but n^(3/2) seems a bit much.
My rule of thumb is: if it is fewer than about 1e8 operations within the time limit, then the algorithm is OK.
N^(3/2) being faster than Nlog^2N isn't rare. In the end, it might end up depending on the constant.
In problem C, why is it that "for the given constraints for elements in the array, the maximum possible XOR sum of any subarray will be less than 2n"? Explain please.
The maximum possible XOR is all bits of n becoming 1. 2n will have one extra bit compared to n, which is greater than all bits of n becoming 1.
Let's take N = 32. As given in the question, all the elements will be at most n. Let's assume there is an element 32 and the next element is 31; the XOR of these elements will be 63 (< 2*n), which is the maximum allowed XOR, as all the allowed bits are turned on.
How to come up with formulas like in B ? It's confusing
I just thought about the squares as actual squares. You have to kinda start with an assumption, so I’m going to assume we can somehow build an area of all the perfect squares into a rectangle(so the ans would simply be the length times width). If you line up the squares to create a side, one side will be n(n+1)/2. Now a trick I knew was that adding the next odd number will give you the next perfect square. So if we want another n perfect squares, we could add n ones n-1 threes etc… This can be arranged where the nxn square gains an additional length of one and the n-1 gains three etc. We can then add an additional square with side lengths 1 to n to make the side length of each square 2n+1(basically the side of n gains n+1, n-1 gains n-1 + 3 etc). Therefore, it can be computed as (n(n+1)/2 * (2n+1))/3. (Which is the same as n(n+1)(2n+1)/6). Hopefully my explanation is not horrible. My submission with the formula:https://codeforces.net/contest/1731/submission/186917982.
Can anyone please prove this submission? https://codeforces.net/contest/1731/submission/186970125
I wrote this code for 1731B, but on the last test case, when n = 10^9, I am not getting a correct answer. Can somebody help me figure out what I did wrong?
1348*n*n*n overflows before you apply the modulo operation
Can anyone please explain to me the dp formula in problem E? I still didn't get it :(
$$$\#\{(x, y) : \gcd(x, y) = d\} = \#\{(x, y) : d \mid x,\ d \mid y\} - \#\{(x, y) : \gcd(x, y) > d\}$$$
$$$= \#\{(x, y) : d \mid x,\ d \mid y\} - \sum_{k \ge 2} \#\{(x, y) : \gcd(x, y) = k d\}$$$
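A small sketch of that formula in code (my own illustration, counting unordered pairs 1 <= x < y <= n; the count of pairs with both coordinates divisible by d is C(floor(n/d), 2)):

#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n;
    cin >> n;
    // dp[d] = number of unordered pairs 1 <= x < y <= n with gcd(x, y) exactly d
    vector<long long> dp(n + 1, 0);
    for (long long d = n; d >= 1; d--) {
        long long c = n / d;               // how many numbers in [1, n] are divisible by d
        dp[d] = c * (c - 1) / 2;           // pairs with d | x and d | y
        for (long long k = 2 * d; k <= n; k += d)
            dp[d] -= dp[k];                // subtract pairs whose gcd is a larger multiple of d
    }
    for (long long d = 1; d <= min(n, 10LL); d++)
        cout << "gcd exactly " << d << ": " << dp[d] << " pairs\n";
}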
Is there anywhere where I can learn the 2d RMQ implemented in solution 2 of problem D?
Can anyone clarify on polynomial interpolation in problem F?
You can start with easy example of polynomial interpolation. This problem — https://codeforces.net/contest/622/problem/F
Here we have to find $$$1^k + 2^k + 3^k + ... + n^k$$$ where $$$n$$$ is large but $$$k$$$ is small. Now you know that for $$$k = 1$$$ this value is a 2-degree polynomial in $$$n$$$ (specifically $$$n * (n + 1) / 2)$$$. For $$$k = 2$$$ it is a 3 degree polynomial and so on. So we know that the answer is a $$$k+1$$$ degree polynomial in $$$n$$$. We find the values of this polynomial at $$$k+2$$$ different points using brute-force and then interpolate to find the value at $$$n$$$.
Similarly in problem F, you can infer that the final answer is a $$$n+2$$$ degree polynomial and then do the same
On Problem D, using only Binary search and clever optimizations in the check function could get you an Accepted. I was surprised when this worked.
187053184
UPDATE: Never mind I got hacked :)
s_jaskaran_s nishkarsh could you elaborate a little bit on "be a polynomial whose degree will be <= n+2" in the editorial to problem F, because I am getting the degree of the polynomial as n.

Actually the degree is n + 1, so you will need n + 2 points to interpolate. The polynomial F is an n-degree polynomial as you found, but the polynomial P(u) is an (n + 1)-degree polynomial of u; this is explained here.
Yeah, I got it. Thanks!!
I wasn't merging the terms of the P(u). I was only looking at the individual terms of F(t).
I do not understand these lines from the editorial of problem C:
"For the given constraints for elements in the array, the maximum possible XOR sum of any subarray will be less than 2n, so the number of possible elements with odd divisors is $$$\le \sqrt{2n}$$$. The number of subarrays with a given XOR sum can be calculated in O(n)."
Suppose we have the array [2, 4]; then their XOR is 6, which is greater than 2*n (i.e. 4).
Can anyone give an example to help me understand better?
Read the question again. 1<= ai <= n. So in your case, 4 shouldn't be there in the array
In problem B, for calculating n(n+1)(4n-1)/6, some people multiply by 166666668 instead of dividing by 6.
How does this work, and what is the logic behind it?
166666668 is the modular inverse of 6 in mod 1e9 + 7
Where can I read about "technique of polynomial interpolation" used in this question?
Somebody help me? My code gets WA on test 3, thanks! https://codeforces.net/contest/1731/submission/187155992
Sorry, it is out of bounds on
ll x = min(dp[k+1], m) / k;
In problem E how to show that s[k] is non-increasing?
I have the exact same question. If you read this explanation here which uses totient function, it is obvious why it should be non-increasing. The dp[k] array which we initially create itself is non-increasing. But if you use the dp approach mentioned in the editorial (and not prefix-sum of totient function), I don't know how people figured it out. It would be great if someone could provide more intuition into this.
Look at any pair $$$(x, y)$$$ ($$$1 \le x, y \le n$$$) with $$$\gcd(x, y) = g > 1$$$. It means we can write them as $$$x = g x'$$$ and $$$y = g y'$$$ with $$$\gcd(x', y') = 1$$$. Now let's make another pair $$$(x'', y'')$$$ where $$$x'' = (g - 1)x'$$$ and $$$y'' = (g - 1)y'$$$. Obviously $$$1 \le x'', y'' \le n$$$ and $$$\gcd(x'', y'') = g - 1$$$.
In other words, from any pair $$$(x, y)$$$ with $$$\gcd(x, y) = g$$$ we induce a valid pair $$$(x'', y'')$$$ with $$$\gcd(x'', y'') = g - 1$$$. So, the number of pairs with $$$\gcd = g - 1$$$ is greater or equal to the number of pairs with $$$gcd = g$$$.
Current (unofficial) Rank 1: ttklwxx's submission fails on Ticket 16619 from CF Stress. Can someone hack it for me, or let me know if I constructed an invalid testcase? Thanks.
Submission Link
It worked. https://codeforces.net/contest/1731/hacks/878656
In F, I think max degree of polynomial P(u) is n+1 instead of n+2 which is mentioned in the editorial. As by this article's first theorem under generalisation, $$$ \sum_{k=1}^n k^a $$$ is a polynomial of degree a+1 and since the max degree of t is n so max deg(P(u)) = n+1.
PS: In the given solution I just replaced n+3 by n+2, and this solution also passed.
In problem F shouldn't the degree of P(u) <= n, I can't understand why it is <= n+2, can someone explain it please?
Full proof for $$$E$$$:
Throughout the proof, will use the notations used by the editorial, i.e., $$$dp[i]=$$$ number of pairs with GCD $$$i$$$, $$$s[i]=$$$ maximum number of groups of $$$(i-1)$$$ edges where each edge has a weight $$$i$$$.
Proof that $$$dp[i]$$$ and $$$s[i]$$$ are non-increasing:
Any pair with GCD $$$g+1$$$ can be represented as ($$$a\cdot (g+1)$$$, $$$b\cdot (g+1)$$$) ($$$a<b$$$ and are coprime). Thus, for every such pair we can have another pair with GCD $$$g$$$, i.e., ($$$a\cdot g$$$, $$$b\cdot g$$$). Hence, $$$dp[g]\ge dp[g+1]$$$ and $$$s[g]\ge s[g+1]$$$.
Proof that greedily choosing the maximum available group is optimal:
Suppose for some $$$m$$$ the maximum available group we can choose is with gcd $$$g$$$, assume an optimal solution $$$sol$$$ that does not choose $$$g$$$ exists, let's anyway still use a group from $$$g$$$ then proceed in descending order of GCDs making choices like in $$$sol$$$, we will reach some $$$k<g$$$ where choosing one more group of $$$k$$$ will exceed $$$m$$$.
At that moment we must choose at least $$$2$$$ more groups as part of $$$sol$$$, because if only $$$1$$$ group $$$last$$$ ($$$last<g$$$) is remaining for $$$sol$$$, this means that after choosing that group we will have chosen $$$g-1+m$$$ edges, which means that before $$$last$$$ we have chosen $$$g-1+m-(last-1)$$$, which means we already exceeded $$$m$$$ before choosing $$$last$$$, which is a contradiction.
Let's return to $$$k$$$ that we are currently stuck at, since $$$s[i]$$$ is non-increasing, for sure we can choose our last group from some $$$l$$$ ($$$l<k$$$), which means we introduced $$$2$$$ more groups ($$$g$$$ and $$$l$$$) but we removed at least $$$2$$$ other groups, which means solution can't get any worse by choosing $$$g$$$.
C can also be solved in $$$O(n \log n)$$$ using the Walsh-Hadamard transform.
How? What is this? Can you please explain?
Construct the prefix XOR array and prepend 0 to it. Then the XOR of any subsegment of the original array is the XOR of some pair of numbers from the prefix XOR array. Use FWHT to compute the polynomial f(x) where the coefficient of x^a is the number of ways to select two indices i < j s.t. prfxor[i]^prfxor[j] = a. Then simply iterate on a and check if it's valid; if it is, then add the coefficient of x^a to the answer.
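A rough sketch of that FWHT idea (my own illustration, not the commenter's code; the final validity check for the specific problem is left as a placeholder comment):

#include <bits/stdc++.h>
using namespace std;

// in-place Walsh-Hadamard transform over XOR; a.size() must be a power of two
void fwht(vector<long long>& a, bool invert) {
    int sz = a.size();
    for (int len = 1; len < sz; len <<= 1)
        for (int i = 0; i < sz; i += len << 1)
            for (int j = i; j < i + len; j++) {
                long long u = a[j], v = a[j + len];
                a[j] = u + v;
                a[j + len] = u - v;
            }
    if (invert)
        for (auto& x : a) x /= sz;
}

int main() {
    // single test case for illustration
    int n;
    cin >> n;
    vector<int> a(n);
    for (auto& x : a) cin >> x;

    int sz = 1;
    while (sz < 2 * n) sz <<= 1;     // all prefix XORs are below 2n, as argued in this thread

    // frequency of every prefix XOR value, including the empty prefix 0
    vector<long long> cnt(sz, 0);
    int pref = 0;
    cnt[pref]++;
    for (int x : a) { pref ^= x; cnt[pref]++; }

    // XOR-convolve cnt with itself: ord[v] = number of ordered pairs (i, j), i = j allowed,
    // with prfxor[i] ^ prfxor[j] = v
    vector<long long> ord = cnt;
    fwht(ord, false);
    for (auto& x : ord) x *= x;
    fwht(ord, true);

    // ways[v] = number of pairs i < j with prfxor[i] ^ prfxor[j] = v,
    // i.e. the number of subarrays whose XOR is v (the coefficient of x^v above)
    vector<long long> ways(sz);
    for (int v = 0; v < sz; v++)
        ways[v] = (v == 0 ? ord[0] - (n + 1) : ord[v]) / 2;

    // then, as described in the comment above, iterate over v, keep only the XOR values
    // that are valid for the problem, and sum up ways[v]
    cout << ways[0] << "\n";   // e.g. the number of subarrays with XOR 0
}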
Problem ratings (difficulty) when?
In problem 1731B, I think in the solution code you have to use void instead of int for the solve function.
Right, but it doesn't matter
In problem F, what is the name of the polynomial interpolation technique you used in your code, sir?
In problem D, a minimum can be calculated in O(1) using a sparse table. Can anyone explain how?

E can be solved in O(n) using Möbius and sqrt stuff. Link
For problem F, here is a proof for why $$$\deg F(t) \leq 2$$$. This method requires some generating function technology. I may translate the solution to English.