Vladithur's blog

By Vladithur, 3 years ago

You can also find video editorials for problems B-D on ak2006's YouTube channel!

1632A — ABC

Hint 1
Hint 2
Solution
Solution codes

1632B — Roof Construction

Hint 1
Hint 2
Solution
Solution codes

1632C — Strange Test

Hint
Solution
Solution codes

1632D — New Year Concert

Hint 1
Hint 2
Hint 3
Solution
Solution codes

1632E2 — Distance Tree (hard version)

Hint 1
Hint 2
Hint 3
Hint 4
Solution
Solution codes

P. S. Solution codes will be published a little later.

P. P. S. Do not forget to rate the problems in the announcement.

UPD: Solution codes have been posted.

Vote: I like it +228 Vote: I do not like it

»
3 years ago, # |
Rev. 2   Vote: I like it +9 Vote: I do not like it

damn, that was fast!

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +40 Vote: I do not like it

    We tried our bfast :)

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it +36 Vote: I do not like it

      Problems were really good, but I am stupid.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Authors, please look at this solution for problem C. I have written a brute-force solution (determined $$$b'$$$ with a loop).

      Is there a possibility that the solution can get TLE on some cases? If not, then why?

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it +1 Vote: I do not like it

        Notice that $$$b' \le 2b$$$, since $$$a < b$$$. This means that your first loop will run at most $$$b$$$ times, and your second loop will run at most $$$2b$$$ times, so at most $$$3b$$$ times in total. Since $$$\sum b \le 10^6$$$ over all test cases, your solution will run well within the time limit and won't TLE.
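        In case it is useful to others, here is a minimal sketch of a two-loop brute force of the kind described above (my own reconstruction, not the linked submission; whether taking the minimum of these candidates is always enough is exactly what later comments argue — the point here is just that both loops are small):

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int t; scanf("%d", &t);
            while (t--) {
                long long a, b;
                scanf("%lld %lld", &a, &b);
                long long ans = b - a;              // just do a++ (b - a) times
                long long a1 = a;
                while ((a1 | b) != b) a1++;         // first loop: at most b steps, since a < b
                ans = min(ans, (a1 - a) + 1);       // a++ until a is a submask of b, then one OR
                long long b1 = b;
                while ((a | b1) != b1) b1++;        // second loop: stops before 2b (see the bound above)
                ans = min(ans, (b1 - b) + 1);       // b++ until b is a supermask of a, then one OR
                printf("%lld\n", ans);
            }
            return 0;
        }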

»
3 years ago, # |
  Vote: I like it +2 Vote: I do not like it

Thanks for the fast editorial!!

»
3 years ago, # |
  Vote: I like it +20 Vote: I do not like it

Order of speed: CF editorial > light > Bolt

»
3 years ago, # |
  Vote: I like it +16 Vote: I do not like it

somebody reads tags

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Would someone mind explaining the second problem?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Hi, so basically it says that for any permutation of the numbers, the minimum possible maximum XOR is 2^k, where 2^k is the highest power of two not exceeding the largest value. For example, for the range 0-9 you cannot go below 8 = 2^3 for any permutation, and for the range 0-22 you cannot go below 16 = 2^4.

    So the nearest 2^k is the value you need.

    You can try it manually for smaller ranges, e.g. 0-5, and then increase the range to 15 or 17. The largest bump you get is when you XOR 2^k and (2^k)-1, e.g. 7^8 or 15^16. You need to avoid this, therefore you need to place the 0 appropriately.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      But if we want to avoid 2^k and (2^k)-1 being adjacent, why don't we just separate them? And why is the minimum of the max 2^k? I can't understand it, I am new.

      • »
        »
        »
        »
        3 years ago, # ^ |
        Rev. 2   Vote: I like it +1 Vote: I do not like it

        Yes, if you need to avoid 2^k and (2^k)-1 being adjacent, you need to separate them, i.e. put 0 between them. For example, for 0,1,2,3,4 you can rearrange as 1,2,3,0,4.

        Although this is how I did it and not exactly what the editorial suggested, I guess the idea is the same.

        To see why 2^k is the minimum of the maximum, you can try it yourself manually with smaller ranges.

        Or consider this example: the range 0-18. Here the highest power of 2 is 16 (2^4), which is 10000 in binary.

        If you XOR 16 with any number in 0-15, you end up with a number >= 16:

        16^2 = 10010 = 18, 16^5 = 10101 = 21, 16^0 = 10000 = 16.

        Try some yourself.

        So here the concept of the highest set bit comes into play: for 16 it is the 5th bit from the right (10000), and this bit is present in all numbers from 16 to 18 in the sequence.

        So if you XOR any number in 16-18 with any number in 0-15, the result is always >= 16, since for the smaller numbers (0-15) the 5th bit is always 0:

        17 = 10001, 6 = 00110, XOR = 10111 = 23.

        You have to minimize this... why? Because in any permutation there will be at least one number among 16-18 that is adjacent to a number in 0-15, and their XOR will blow up the value (e.g. 1,2,3,4,**17**,5,6,...).

        Now, interestingly, if you XOR 16^17 or 17^18 or 16^18, the values are small because both operands have the 5th bit set:

        16 = 10000, 18 = 10010, XOR = 00010 = 2.

        So for a very simple solution, put them serially: 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18.

        The highest value you get is from 15^16, so just put the 0 between them: 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,0,16,17,18, and it should work.

        Try doing the XORs manually and you will recognize the pattern.
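        In case it helps, here is a minimal sketch of the construction described above (my own code, not the editorial's; it assumes the usual input format of t test cases, each containing n):

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int t; cin >> t;
            while (t--) {
                int n; cin >> n;
                int p = 1;
                while (2 * p < n) p *= 2;                        // highest power of two strictly below n
                for (int i = 1; i < p; i++) cout << i << ' ';    // 1 .. 2^k - 1
                cout << 0 << ' ';                                // 0 separates 2^k - 1 and 2^k
                for (int i = p; i < n; i++) cout << i << ' ';    // 2^k .. n - 1
                cout << '\n';
            }
            return 0;
        }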

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it +3 Vote: I do not like it

          I sincerely thank you for such a detailed explanation. I totally understand it now. I will study hard and make progress every day. Thank you.

»
3 years ago, # |
  Vote: I like it +2 Vote: I do not like it

Fast editorial and good problems!

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Very curious about the bonus in the tutorial. So hard!

»
3 years ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

I'm quite confused about D.

Can anyone try to hack the following method? (The system tests won't be completed for quite a while.)

I have observed that for each $$$r$$$, at most one $$$l$$$ can make $$$[l,r]$$$ a bad segment, so I used binary search too. If such an $$$l$$$ can be found, I check whether there is already a point inside $$$[l,r]$$$. If there isn't, I mark a point at $$$r$$$ using a BIT, since it is best to put the point at the right-most position; otherwise there is no need to do ans++, we can reuse the earlier points. Each marked point is treated as if it were changed into a big prime.

UPD: it turned out that when I was using a sparse table to store range gcds, I brainlessly put the loop over $$$2^j$$$ in the inner loop. After fixing that I (un)fortunately got AC.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem B:

Please check my submission; I don't know why this construction of the permutation is incorrect.

Thank you!

»
3 years ago, # |
  Vote: I like it +2 Vote: I do not like it

I failed to solve C

»
3 years ago, # |
Rev. 2   Vote: I like it +3 Vote: I do not like it

Giving hints is a good thing for beginner contestants like me, and thanks for the fast editorial.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I love contests even more when the tutorial comes this fast.

»
3 years ago, # |
Rev. 4   Vote: I like it +20 Vote: I do not like it

Simpler (in my opinion) solution for D in O(n log^2 A). We'll treat replaced elements as huge, pairwise-distinct prime numbers. Let's keep a stack of gcds of the suffixes of the current prefix, but keep it compressed: we compress all suffixes with the same gcd into one element. Notice that the gcd can decrease at most log A times along the stack (each time it at least halves), so the stack has at most log A elements.

After moving to the next prefix we recalculate the gcds for all blocks of the previous prefix and also add a new one-element suffix. Then we compress the new stack (you can also do it while recalculating the gcds).

Now let's look through the stack. If there is a block with gcd x, and the suffix of length x starts inside this block, then we replace the current element (you don't need to actually replace it, you can just add 1 to the answer and clear the stack) and clear the stack (all gcds of suffixes which include the current element become 1, but their lengths are more than 1). If there is no such block, we don't replace the element and don't clear the stack.

This solution ran in 124 ms on the round's tests.

Update: it seems this is actually an O(n log A) solution. There are n different suffix starting positions, and for each suffix the gcd decreases at most log A times, so the total number of gcd decreases is O(n log A). But we still need the compression, because without it we would do up to n operations per prefix (even when the gcd of a suffix does not decrease, we still spend O(1) inside gcd). So the solution above has complexity O(n log A).
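In case it is useful, here is a minimal, unofficial sketch of this compressed suffix-gcd idea (my own code with an assumed I/O format, not the author's submission):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    scanf("%d", &n);
    vector<long long> a(n);
    for (auto &x : a) scanf("%lld", &x);

    // blocks of equal suffix gcd for the current prefix a[0..i]:
    // (g, lo) means every suffix starting at an index in [lo, next block's lo - 1] has gcd exactly g
    vector<pair<long long, int>> blocks;
    int ans = 0;
    for (int i = 0; i < n; i++) {
        // extend every stored suffix by a[i] and append the new one-element suffix [i, i]
        blocks.push_back({0, i});                 // gcd(0, a[i]) == a[i]
        vector<pair<long long, int>> nxt;
        for (auto [g, lo] : blocks) {
            long long ng = __gcd(g, a[i]);
            if (nxt.empty() || nxt.back().first != ng) nxt.push_back({ng, lo});  // compress equal gcds
        }
        blocks.swap(nxt);

        // a suffix [l, i] is bad iff gcd == i - l + 1; for a block with gcd g
        // that means l = i - g + 1 must lie inside the block's range of start positions
        bool bad = false;
        for (size_t j = 0; j < blocks.size(); j++) {
            long long g = blocks[j].first;
            int lo = blocks[j].second;
            int hi = (j + 1 < blocks.size() ? blocks[j + 1].second - 1 : i);
            long long l = (long long)i - g + 1;
            if (lo <= l && l <= hi) { bad = true; break; }
        }
        if (bad) {
            ans++;           // conceptually replace a[i] with a huge prime:
            blocks.clear();  // every suffix through i now has gcd 1 and length > 1, so it is never bad
        }
        printf("%d ", ans);
    }
    printf("\n");
    return 0;
}

The key invariant is that the blocks' gcds are strictly increasing along the stack, so there are at most log A of them per prefix.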

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Sorry, I still don't understand your idea well. Can you share your submission? Many thanks!

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Here is an implementation of glebustim's approach: 144614536.

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 3   Vote: I like it 0 Vote: I do not like it
      Code

      But I think that sultan__'s implementation is also correct.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    If implemented carefully, you can achieve O(n log A) time complexity. Let's store all blocks in a plain array, in increasing order of gcd and of suffix start. Then, for each new element, we do two operations.

    The first operation is to update all gcds by taking gcd with the new element, from the last suffix to the first (in reverse order). The second operation is to remove duplicates from the array by shifting (it works in O(length of the array) in the worst case).

    Consider the second operation: its total time complexity is O(n log A), because the largest possible length of the array is log A, and the number of elements we remove from the array is not greater than the number of elements we add to it.

    Now consider the first operation. We compute gcd up to log A times per element. We need the following fact: computing gcd(a[0], a[1], ..., a[n-1]) as g = 0, then for each i: g = gcd(g, a[i]), takes O(n + log(max(a))) time in total, because each time g actually changes it at least halves, so it can decrease at most log(max(a)) times. Using this fact, if we keep the result of the gcd of the previously processed block during the first operation, it costs O(log A + log A), where the first log is the length of the array and the second log is the total running time of the gcd updates.

    In the end, the two operations cost O(n log A + n log A) = O(n log A).

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      Yes, I think the idea is right and this will work in O(n log A).

      Update: I wrote this, then submitted my old solution and this one at the same time, and got 124 ms on both. Maybe I wrote something wrong, but it feels like the optimization shouldn't give a huge performance boost, because for that you would need specialized big tests, and those don't usually appear in problems (only when the authors want to kill a particular wrong solution that works on a random testset).

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Perhaps without this change it is still O(n log A) and we just don't know how to show it. Or making a bad test is really hard. Oh, I started to think about it just now, and here is what I got.

        looks like a proof
        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Yes, I just understood that the solution without the optimization is still O(n log A). I will update the solution.

»
3 years ago, # |
  Vote: I like it +15 Vote: I do not like it

Editorials with hints are a gem :D

»
3 years ago, # |
  Vote: I like it +20 Vote: I do not like it

I'll cope with my inability to solve C in the contest by watching the new episodes of Demon Slayer and AOT T__T

»
3 years ago, # |
  Vote: I like it +8 Vote: I do not like it

Video Editorial, if anyone is looking for a video format.

»
3 years ago, # |
Rev. 2   Vote: I like it +12 Vote: I do not like it

An unexpected thing happened in problem C: instead of reading the operation as b = b + 1, I read it as a = a - 1. Then I solved the problem with a DP algorithm and passed the system tests ?_?. I was really shocked when I discussed it with my friends after the contest. Is this really a valid solution to this problem, or is it just luck? If so, can someone prove the correctness of this solution?

This is my solution: 144542918

Sorry for my bad English

  • »
    »
    3 years ago, # ^ |
    Rev. 3   Vote: I like it 0 Vote: I do not like it

    lmao, glad to see I'm not the only one who read b++ as a-- :p. I looped i from 0 to b-a and checked if you could obtain b in i operations. Though similarly to you, I have no idea why it works xd

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +8 Vote: I do not like it

    We have to make $$$a$$$ a submask of $$$b$$$, then apply the $$$a := a|b$$$ operation once.

    Making $$$a$$$ a submask of $$$b$$$ can be done by incrementing $$$a$$$ some number of times, or by incrementing $$$b$$$ some number of times. The number of times we would increment $$$b$$$ is the same as the number of times we would decrement $$$a$$$. This is why a-- works like b++ in this problem.
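    For instance, to illustrate that last claim with numbers: for $$$a = 6 = 110_2$$$ and $$$b = 9 = 1001_2$$$, incrementing $$$b$$$ until it is a supermask of $$$a$$$ reaches $$$14 = 1110_2$$$ after $$$5$$$ steps, and decrementing $$$a$$$ until it is a submask of $$$b$$$ reaches $$$1 = 0001_2$$$, also after $$$5$$$ steps.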

»
3 years ago, # |
  Vote: I like it +6 Vote: I do not like it

I solved Problem C slightly differently. I used binary search.

Firstly we can see $$$a|b >= max(a,b)$$$.

So $$$a|b >= b$$$, which means that after performing the third type of operation we have $$$a>=b$$$.

Once this has happened, increasing a is no longer optimal, so we will only increase b by 1. This means that after the single operation of type 3 we use operations of type 2 only ($$$b=b+1$$$).

So this leaves us with two types of operations
Type 1-> (a++ a++ a++ a++ ... a|b b++ b++ .....)
Type 2-> (b++ b++ b++ b++ ... a|b b++ b++ ... )

Once we know about this we can simply binary search to find the minimum number of operations required to make $$$a=b$$$.

My code using this approach->144558213

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    When you wrote

    So this leaves us with two types of operations
    Type 1-> (a++ a++ a++ a++ ... a|b b++ b++ .....)
    Type 2-> (b++ b++ b++ b++ ... a|b b++ b++ ... )
    

    How did you reach the conclusion that we will only increase a or only increase b before operation of type 3?

    It is clear that after type 3 operation we only need to increase b.

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I don't have a proof for this, but I can try to explain with the help of an example.

      Suppose we increase the number a x times and increase the number b once (let y denote the current value of b). In this case the OR gives (a+x)|(y+1).

      As opposed to this, if we simply increase the number a x times, then the new a after the OR operation is (a+x)|y.

      I don't have a proof for this, but (a+x)|y <= (a+x)|(y+1)

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        (a+x)|y <= (a+x)|(y+1) seems to be wrong; check this case: a+x=2 and y=1
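        Indeed, 2|1 = 3 while 2|2 = 2, so here the left side is actually the larger one.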

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I think this is the logic:
      Our aim is to match the bit values in corresponding positions of a and b.

      Proof for Type 1:
      Consider the highest bit which is set in a but not in b. It can only become set in b by incrementing b, but we can avoid needing it at all by making that bit 0 in a by incrementing a.
      Example: let the binary representations be a = 01110 and b = 10000. Here it is clear that using the operation a|b is good only after incrementing a up to 10000; otherwise doing a|b = 11110 will cost extra b++ operations. And doing b++ operations in between, while incrementing a up to 10000, would be of no use.

      Proof for Type 2:
      Let's take the example a = 01010 and b = 10101.
      If we do a|b first, a becomes 11111, after which we need to do b++ until b matches 11111.
      But we can reduce the number of operations by first incrementing b to 11000 (why? we save the later b++ operations by making the third highest bit of the result 0). Then a|b = 11010, and we finish by incrementing b up to 11010.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Good observation about the two types of operation sequences. But I feel there is no need for binary search; just iterating the number of operations from 0 to (b - a) is good enough? Code like:

    #include <bits/stdc++.h>
    using namespace std;

    int main(int argc, char* argv[]) {
    	int T; cin >> T;
    	for (int t = 0; t < T; t++) {
    		int a, b; cin >> a >> b;
    		int ans = b - a;
    
    		for (int i = 0; i <= (b - a); i++) {
    			// i operations of a++, then one OR, then b++ until b reaches the new a
    			int times1 = i + 1 + (((a + i) | b) - b);
    			// i operations of b++, then one OR, then b++ until b reaches the new a
    			int times2 = i + 1 + (((b + i) | a) - (b + i));
    			ans = min(ans, min(times1, times2));
    		}
    
    		cout << ans << endl;
    	}
    
    	return 0;
    }
    
»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

In problem D, how do we reach O(nlogn) using a segment tree? (I can't understand the editorial.) Could anyone help me?

»
3 years ago, # |
  Vote: I like it +9 Vote: I do not like it

More cheating today.

@ MikeMirzayanov

We need to ban the cheating accounts; this kind of account makes competitive programming unfair.

What's more, I find that most of the accounts which submitted their code for E1 at 01:59 are suspected of cheating.

»
3 years ago, # |
  Vote: I like it +24 Vote: I do not like it

I read $$$C$$$'s editorial as:

" It is optimal to have either a′=a or b′=b. Bonus: Prove it. "

»
3 years ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

Problem C: it can be solved easily using SOS DP. I couldn't come up with the idea during the contest, but I got it during upsolving.

Code:

#include <bits/stdc++.h>

using namespace std;

int main(){
	ios_base::sync_with_stdio(0); cin.tie(0);
	int T;
	cin>>T;
	while(T--){
		int a, b;
		cin>>a>>b;
		int ans = 1e9;
		// dp[m] = smallest a' in [a, b) that is a submask of m
		vector<int> dp(2*b + 10, 1e9);
		for(int i=a; i<b; i++){
			dp[i] = i;
		}
		// SOS DP: propagate values to all supermasks
		for(int i=0; i<=21; i++){
			for(int j = 0; j<=2*b+2; j++){
				if(j & (1<<i) ){
					dp[j] = min(dp[j],dp[j^(1<<i)]);
				}
			}
		}
		// try every b' = i in [b, 2b]: cost = (i - b) + (dp[i] - a) + 1
		for(int i=b; i<=2*b; i++){
			ans = min(i-b + 1 + dp[i] - a,ans);
		}
		cout<<min(ans, b-a)<<'\n';
	}
	return 0;
}

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Why up to 2*b?

    • »
      »
      »
      3 years ago, # ^ |
      Rev. 2   Vote: I like it +4 Vote: I do not like it

      You don't want to check numbers greater than $$$2b$$$, because reaching them means doing at least $$$b$$$ increments of $$$b$$$ (effectively $$$b := b + b$$$), and since $$$a<b$$$, it is already cheaper to just do $$$a := a + (b-a)$$$.

      Note: $$$b-a < b$$$

      Also, the final $$$a' = a+x$$$ should be a submask of the final $$$b' = b+y$$$, and that is easy to prove.
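      A small illustration: with $$$a=3$$$ and $$$b=5$$$, any candidate $$$b' \ge 10$$$ already costs at least $$$5$$$ increments of $$$b$$$, while simply doing $$$a := a+1$$$ twice (the $$$b-a$$$ answer) costs $$$2$$$.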

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Can you please explain your approach?

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Well, if you iterate over a number (call it $$$b'$$$), you want to find another number that is a submask of $$$b'$$$ and as close to $$$a$$$ as possible.

»
3 years ago, # |
  Vote: I like it +14 Vote: I do not like it

Can anybody prove or disprove my solution for problem D?

My solution:

Spoiler

I used stress testing and could not find any counterexamples.

Here is the submission: 144576517

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I solved it exactly this way during the contest, although I was sure it was O(n^2).

    It seems the proof has a lot to do with the fact that when we go back, the gcd can decrease at most log times.

    Maybe if I get some time, I'll try to prove it.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    - No bad segment lies completely inside another one -

    proof

    - Each time we select the first bad segment, erase its last element (put some prime number there), and continue from that one -

    proof that it's an optimal strategy
    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Yes, but what about the case where we don't find any bad segment in the go-back step? Then we cannot move our left pointer all the way to the right.

      • »
        »
        »
        »
        3 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I don't quite get what you mean by "we don't find any bad segments in the go-back step".

        • »
          »
          »
          »
          »
          3 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          I might be misunderstanding something, but as far as I understand, as long as gcd > r - l + 1 we increase our r.

          The moment the gcd becomes smaller than r - l + 1, we have to build a new gcd, and we do that by going back from r. Now, if we manage to find a bad segment this way, then everything is fine, as we'll be able to move our l pointer to r + 1.

          But what if we only find a segment [new_l, r] such that l < new_l < r and

          gcd(new_l, ..., r) > r - new_l + 1

          and

          gcd(new_l - 1, ..., r) < r - new_l + 2

          Then we have to continue analyzing from l = new_l and r = r + 1.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Is it possible to solve problem C using DP and bitmasks?

»
3 years ago, # |
Rev. 2   Vote: I like it +14 Vote: I do not like it

I don't know why this solution works for C. The answer is one of:
1. b-a, or
2. increase a until a is a submask of b (i.e. a|b = b) and then do the OR operation, or
3. increase b until b is a supermask of a (i.e. a&b = a) and then do the OR operation.

I just found it intuitively correct during the contest and checked it on various test cases before submitting; I have no clue about the proof.

Bonus Task 3: Prove it :)

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    I also used the same approach, but I'm also not able to prove it.

    If you find a proof in the future, please do share.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Why are there so few submissions for C? It was the easiest problem in this round.

»
3 years ago, # |
Rev. 3   Vote: I like it +1 Vote: I do not like it

C upsolved in "$$$O(\log b)$$$". Converted the numbers to 0-1 strings and did string manipulations first. (144595427, Perl).

Solution
»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone help me understand C better? In this equation: (a′−a)+(b′−b)+((a′ | b′)−b′)+1

I understand that we use the first term to count the steps required to turn a into a', the second term to count the steps required to turn b into b', and the "1" is the step when we turn a' into b' using (a' | b'). But what about the third term, ((a' | b') - b')? Can someone explain it more?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    We are not turning a' into b' using a'|b'; rather, we are turning a' into some value >= b'. The term ((a'|b') - b') tells us how many more moves (using the b increment operation) we still have to do so that b catches up to a'|b'.
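    For instance, with a' = 5 = 101 and b' = 6 = 110: after the OR, a becomes 7, and we still need (a'|b') - b' = 7 - 6 = 1 more b := b+1 step before a = b.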

»
3 years ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

Problem C. Can someone prove that an optimal solution can always be achieved by one of the following two procedures (i.e. one of the two procedures always yields an optimal solution)?

Procedure 1: (a++) ... (a++) (a|b)
Procedure 2: (b++) ... (b++) (a|b)

Why is it correct?

»
3 years ago, # |
  Vote: I like it +8 Vote: I do not like it

I feel sorry for the kids studying in school 179

»
3 years ago, # |
  Vote: I like it +10 Vote: I do not like it

My solution to E1 which is somewhat different from the editorial (although shares some ideas):

Think about computing the answer for a particular $$$x$$$. Similar to the editorial, we will see if we can achieve an answer of $$$ans$$$. This can be done with binary search or the two pointers method (although actually the former TLE'd for me). :(

We consider the nodes with depth greater than $$$ans$$$. Unlike the editorial which deduces where we should add the edge $$$(1, u)$$$, my solution tries all nodes (i.e. $$$1 \leq u \leq n$$$). The new distance to some node $$$v$$$ with depth greater than $$$ans$$$ is now $$$x + dist(u, v)$$$. We can achieve $$$ans$$$ if the maximum value over all $$$v$$$ of this expression is $$$\leq ans$$$. In other words, if the max $$$dist(u, v)$$$ is $$$\leq ans - x$$$.

How do we quickly query this max value? In precomputation, calculate the distance from each node to every other node. If we're adding the edge $$$(1, u)$$$, we only want to think about the distance from $$$u$$$ to nodes $$$v$$$ with $$$depth[v] > ans$$$. So sort the distance array (with source node $$$u$$$) by the nodes' depth. Now, we can create a suffix maximum array out of this array, which, along with knowing which index each depth starts at in this array, will allow us to answer the queries above.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Hi, robinz62,

    May I have a look at your code following the idea you mentioned here? I have the same idea, however it returns TLE on test 6; here is my code.

    The time complexity I used is O(n^2 log n), which should be very close to O(n^2) for n <= 3000, but the verdict suggests that's not true (at least for problem E1 here...)

    • »
      »
      »
      3 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I think the timing might be a bit tight, maybe also depending on the language. As I mentioned, my solution also TLEd if I used binary search instead of two pointers. Here is my submission: https://codeforces.net/contest/1632/submission/144598040

      It's probably a bit hard to read:

      • you can ignore the random comments to myself, where I was just thinking
      • $$$depth$$$ is the depth of nodes, rooted at 0
      • $$$dist[u]$$$ is the array of distances from $$$u$$$ to everyone else
      • $$$st[u]$$$ is the same information as $$$dist$$$ but sorted by the depths (given that name because originally I used segment trees until I realized that wasn't necessary)
  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Great explanation! Thanks a lot!

»
3 years ago, # |
  Vote: I like it +20 Vote: I do not like it

really good contest, my respect to the authors

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

In the editorial of D, what does this line mean? "In our case, this is easy to do because our segments are not nested."

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +1 Vote: I do not like it

    I meant that it is easy to remove all segments that have $$$r$$$ in them since we can keep deleting the first one while the condition holds.

    Why are the segments not nested? Suppose they are: $$$[l_2, r_2]$$$ is inside $$$[l_1, r_1]$$$. We have $$$\gcd(a_{l_2} \ldots a_{r_2}) \ge \gcd(a_{l_1} \ldots a_{r_1})$$$ and $$$r_2 - l_2 + 1 \le r_1 - l_1 + 1$$$. Since both segments are bad, this is possible only when both inequalities are actually equalities. This means the segment lengths are equal, so one cannot be strictly nested inside the other.

»
3 years ago, # |
  Vote: I like it +11 Vote: I do not like it

Here's another (Harder) solution for E:

Instead of calculating the answer for $$$X$$$, calculate the maximum $$$X$$$ for each answer, then take a suffix max.

Finding the maximum $$$X$$$ is just finding the node that has the minimum distance to all nodes with $$$f(v) \ge ans$$$; this is easy with an $$$O(N^2)$$$ brute force by iterating $$$ans$$$ from $$$N$$$ down to $$$1$$$ and having addition events for nodes.

Solving it in $$$O(N)$$$ just needs an extra observation: all optimal nodes form a chain up to the root, so you just need to start at the node with the largest $$$f(v)$$$ and, whenever it's better to consider the parent, do that. You can maintain the distances of nodes inside and outside the subtrees by using sets.

»
3 years ago, # |
  Vote: I like it +34 Vote: I do not like it

Problem C
Proof of Bonus 2: a'=a or b'=b

We know that once we apply the third operation, we have to increase $$$b$$$. Let's see how many of these operations are required after the third operation.
Let's say a' = (a|b). So we would have to add $$$2^i$$$, for all bits $$$i$$$, where $$$a'$$$ is 1 and $$$b$$$ is 0. This also means that the $$$i_{th}$$$ bit in $$$a$$$ is 1 and $$$b$$$ is 0.
So, we have to look for the bits where $$$a$$$ is set and $$$b$$$ is not. Let's start greedily. We will look for the highest such bit. Let's say that bit is the $$$x_{th}$$$ bit. Now we have two ways to handle it. We can either increase $$$a$$$ till its $$$x_{th}$$$ bit becomes 0 or we can increase $$$b$$$ till its $$$x_{th}$$$ bit becomes 1. Let's go with the first way. (Second could be explained in a similar way).
Now, to make the $$$x_{th}$$$ bit in $$$a$$$ equal to 0, we would have to keep increasing $$$a$$$ until all bits from $$$0$$$ to $$$x$$$ become 1, and then increase once more so that all these bits become 0 and some bit $$$j$$$ becomes 1, for $$$j>x$$$.

Now if the $$$j_{th}$$$ bit is already set in $$$b$$$, we have found our solution without using the second operation. Else, if the $$$j_{th}$$$ bit was not set in $$$b$$$, we again have a similar problem, now with $$$x=j$$$.
WLOG, this time let's choose to increase $$$b$$$. So we have to increase $$$b$$$ until its $$$j_{th}$$$ bit becomes 1. In doing so, we will first definitely reach a state in which the bits from $$$0$$$ to $$$j-1$$$ in $$$b$$$ become a superset of these bits in the original $$$a$$$ (the $$$a$$$ we were given). (One such example is when all these bits become 1.)
So we see that we could originally just have increased $$$b$$$ to make $$$b$$$ a superset of $$$a$$$, which would have taken fewer operations, instead of first increasing $$$a$$$.

Following this logic, we can see that it's best to either just increase $$$a$$$ or just increase $$$b$$$ before doing the third operation.

»
3 years ago, # |
  Vote: I like it +6 Vote: I do not like it

The idea of problem D is similar to 475D.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

The time limit for B is too tight. Even with O(n), I got over 700 ms, while the time limit is 1 second.

  • »
    »
    3 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    I copied your code and submitted it again: only 46 ms. I don't know why.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

[1632D — New Year Concert] You can also use the fact that for a prefix, there are at most O(log A) different suffix gcd values. This leads to another way to find the bad segments.

Does anybody have code for this method? Thanks.

»
3 years ago, # |
Rev. 2   Vote: I like it +1 Vote: I do not like it

1632D - New Year Concert

How to solve it with a segment tree and two pointers in $$$O(n (\log n + \log A))$$$?

For a query on the segment tree, there are $$$O(\log n)$$$ gcd computations, so each query costs $$$O(\log n\log A)$$$; shouldn't the total complexity then be $$$O(n\log n\log A)$$$?

Similarly, the segment tree + binary search code in the tutorial may have complexity $$$O(n\log^2 n\log A)$$$, although we can do the binary search on the segment tree itself to decrease it to $$$O(n\log n\log A)$$$ as well.

The complexity of the method with a sparse table is fine.

  • »
    »
    3 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Oh, the complexity of the sparse table with two pointers doesn't seem accurate either. Building the sparse table performs $$$O(n\log n)$$$ gcd computations and costs $$$O(n\log n\log A)$$$.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

I have a doubt about the solution code of problem C:

// try every a' = a1 in [a, b)
for(int a1 = a; a1 < b; a1++) {
    int b1 = 0;
    // build b1: copy b's set bits from the top down; at the first position where
    // b has a 0 but a1 has a 1, set that bit and stop (all lower bits stay 0)
    for(int i = 21; i >= 0; i--) {
        if((b >> i) & 1) {
            b1 ^= (1 << i);
        } else {
            if((a1 >> i) & 1) {
                b1 ^= (1 << i);
                break;
            }
        }
    }
    // cost = (a1 - a) + ((a1 | b1) - b) + 1
    ans = min(ans, a1 - a - b + (a1 | b1) + 1);
}

Why have they chosen the inner loop to range from 21 down to 0? Can somebody please explain it to me?

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    a and b are limited to 1e6 < 2^20, so they have at most 20 bits; starting from bit 21 is just a safe upper bound.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anybody explain how to solve E1?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

A general doubt: my code's output on Codeforces differs from my local output for 1632B — Roof Construction. https://codeforces.net/contest/1632/submission/144622647

On Codeforces, for n = 9:

1 2 3 0 4 5 6 7 8

Locally I get (which is correct), for n = 9:

1 2 3 4 5 6 7 0 8

I couldn't find any early termination and I don't use global variables. Looking at the code, there are also no uninitialized variables. I'm guessing it is compiler-specific? (Tried both C++14 and C++17.)

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it
    int bit = log(n-1)/log(2) + 1 ;
    

    This is prone to rounding errors since floating-point numbers are used. Instead, use a simple integer log2(int) function based on bit counting.
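    For example, a minimal sketch of such a bit-counting helper (the name ilog2 is just for illustration):

    int ilog2(int x) {                 // returns floor(log2(x)) for x >= 1, integers only
        int k = 0;
        while (x > 1) { x >>= 1; ++k; }
        return k;
    }

    // ...
    int bit = ilog2(n - 1) + 1;        // replaces log(n-1)/log(2) + 1

    (With GCC you could also use 31 - __builtin_clz(x).)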

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

B Bonus solution is 2*(n-2)

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Too small and seems to work only for $$$n = 3$$$.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem A, shouldn't the complexity be O(1) (instead of O(n))? We compute the answer purely by looking at the length of the string (the n parameter), and that gives us YES or NO in constant time.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Why is my C solution working?

»
3 years ago, # |
  Vote: I like it +3 Vote: I do not like it

What's the use of giving contests when all you see is maths and bits? Great problems?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Is there a way to download the CF test cases? This submission on D is wrong, but I can't debug it. The idea is that I store the (prime, power) pairs and count in how many of the last entries they appear. Then I create a map with GCDs and propagate it downward. If I find a pair where the number of last occurrences == GCD, it is bad. If the smallest number of occurrences is >= the value of the GCD, it is also bad.

I tried generating some small random test cases, but couldn't reproduce the issue.

»
3 years ago, # |
  Vote: I like it +1 Vote: I do not like it

Can anybody help me with the proof for C that a'=a or b'=b? I am not able to get it.

»
3 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

My approach to C: make $$$a$$$ a submask of $$$b$$$, that is, $$$(a | b) = b$$$.

1) We can do $$$a = a+1$$$ and keep doing that until $$$(a | b) = b$$$. 2) We can do $$$b = b+1$$$, which acts like $$$a = a-1$$$ here, and keep doing it until $$$(a | b) = b$$$.

Explanation:

If the $$$i$$$th bit of $$$a$$$ is on but in $$$b$$$ it is off, then $$$a$$$ is not a submask of $$$b$$$. Either you turn off the $$$i$$$th bit of $$$a$$$ (by doing $$$a = a+1$$$ enough times), or you turn on the $$$i$$$th bit of $$$b$$$ (by doing $$$b = b+1$$$ enough times). Yes, you might turn on other bits in the process, but you can always turn them off by doing these operations enough times. Now, if $$$x =$$$ the number of $$$b$$$++ steps needed to turn on the $$$i$$$th bit of $$$b$$$, and $$$y =$$$ the number of $$$a$$$-- steps needed to turn off the $$$i$$$th bit of $$$a$$$, you can actually observe that $$$x = y$$$.

Code

»
3 years ago, # |
Rev. 2   Vote: I like it +10 Vote: I do not like it

A proof for bonus $$$2$$$ in $$$C$$$ (it also proves why it is always optimal to only increase $$$a$$$ until it is submask of $$$b$$$ or only increase $$$b$$$ until it is supermask of $$$a$$$):

Some observations:

  1. $$$x+y$$$ is equivalent to finding two masks $$$m_1$$$ (its set bits are not set in $$$x$$$) and $$$m_2$$$ (its set bits are set in $$$x$$$) where $$$x+y=x+m_1-m_2$$$.
  2. Any application of $$$b:=b+1$$$ after $$$a:=a|b$$$ is replaceable by $$$b:=b+((a|b)-b)$$$ first then $$$a:=a|b$$$ and the cost will be the same. So, we can always find an equivalent solution with $$$a:=a|b$$$ being the last applied operation.

Suppose the set of bits set only in $$$a$$$ is $$$set_a$$$, bits set only in $$$b$$$ are $$$set_b$$$, and bits set in both $$$a$$$ and $$$b$$$ are $$$set_{ab}$$$. Our final goal is to make $$$set_a$$$ empty.

Scenario $$$1$$$: If we apply $$$a:=a+1$$$ only, this means we want to unset all the bits belonging to $$$set_a$$$ in $$$a$$$. If the $$$MSB$$$ of such bits is the $$$i^{th}$$$ bit, it can be seen that is enough to find a $$$k^{th}$$$ bit ($$$k>i$$$) which is in $$$set_b$$$, then do the following:

  1. For all the bits with index $$$<k$$$, unset them in $$$a$$$.
  2. Set the $$$k^{th}$$$ bit in $$$a$$$.

Scenario $$$2$$$: If we apply $$$b:=b+1$$$ only. For all the bits in $$$set_a$$$, set them in $$$b$$$. If the $$$MSB$$$ of such bits is the $$$i^{th}$$$ bit, for all the bits in $$$set_b$$$ with index $$$<i$$$, unset them in $$$b$$$.

Scenario $$$3$$$: If we apply both $$$a:=a+1$$$ and $$$b:=b+1$$$, this is equivalent to doing the following:

  1. Choose some index $$$j$$$ that is not set in $$$a$$$, where we have bits $$$<j$$$ and bits $$$>j$$$ in $$$set_a$$$.
  2. If the $$$j^{th}$$$ bit is not set in $$$b$$$, set it.
  3. Apply Scenario $$$1$$$ on the bits whose $$$MSB$$$ is $$$j$$$.
  4. If the $$$MSB$$$ of bits in $$$set_a$$$ is the $$$i^{th}$$$ bit, apply scenario $$$2$$$ on the bits from $$$i$$$ to $$$j+1$$$, and for all the bits with index $$$<j$$$, unset them in $$$b$$$.

Claim: Scenario $$$2$$$ is always better than Scenario $$$3$$$.

Proof: Let's see which bits are set and which are unset in Scenario $$$3$$$ compared with Scenario $$$2$$$. In both scenarios the cost is the same for the bits from the $$$i^{th}$$$ bit to the $$$(j+1)^{th}$$$ bit.

Starting from $$$j^{th}$$$ bit (iteration order is from $$$MSB$$$ to $$$LSB$$$), we can notice the following differences:

  1. If the $$$j^{th}$$$ bit is set in $$$b$$$, Scenario $$$2$$$ unsets it in $$$b$$$ while Scenario $$$3$$$ sets it in $$$a$$$. If the $$$j^{th}$$$ bit is not set in $$$b$$$, Scenario $$$2$$$ keeps it unset while Scenario $$$3$$$ sets it in both $$$a$$$ and $$$b$$$.
  2. Starting from the $$$(j-1)^{th}$$$ bit:
  • For all the bits in $$$set_b$$$, Both Scenario $$$2$$$ and Scenario $$$3$$$ unset them in $$$b$$$.
  • For all the bits in $$$set_a$$$, Scenario $$$2$$$ sets them in $$$b$$$ while Scenario $$$3$$$ unsets them in $$$a$$$.
  • For all the bits in $$$set_{ab}$$$, Scenario $$$2$$$ keeps them set in both $$$a$$$ and $$$b$$$ while Scenario $$$3$$$ unsets them in both $$$a$$$ and $$$b$$$.

The previous differences imply:

  1. The contribution of the $$$j^{th}$$$ bit to the cost is $$$2$$$ times higher in Scenario $$$3$$$ whether the $$$j^{th}$$$ bit is set in $$$b$$$ or not.
  2. Starting from the $$$(j-1)^{th}$$$ bit, the contribution of the bits in $$$set_a$$$ and $$$set_{ab}$$$ is $$$2$$$ times higher in Scenario $$$2$$$, and the contribution of the bits in $$$set_b$$$ is similar in both scenarios.

Conclusion: Since $$$set_a$$$ and $$$set_{ab}$$$ are disjoint sets representing bits set in $$$a$$$, the contribution of the $$$j^{th}$$$ bit will be always higher.

»
3 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

I don't understand why my div2B solution works: using the highest power-of-two bit and looping down. Can someone give me a proof?

https://codeforces.net/contest/1632/submission/144798711

  • »
    »
    3 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    No matter what, some number with the max bit will be next to a number without the max bit, so the answer is at least 2^(max bit). So you put all the numbers with the max bit to the left (those don't matter), put the smallest number with the max bit (that one matters) next to the 0, which gives the answer, and then the rest, which don't matter.

»
3 years ago, # |
Rev. 2   Vote: I like it +19 Vote: I do not like it

Alternative solution for 1632E2 - Distance Tree (hard version) (probably someone explained it here).

Horrible wall of text
»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Why is it optimal in C to apply the | only at the end? In the general case we would apply x1, y1 operations of the 1st and 2nd kind, then take the |, then again x2, y2 operations, and so on...

»
3 years ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Would BFS work for problem C?

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

For problem C, I don't understand how to construct the optimal version of b'.

»
3 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Interestingly, problem D can be implemented using $$$O(\log A)$$$ memory; there is no need to even store the array.

»
2 years ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

Hi all. Could you provide the rationale behind problem C's editorial for getting the optimal b' from a' and b?

»
19 months ago, # |
  Vote: I like it 0 Vote: I do not like it

E can be reduced to the following problem:

Given a tree, perform the following queries:

  1. Add a new child v to node u
  2. Query the diameter of the tree.

(We can solve this by creating another version of the tree which we build in a bottom-up manner.) Implementation: Link

»
10 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I solved problem C with time complexity O(log(b)^2): 239809162