SomethingNew's blog

By SomethingNew, history, 15 months ago, translation, In English

We apologize for the technical difficulties in Task B. We hope you enjoyed the rest of the contest. We will add hints soon.

1870A - MEXanized Array

Tutorial

1870B - Friendly Arrays

Tutorial

1870C - Colorful Table

Tutorial

1870D - Prefix Purchase

Tutorial

1870E - Another MEX Problem

Tutorial

1870F - Lazy Numbers

Tutorial

1870G - MEXanization

Tutorial
Proof of asymptotic complexity for the query

1870H - Standard Graph Problem

Tutorial

Please rate the problems; it will help us make better problems next time!

Problem Feedback
  • Vote: I like it
  • +205
  • Vote: I do not like it

»
15 months ago, # |
  Vote: I like it +15 Vote: I do not like it

FastEditorialForces

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Thanks for the fast tutorial!

»
15 months ago, # |
  Vote: I like it +11 Vote: I do not like it

bad tests for E

»
15 months ago, # |
  Vote: I like it +2 Vote: I do not like it

Can someone explain the editorial of D in more detail?

  • »
    »
    15 months ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    Basically, the idea in this problem is to greedily choose which prefix to use. As the editorial explains, we first choose the minimum cost that is farthest from the start (say the costs are 3 4 3 6 5 and k = 11; we pick the last 3) and buy that prefix as many times as possible, so whatever we have left over is k % 3. With this remainder we can improve our solution: in the example above the remainder is 2, and we can upgrade one of the 3s to a 5 because the difference is <= the remainder. We repeat this process, and each time the remainder keeps getting smaller. Another requirement is that the number of upgrades you can make from the previous step to the next step is upper bounded by the count of the previous step. Think about why this makes sense: you cannot upgrade more purchases than currently exist. You can check out my profile for a simple solution using maps and a special type of suffix array. Let me know if you need any clarifications.
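    A rough sketch of this greedy (illustrative variable names, 0-indexed, one test case, assuming positive costs; not my actual submission):

    #include <bits/stdc++.h>
    using namespace std;

    // Sketch of the greedy described above. ans[i] = how many purchased prefixes cover position i.
    int main() {
        int n;
        long long k;
        scanf("%d", &n);
        vector<long long> c(n);
        for (auto &x : c) scanf("%lld", &x);
        scanf("%lld", &k);

        vector<long long> ans(n, 0);
        long long prevCount = LLONG_MAX;  // cap: we cannot upgrade more purchases than exist
        long long prevCost = 0;           // cost of the prefix we are upgrading from
        int pos = -1;
        while (k > 0) {
            // cheapest element strictly to the right of pos, rightmost on ties
            int best = -1;
            for (int i = pos + 1; i < n; i++)
                if (best == -1 || c[i] <= c[best]) best = i;
            if (best == -1) break;
            long long price = c[best] - prevCost;   // upgrade cost per purchase (positive by choice of pos)
            long long cnt = min(prevCount, k / price);
            if (cnt == 0) break;
            ans[best] = cnt;                        // cnt purchases now reach position best
            k -= cnt * price;
            prevCount = cnt;
            prevCost = c[best];
            pos = best;
        }
        // positions up to a checkpoint are covered by every purchase reaching that checkpoint or further
        long long cur = 0;
        for (int i = n - 1; i >= 0; i--) {
            if (ans[i] > 0) cur = ans[i];
            ans[i] = cur;
        }
        for (int i = 0; i < n; i++) printf("%lld ", ans[i]);
        printf("\n");
        return 0;
    }

    On the example above (costs 3 4 3 6 5 with k = 11) it prints 3 3 3 1 1.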

    • »
      »
      »
      15 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Hello SaiAnoop, can you help explain why it is possible that the number of upgrades you can make from the previous step to the next step is not upper bounded by the previous step when $$$c_i \ge c_{i-1}$$$? I did not use this restriction and failed.

      • »
        »
        »
        »
        15 months ago, # ^ |
          Vote: I like it +3 Vote: I do not like it

        Actually, I don't know if you read it wrong or if I didn't say it correctly, but the number of upgrades you can make from a previous step to a next step is upper bounded by the previous step. In other words, say I have costs x, y, z with x < y < z. First, I maximize the number of x; then, with the remainder k % x, we try to replace some of the x with y. This number is remainder / (y - x). We continue this process until the remainder equals 0 or we reach the end of the array.

        Now, to see why we need the upper bound, consider this example (this is what I used to figure out the logic during the contest): 99 99 99 10 99 99 15 99 99 16 99 99 99, k = 219. Based on our algorithm, we first do 21 x 10 = 210, remainder = 9. Using these 9, the next best element is 15, so we do 9 / 5 = 1 and 9 % 5 = 4. Now, if we don't upper bound, we will end up doing 4 / (16 - 15) = 4 upgrades to 16, but this doesn't quite work: remember that the original remainder was 9, and 4 * (16 - 10) comes out as 24. So we are overcounting, and therefore we need to upper bound by the previous step.

        Without the math, in logical terms: we are trying to upgrade some of the previous steps into next steps, and you cannot possibly upgrade more next steps than the total number of previous steps. I hope this helps.

        • »
          »
          »
          »
          »
          15 months ago, # ^ |
            Vote: I like it +8 Vote: I do not like it

          Thank you very much for your careful reply, especially the wonderful example. I have understood the solution.

          • »
            »
            »
            »
            »
            »
            15 months ago, # ^ |
              Vote: I like it +8 Vote: I do not like it

            ofc. glad I could help!

            • »
              »
              »
              »
              »
              »
              »
              15 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it
              if(lastind != -1) {
                  q = min(q, ans[lastind]);
              }
              

              can you explain why was this required

              I got WA when I commented it out, but I don't understand why.

              • »
                »
                »
                »
                »
                »
                »
                »
                15 months ago, # ^ |
                  Vote: I like it +1 Vote: I do not like it

                If you see how I initialize lastind, it is initially set to -1 just as a default value. If it's equal to -1, then it's a dummy value and not actually the last index, so we check to make sure it's a real last index and not a dummy value.

    • »
      »
      »
      15 months ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      I have made the same approach as you said, but I got some wrong test cases. This is my submission: https://codeforces.net/contest/1870/submission/224067985

      In every loop I repeat what I did with the first minimum element, but with k % last_minimum_element, then store the answer in a vector of pairs containing the value to print and the index to print up to.

      Why does it give me a wrong answer?

    • »
      »
      »
      15 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I have done the same approach as yours for D: https://codeforces.net/contest/1870/submission/224250810. Can you say where I am wrong? Give me a test case.

»
15 months ago, # |
  Vote: I like it +11 Vote: I do not like it

In G, it is possible to replace segtree with DSU to find nearest position with value smaller than $$$p$$$, using the fact that these $$$p$$$ are frequencies so sum of $$$p$$$ is small, and we never decrease them.

»
15 months ago, # |
  Vote: I like it -18 Vote: I do not like it

MEXForces was fun, E and G were nice IMO

  • »
    »
    15 months ago, # ^ |
      Vote: I like it -11 Vote: I do not like it

    We'll talk when you have 1st place on bacterii2.

»
15 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Can anyone help with why my submission for D is not working? Submission

I used binary search to determine the number of operations which must be transferred from $$$c_i$$$ to $$$c_{i+1}$$$; is this wrong?

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    You can take a look at this solution; I also used binary search to find how much should be transferred from the current element to the smallest element to its right. https://codeforces.net/contest/1870/submission/224001124

    • »
      »
      »
      15 months ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      Thanks. In your approach you start the resulting array from the maximum element from the left; why don't we start the resulting array from the minimum element from the left? Greedily, I guess we should start from the minimum on the left. Please correct me if I am wrong.

      • »
        »
        »
        »
        15 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        That is what I am doing: I take the minimum element (mn), go to it, and after that I get the smallest element to the right of mn (which is obviously greater than mn, as mn was the smallest number so far), make the transfers if possible, and go on; if no transfer was possible, then break.

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Why is your code so long? :sob:

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Did anyone else lose problem 3 just because of not being able to understand the point (the rectangle must include all positions of the same colour), or is it only me?

  • »
    »
    15 months ago, # ^ |
    Rev. 5   Vote: I like it 0 Vote: I do not like it

    I managed to submit it in the last two minutes of the contest. So basically this rectangle should contain every element of the same color (a number less than or equal to k). That means the upper bound of this rectangle equals the minimal i over all occurrences of that color, and likewise left bound = minimal j, bottom bound = maximal i, right bound = maximal j.

    So for example, if you have matrix like this:

    1 1 1 1
    1 1 2 1
    1 2 1 2
    1 1 2 1
    

    Both sides of the rectangle of color 2 will be equal to 3:

    * * * *
    * # # #
    * # # #
    * # # #
    

    Hope that helps.
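    As a toy illustration of the bounding-rectangle idea (a standalone example on an explicit matrix, not the intended fast solution of C):

    #include <bits/stdc++.h>
    using namespace std;

    // For every value appearing in the matrix, track the minimal and maximal
    // row/column among its cells and report the resulting rectangle sides.
    int main() {
        vector<vector<int>> m = {
            {1, 1, 1, 1},
            {1, 1, 2, 1},
            {1, 2, 1, 2},
            {1, 1, 2, 1},
        };
        map<int, array<int, 4>> box;  // value -> {min_i, min_j, max_i, max_j}
        for (int i = 0; i < (int)m.size(); i++)
            for (int j = 0; j < (int)m[i].size(); j++) {
                int v = m[i][j];
                auto it = box.find(v);
                if (it == box.end()) box[v] = {i, j, i, j};
                else {
                    auto &b = it->second;
                    b[0] = min(b[0], i); b[1] = min(b[1], j);
                    b[2] = max(b[2], i); b[3] = max(b[3], j);
                }
            }
        for (auto &[v, b] : box)
            printf("color %d: height = %d, width = %d\n", v, b[2] - b[0] + 1, b[3] - b[1] + 1);
        return 0;
    }

    For the matrix above this prints a 3 by 3 rectangle for color 2, matching the picture.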

»
15 months ago, # |
Rev. 8   Vote: I like it -49 Vote: I do not like it

SomethingNew, in problem 1870E - Another MEX Problem, why can't we think of the DP in this way?

dp(i,j) -> max bitwise XOR up to index i inclusive, where the mex of the last subarray used to get it is j. But after defining it like that, it gets stuck afterwards. So what is the intuition behind the bool DP that you used? It does not come to my thought process.

Is it just a matter of trying to apply a concept and then seeing if it works, or is there something else to consider?

Another DP I thought of was:

dp(i,j,k) -> maximum bitwise XOR up to index i inclusive, where the last subarray is of length j and its mex is k, and we precalculate the mex of all ranges from 1 to n, i.e. a 2D mex table.

Why is it wrong? Is it that the optimal substructure does not hold with this DP definition, or that we can't make transitions?

Any help will be very much appreciated.
Thank you.

  • »
    »
    15 months ago, # ^ |
    Rev. 5   Vote: I like it 0 Vote: I do not like it

    .

  • »
    »
    15 months ago, # ^ |
      Vote: I like it +20 Vote: I do not like it

    XOR is not like sum: the maximum XOR isn't necessarily obtained from another maximum XOR; it can possibly be obtained from any other XOR value as well. That's why your DP is wrong.

    • »
      »
      »
      15 months ago, # ^ |
      Rev. 3   Vote: I like it -16 Vote: I do not like it

      devesh_7, thanks for your time, but can't we do it like this:

      Let DP(i,j,k) -> max bitwise XOR up to index i inclusive, where the last subarray taken has mex j and length k. Also pre-calculate a 2D mex table so that we can access the mex of any range in constant time.

      Now the transitions will be:

      DP(i,j,k) = max( DP(i-k,p,q) XOR mex(i-k+1,i) ) where p ranges from 0 to n+1 and q ranges from 1 to i-k.

      What would be wrong with this? I am more interested in why it will not work. SomethingNew and glebustim, can you please also look into this?

      • »
        »
        »
        »
        15 months ago, # ^ |
          Vote: I like it +10 Vote: I do not like it

        You missed the whole point of my comment. The max XOR is not necessarily obtained from another maximum XOR. That's why we are interested in all possible XOR values.

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Curse of low-rated questions, that's why it is so downvoted :)

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Why is it wrong?

    It is not wrong. You can define a DP in any way you like.

    A given DP is said to have Optimal Substructure Property if the optimal solution of the given problem can be obtained by using the optimal solution to its subproblems instead of trying every possible way to solve the subproblems.

    Being able to make transitions and having optimal substructure have similar meaning.

    I think your problem was downvoted because your fundamentals are not clear. And also, you are tagging the author himself for such doubts. Generally, people are busy and don't have enough time to reply.

»
15 months ago, # |
Rev. 3   Vote: I like it +66 Vote: I do not like it

E tests are weak, it was necessary to add a test

0, 1, ..., n/4-1, 0, 1, ..., n/4-1, 0, 1, ..., n/4-1, 0, 1, ..., n/4-1

This breaks my solution

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Can you please explain your solution and why this case breaks it? I'm just curious.

    • »
      »
      »
      15 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      If there are two segments [l, r1], [l, r2] (r1 < r2) with the same mex, then in DP I iterate only the first one. There can be O(n^2) such segments and my solution works in O(n^3)

      • »
        »
        »
        »
        15 months ago, # ^ |
          Vote: I like it +8 Vote: I do not like it

        Lol, I did that but with the roles of l and r reversed. Got TLE on pretest 8.

      • »
        »
        »
        »
        15 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Hi @dimss, I saw your solution for problem E and it's really nice, but I am not getting how it works in O(n^2). Can you please explain the time complexity? I just have a doubt about where you are updating the other state using vector_pair.

»
15 months ago, # |
Rev. 6   Vote: I like it -12 Vote: I do not like it

A similar, easier version of problem C: max distance of a number greater than a given number in an array

»
15 months ago, # |
Rev. 2   Vote: I like it +63 Vote: I do not like it

Do you remember when the MEX operation first appeared and felt like something rare and fresh?

Ah, good times

(This is just my salty comment, don't take it too seriously)

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

My approach is failing on the 1346th test of test case 2. Anyone else?

It is based on Boolean algebra (assume bitwise OR '|' is represented by '+').

  1. For every $$$b_i$$$ I get two cases (by expanding the XOR and doing some Boolean algebra), depending on whether $$$n$$$ is even or odd.
     a. When $$$n$$$ is even: $$$(a_1+b_i)\oplus(a_2+b_i)\oplus\dots\oplus(a_n+b_i) = (a_1\oplus a_2\oplus\dots\oplus a_n)\,\&\,\overline{b_i}$$$.
     b. When $$$n$$$ is odd, i.e. $$$n = 2x+1$$$: $$$(a_1+b_i)\oplus(a_2+b_i)\oplus\dots\oplus(a_{2x+1}+b_i) = ((a_1\oplus a_2\oplus\dots\oplus a_{2x})\,\&\,\overline{b_i})\oplus(a_{2x+1}+b_i)$$$.

As for going through all combinations of ORs, I have done something similar to the editorial. Since OR only ever accumulates bits, I sorted b and did the above operation for each $$$b_i$$$ with the cumulative OR up to that point.

If anyone else has experienced the same, or could provide some insight, I'd be grateful.

Thank you very much.

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Homie, are you sure this is going to work? Using Boolean algebra complicates it instead of simplifying it.

  • »
    »
    15 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Rather, think of it this way:

    Consider the case where $$$n$$$ is even: if we take any number from b and OR it in, then some already-set bits stay set and some unset bits become set; since the XOR of two set bits is 0, the OR can only make the result smaller. It's the opposite in the odd case.

    Also, the largest value you can OR in is the OR of the whole array b.
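    For instance, a minimal sketch of this argument (assuming that only the OR of the chosen b values matters, since each operation ORs a chosen b into every a_i; input read as n and m, then a, then b; prints the minimum and then the maximum):

    #include <bits/stdc++.h>
    using namespace std;

    // Parity argument: bits forced on in all n elements contribute n times to the XOR,
    // so they cancel for even n and are forced on for odd n.
    int main() {
        int n, m;
        scanf("%d %d", &n, &m);
        long long xa = 0, orb = 0, x;
        for (int i = 0; i < n; i++) { scanf("%lld", &x); xa ^= x; }
        for (int j = 0; j < m; j++) { scanf("%lld", &x); orb |= x; }

        long long mn, mx;
        if (n % 2 == 0) {           // forced bits vanish from the XOR
            mn = xa & ~orb;         // OR in everything to clear as many bits as possible
            mx = xa;                // OR in nothing
        } else {                    // forced bits survive in the XOR
            mn = xa;                // OR in nothing
            mx = xa | orb;          // OR in everything to gain as many bits as possible
        }
        printf("%lld %lld\n", mn, mx);
        return 0;
    }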

»
15 months ago, # |
  Vote: I like it +11 Vote: I do not like it

Thanks for the contest! I noticed some small typos in the editorial for E:

  • Where it says "there is at most one segment ... where $$$a_l \leq a_r$$$", it should be $$$a_l \geq a_r$$$.

  • Where it says $$$MEX(l, r) > a_{r_2}$$$, it should be $$$MEX(l, r_1) > a_{r_2}$$$.

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

In the editorial for E, $$$a_l$$$ is sometimes called smaller than $$$a_r$$$ and sometimes larger. I think it should always be larger for the proof to work.

»
15 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Editorial E:

Let's solve the problem using dynamic programming, let's store dp[i][j] such that dp[i][j]=1 if it is possible to obtain an XOR of MEX values from the prefix up to i (excluding i) equal to x.

Did you mean equal to j (instead of x)?

»
15 months ago, # |
Rev. 2   Vote: I like it +139 Vote: I do not like it

Anyway E is really bad for any kind of programming contest, since it's so much easier to come up with the correct solution and decide to submit it than it is to prove that it's correct. Your proof is really simple but it's pretty hard to come up with in my opinion

  • »
    »
    15 months ago, # ^ |
      Vote: I like it +72 Vote: I do not like it

    I think this is the difference between programming and math: you don't need to prove something if it is reasonable.

    • »
      »
      »
      15 months ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      What is reasonable? I came up with the correct solution pretty fast, and I think that the fact that I couldn't find the proof for a much longer time means that it was not reasonable to assume that it works. Usually when I don't prove something during a contest it's because I understand that I can prove it in a couple of minutes, and that's what I would call reasonable. Otherwise it's no different from submitting every idea you have when you don't know how to counter it, and I think that penalties for wrong submissions are meant to disincentivize that kind of behaviour, so it's not intended for non-IOI style contests.

      • »
        »
        »
        »
        15 months ago, # ^ |
          Vote: I like it +77 Vote: I do not like it

        • »
          »
          »
          »
          »
          15 months ago, # ^ |
            Vote: I like it +2 Vote: I do not like it

          That's roughly the answer I expected :) The meme could've been in English, though.

      • »
        »
        »
        »
        15 months ago, # ^ |
          Vote: I like it +23 Vote: I do not like it

        You shouldn't necessarily submit every idea you can't come up with a break case for, instead if you suspect something is true (but don't have a proof), why not write some code to generate all arrays of size 9, and see that the maximum number of important subarrays is on the order of n rather than n^2 (affirming your suspicion), and then implement your solution. If you're still interested in it, you can work on/learn about the proof after the contest.

        The difference between a programming and math contest is in a programming contest you have a computer, you should use it.
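        For instance, a rough brute force along these lines (here "important" is taken to mean the editorial's irreplaceable segments, i.e. the mex changes when either endpoint is shrunk, and the array length is kept small so it finishes quickly):

        #include <bits/stdc++.h>
        using namespace std;

        // For each small n, enumerate every array of length n with values in [0, n]
        // and count segments whose mex changes when either endpoint is shrunk;
        // print the maximum count seen over all arrays.
        int mexOf(const vector<int>& a, int l, int r) {
            vector<char> seen(a.size() + 2, 0);
            for (int i = l; i <= r; i++) seen[a[i]] = 1;
            int m = 0;
            while (seen[m]) m++;
            return m;
        }

        int main() {
            for (int n = 1; n <= 6; n++) {
                long long total = 1;
                for (int i = 0; i < n; i++) total *= n + 1;   // (n+1)^n arrays
                int best = 0;
                vector<int> a(n);
                for (long long code = 0; code < total; code++) {
                    long long c = code;
                    for (int i = 0; i < n; i++) { a[i] = c % (n + 1); c /= n + 1; }
                    int cnt = 0;
                    for (int l = 0; l < n; l++)
                        for (int r = l; r < n; r++) {
                            int m = mexOf(a, l, r);
                            bool shrinkable = (l + 1 <= r && mexOf(a, l + 1, r) == m)
                                           || (l <= r - 1 && mexOf(a, l, r - 1) == m);
                            if (!shrinkable) cnt++;
                        }
                    best = max(best, cnt);
                }
                printf("n = %d: at most %d irreplaceable segments\n", n, best);
            }
            return 0;
        }

        If the editorial's 2n bound is right, the printed maxima should grow roughly linearly rather than quadratically.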

        • »
          »
          »
          »
          »
          15 months ago, # ^ |
            Vote: I like it +40 Vote: I do not like it

          Even that is an unnecessary 10 minutes wasted (assuming you're really fast at coding all that), and small numbers won't even tell you much, since the difference between O(n^2) (with a small constant) and O(n) (with a high constant) isn't even that large.

          If a proof is harder than the rest of the problem, that's a bad problem for a programming contest, or any contest where you don't need to write a proof.

          • »
            »
            »
            »
            »
            »
            15 months ago, # ^ |
            Rev. 2   Vote: I like it +25 Vote: I do not like it

            I don't think it's a waste of 10 mins if it allows you to solve the problem, but the constant could obfuscate it (agree)

            I see your point about the proof, I personally feel like it's still an interesting problem, and it doesn't necessarily make it bad, but I see why you disagree

  • »
    »
    15 months ago, # ^ |
      Vote: I like it +20 Vote: I do not like it

    I came up with "there should be a small number of expandable intervals" pretty fast but didn't bother to prove it. Solved it in a different way using dp[xor] = the smallest prefix with xor obtainable.

  • »
    »
    15 months ago, # ^ |
      Vote: I like it +15 Vote: I do not like it

    I've come up with a solution that needs no proof though; I just want to share it because it's really nice. Here is my code.

    (This is my first time writing a solution, so sorry if it's too long lol; I recommend reading the code first before coming back here for clarification. TLDR: use a traditionally approached dp and make sure not to repeat an $$$i$$$ $$$XOR$$$ $$$j$$$ operation with the same $$$(i,j)$$$ too many times.)

    Here are the steps:

    • Calculate every achievable $$$XOR$$$ value of $$$MEX$$$ of subarrays of prefix up to $$$i$$$-th element. (In other words, replace the initial array with its prefix (1 to $$$i$$$) then calculate all possible values). DP $$$i$$$ from 1 to $$$n$$$ then store it in a vector $$$can[][]$$$.

    • To compute $$$can[i]$$$, iterate backward $$$j$$$ from $$$i$$$ to 1, calculate $$$mex[j,i]$$$ (the transition from $$$j$$$ to $$$j-1$$$ can easily be done in $$$O(1)$$$), and then $$$XOR$$$ $$$mex[j,i]$$$ with every element of $$$can[j-1]$$$, then store the results in $$$can[i]$$$.

    • $$$can$$$ stores $$$O(n^2)$$$ elements, let's make it $$$O(n)$$$: to avoid repetition, every element of $$$can[j]$$$ with $$$j<i$$$ won't appear in $$$can[i]$$$. To do so, just use a $$$vector<bool>$$$ check to mark whether a number was already stored while iterating the dp. // It will be helpful later on to reduce time complexity.

    • Here's the tricky part: the complexity of the above process is $$$O(n^3)$$$, so let's optimize it to $$$O(n^2)$$$.

      • Note that the value of $$$mex[j,i]$$$ can't exceed $$$n$$$. Therefore, we can use another $$$vector<vector<bool> > CHECK$$$ (sorry I was low on time while coding this lol) where $$$CHECK[i][j]$$$ returns true when the number j was already $$$XOR$$$-ed with all of the elements in $$$can[i]$$$ (and false otherwise).
      • While iterating $$$MEX[j,i]$$$ $$$XOR$$$ with $$$can[i]$$$, (let $$$pt=MEX[j,i]$$$) if $$$CHECK[j-1][pt]==0$$$ then we $$$XOR$$$ $$$pt$$$ with $$$can[j-1]$$$ and then continue with $$$j-2$$$, if $$$CHECK[j-2][pt]==0$$$ then continue with $$$j-3$$$, and so on. It's easy to prove (ok probably the only part we need to prove lol) that after the process, we can ensure $$$pt$$$ was $$$XOR$$$-ed with all possible $$$XOR$$$ values of prefix up to $$$j-1$$$ (aka $$$CHECK[k][pt]==1$$$ for all $$$k$$$ from 1 to $$$j-1$$$)
      • After the optimization, we can ensure that for every $$$j \in [1,n]$$$, every possible $$$XOR$$$ values (that is stored in $$$can[][]$$$) was $$$XOR$$$-ed with $$$j$$$ at most 1 time. Therefore the total number of $$$XOR$$$ operations we have to carry out is at most $$$O(n^2)$$$, resulting in the overall complexity $$$O(n^2)$$$.

    done phew. Thanks for reading all the way here xD. Plz let me know if there are any unclear parts.

    If only I had 15 more minutes to submit it during the contest T_T. See you again, yellow.

»
15 months ago, # |
  Vote: I like it -8 Vote: I do not like it

Why does my code give WA on test 4?

PROBLEM D

https://codeforces.net/contest/1870/submission/223922693

»
15 months ago, # |
  Vote: I like it +5 Vote: I do not like it

I tried to solve problem D using binary search for the contribution of each element, but I am getting WA on test 3.
Can someone point out the mistake if you have also implemented the same idea: 223895614

  • »
    »
    15 months ago, # ^ |
    Rev. 2   Vote: I like it +1 Vote: I do not like it

    We made the same mistake.

    This is happening because we started replacing elements from the end, which might lead to a smaller array lexicographically. This happens when you use the same number of coins, but there exists a position between two positions in your solution such that you could've taken extra coins from the larger index to increase the prefix spanned by the smaller index. For example-

    Consider n = 5, c = [10 2 2 2 3] & k = 7

    The answer is [3 3 2 2 0] in this case

    but you find the answer to be [3 3 1 1 1]

    Edit: This example is wrong. However, a correct example is in the following comments!

    • »
      »
      »
      15 months ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      But the ans will be [3 3 3 3 1] and my code is working right for this test case.

      • »
        »
        »
        »
        15 months ago, # ^ |
        Rev. 3   Vote: I like it +1 Vote: I do not like it

        Ah, okay, so my example was wrong, but the whole idea why the solution doesn't work is the same.

        Try this instead-

        n = 4

        c = [40 41 46 47]

        k = 253

        Your code (I tested it this time) answers [6 2 2 1]

        But if I buy five c2 and one c4, I'd use a total of 243 coins and get the answer [6 6 1 1]

        And that is lexicographically larger.

        • »
          »
          »
          »
          »
          15 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Thank you so much for pointing the mistake.

        • »
          »
          »
          »
          »
          15 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Excuse me, but isn't the answer for this test

          $$$ans = [6,6,1,1]$$$?

          You can buy $$$c_2$$$ $$$5$$$ times, which costs $$$205$$$, so you have $$$253 - 205 = 48$$$ coins remaining, which is sufficient to buy $$$c_4$$$ once.

          So by that we obtain a lexicographically larger answer.

          Correct me if I misunderstood something.

          • »
            »
            »
            »
            »
            »
            15 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            Yes, thank you for pointing out the mistake. I was trying to show that there exists a better answer- I wasn't looking at the final solution. My bad!

            • »
              »
              »
              »
              »
              »
              »
              15 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              No problem, my friend. Thanks for that test, by the way; it showed me what was wrong in my approach and pointed me to the right solution!

              So, I'm the one who needs to thank you ^^

»
15 months ago, # |
  Vote: I like it +185 Vote: I do not like it

I have an alternative solution to problem E.

Notice that the maximum MEX of subarray will be O(n), and the maximum XOR of MEXs of subarrays will also be O(n).

You can notice that for a fixed Y, you only care about the first position X, such that you can make the XOR of chosen subarrays equal to Y, while the last subarray ends at position X. This is easy to prove, and I will leave that as an exercise for the reader.

You can also precalculate nxt[l][M] for each pair of (subarray starts at least at position l, MEX of subarray is M). This will store the least position R, such that there exists a subarray [L;R], where MEX is equal to M, and L >= l, if it exists, and -1(something to show that it's not possible) otherwise.

Now what you can do is write a Dijkstra in O(n^2) where in dist[Y] you store the least position such that you can choose some subarrays where the last of them ends at position dist[Y] and the XOR of their MEXs is equal to Y. To make a transition, you iterate over the value of the MEX of the next subarray, and using the precalculated nxt[l][M], you know whether you can make such a transition or not, and what the least next position could be.
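A rough sketch of this (one test case, 1-indexed, assuming n is small enough for an O(n^2) nxt table; not my actual submission):

#include <bits/stdc++.h>
using namespace std;

// nxt[l][M] = least R such that some subarray [L, R] with L >= l has MEX exactly M.
// dist[Y]   = least position where the last chosen subarray can end with XOR of MEXes = Y.
int main() {
    int n;
    scanf("%d", &n);
    vector<int> a(n + 1);
    for (int i = 1; i <= n; i++) scanf("%d", &a[i]);

    const int INF = INT_MAX;
    int V = 1;
    while (V <= n + 1) V <<= 1;                        // all MEXes and XOR values fit below V

    vector<vector<int>> nxt(n + 2, vector<int>(n + 2, INF));
    for (int L = n; L >= 1; L--) {
        vector<int> cnt(n + 3, 0);
        int mex = 0;
        for (int R = L; R <= n; R++) {                 // for a fixed L, the MEX only grows with R
            if (a[R] <= n + 1) cnt[a[R]]++;
            while (cnt[mex] > 0) mex++;
            nxt[L][mex] = min(nxt[L][mex], R);
        }
        for (int M = 0; M <= n + 1; M++)               // allow the subarray to start at any L' >= L
            nxt[L][M] = min(nxt[L][M], nxt[L + 1][M]);
    }

    // Dijkstra over XOR values; the "distance" is the end position of the last subarray.
    vector<int> dist(V, INF);
    vector<char> done(V, 0);
    dist[0] = 0;                                       // the empty selection "ends" at position 0
    for (int it = 0; it < V; it++) {
        int y = -1;
        for (int v = 0; v < V; v++)
            if (!done[v] && dist[v] != INF && (y == -1 || dist[v] < dist[y])) y = v;
        if (y == -1) break;
        done[y] = 1;
        if (dist[y] >= n) continue;                    // no room for another subarray
        for (int M = 0; M <= n + 1; M++) {
            int r = nxt[dist[y] + 1][M];
            if (r != INF && r < dist[y ^ M]) dist[y ^ M] = r;
        }
    }

    int ans = 0;
    for (int v = 0; v < V; v++)
        if (dist[v] != INF) ans = max(ans, v);
    printf("%d\n", ans);
    return 0;
}

The answer is the largest Y whose dist is finite; the graph is dense, so the O(V^2) Dijkstra without a priority queue is enough.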

  • »
    »
    15 months ago, # ^ |
      Vote: I like it +38 Vote: I do not like it

    Oh, this is so much more intuitive than the intended solution. Honestly, I even feel bad now for not thinking about that during the contest...

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Could you share the intuition why this approach works?

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I implemented it but got TLE on test 10.

    • »
      »
      »
      15 months ago, # ^ |
      Rev. 2   Vote: I like it +3 Vote: I do not like it

      Hey, so, first of all, you don't have to use a set to get the MEX of all subarrays; you can do it in $$$O(n^2)$$$ total time complexity by fixing one border of the subarray and iterating over the other, and since the MEX can only increase, it's possible to use two pointers (store the values you already have in some bool array, or something like that).

      Secondly, you don't need to use Dijkstra with a priority_queue; the given graph is not sparse (in fact it's complete), so it would be better to use Dijkstra in $$$O(n^2 + m)$$$, without a priority_queue.

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

I'm confused about the explanation for E and would appreciate it if anyone could explain where I am going wrong.

From my understanding, the tutorial is saying that each position will show up as the smaller extreme in only one irreplaceable segment. However, take [0, 1, 2]. Isn't 0 the smaller extreme in both [0, 1] and [0, 1, 2], which are both irreplaceable?

  • »
    »
    15 months ago, # ^ |
    Rev. 3   Vote: I like it +1 Vote: I do not like it

    It should actually be, "$$$a_i$$$ can be the larger element and the left extreme at most once, and it can be the larger element and the right extreme at most once." SomethingNew

»
15 months ago, # |
  Vote: I like it -8 Vote: I do not like it

Can anyone tell me what is wrong in this solution? It is giving WA on test case 4 and I don't know how to debug it.

223949565

»
15 months ago, # |
  Vote: I like it +9 Vote: I do not like it

I came up with the solution to the problem $$$E$$$ with the complexity $$$O(NP)$$$ where $$$P$$$ stands for the number of segments $$$[l, r]$$$ so that $$$mex[l, r]$$$ is equal neither to $$$mex[l+1, r]$$$ nor $$$mex[l, r-1]$$$. Can anyone bound the number of such segments better than $$$O(N^2)$$$?

»
15 months ago, # |
Rev. 2   Vote: I like it +10 Vote: I do not like it

I think calculating $$$\text{dfs}(x)$$$ in $$$O(\log n)$$$ is the hardest part of 1870F - Lazy Numbers. However, some people managed to compute it in $$$< 10$$$ lines using some magic (for example, getid in 223885585). Could anyone elaborate?

  • »
    »
    15 months ago, # ^ |
      Vote: I like it +20 Vote: I do not like it

    Suppose that $$$x$$$ has $$$d$$$ digits, we count all $$$y$$$ such that $$$y$$$ comes before $$$x$$$ in dfs order. We iterate over the number of digits of $$$y$$$, let's denote it by $$$e$$$. We have inequality $$$k^{e-1} \leqslant y \leqslant min(x', n)$$$, where:

    Case $$$e \leqslant d$$$: $$$x'=floor(x/k^{d-e})$$$, i.e. the prefix of $$$x$$$ of length $$$e$$$.

    Case $$$e > d$$$: $$$x'=x\cdot k^{e-d} + k^{e-d} - 1$$$, i.e. $$$(k-1)$$$'s appended to the back of $$$x$$$.

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Maybe you can read jiangly's code, I think it's clear enough.

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone explain how we can find the irreplaceable segments in Problem E? I understand how it's upper bounded by 2n, but how can we find those segments so we can iterate over them?

  • »
    »
    15 months ago, # ^ |
    Rev. 2   Vote: I like it +8 Vote: I do not like it

    You just need to check $$$[l, r-1]$$$ and $$$[l+1, r]$$$.

    Proof: if $$$\text{mex}(l, r) = \text{mex}(l', r')$$$, and $$$l \leq l' \leq r' \leq r$$$, then $$$\text{mex}(l'', r'') = \text{mex}(l, r)$$$ for every $$$l \leq l'' \leq l'$$$ and $$$r' \leq r'' \leq r$$$.
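    For example, a small sketch that lists them for one array (0-indexed; single-element segments are kept as well; assumes n is small enough for an n x n mex table):

    #include <bits/stdc++.h>
    using namespace std;

    // List every segment [l, r] whose mex differs from both mex(l + 1, r) and mex(l, r - 1).
    int main() {
        int n;
        scanf("%d", &n);
        vector<int> a(n);
        for (int &x : a) scanf("%d", &x);

        vector<vector<int>> mex(n, vector<int>(n));
        for (int l = 0; l < n; l++) {
            vector<int> cnt(n + 2, 0);
            int m = 0;
            for (int r = l; r < n; r++) {
                if (a[r] <= n) cnt[a[r]]++;     // values above n never affect the mex
                while (cnt[m] > 0) m++;
                mex[l][r] = m;
            }
        }

        for (int l = 0; l < n; l++)
            for (int r = l; r < n; r++) {
                bool sameWithoutLeft  = l + 1 <= r && mex[l + 1][r] == mex[l][r];
                bool sameWithoutRight = l <= r - 1 && mex[l][r - 1] == mex[l][r];
                if (!sameWithoutLeft && !sameWithoutRight)
                    printf("[%d, %d] mex = %d\n", l, r, mex[l][r]);
            }
        return 0;
    }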

»
15 months ago, # |
Rev. 3   Vote: I like it +25 Vote: I do not like it

I had a different approach for problem E. In my approach dp[x] means minimum length of prefix from which it is possible to obtain x as XOR of MEX values from the prefix. I've also precalculated nxt where nxt[a][b] means minimum value of r for which there is an l such that $$$(a <= l <= r)$$$ and mex of subarray(l, r) is $$$b$$$.

Here's my submission

»
15 months ago, # |
Rev. 2   Vote: I like it +10 Vote: I do not like it

For each $$$a_i$$$, there is at most one irreplaceable segment where it is the smaller of the two extremes, and at most one where it is the larger. Therefore, the total number of irreplaceable segments is no more than $$$2\cdot n$$$.

I think it's not true. Think about $$$a=[0,1,2,3,4,5]$$$. For $$$a_1$$$, there are $$$6$$$ segments where it is the smaller of the two extremes, because $$$0$$$ is always smaller: they are $$$[1,1],[1,2],\dots,[1,6]$$$.

I think it can be stated in a way like this:

For each segment, we use a pair $$$(i,0/1)$$$ to describe that its greater extreme is $$$i$$$ and the other extreme is on its left side/right side. For a specific pair, there is at most one segment corresponding to it.

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Mysterious E

»
15 months ago, # |
  Vote: I like it +10 Vote: I do not like it

Nowadays the difficulty ratings of the recent contest problems are not getting updated.

»
15 months ago, # |
  Vote: I like it +28 Vote: I do not like it

MEXForces

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Another solution for E is, for each value v, to track the first position i where it is possible to create v.

We loop from 1 to n, keeping track of the minimum position to create each mex value 0 to n with n + 1 sliding windows. Then, if the current index is the first possible position to create a value v, we XOR the value v with every possible mex and update the min index array.

Each value v has only 1 possible first position, so updates are bounded by O(n^2), and maintaining the sliding windows is also bounded by O(n^2), as each costs O(n), so the final complexity is O(n^2).

https://codeforces.net/contest/1870/submission/223975557

  • »
    »
    15 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Whoops, I did not notice someone else already posted about this.

  • »
    »
    15 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    An alternative proof for the number of irreplaceable subarrays:

    Let mex[l][r] be the mex of a[l], a[l + 1], ..., a[r].

    The array mex[l][l ... n] can have at most n distinct values.

    Each first distinct value of mex[l] that differs from what it becomes in mex[l + 1] is counted as an irreplaceable subarray.

    Notice that each value that changes between mex[l] and mex[l + 1] becomes the same value, namely a[l], in mex[l + 1], because we're removing a[l] from being considered in the mex calculations.

    Let dec[l] be the number of distinct values that change between mex[l] and mex[l + 1]; the number of distinct values in mex[l + 1] is then at most (number of distinct values in mex[l]) - dec[l] + 1.

    The number of distinct values in mex[n] is therefore at most (number of distinct values in mex[1]) - dec[1] + 1 - dec[2] + 1 - ... - dec[n] + 1. This implies the sum of dec[1 ... n] is at most n (the initial distinct value count) + n (the possible increase in distinct values).

»
15 months ago, # |
Rev. 3   Vote: I like it +3 Vote: I do not like it
Noob solution for B

223974675

In order to:
Maximise the final XOR: we try maximising every bit, keeping every bit on in the best-case scenario.
Minimise the final XOR: we try minimising every bit, keeping every bit off in the best-case scenario.

Imagine each of $$$a_i$$$ and $$$b_j$$$ in terms of their bitsets/binary representation.
What is needed for the $$$i^{th}$$$ bit of final XOR to be high/on ?:
At the end of the optimal set of operations (whatever that might be), if we look at the $$$i^{th}$$$ bit of all $$$a_i$$$'s and say $$$count$$$ is the number of high bits, then this $$$count$$$ must be odd.
Similarly, for minimising the $$$i^{th}$$$ bit of final xor: this $$$count$$$ must be even.

Suppose it is possible to get the best-case maximum final XOR (where all bits are high) with just a single operation, if we choose an optimal or best number. Let's call the binary representation of this best number the $$$optimal$$$ string.

Defining the $$$optimal$$$ binary string/bitset to maximise the final XOR:

  • if $$$n$$$ is odd: if the $$$i^{th}$$$ bit has on-count = odd, then the optimal string's $$$i^{th}$$$ bit is unimportant (because no matter what you do, this on-count will stay odd). Else it must necessarily be $$$1$$$ for optimality.

  • if $$$n$$$ is even: if the $$$i^{th}$$$ bit has on-count = odd, then the optimal string's $$$i^{th}$$$ bit must necessarily be $$$0$$$. Else it's unimportant (same reason as before).

Defining $$$optimal$$$ binary-string to minimise final XOR:

  • if $$$n$$$ is odd: if the $$$i^{th}$$$ bit has on-count = even, then the optimal string's $$$i^{th}$$$ bit must necessarily be $$$0$$$. Else it's unimportant.

  • if $$$n$$$ is even: if the $$$i^{th}$$$ bit has on-count = even, then the optimal string's $$$i^{th}$$$ bit is unimportant. Else it must necessarily be $$$1$$$.

At this point I should have pivoted and reinterpreted the situation in terms of the fact used by the editorial. But anyway, since I couldn't see it, here is an alternate approach, although dumb, unnecessary and laborious.

The $$$optimal$$$ strings just defined may or may not actually exist in the array $$$b$$$. So, we have to find that set of elements of $$$b$$$ whose cumulative $$$OR$$$ satisfies these $$$optimal$$$ strings as best as possible.

Strategy for finding the best fit set of elements of $$$b$$$ for the optimal strings:

  • Convert all elements of $$$b$$$ in to their bitsets or binary strings before going to next step.
  • Those strings of $$$b$$$ that defy the $$$optimal$$$ strings at positions where the $$$optimal$$$ string requires a $$$0$$$ (off) bit cannot be part of the best-fit set of elements (because a high bit, once produced, cannot be reverted no matter what).
  • However, those strings of $$$b$$$ that defy the $$$optimal$$$ strings "only" at positions where the $$$optimal$$$ string requires a $$$1$$$ to be present can be forgiven and included in the best-fit set.

Thus, we have the two different best-fit sets of $$$b$$$-elements for maximising and minimising the final XOR. Now the solution is straightforward: just calculate the final XOR by applying the operation in the problem statement for all elements of the best-fit sets.

223974675

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Please give the code for F, as "To find dfs(x), we can traverse up from the trie node corresponding to x, and at each step, add the sizes of the subtrees that we have traversed in DFS before the subtree with the node x." is too ambiguous for me to make sense of.

Thanks for the good contest!

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Problem D: Why is my code wrong? 223975030

  • »
    »
    15 months ago, # ^ |
      Vote: I like it -11 Vote: I do not like it

    try case

    1

    4

    10 12 13 14

    99

    answer should be 9 3 2 1

    • »
      »
      »
      15 months ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

      I think the answer should be 9 4 1 0, and this is my code: 224008681. I can't find my fault; can anyone tell me what is wrong in my code?

      • »
        »
        »
        »
        15 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        You need to perform ans[0]=1e9+5 inside the while(t--) loop, since ans[las] -= tmp; may change the value of ans[0]. Also, inside the while(1) loop, you need if(cnt>=n) break; instead of if(cnt>n) break;.

»
15 months ago, # |
  Vote: I like it +3 Vote: I do not like it

Can someone please tell what is wrong in my code for problem D? my submission

I think I have implemented the exact same solution as given in the editorial, but it is still failing on test 2. Any help is appreciated.

»
15 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Any counter test, or a pointer to where I made a mistake in D?

https://codeforces.net/contest/1870/submission/224114880

»
15 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone explain why this brute force got accepted? https://codeforces.net/contest/1870/submission/224151942

»
15 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Another (more complicated but still pretty cool) solution to G:

One of the initial ideas that I had was to define the value of a number $$$v$$$ to be $$$2^v$$$. Then if the sum of the values of all the numbers is = 2^m, where m is the mex, then potentially we can create m.

This does not work though, because let's say we try to create m and we have two (m - 1)'s. This, though it has the same value sum as 0, 1, 2, ..., m - 1, does not create m. This leads us to another definition: space. Let space be equal to the maximum number of i's that we can have. This gives us that, after processing number i, x = 2 * x - min(x, cnt[i]). Then, if we compute the space over all i (in reverse) down to 0, the space must be less than or equal to the number of zeros we can create.

Now, you can create a bunch of functions f(x), each essentially representing 2 * x - min(x, cnt[i]) for one i. An issue is that we also need to count the number of zeros we can create. The number of zeros we can create is the number of values not in the range [1, m - 1], PLUS, when calculating f(x) with x < cnt[i], we won't use cnt[i] - x numbers and can turn those into zeros as well. This means we need a data structure to support the following: maintain f0(f1(f2(f3(...fn(x)...)))) (where fi(x) is the space function for number i), but also the zeros, which will be g0(f1(x)) + g1(x) for the n = 2 case. We can simply do this with a segment tree. This leads to a solution (with a couple of constant optimizations) that passes.

The issue is that, theoretically, if each of the functions that we maintain has size O(n) (which I do not think is true, but I am not sure how to prove it), re-merging all of the functions every time an update happens should theoretically be O(n). This can be fixed with sqrt decomposition, splitting it up into sqrt(n) functions, where to compute the final answer we use binary search on each of those functions. Every update is O(sqrt(n)), but every query is O(sqrt(n) log n) because of the binary search; with optimal blocking this results in O(sqrt(n log n)) (the log n factor is quite cheap because it is simple binary search on an array). This leads to a slightly faster solution, saving around 150 ms.

Without sqrt decomp: 224529457

With sqrt decomp: 224527332

»
14 months ago, # |
  Vote: I like it 0 Vote: I do not like it

One can solve D recursively:

  • Buy the prefix ending at the rightmost element with the lowest cost $$$c_i$$$ as many times as possible.

Now, upgrading to an index $$$j$$$ further to the right costs $$$c_j-c_i$$$.

  • If we have coins left, decrease all costs to the right by $$$c_i$$$ and solve the new problem (with remaining coins) recursively until no further upgrade is possible.