yud08's blog

By yud08, history, 5 weeks ago, In English

Thank you so much for participating in our round! We really hope you enjoyed it. For us, it was an amazing and educational experience, and we look forward to the possibility of one day helping with or holding a round again. Thanks again to abc864197532 and Vladithur for making the process so smooth and enjoyable.

2027A - Rectangle Arrangement

Tutorial
Code

2027B - Stalin Sort

Tutorial
Code

2027C - Add Zeros

Tutorial
Code

2027D1 - The Endspeaker (Easy Version)

Tutorial
Code

2027D2 - The Endspeaker (Hard Version)

Tutorial
Code
Tutorial (Segment Tree)
Code (Segment Tree)

2027E1 - Bit Game (Easy Version)

Tutorial
Code

2027E2 - Bit Game (Hard Version)

Tutorial
Code
Codeforces Round 982 (Div. 2) Editorial

»
5 weeks ago, # |

D1 is the worst problem I've seen in recent times.

»
5 weeks ago, # |

Problem B is very tricky.

  • »
    »
    5 weeks ago, # ^ |

    Same, I found it very hard; I tried for an hour and a half in the contest and didn't get it. In contests I always seem to be able to solve problem A, but then I struggle with problem B.

»
5 weeks ago, # |

Good to know my first approach to 2027B was O(n log n). I couldn't think of the O(n^2) solution for some reason.

  • »
    »
    5 weeks ago, # ^ |

    Can you explain your solution? I'm unable to understand what you did.

    • »
      »
      »
      5 weeks ago, # ^ |
      Rev. 3

      Using ordered_set in C++.

      O(n²) approach:
      Iterate from left to right, counting for each element the elements to its right that are greater than it.

      For each index i from 0 to n - 1:

      1. Initialize ans to 0 for this element.

      2. For each index j from i + 1 to n - 1, increment ans if ele[i] is greater than ele[j].

      3. The goal is to minimize ans across all elements.

      O(n log n) approach:
      This is equivalent to iterating from right to left, inserting elements into an ordered_set as you go.

      For each element:

      1. Compute x = order_of_key(element + 1), which returns the count of elements in the ordered_set strictly smaller than element + 1, i.e. less than or equal to element (O(log n)).

      2. The number of elements greater than element is the current set size minus x.

      3. Minimize this value across all elements.

      Here is my solution.
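      The right-to-left counting above can also be done with a Fenwick tree over compressed values; `greater_to_right` below is a hypothetical helper sketching that equivalent idea, not the commenter's actual code.

      ```cpp
      #include <bits/stdc++.h>
      using namespace std;

      // For each index i, count the elements strictly greater than a[i]
      // to its right, in O(n log n) total.
      vector<int> greater_to_right(const vector<int>& a) {
          int n = a.size();
          vector<int> vals(a);
          sort(vals.begin(), vals.end());
          vals.erase(unique(vals.begin(), vals.end()), vals.end());
          int m = vals.size();
          vector<int> bit(m + 1, 0);  // Fenwick tree over compressed values
          auto add = [&](int i) { for (; i <= m; i += i & -i) bit[i]++; };
          auto query = [&](int i) { int s = 0; for (; i > 0; i -= i & -i) s += bit[i]; return s; };
          vector<int> res(n);
          int inserted = 0;
          for (int i = n - 1; i >= 0; i--) {
              int pos = lower_bound(vals.begin(), vals.end(), a[i]) - vals.begin() + 1;
              res[i] = inserted - query(pos);  // inserted values strictly greater than a[i]
              add(pos);
              inserted++;
          }
          return res;
      }
      ```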

      • »
        »
        »
        »
        5 weeks ago, # ^ |
        Rev. 2

        Do you know why my solution doesn't work? Basically, any element Ai that is not the maximum of (A1, A2, ..., Ai-1) could be removed from the vector by Stalin sort, so I don't even add it to my vector. Then I just look for the longest non-increasing subsequence in the vector. The answer would be the difference between the size of the (new) vector and the LDS length. It passes the samples but gets WA later.

        code

    • »
      »
      »
      5 weeks ago, # ^ |

      We want to sort the array in non-increasing order with minimum deletions, which means the larger elements must be at the start, so I sorted the array in decreasing order.

      Then I recorded the first occurrence of every element in a map (first occurrence only, because using the 2nd occurrence would mean deleting more elements, while the question asks for the minimum).

      The total deletions for a candidate element are (the number of elements greater than it) plus (the number of elements before its first occurrence).

      For example, in 3 6 4 9 2 5 2: when you consider the element 6, only one element is greater than 6 (namely 9) and one element is before it (namely 3); but for 9, no element is greater, yet 3 elements have to be deleted before it.

      To get this, I iterate over the sorted array, add the number of elements before that element's first occurrence and the number of elements greater than it, take the minimum over all elements, and print it.

  • »
    »
    5 weeks ago, # ^ |

    Iterate over each index and count the elements greater than a[index] to the right of the index; all elements to the left of the index are deleted. Check this for every index and take the minimum number of deletions.
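    As a sketch, the approach above fits in a few lines; `min_deletions_b` is an illustrative name, assuming the answer for problem B is the minimum over i of (elements before i) + (elements after i strictly greater than a[i]).

    ```cpp
    #include <bits/stdc++.h>
    using namespace std;

    // Fix a[i] as the first surviving element: delete everything to its left
    // (i elements) plus every later element strictly greater than a[i].
    int min_deletions_b(const vector<int>& a) {
        int n = a.size(), best = INT_MAX;
        for (int i = 0; i < n; i++) {
            int del = i;  // everything left of i is removed
            for (int j = i + 1; j < n; j++)
                if (a[j] > a[i]) del++;  // larger elements on the right must also go
            best = min(best, del);
        }
        return best;
    }
    ```

    On the example array 3 6 4 9 2 5 2 from the sibling comment, fixing 6 gives 2 deletions, which is optimal.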

»
5 weeks ago, # |

B and C were pretty good problems... D1 felt like a common DP problem.

»
5 weeks ago, # |
Rev. 4

Can anyone explain why changing map to unordered_map in 2027C - Add Zeros causes a TL?

I even took the solution from the editorial and ended up with a TL.

Editorial solution with unordered_map 288191624 (TL 16)

  • »
    »
    5 weeks ago, # ^ |

    Worst-case access for unordered_map is O(n), whereas the worst case for map is O(log n). In an unordered_map, values are stored in buckets by hash value modulo the bucket count. If the input is constructed so that all elements land in the same bucket, accessing a single element degrades to O(n), hence the TLE.

  • »
    »
    5 weeks ago, # ^ |

    The test cases are made in such a way that they cause collisions; since the worst-case time complexity is O(n), it can't pass. If you still want to use a hash map, you have to provide your own hash function. https://codeforces.net/contest/2027/submission/288190713

  • »
    »
    5 weeks ago, # ^ |

    One way to bypass this issue is to declare a custom hash. The worst-case time complexity is still O(n), but you can at least pass the current tests, since a random number generator is used to hash the values. Add the following piece of code and let me know if it passes.

    struct custom_hash {
        static uint64_t splitmix64(uint64_t x) {
            x += 0x9e3779b97f4a7c15;
            x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
            x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
            return x ^ (x >> 31);
        }
     
        size_t operator()(uint64_t x) const {
            static const uint64_t FIXED_RANDOM = chrono::steady_clock::now().time_since_epoch().count();
            return splitmix64(x + FIXED_RANDOM);
        }
    };
    
    unordered_map<int, int, custom_hash>
    
    
  • »
    »
    5 weeks ago, # ^ |

    Thank you for your replies!

    I also found this amazing post explaining that.

    Blowing up unordered_map, and how to stop getting hacked on it

    • »
      »
      »
      5 weeks ago, # ^ |

      Yeah, that's a great post. In the last contest, my submission to D was hacked for the very same reason, so this time I wrote a custom hash function to minimize the risk of collisions in my solution to C.

  • »
    »
    5 weeks ago, # ^ |

    I also had the same thing.

    My guess is that, based on this blog (https://codeforces.net/blog/entry/62393), the test cases were created with large numbers of hash collisions.

  • »
    »
    5 weeks ago, # ^ |

    This same doubt comes up after every contest; consider googling it sometimes.

»
5 weeks ago, # |
Rev. 2

problem B

4

1 1 3 2

If we apply Stalin sort, then 2 will be automatically deleted, and now we just have to remove 3; the resulting array will be 1 1. The answer should be 1.

  • »
    »
    5 weeks ago, # ^ |

    You have to remove some (or none) of the elements first, and only then apply operations on the resulting array.

    • »
      »
      »
      5 weeks ago, # ^ |

      I couldn't find this particular line in the problem statement. Can you please highlight which statement points to this?

      • »
        »
        »
        »
        5 weeks ago, # ^ |

        Read this: **determine the minimum number of integers which must be removed from the array to make it vulnerable.**

        It asks for the minimum number of removals to make the array vulnerable; "vulnerable" means that when Stalin sort is applied (to subarrays), the array can be made non-increasing.

        I also fell for it.

»
5 weeks ago, # |

Thanks for the fast editorial.

»
5 weeks ago, # |

B was really difficult to analyze (for me); I was only able to solve A. Hoping to perform better in the upcoming ones.

  • »
    »
    5 weeks ago, # ^ |

    Because N <= 2000, you should think about an O(n^2) solution. We may have no idea what that solution is, but whenever we think about an O(n^2) solution we tend to fix some element in the array and loop. Here comes the idea of fixing the maximum element of the resulting array, or the leftmost element of the resulting array. This was my thought process.

»
5 weeks ago, # |

If anyone could explain D2 to me in a detailed manner, it would be really helpful. Thanks in advance.

  • »
    »
    5 weeks ago, # ^ |

    I solved this using segment trees. You need to define each node of the tree to have two parameters, "cost" and "count". When you combine the two nodes, if the costs are identical, you return a new node with the same cost and the sum of the two counts, otherwise, you return a copy of the node with the lower cost.

    Now, the problem is much simpler. Define dp as a list of length m+1 of these trees (each of size n+1), and initially set all nodes to have infinite cost. Then, update it such that dp[i][n] has cost 0 and frequency 1, for all i. Now, if we define dp[j][i] as the minimum cost to remove all elements from a[i] onwards, using all elements from b[j] onwards, it is clear we need to consider two transitions:

    1. Switching from b[j] to b[j+1] with no cost — we do this by filling in the dp with the outer loop going from n-1 to 0, and the inner loop going from m-1 to 0. Then we can just update dp[j][i] to be equal to dp[j+1][i].

    2. Transitioning from a[i] to a[i+k], for all k that it is possible to do in one go (which is solved the same way as D1; I did this using a sliding window approach in O(mn)). This transition is simply equivalent to querying the segment tree dp[j] from i+1 to i+k, because we have built the segment trees so that combining nodes automatically picks the ones with the lowest cost and adds up the frequencies. This is all done in O(log n) time, adding up to an O(mn log n) solution, which passes comfortably.

    I am told there is a much nicer solution but I'm far too daft to figure that out myself, so bashing the problem with a few too many segment trees seems to be the only option.
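    The node-combining rule described above (keep the smaller cost, add counts on ties) can be sketched like this; `Node`, `combine`, and `MOD` are illustrative names, not taken from the commenter's actual submission.

    ```cpp
    #include <bits/stdc++.h>
    using namespace std;

    const long long MOD = 1e9 + 7;

    struct Node {
        long long cost;  // minimum cost over the segment
        long long cnt;   // number of ways achieving that cost, mod 1e9+7
    };

    // Combine two children: equal costs merge their counts; otherwise the
    // cheaper node wins outright.
    Node combine(const Node& a, const Node& b) {
        if (a.cost == b.cost) return {a.cost, (a.cnt + b.cnt) % MOD};
        return a.cost < b.cost ? a : b;
    }
    ```

    With this combine, a plain min-segment-tree query over a range returns both the minimum cost and the number of ways attaining it.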

»
5 weeks ago, # |

Can anyone please outline the two-pointer approach for D1?

  • »
    »
    5 weeks ago, # ^ |
    // assumes: typedef long long ll; const ll inf = 1e18; (presumably defined in the poster's template)
    void solve()
    {
        int n, m;
        cin >> n >> m;
        vector<ll> a(n + 1);
        vector<ll> preSum(n + 1);
        ll maxA = 0;
    
        for (int i = 1; i <= n; i++)
        {
            cin >> a[i];
            maxA = max(maxA, a[i]);
            preSum[i] = preSum[i - 1] + a[i];
        }
    
        vector<ll> b(m + 1);
        for (int i = 1; i <= m; i++)
        {
            cin >> b[i];
        }
    
        // If the maxA is greater than b[1], it's impossible to perform any operations to empty a
        if (maxA > b[1])
        {
            cout << "-1\n";
            return;
        }
    
        vector<ll> dp(n + 1, inf);
        //  deleting 0 elements is 0 cost
        dp[0] = 0;
    
        // Iterate over array b
        for (int j = 1; j <= m; j++)
        {
            int l = 0, r = 1; // l is the left boundary of the prefix sum, r is the right boundary
            while (r <= n)    // While the right boundary does not exceed n
            {
                // Check if the current prefix sum difference exceeds b[j]
                while (l < r && preSum[r] - preSum[l] > b[j])
                    l++; // If the prefix sum exceeds b[j], move the left boundary
    
                // If l is less than r, it means we can delete the prefix
                if (l < r)
                    dp[r] = min(dp[r], dp[l] + m - j); // Update dp[r] with the minimum cost
    
                r++; // Move the right boundary
            }
        }
        cout << dp[n] << "\n";
    }
    
    
»
5 weeks ago, # |

Can we solve B using a monotonic stack to find the maximum subarray whose first element is the largest? The time complexity would be reduced to O(n)?

»
5 weeks ago, # |

Having an answer be 0 mod 1e9+7 for D2 is kind of crazy

  • »
    »
    5 weeks ago, # ^ |

    Some contests ago, there were copied solutions which started with ans = 1 and printed ans - 1 at the end; if the answer was mod - 1, they printed -1 and got it wrong. I think it was to counter that.

    • »
      »
      »
      5 weeks ago, # ^ |

      Actually, it's to counter an incorrect solution for D2. Consider the first editorial solution: we have a map of ways to a pair of {num_updates, count}. If you just remove the entry when count falls to zero, it might be the case that num_updates is nonzero if the answer is $$$\equiv 0$$$, so it would be a mistake to remove it. We tried to find testdata to break this solution but it proved to be almost impossible. In the end, we found a test where the number of ways ended on something $$$\equiv 0$$$, and it breaks some solutions which are wrong in a similar way, but sadly we couldn't fully counter it.

      It might be the case that some incorrect solutions slipped through, but it turned out that there are many alternative solutions to D2, so this didn't end up making a significant difference.

»
5 weeks ago, # |

Why is my solution for problem C getting TLE?

My solution 288152538

»
5 weeks ago, # |

Problem C also has another solution.

»
5 weeks ago, # |

Can someone please explain the last part of problem C for those who didn't use a map?

»
5 weeks ago, # |

problem B "this will always break the non-decreasing property."

this should be "non-increasing property" ?

  • »
    »
    5 weeks ago, # ^ |

    Yes, you're right. The blog should update soon. Thanks for pointing it out.

»
5 weeks ago, # |

Why do cheaters share solutions via YouTube and Telegram groups? This leads to a bad rank even when solving 3 problems ;(

»
5 weeks ago, # |

I have a different solution for 2027E1 - Bit Game (Easy Version) that I can't really prove and don't know if it's actually correct, so I will leave it here in case someone is interested; it is also a little simpler than the official solution:

Let's look at some pile $$$i$$$ with numbers $$$a$$$ and $$$x$$$. Denote largest bit of $$$a$$$ as $$$b$$$, if there's bit in $$$x$$$ that is greater than $$$b$$$ we can discard it because we can never do anything with it. We discard all useless bits.

Now if $$$ a >= x $$$ we can pretend there is no restriction so our grundy number will be $$$popcount$$$ of $$$x$$$.

If $$$ a < x $$$ we cannot always do whatever we want so we use function $$$solve(i, s)$$$ where $$$i$$$ is just an index and $$$s$$$ is number of suffix bits that we turned off in $$$x$$$. Note that both $$$a$$$ and $$$x$$$ will have $$$b$$$ bit on (because of $$$ a < x $$$ and discarding useless bits).

Transitions are:

1) Destroy largest bit ($$$b$$$) in $$$x$$$ and some number of additional bits (taking care of not exceeding $$$a$$$) (we go to no restriction case $$$ a >= x $$$)

2) Destroy some more bits on suffix in $$$x$$$ (no restriction because it cannot exceed $$$a$$$) (we go to $$$solve(i, s+p)$$$ where $$$p$$$ is number of bits we destroyed)

Now you calculate grundy number using grundy theory (take mex of grundy numbers you can transition to). That's literally it. Nothing special.

The intuition: when you don't destroy the largest bit, it's optimal to destroy a suffix because it leaves the opponent with the fewest additional moves. I can't prove it though. When you destroy the largest bit, all that matters is how many more bits you can destroy (without exceeding $$$a$$$, greedily).

Submission: 288167141. Note that it is a little bit messy because I tried to code it fast.

I'm happy I got single digit placement :)
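For reference, the "take mex of the Grundy numbers you can transition to" step mentioned above can be computed with a small helper (an illustrative sketch, not the commenter's code):

```cpp
#include <bits/stdc++.h>
using namespace std;

// mex = smallest non-negative integer absent from v.
int mex(vector<int> v) {
    sort(v.begin(), v.end());
    int m = 0;
    for (int x : v)
        if (x == m) m++;  // duplicates are simply skipped
    return m;
}
```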

  • »
    »
    5 weeks ago, # ^ |

    This is very interesting, and I don't know if I can prove the suffix idea either.

    The editorial solution is quite complicated because you need to be able to easily classify the SG values in order to count efficiently for E2; I don't think a solution like yours would carry over as nicely. However, taking E1 as a standalone problem, your solution is quite nice and avoids casework/inspection :)

    I'll have a think about how to prove it!

»
5 weeks ago, # |

LOL, very good problem B, it brought my score down :(

»
5 weeks ago, # |
Rev. 3

for question C



    void Solve() {
        int n;
        cin >> n;
        vector<int> a(n + 1);
        for (int i = 1; i <= n; i++) {
            cin >> a[i];
        }
        vector<int> option;
        map<int, int> map;
        for (int i = 1; i <= n; i++) {
            int size = a[i] + i - 1;
            if (size == n) {
                option.push_back(i);
            } else if (size > n) {
                map[size] = i;
            }
        }
        int maxsize = n;
        int k = 0;
        for (int i = 0; i < option.size(); i++) {
            int currsize = n + option[i] - 1;
            while (map.find(currsize) != map.end() && k < map.size() + 5) {
                currsize += map[currsize] - 1;
                k++;
            }
            maxsize = max(currsize, maxsize);
        }
        cout << maxsize << endl;
    }

Can someone tell me why my solution is wrong? First, I check at which elements I can perform the first operation: if the required size for performing the operation at an element is n, we can start with that element. Then I calculate how many more operations we can do if we start there, and I keep the maximum size the array ever reaches.

»
5 weeks ago, # |

For D1, why does my top-down TLE but bottom-up takes only 100 ms? Shouldn't both have O(nm) states?

  • »
    »
    5 weeks ago, # ^ |

    You should differentiate between cases when:

    • the dp value hasn't been calculated yet
    • the calculated value is actually inf (i.e. no possible answer)

    Otherwise, it can visit the same impossible state exponentially many times. Initializing the dp values to -1 works. 288223019
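    The pitfall described above can be illustrated with a toy memoized recursion (a hypothetical example, unrelated to the actual D1 code): -1 marks "not computed yet", while INF marks "computed, but impossible", so impossible states are cached instead of being re-explored.

    ```cpp
    #include <bits/stdc++.h>
    using namespace std;

    const long long INF = 1e18;
    vector<long long> memo;  // -1 = not computed; INF = computed, impossible

    // Toy DP: minimum number of steps to reach 0 by subtracting 3 or 5.
    long long steps(int n) {
        if (n == 0) return 0;
        if (n < 0) return INF;
        if (memo[n] != -1) return memo[n];  // crucially, this also caches INF
        long long best = INF;
        for (int d : {3, 5}) {
            long long sub = steps(n - d);
            if (sub != INF) best = min(best, sub + 1);
        }
        return memo[n] = best;
    }
    ```

    Call `memo.assign(limit, -1)` before the first call; if impossible states were left uncached (e.g. by using INF itself as the "not computed" sentinel), every unreachable state would be recomputed on each visit.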

»
5 weeks ago, # |
for (int k=0; k<m; k++) {
      int r = -1, sum = 0;
      for (int i=0; i<n; i++) {
        while (r < n && sum <= B[k]) sum += A[++r];
        nxt[i][k] = r;
        sum -= A[i];
      }
    }

This seems wrong to me. If r == n-1, won't A[++r] raise an error? Say a = [1, 2] and b = [4].

»
5 weeks ago, # |

Thank you for the contest.

»
5 weeks ago, # |

MikeMirzayanov I would like to report suspected cases of cheating in Codeforces Round 982 (Div. 2). I have identified multiple pairs of submissions with identical or nearly identical solutions. Here are the relevant submission links:

Pair 1: https://codeforces.net/contest/2027/submission/288161794 https://codeforces.net/contest/2027/submission/288145385

Pair 2: https://codeforces.net/contest/2027/submission/288152323 https://codeforces.net/contest/2027/submission/288137323

Pair 3: https://codeforces.net/contest/2027/submission/288163896 https://codeforces.net/contest/2027/submission/288155729

Each pair of submissions shows a high degree of similarity.

»
5 weeks ago, # |

B is very tricky. Anyway, if I understand correctly, only the maximum-length subsequence that contains the maximum element as its first element is reducible to a non-increasing array? So we find this maximum-length subsequence and subtract its length from the total n, which gives us the minimum number of elements to be deleted?

»
5 weeks ago, # |

In problem B, I bet the explanation of the sample test cases made it even trickier. BTW, a good logical problem for B.

»
5 weeks ago, # |

Can anyone explain the binary search approach for problem D1?

  • »
    »
    5 weeks ago, # ^ |
    Rev. 2

    See my implementation 288212421

    It is DP with Binary Search. If you cannot understand anything, let me know.

    • »
      »
      »
      5 weeks ago, # ^ |

      Without using DP, only binary search?

    • »
      »
      »
      5 weeks ago, # ^ |

      Hey, first of all, your code is really clean! In my opinion the editorial didn't do a great job explaining the implementation, and I was wondering if you could explain your solution further, mainly this part:

        for (int i = 1; i <= m; ++i) {
          int ind = n;
          for (int j = 1; j <= n; ++j) {
            if (a[j] > b[i]) {
              dp[i][j] = dp[i - 1][j];
              --ind;
              continue;
            }
            auto it = ub(all(pre), pre[ind - 1] + b[i]) - pre.begin();
            int diff = it - ind;
            int prev = dp[i - 1][j], curr = j - diff;
            dp[i][j] = min(prev, dp[i][curr] + (m - i));
            --ind;
          }
        }
      
      • »
        »
        »
        »
        5 weeks ago, # ^ |

        Sure. We define a 2D dp array dp[i][j]: the minimum cost to solve the problem if we consider only the first i elements of array b and up to the first j elements of array a. We then check whether the current element of array a is bigger than our current element of array b. If it is, we cannot use the current element of b here, so we take the dp value for the current j from the previous element of b, i.e. dp[i-1][j]. Because array b is sorted in decreasing order, the previous element is guaranteed to be bigger.

        Now, if we can use the current element of b in dp[i][j], we use upper_bound to find how far back we can stretch our reach, on a prefix-sum array of the reversed array a (because we want to see how far back we can go). After that, we get an index in the prefix array where our reach ends. Now we have two choices: either include the new value we got in dp[i][j], or use the value from the previous element of array b, dp[i-1][j]. We take the minimum of both. It looks like: dp[i][j] = min(dp[i-1][j], dp[i][the first element back that we cannot reach] + the cost of this operation).

»
5 weeks ago, # |

while (r < n && sum <= B[k]) sum += A[++r]; (from the D1 editorial)

If r = n-1 and the condition is satisfied, A[n] will be accessed, leading to an RTE, right?

»
5 weeks ago, # |

Isn't there another solution for C using a greedy method? I didn't do it that way, but I saw one on YouTube.

»
5 weeks ago, # |

For E1 test case 2, the last move Alice made was to take 12 out of 14 from the last pile. Why didn't she take all 14, so that Bob could not move anymore?

  • »
    »
    5 weeks ago, # ^ |

    The sample explanation is an example of how a game might progress, to ensure you understand the conditions on $$$d$$$. It does not accurately reflect any optimal strategy; indeed, Alice could have taken all $$$14$$$ so that Bob could not move anymore, but in an optimal strategy Bob would ensure that they never got to that point.

»
5 weeks ago, # |

Caution: they have test cases in problem C which hit worst-case complexity for unordered map and set!

```

#pragma GCC optimize("O3")
#include <bits/stdc++.h>
 
#define FOR(i,n) for(i=0;i<n;i++)
#define FOR1(i,n) for(i=1;i<=n;i++)
#define FOR2(I, A, B) for (int I = (A); I <= (B); ++I)
#define pb push_back
#define unique_sort(c) (sort(c.begin(),c.end()), c.resize(distance(c.begin(),unique(c.begin(),c.end()))))
#define print(v) for(auto it:v)cout<<it<<" ";cout<<endl
#define fast() ios_base::sync_with_stdio(false);cin.tie(NULL);cout.tie(NULL)
#define endl "\n"
#define min3(a,b,c) min(a,min(b,c))
#define forr(n) for(int i=0;i<n;i++)
#define frr(n) for(int i=1;i<=n;i++)
#define MAX 200005
 
using namespace std;
 
typedef long long ll;
typedef long li;
typedef vector<int> vi;
typedef vector<ll> vli;
typedef vector<vi> vvi;
typedef vector<vli> vvli;
typedef pair<int,int> pi;
typedef pair<ll,ll> pli;
typedef vector<pi> vpi;
typedef vector<pli> vpli;
 
 

//const ll MOD=1e9+7;

/*
ll fast_exp(ll base, ll exp) {
    ll res=1;
    while(exp>0) {
       if(exp%2==1) res=(res*base)%MOD;
       base=(base*base)%MOD;
       exp/=2;
    }
    return res%MOD;
}
bool comp(const pi&i, const pi&j)
{
    return i.first < j.first;
}*/

unordered_map<ll, vector<ll>> ump;
unordered_set<ll> visited;
ll ans;

void dfs(ll n){
    if(visited.find(n) != visited.end()) return;
    visited.insert(n);
    ll new_n;
    if(ump.find(n) != ump.end()){
        for(int l:ump[n]){
            new_n = n + l - 1;
            ans = max(ans, new_n);
            if(new_n !=n && ump.find(new_n) != ump.end()){
                dfs(new_n);
            }
        }
    }
}

int main() {
    fast();
    int t;
    cin>>t;
    while(t--){
        int n; cin>>n;
        vli a(n + 1);
        ans = n;
        frr(n)cin>>a[i];
        frr(n){
            ll x = i + a[i] - 1;
            if(ump.find(x) != ump.end())
            ump[x].push_back(i);
            else
            ump[x]  = vector<ll>(1,i);
        }
        dfs(n);
        cout<<ans<<endl;
        ump.clear();
        visited.clear();
    }
    
    
 
}

```

Upsolving after the contest, this code gave me TLE. I couldn't figure out a better approach, so I went to the tutorial; the approach is exactly the same but uses a normal map and set instead of the unordered ones. I changed my code to use the normal versions and it was accepted. I'm glad this didn't happen during a contest; I'd have been enraged afterwards.


»
5 weeks ago, # |

Can anyone explain the problem D2 solution without the segment tree method?

»
5 weeks ago, # |

Has anyone tried problem B in O(n log n)? I am not able to get an idea '_'

  • »
    »
    5 weeks ago, # ^ |

    What you can do is maintain a multiset of the elements to the right of index i, then find the upper_bound of the current element in that multiset and use distance(it, s.end()) to get how many elements in the multiset are greater than a[i]; as you move forward, keep erasing the previous elements. (Note that std::distance on multiset iterators is linear, so for a true O(n log n) you would need an order-statistics structure such as a Fenwick tree or pbds ordered_set.)

»
5 weeks ago, # |

How does one get the intuition behind problem C? Yes, after knowing it's a graph problem it was easy, but how can someone develop intuition like this?

  • »
    »
    5 weeks ago, # ^ |

    Think of it like this: take a test case and draw what happens when you choose a valid index. If you do this, you'll draw a directed graph (tree-like) structure, and from there it becomes obvious you need the maximum-sum path (memoized DFS).

»
5 weeks ago, # |

Is there any DP solution for C? My thought was dp[number] = whether this number can be reached, but I have a problem with where the dp can start: I can go neither left-to-right nor right-to-left, and going from the largest number to the smallest also doesn't work.

  • »
    »
    5 weeks ago, # ^ |

    I made a directed graph from the data, then applied memoized DFS on it. DP[node] tells the maximum number of zeros creatable if you choose that node (that index).

    • »
      »
      »
      5 weeks ago, # ^ |

      I am wondering whether there is any DP solution that avoids the graph concept?

      • »
        »
        »
        »
        5 weeks ago, # ^ |
        Rev. 3

        My solution was a DP. DP(len) returns the longest array that can be built starting from an array of length 'len'. Below I explain how to compute DP(len):

        Let 'result' be the result for DP(len). You can initialize it as 'len', because, if you cannot extend the array any further, then its maximum length is 'len' itself. Then for every index 'i' such that it fits the question requirements, you do

        result = max(result, DP(len + i));

        But how do we find all 'i' that fit the requirements? Well, from the problem statement, you have

        a_i = |a| + 1 - i (one-indexed)

        So the following is true:

        a_i = |a| - i (zero-indexed)

        a_i + i = |a|

        And you can precompute a_i + i for every 'i' and put it in a map<int, vector<int>> so you can efficiently find all 'i' whose sum equals |a|.

        Implementation: 288133528
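        That recurrence can be sketched as a memoized DFS over lengths; `max_length` is a hypothetical illustration of the idea (not the linked submission), grouping indices by the length a_i + i - 1 (1-indexed) they require and recursing on len + i - 1.

        ```cpp
        #include <bits/stdc++.h>
        using namespace std;

        // DP(len): longest array obtainable starting from an array of length len.
        long long max_length(const vector<long long>& a) {
            int n = a.size();
            // need[L] = indices i (1-indexed, i > 1) usable once the length is L,
            // i.e. those with a_i + i - 1 == L.
            map<long long, vector<long long>> need;
            for (int i = 2; i <= n; i++) need[a[i - 1] + i - 1].push_back(i);
            map<long long, long long> memo;
            function<long long(long long)> dfs = [&](long long len) -> long long {
                auto it = memo.find(len);
                if (it != memo.end()) return it->second;
                long long best = len;  // we may stop extending here
                if (need.count(len))
                    for (long long i : need[len])
                        best = max(best, dfs(len + i - 1));  // apply the op at index i
                return memo[len] = best;
            };
            return dfs(n);
        }
        ```

        On the array 2 4 6 2 5 (length 5), applying the operation at i = 4, then i = 3 reaches length 10.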

»
5 weeks ago, # |

Sorry for necro-posting, but can someone else confirm that for D1/D2 the constraint that b is decreasing is not necessary to solve the problem?

  • »
    »
    5 weeks ago, # ^ |

    IMO, if you solve D1/D2 fully with range-update dynamic programming, that condition isn't required. It's there to enable the greedy observation in D1, which avoids the segment tree machinery (or anything doing similar range updates).

    • »
      »
      »
      5 weeks ago, # ^ |

      Got it, thanks for corroborating. I was wondering why that condition was there at all.

»
4 weeks ago, # |

Considering how much I overcomplicated D1 with DP tabulation, precomputation, and binary search, only to find out it was a basic DP implementation :(

»
3 weeks ago, # |

I tried an O(n) approach; could someone please review this code, it is failing on some cases: https://codeforces.net/contest/2027/submission/291305482