Gheal's blog

By Gheal, 18 months ago, In English

1831A — Twin Permutations

Author: Gheal

Hints
Solution
Code (Gheal, C++)

1831B — Array Merging

Author: tibinyte

Hints
Solution
Code (tibinyte, C++)

1830A — Copil Copac Draws Trees

Author: alecs

Hints
Solution
Code (Gheal, C++)
Code (alecs, C++)

1830B — The BOSS Can Count Pairs

Author: Gheal

Hints
Solution
Code (Gheal, C++)
Code (Vladithur, Python)

1830C — Hyperregular Bracket Strings

Author: Gheal

Hints
Solution
Code (Gheal, C++)

1830D — Mex Tree

Author: Gheal

Hints
Solution
Code (tibinyte, C++)

1830E — Bully Sort

Author: valeriu

Thanks to errorgorn for the solution!

Solution
Code (tibinyte, fenwick tree + ordered_set C++)
Code (tibinyte, sqrt decomposition, C++)
Code (errorgorn, divide and conquer, C++)

1830F — The Third Grace

Author: tibinyte

Thanks to errorgorn for the editorial!

Solution
Code (errorgorn, C++)
Code (valeriu, C++)

If there is anything wrong or unclear in this editorial, feel free to ping me or any of the other authors in the comments.

Codeforces Round 875 (Div. 1) editorial
Codeforces Round 875 (Div. 2) editorial

»
18 months ago, # |
Rev. 2

"If there is anything wrong or unclear in this editorial, feel free to ping me or any of the other authors in the comments."

One thing
  • »
    »
    18 months ago, # ^ |

    Can you explain how to do Div2 E (Hyperregular Bracket Strings)? I can't understand the editorial. I understood the logic of splitting the overlapping intervals, but how did you proceed further? Also, if I have 3 intervals (1,20),(3,12),(4,9) -> (1,2),(3,12),(13,20),(4,9) -> (1,2),(3,3),(4,9),(10,12),(13,20) -> this should be impossible, right? As (3,3) will never be a regular interval.

    • »
      »
      »
      18 months ago, # ^ |

      They don't mean to split intervals like that. Let me illustrate their idea. As an example, let's say $$$n=10, k=3, (l_1, r_1) = (1, 8), (l_2, r_2) = (3, 4), (l_3, r_3) = (7, 10)$$$. I'll use the letter $$$x$$$ to denote an empty space. Let's write a sequence of $$$n=10$$$ empty spaces: $$$xxxxxxxxxx$$$. If a position is covered only by the $$$1^{st}$$$ interval, put $$$1$$$. If a position is covered by the $$$1^{st}$$$ and $$$2^{nd}$$$ intervals, put $$$2$$$. If a position is covered only by the $$$3^{rd}$$$ interval, put $$$3$$$. If a position is covered by the $$$1^{st}$$$ and $$$3^{rd}$$$ intervals, put $$$4$$$. So the sequence of empty spaces turns into this: $$$1122114433$$$. Now look at the subsequences containing only $$$1$$$, $$$2$$$, $$$3$$$, $$$4$$$: $$$1111$$$, $$$22$$$, $$$33$$$, $$$44$$$. We count the ways to make an RBS on each of these four subsequences and then multiply them together: $$$2 \cdot 1 \cdot 1 \cdot 1 = 2$$$, which is the correct output.

      • »
        »
        »
        »
        18 months ago, # ^ |

        Thanks, got it. Can you explain the step after that too, regarding the hashing?

        • »
          »
          »
          »
          »
          18 months ago, # ^ |

          The main idea is that every time we read an interval, we modify a subarray (the array here is the sequence $$$xxxxxxxxxx$$$) according to the interval, so that each position knows which intervals cover it. For instance, after reading $$$l_1 = 1$$$ and $$$r_1 = 8$$$, we do something to the subarray $$$[1,8]$$$, so every position from $$$1$$$ to $$$8$$$ knows it is covered by $$$(l_1, r_1)$$$. The editorial does this by assigning each interval a random number and XORing every element in the subarray $$$[l_i, r_i]$$$ with it, but some solutions I saw use addition instead of XOR.

          • »
            »
            »
            »
            »
            »
            15 months ago, # ^ |
            Rev. 2

            But why don't they have to deal with hash collisions?

            There are 2^30000 subsets of intervals in total, hashed into a 64-bit integer. Won't that cause lots of collisions?

            I don't really understand.

  • »
    »
    9 months ago, # ^ |

    Why can't we simply sort the array in problem A?

    • »
      »
      »
      2 months ago, # ^ |

      If we sort the array and then add the corresponding elements of array a, there is a chance that the sum of two corresponding earlier elements is greater than an upcoming sum. For example: (1+1) (2+2) (4+3) (5+4) (3+5). Here 5+4 became greater than 3+5.

»
18 months ago, # |

i love this round

»
18 months ago, # |

I had almost solved Div1C. I had the idea of overlapping and included intervals, but I couldn't find a way to implement it :(

»
18 months ago, # |

Can Div1B be solved with something like CHT or a Li Chao tree? We can reduce the problem to two operations:

  1. $$$add \_ line(k, b)$$$ — add the line $$$kx + b$$$ to our structure.
  2. $$$count(x, y)$$$ — count the number of lines s. t. $$$kx + b = y$$$ (or $$$\leq y$$$).

Is there any structure that can support these operations?

  • »
    »
    18 months ago, # ^ |

    I tried thinking along that line too, but I don't think Li Chao can do operation 2 due to overcounting.

»
18 months ago, # |

Solved Div2 C in practice using BFS. Or is it also DP in disguise?

https://codeforces.net/contest/1831/submission/207670475

  • »
    »
    18 months ago, # ^ |

    Yes. It considers two cases using BFS, which actually correspond to the two cases of the dp.

»
18 months ago, # |

Please add this to the contest materials of Div 2 :) currently it can only be seen from Div 1.

»
18 months ago, # |

Can someone try hacking https://codeforces.net/contest/1830/submission/207655131, or explain why it's correct? It is a solution for Div1C. I basically keep a stack to calculate the answer, but when two intervals partially intersect, I delete the latter and add back a needed interval to my set of intervals.

»
18 months ago, # |

Div1B is terrible. The goal of a problem should be figuring out the solution, not trying to squeeze your code to pass after you know the solution.

  • »
    »
    18 months ago, # ^ |

    The model solution runs in under 1.2s, which I think is fairly reasonable for a problem with a 4s TL.

    • »
      »
      »
      18 months ago, # ^ |

      It's not about the complexity of your solution. It's about the people who had the same idea as the editorial but failed to solve it because they used, for instance, a hashmap instead of a map, or had some tiny constant factor.

      • »
        »
        »
        »
        15 months ago, # ^ |

        Dear contestant,

        Please don't worry if you had to make some formulas shorter in order to pass a time limit that was set to triple our solution's running time. In a programming competition, the way you express your ideas in code actually matters. Apologies if this does not fit your view.

        Thanks for the feedback.

    • »
      »
      »
      18 months ago, # ^ |

      Why does the model solution use 500MB of memory out of the allowed 512MB?

      • »
        »
        »
        »
        18 months ago, # ^ |

        +1. I resubmitted (with 2 extra WAs which were my fault) because I was afraid 518k KB would MLE on systests. I don't think it's a bad problem, but I feel allowing 1024MB would've been fine here, especially since the model solution uses a similar amount of memory.

      • »
        »
        »
        »
        18 months ago, # ^ |

        We also have a solution with linear memory, but it's not posted here.

        • »
          »
          »
          »
          »
          18 months ago, # ^ |

          pls share.

          • »
            »
            »
            »
            »
            »
            18 months ago, # ^ |
            Code
        • »
          »
          »
          »
          »
          18 months ago, # ^ |
          Rev. 2

          Yeah, but why didn't you think it would be a good idea to set the ML to 1024MB if one of the authors' solutions uses $$$n \sqrt{n}$$$ memory?

          • »
            »
            »
            »
            »
            »
            18 months ago, # ^ |

            Authors test 1000s of solutions (a bit of an exaggeration); it's not necessary to allow all of them to pass.

    • »
      »
      »
      18 months ago, # ^ |

      Can someone explain the time complexity of 207670551?
      I don't understand why it's fast enough (runs in <350 ms); the worst case I can think of should take $$$O(n\sqrt{n})$$$.

  • »
    »
    18 months ago, # ^ |
    Rev. 2

    .

  • »
    »
    18 months ago, # ^ |

    i felt the same

»
18 months ago, # |

div1B killed

  • »
    »
    18 months ago, # ^ |
    Rev. 2

    For me, honestly, finding the solution for Div1B was much easier than understanding the problem statement of Div1A/Div2C.

    I spent more than 30 minutes understanding the problem statement and test case '1' in problem C.

    I solved Div1B/Div2D within 28 minutes. xD

»
18 months ago, # |
Rev. 10

Since there is a lot of discussion going on about problem D's tight time limit: I personally feel the limit was sufficient to pass $$$O(n \log n) + O(n \sqrt{n} \log n)$$$, which is more than enough.

Solution with Simple map and binary search
  • »
    »
    18 months ago, # ^ |
    Rev. 4

    I think the TL was enough, but the big problem was the ML. By the way, your code is very clean; it got AC with this time complexity! Take a look at my code, which should be $$$O(n\sqrt n)$$$, with a running time of 1123ms: 207646321

  • »
    »
    16 months ago, # ^ |

    Where did shard[k] come from? Shouldn't it be shard[p]?

    Thank you for your explanation, you are a legend.

»
18 months ago, # |

https://codeforces.net/contest/1830/submission/207653094

The code gives a runtime error with GNU G++17, but no runtime error with GNU G++20.

Why is this happening?

»
18 months ago, # |
Rev. 2

thanks for the contest, solving D1D was very fun

»
18 months ago, # |
Rev. 2

Yesterday's abc_E tilted me towards a leaves-first idea for div2C, counting up the number of out-of-order edges on the way to the root. It's an approach that can work, but I seemingly stumbled into every possible way to make it break, even after getting a fake AC.

TLE: long path from root into many leaves risks repeatedly traversing that path

WA: attempts to bail early on lack of update worked, but (hindsight) since I was updating and testing on different vertices, I had to leave in the == no-update case... which meant it was still very possible to TLE as before

uphackTLE: in attempting to test vs. that worst case, I neglected to clear out all shuffling/randomizing code in my generator, so the 'far' root 1 was always randomly somewhere convenient and/or the number of passes really did need updating

technical-debt-TLE: (hindsight again) a bug-free version of my per-leaf approach was still vulnerable because there could be a tree that actually did have progressively better passcounts per traversal obviating my early-out flailing. What I really needed was the post/return half of a dfs/bfs, only proceeding rootward after all children were accounted for... OR not getting the leaf-first idea stuck in my head from yesterday.

tl;dr free uphack and maybe some giggles...

»
18 months ago, # |

The idea of Div1B was fine, but the tests were too weak. Even the system tests couldn't save it, as most solutions got TLE on tests provided by hackers, and I don't even think my solution is good enough to get AC :D

»
18 months ago, # |

Good round, I like it.

»
18 months ago, # |

The memory limit of Div1B was so tight, but it was fun to solve though.

»
18 months ago, # |
Rev. 4

This was my first Div. 1 contest. I solved A, but not very fast, because I wanted to get accepted with one submission. Then I started to think about problem B, and in about 1 hour and 20 minutes I got accepted (I was happy to increase my rating in my first Div. 1 contest). But, unfortunately, I got FST (TLE on test 15) during the system testing. So I want to share my solution and a bug which was tricky to find.

The main idea of my solution is similar to many of the other solutions.

I keep the answer in $$$ans$$$. First, I define an array of vectors $$$c$$$, where I store all $$$b_i$$$-s for every element in $$$a$$$ (i. e. for every pair of $$$(a[i], b[i])$$$, I push $$$b[i]$$$ to $$$c[a[i]]$$$) and then I sort every $$$c[i]$$$.

Then I brute force every $$$i$$$ and $$$j$$$, such that $$$i*j <= 2*n$$$ and $$$i<j$$$. My goal is to find every pair $$$(x, y)$$$: $$$x$$$ from $$$c[i]$$$, $$$y$$$ from $$$c[j]$$$, such that $$$x+y=i*j$$$. Now, for every pair $$$(i, j)$$$, I go through all the elements of $$$c[i]$$$. For element $$$x$$$, I find $$$l$$$ and $$$r$$$ (with binary search): the borders of $$$i*j-x$$$ in $$$c[j]$$$. Then I update the answer with $$$r-l$$$ (or $$$r-l+1$$$, depending on how you choose the borders). After that I update the answer for $$$i=j$$$, and we are done!

But wait, this solution gets TLE. The problem is that if there are many elements in some $$$c[i]$$$, we will brute force through them for every $$$j$$$. To avoid this, we find whichever of $$$c[i]$$$ and $$$c[j]$$$ has fewer elements and go through the elements of that one. And now we are done.

Time complexity: $$$O(2n\sqrt{2n} + C \log n)$$$, where $$$C$$$ is the maximal possible value of $$$\displaystyle \sum_{i<j} \min(\text{size}(c[i]), \text{size}(c[j]))$$$.

My code.

I hope this was helpful.

  • »
    »
    18 months ago, # ^ |

    I did exactly what you did, but I can't prove the time complexity. Could you please explain more clearly how choosing the smaller of c[i] and c[j] results in that time complexity?

    • »
      »
      »
      18 months ago, # ^ |

      I apologize that I can't prove it formally, but intuitively, if one bucket has many elements, the others have to have few elements. Also, it is obviously better to brute force through the one with fewer elements.

»
18 months ago, # |

Div2 D has an $$$O(n \log^2 n)$$$ solution. Let cnt(x, y) be the number of indices i such that a[i] = x and b[i] = y. We have b[i] + b[j] = a[i] * a[j] <= 2 * n, so for fixed i there are 2 * n / a[i] possible values of a[j]. Thus, iterating over all possible pairs of values (a[i], a[j]) takes $$$O(n \log n)$$$ operations (2n/1 + 2n/2 + ... + 2n/n = 2n * (1 + 1/2 + 1/3 + ... + 1/n) = 2n * log n). For a given pair (x, y), we can iterate over all i such that a[i] = x (O(n) time in total) and add cnt(y, x * y - b[i]) to the answer. cnt(x, y) can be computed in $$$O(\log n)$$$ with binary search.

  • »
    »
    18 months ago, # ^ |
    Rev. 2

    I don't think that would work. For example, when iterating $$$a[i] = 1$$$, based on my understanding, you find all $$$i$$$ such that $$$a[i] = 1$$$, then also iterate over all $$$y = 1, 2, ..., 2n$$$, and then add $$$cnt(y, 1 \cdot y - b[i])$$$ to the answer. But this step is actually $$$O(n^2)$$$, because there could be $$$O(n)$$$ indices $$$i$$$ such that $$$a[i] = 1$$$.

    • »
      »
      »
      17 months ago, # ^ |
      Rev. 2

      But what if I apply a little optimization?

      For a $$$(a_i, a_j)$$$ calculation, we get two (multi)sets $$$S = \{ b_k \mid a_k = a_i \}$$$ and $$$T = \{ b_k \mid a_k = a_j \}$$$.

      Then if $$$|S| > |T|$$$, swap them, and iterate over $$$S$$$.

      It finally got accepted (211334263), but I can't figure out its time complexity, or whether it's a truly correct approach. Could anyone tell me?

»
18 months ago, # |

https://codeforces.net/contest/1831/submission/207670314

Could someone point out the mistake in my code? I've been trying for very long and don't see the mistake; I'm getting WA on test 2. I did a BFS-like solution.

  • »
    »
    18 months ago, # ^ |

    Did you try clearing the global variables after solve()? It's probably bugged when you have more testcases.

    • »
      »
      »
      18 months ago, # ^ |

      I did that at the end of the code, after the cout.

      • »
        »
        »
        »
        18 months ago, # ^ |
        Rev. 2

        Sorry, I didn't see that.
        It took a while, but I found an actual counterexample:

        the answer is 2, your code says 3

        You do some weird stuff with ok. You're looking for the minimum index edge, but I think you should set it while you do the BFS, something like

            ok[c.first] = c.second;
            if(c.second > ok[x]) dis[c.first] = dis[x];
            else dis[c.first] = dis[x]+1;
        
»
18 months ago, # |

Problem Div2 D has a good idea IMO, but my $$$O(n \sqrt{n})$$$ solution did not pass the time limit because I used unordered_map to count the frequencies (207642815).

I used the unordered_map optimization with a custom_hash function from this blog. This got me thinking:

  1. Is my solution wrong? I used a set to maintain unique elements only. The difference from the author's solution is that I didn't skip when $$$a_i > \sqrt{n}$$$, though I did break the loop.

  2. Assuming my implementation is indeed $$$O(n \sqrt{n})$$$, what kind of test case breaks the unordered_map constant factor?

  • »
    »
    18 months ago, # ^ |

    Your solution is $$$O(n^2)$$$; here is a sample. If $$$a[1] \dots a[\frac n 2]$$$ are all $$$1$$$ and the other $$$\frac n 2$$$ values of $$$a[i]$$$ are large and pairwise distinct, then while computing the answer for $$$a[1] \dots a[\frac n 2]$$$, all of the elements will be scanned, so your solution becomes $$$O(n^2)$$$.

    • »
      »
      »
      18 months ago, # ^ |

      Ah, I guess that makes sense. I didn't cover that kind of test case. Thanks!

  • »
    »
    18 months ago, # ^ |
    Rev. 2

    Yeah, I too tried many $$$n\sqrt{n}\log(n)$$$ variants and then tried hashing (not custom), and I'm of the opinion that either I'm a really poor implementor or the limits are too strict.

    • »
      »
      »
      18 months ago, # ^ |

      Asking to remove the log factor of the map is somewhat strict imo. You had to use an array instead of a map to erase the log from the complexity. Usually if a map makes the implementation faster to write, I use it, and for this reason I got a lot of TLEs before realizing that O(1) access takes about 1/4 the time of a map.

      • »
        »
        »
        »
        18 months ago, # ^ |

        If they didn't force removing the log, the problem would definitely be solvable in n^2 (it already was).

        I think they should have just forced linear memory, so it becomes clear this solution is out instead of being on the edge.

        • »
          »
          »
          »
          »
          18 months ago, # ^ |

          Fair enough. Effectively n² is close enough to n^(3/2) log n. Actually, plugging in n, it's ~1.5e9, which is obviously too big.

          • »
            »
            »
            »
            »
            »
            18 months ago, # ^ |

            I solved it with $$$O(n \log n) + O(n \sqrt{n} \log n)$$$.

            I used an $$$O(\log n)$$$ binary search for finding an element in a sorted array, and my solution passes easily.

            I don't understand why you guys couldn't get your solutions to pass.

            Can you please share a solution that is getting TLE?

            • »
              »
              »
              »
              »
              »
              »
              18 months ago, # ^ |
              Rev. 2

              I was using a vector s (s[a][b] = count of the pair (a, b) for a < sqrt(2n)). Actually, some of my solutions passed the time limit but gave WA because of overflow, and they were very close. One was over the time limit because I had min() in the loop condition; moving it out made the code fast enough to get WA on another case. It was kind of a panic moment actually (I still don't know whether it would get TLE after fixing the overflow too).

            • »
              »
              »
              »
              »
              »
              »
              18 months ago, # ^ |

              207656497, if you wish

»
18 months ago, # |

Is there a problem in 1C's editorial? If $$$l_1\leq l_2\leq r_1\leq r_2$$$, shouldn't the groups formed by the two intervals be $$$[l_1,l_2-1],[l_2,r_1],[r_1+1,r_2]$$$? But it doesn't matter :D

»
18 months ago, # |

I want to point out that, for Div1 C, deriving the number of regular bracket strings of length 2n is essentially equivalent to counting simple random walks that start at 0 and end at 0 while never touching -1. The reflection principle is commonly used to calculate this.
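For completeness, here is the standard closed form that the reflection principle yields: the walks of length $$$2n$$$ that do touch $$$-1$$$ biject (by reflecting the prefix before the first touch) with unrestricted walks from $$$-2$$$ to $$$0$$$, of which there are $$$\binom{2n}{n-1}$$$, so the count is the Catalan number

$$C_n = \binom{2n}{n} - \binom{2n}{n-1} = \frac{1}{n+1}\binom{2n}{n}.$$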

»
18 months ago, # |
Rev. 2

Does 1B admit any polylog solution? I guess the number of pairs will be no more than $$$O(n\log n)$$$ or $$$O(n)$$$ if all pairs $$$(a_i, b_i)$$$ are distinct, but I have no idea how to find them in anything better than $$$O(n\log^2n)$$$ :(

»
18 months ago, # |

I'm wondering why I got TLE on Div2 C.

I pretty much did the same thing as the editorial. I have an array storing the index of each edge, and then I do a BFS. At each node, I compare the index of the edge connecting that node to its parent with the index of the edge connecting the parent to the grandparent.

code
  • »
    »
    18 months ago, # ^ |

    nvm, I figured out why: apparently I was creating vectors of size 2e5 for every testcase. So, as it turns out, creating a vector of size n really is O(n).

    • »
      »
      »
      18 months ago, # ^ |

      "The sum of n over all testcases is 2e5." But if you have 10,000 testcases and make an array of length 2e5 each time, the total complexity is 2e5 * 1e4 = 2e9, which is a big overhead considering that with t = 1e4 the average n is 20.

      • »
        »
        »
        »
        18 months ago, # ^ |

        207786285

        Is this the reason why my solution gets TLE too? (I tried to maintain an ans value (current.first in the submission) along with the edge number for each node, and to update the ans every time a node is fully "explored".)

        • »
          »
          »
          »
          »
          18 months ago, # ^ |

          It's possible. Just try changing resize(N) to resize(n+1) and see if it works.

          • »
            »
            »
            »
            »
            »
            18 months ago, # ^ |
            Rev. 2

            Holy shit, that worked. Thank you so much. I would have wasted the entire week trying to find out why if it wasn't for your comment.

            Was the TLE happening because the complexity was effectively more like O(n*t), where t is the number of tests in one big test case? Or is making such a long array for each test case just inherently very slow?

            • »
              »
              »
              »
              »
              »
              »
              18 months ago, # ^ |

              Your complexity is O(maxn*t), and with t = maxt you hit the worst case. Btw, you can check the complexity of resize in the docs (it's linear in the difference from the current size).

»
18 months ago, # |

F can be solved with lazy propagation on kinetic segment tree: code

»
18 months ago, # |

Really liked Div1 C and D. Also, I think there is a simpler way to implement Div1C using a segment tree instead of XOR hashing for the union part of the solution. I was close, but could not solve it during the contest though :(

»
18 months ago, # |

Would someone be able to check why my solution to Div2 D TLEs on case 8?

It's basically the same as the official solution and runs in $$$O(n\sqrt{n})$$$, so I'm unsure why it TLEs.

»
18 months ago, # |

Can somebody debug my problem B code and tell me where I'm making a mistake? Approach: I stored the max frequency of each number of a and b in mpa and mpb. After that, I calculated the answer as the sum of the frequencies from a and b.

code
  • »
    »
    18 months ago, # ^ |

    You've got to keep the counts of the longest run of equal integers: for every integer in a and b, find the maximum length of a contiguous block of equal elements, do this for both arrays, and in the end store the sum of those lengths for each integer across both arrays. Why are you using stacks?

    • »
      »
      »
      18 months ago, # ^ |

      I used a stack to count the max frequency of each number present in array a. Btw, I figured out the issue; with minor changes my code got accepted, if you want to have a look:

      code
      • »
        »
        »
        »
        18 months ago, # ^ |

        Stacks + maps are pretty heavy; I guess the runtime will stay around 500ms anyway.

»
18 months ago, # |

If only I had proved my solutions :(

»
18 months ago, # |

What is the time complexity of my solution, 207651808?

First, I iterate through the values of a[i]; this part is $$$O(n)$$$.

Then, I iterate through the values of a[i] * a[j]; up to this point it's $$$O(n \log n)$$$.

After that, I iterate through the indices i that have value a[i] and use binary search (adding another log) to count the number of satisfying indices j. Here I optimized by iterating through the indices j instead if there are fewer of them.

Somehow this passed systests (actually during the systests I saw it TLE on pretest 3, but it was AC when I checked again this morning).

»
18 months ago, # |
Rev. 2

.

  • »
    »
    18 months ago, # ^ |

    I am having difficulties understanding the editorial; can you help me out a bit? Thanks!

    • »
      »
      »
      18 months ago, # ^ |
      Rev. 2

      .

      • »
        »
        »
        »
        18 months ago, # ^ |

        Yes, that makes sense. I understood the equation you gave, but I am still confused by the equations given in the editorial for $$$a_i=a_j$$$ and $$$a_i>a_j$$$. Can you explain how those two equations work?

        • »
          »
          »
          »
          »
          18 months ago, # ^ |
          Rev. 3

          .

          • »
            »
            »
            »
            »
            »
            18 months ago, # ^ |

            Assuming you understood the fr[a[i]][a[i]^2 - b[i]] part of the equation, notice that this expression overcounts some cases. Say a[i] = 2 and b[i] = 2 for some i; then a[i] * a[i] = b[i] + b[i], so the pair (i, i) is counted as good. Since the problem only asks for good pairs with i < j, we need to subtract all such cases. What do they look like? We have a[i] * a[i] = b[i] + b[i] iff a[i]^2 = 2 * b[i] iff b[i] = a[i]^2 / 2, so we subtract fr[x][x^2 / 2] for every 1 <= x <= lim, and that's where the numerator of the equation comes from. We then divide the numerator by 2 because every good pair is counted twice: if (i, j) is counted, so is (j, i), and we need only one of them.
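In symbols, the reasoning above says the case $$$a_i = a_j = x$$$ contributes

$$\frac{\left(\displaystyle\sum_{i:\,a_i = x} fr\big[x\big]\big[x^2 - b_i\big]\right) - fr\big[x\big]\big[x^2/2\big]}{2},$$

where the sum counts ordered pairs (including the self-pairs with $$$b_i = x^2/2$$$), the subtracted term removes those self-pairs (it exists only when $$$x^2$$$ is even), and the division by 2 turns ordered pairs into unordered ones.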

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can anyone explain what's wrong with my submission for Div2B? https://codeforces.net/contest/1831/submission/207612809

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

I was so sad that I could not solve B during the contest, but I just realized that the values in arrays $$$a$$$ and $$$b$$$ are $$$\le 2n$$$.

  • »
    »
    18 месяцев назад, # ^ |
      Проголосовать: нравится +13 Проголосовать: не нравится

    The fact that $$$a_i, b_i \le 2n$$$ doesn't really matter; you can just coordinate-compress the values into that range anyway.

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

In Div2 D, why does this solution give TLE? Let's iterate over the unique pairs $$$(a_i, b_i)$$$, and for each $$$a_i$$$ iterate over all distinct values of $$$a_j$$$; from the known $$$a_i$$$, $$$b_i$$$ and the fixed $$$a_j$$$ we can compute the required $$$b_j$$$, and then count the matching pairs with an unordered_map with a custom hash in average $$$O(1)$$$.

To fix the value of $$$a_j$$$, consider two cases for $$$a_i$$$:

  1. $$$a_i > \sqrt{2n}$$$: since $$$a_i \cdot a_j$$$ must be at most $$$2n$$$ (as shown in the editorial), the possible values of $$$a_j$$$ lie in $$$[1, \sqrt{2n}]$$$, so for each such $$$a_i$$$ we can try all candidate values of $$$a_j$$$ in $$$O(\sqrt n)$$$.

  2. $$$a_i \le \sqrt{2n}$$$: here $$$a_j$$$ can be anywhere in $$$[1, n]$$$, but all pairs with $$$a_j > \sqrt{2n}$$$ are already covered by case 1, so we only need to iterate over the values in $$$[1, \sqrt{2n}]$$$.

Hence the overall complexity is $$$O(n\sqrt{2n})$$$.

please help Gheal

»
18 месяцев назад, # |
  Проголосовать: нравится +5 Проголосовать: не нравится

Am I the only one who solved Div2 C using a lazy segment tree in $$$O(n \log n)$$$?

»
18 месяцев назад, # |
Rev. 2   Проголосовать: нравится +11 Проголосовать: не нравится

In Div1D, the maximum size of a connected component is $$$O(\sqrt n)$$$. But what's the exact upper bound of it? Theoretically, it's $$$\sqrt{3n}\approx 774$$$, but my submission, where I suppose it doesn't exceed 400, got accepted. Can anyone prove it or hack it?

»
18 месяцев назад, # |
Rev. 3   Проголосовать: нравится +11 Проголосовать: не нравится

I have been wondering for the past two days how on earth my submission for Div. 2 C passed the tests and didn't get TLE: https://codeforces.net/contest/1831/submission/207638978

In a test like this:

1

200000

1 2

1 3

1 4

1 5

1 6

1 7

...

...

...

1 199999

1 200000

The only explanation I can think of is that this corner case isn't among the test cases, or that there is some hidden information in the constraints I didn't notice. But I don't think so.

»
18 месяцев назад, # |
  Проголосовать: нравится +5 Проголосовать: не нравится

I think that I have trouble understanding the first code of 1F.

The tutorial says that we should subtract the contribution of some lines, but in the code, the only update is adding new contribution.

Maybe I have made some stupid mistakes. Can anyone please explain it? QaQ

»
18 месяцев назад, # |
  Проголосовать: нравится +3 Проголосовать: не нравится

nice contest

»
18 месяцев назад, # |
  Проголосовать: нравится +3 Проголосовать: не нравится

I'm new to hashing. In Div1 C, is there a chance of a wrong answer when using randomly generated numbers? And can we choose some numbers on our own to avoid that situation?

»
18 месяцев назад, # |
  Проголосовать: нравится +13 Проголосовать: не нравится

Do you know a solution for Hyperregular Bracket Strings without randomization?

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

https://codeforces.net/contest/1831/submission/207616298 Can anyone help me with the runtime error in my submission? I am not able to figure out the reason for it.

»
18 месяцев назад, # |
  Проголосовать: нравится +5 Проголосовать: не нравится

In the solution of Div1 C, for case 2 (partially overlapping intervals), shouldn't the groups formed be $$$[l_1, l_2-1]$$$, $$$[l_2, r_1]$$$ and $$$[r_1+1, r_2]$$$ instead of $$$[l_1, l_2-1]$$$, $$$[l_2, r_2]$$$ and $$$[r_2+1, r_1]$$$? Gheal

»
18 месяцев назад, # |
  Проголосовать: нравится +24 Проголосовать: не нравится

In Div1 C, I spent most of my time during the contest trying to use a sweep line to construct a tree from the hierarchy of intervals. I have read the hints in the tutorial; it's mentioned there that constructing that tree is difficult. Gheal, do you know an algorithm that constructs it, or could anyone point me to useful resources?

  • »
    »
    18 месяцев назад, # ^ |
      Проголосовать: нравится +8 Проголосовать: не нравится

    Sorry for the late reply. If I'm not mistaken, Vladithur had a deterministic solution while testing, I've reuploaded it here.

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For the solution code of Div 2E/1C, if I change LLONG_MAX to LONG_MAX in uniform_int_distribution<ll> rnd(0,LLONG_MAX), I get WA on test case 4, which is a big-number test case where n = 300000 and k = 300000 (207948273) (I submitted multiple times and it's always WA on test case 4). When I print the hash values, they all seem to be as random and large as the solution code.

Can anyone explain why this happens? Thanks in advance.

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Problem C:

vector<pair<int,int>> edg[NMAX]; ... edg[u].push_back({v,i}); edg[v].push_back({u,i});

How does this work? Why do we push a new pair into the vector element?

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

There is an alternative solution for Div. 2 C, where you essentially simulate the process. Keep a std::set of all the edges you can currently draw and a pointer to where you are in the edge list. Using the lower_bound function we can find the next edge we will encounter; when we reach it, we update the set of drawable edges and move the pointer to the index of the processed edge plus one. When the largest element in the set is smaller than the pointer, we reset the pointer to 0 and add 1 to the answer. Each edge is processed exactly once, so the complexity is O(n*log(n)). You can see my submission here: https://codeforces.net/contest/1831/submission/207936976

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For the Div. 2 C problem, I think the statement about each edge is not accurate; when I ran the code, it seemed that the edge below this one is the one being checked. [problem:1831C]

»
18 месяцев назад, # |
Rev. 3   Проголосовать: нравится +3 Проголосовать: не нравится

Can someone explain to me why the computation of the dp in Div1 D is $$$O(n\sqrt n)$$$? It's hard for me to prove, even though I know how to prove that the time complexity of the "7-th trick" is $$$O(n^2)$$$.

  • »
    »
    18 месяцев назад, # ^ |
      Проголосовать: нравится +8 Проголосовать: не нравится

    In the "7-th trick" the time complexity is $$$\sum siz(v) \cdot siz(u) \le n^2$$$ over all dp merges. In our case, we have time complexity $$$\sum \sqrt{siz(v)} \cdot \sqrt{siz(u)}$$$ so we have the sum of square roots of these values. Let's define $$$\sqrt{siz(v) \cdot siz(u)}$$$ as just a value $$$a_i$$$. We know $$$\sum_{i=1}^n a_i^2 \le n^2$$$ (there are actually $$$n-1$$$ summands but it doesn't matter). Dividing both sides by $$$n$$$ and taking a square root we get $$$\sqrt{\frac{\sum_{i=1}^n a_i^2}{n}} \le \sqrt{n}$$$. But on the left-hand side we have the quadratic mean which is at least the arithmetic mean by the means inequality. So $$$\frac{\sum_{i=1}^n a_i}{n} \le \sqrt{\frac{\sum_{i=1}^n a_i^2}{n}} \le \sqrt{n}$$$. $$$\sum_{i=1}^n a_i \le n \sqrt{n}$$$ follows which is exactly what we wanted.

    • »
      »
      »
      18 месяцев назад, # ^ |
        Проголосовать: нравится 0 Проголосовать: не нравится

      Thanks for your generous help! That helps a lot. The proof is definitely right, and I know exactly how to implement it in code. However, the time complexity of the standard method is $$$\sum_{i=1}^n \min(siz[v],lim)\times \min(siz[u],lim)$$$, where $$$lim = \sqrt{3n}$$$. If possible, I hope you can provide a relevant proof. Thank you very much!

      • »
        »
        »
        »
        18 месяцев назад, # ^ |
        Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

        Oh, wait, is it? I think in trick 7 it is just the product of sizes. But even if it's not, it is easy to see that the sum of the products of sizes is $$$O(n^2)$$$, because you can think of the product of sizes as uniting two sets and listing all pairs of elements where one element is from the first set and the other is from the second set. This way, every pair of elements will be listed at most once in total (exactly at their LCA), so the total sum is at most $$$n \cdot (n-1) / 2$$$.

        And well, taking min with $$$lim$$$ can only decrease the sum but it is actually irrelevant.

        • »
          »
          »
          »
          »
          18 месяцев назад, # ^ |
          Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

          Is it irrelevant? I believe that if you merge in $$$min(sz_u,lim) \cdot min(sz_v,lim)$$$ the time complexity becomes $$$O(n \cdot lim)$$$. I don't know how to prove it tho but perhaps it's something similar to the $$$lim = \sqrt{n}$$$ case.

      • »
        »
        »
        »
        18 месяцев назад, # ^ |
          Проголосовать: нравится +8 Проголосовать: не нравится

        Ok, now I understand your question. I guess the solution in the editorial is a bit different from what I read there. Without loss of generality, we can assume that our tree is binary, because we kinda assume it anyway: we merge all the children of a vertex one by one, which is equivalent to having a bamboo in which we add them one by one to the tree.

        If you take $$$\sum \min(sz_v,lim) \cdot \min(sz_u, lim)$$$, then consider $$$v_1, v_2, \ldots, v_k$$$ to be all the vertices $$$u$$$ in the graph such that all children of $$$u$$$ have size $$$\le lim$$$, but this is not true for the parent $$$pr_u$$$ of $$$u$$$ (so $$$pr_u$$$ has some child, either $$$u$$$ or not, that is large). Then no $$$v_i$$$ can be an ancestor of any $$$v_j$$$ (otherwise its parent would have a big child, and then so would $$$v_i$$$). It means that the subtrees of all the $$$v_i$$$ are disjoint, so $$$\sum sz_{v_i} \le n$$$. Inside every subtree of $$$v_i$$$, by trick 7, we work in time $$$sz_{v_i}^2$$$. But both children of $$$v_i$$$ have sizes $$$\le lim$$$, so $$$sz_{v_i} \le 2 lim+1$$$. Then inside all these subtrees we work in time $$$\sum_{i=1}^k sz_{v_i}^2 \le \sum_{i=1}^k (sz_{v_i} \cdot (2 lim+1)) = (2 lim+1) \sum_{i=1}^k sz_{v_i} = O(lim \cdot n) = O(n \sqrt{n})$$$.

        We are left with all the bigger subtrees that do not lie inside the $$$v_i$$$ subtrees (call all these vertices $$$u_i$$$). There are two types of vertices $$$u_i$$$: either at least one of their children is some $$$v_l$$$, or both of their children are some $$$u_j$$$ and $$$u_k$$$. In the first case, we work in time $$$\le sz_{v_l} \cdot lim$$$, which sums up to $$$\sum sz_{v_l} \cdot lim \le lim \cdot \sum sz_{v_l} \le lim \cdot n = O(n \sqrt{n})$$$. In the second case, we work in time $$$lim^2$$$ per vertex, but there are at most $$$O(n / lim)$$$ such vertices, so we work in time $$$O(n \cdot lim) = O(n \sqrt{n})$$$ in total for them too.

        • »
          »
          »
          »
          »
          18 месяцев назад, # ^ |
            Проголосовать: нравится 0 Проголосовать: не нравится

          That is right! Amazing! Thanks you!!! :)

        • »
          »
          »
          »
          »
          8 месяцев назад, # ^ |
            Проголосовать: нравится 0 Проголосовать: не нравится

          Sorry for necroposting, but

          but there are at most $$$O(n/lim)$$$ such vertices

          Why is this true?

  • »
    »
    17 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    Take a look at this comment; it explains the exact dp formulation used in the solution.

    The complexity is $$$O(nk)$$$ where $$$k=\sqrt n$$$. The proof is based on iterating over all pairs whose preorder numbers differ by at most $$$2k$$$.

    You can picture it as follows: when you merge two subtrees, in the left subtree you "keep" only the last $$$k$$$ nodes and in the right subtree you "keep" the first $$$k$$$ nodes, so their preorder numbers differ by at most $$$2k$$$; if a subtree contains fewer than $$$k$$$ nodes, just iterate through all of them. That is why you go through $$$\min(sz[u],lim)$$$ per subtree.

    For each $$$i$$$ we have at most $$$2k$$$ pairs of the form $$$(j,i)$$$ with $$$j<i$$$, so over all $$$1\le i\le n$$$ in total we have $$$O(n \cdot 2k) = O(n\sqrt n)$$$.

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

I think the editorial of Div2 D / Div1 C is wrong in case 2, when $$$l_1 < l_2 \le r_1 < r_2$$$.

The three groups formed by these two intervals are:

  • $$$[l_1, l_2 - 1]$$$
  • $$$[l_2, r_1]$$$
  • $$$[r_1 + 1, r_2]$$$
»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Hi Gheal & Vladithur, the Python solution in the editorial for problem Div2 D (1830B — The BOSS Can Count Pairs) gets TLE. Can you please fix it? https://codeforces.net/contest/1831/submission/208150150

  • »
    »
    18 месяцев назад, # ^ |
      Проголосовать: нравится +8 Проголосовать: не нравится

    Never use Python 3; use PyPy3 64-bit instead. It is about 5 times faster.

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Still cannot understand the description for 1830F. If there is just one activated point covered by two intervals, the cost of each interval should be the coefficient of that point.

For the first example, however, the last point (30) appears in two intervals ([2, 8] and [3, 8]), but 30 is only added once to the result.

Also, for the second example, why don't we activate the last point? It is covered by [1,6] interval.

  • »
    »
    18 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    2 8 in the first test case and 1 6 in the second test case aren't intervals; they're n m, i.e. the number of intervals and number of coordinate points for the test case.

»
18 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can someone explain what "connected component" means in Div1 D, please? If the graph is a tree, shouldn't the connected component have size $$$N$$$?

  • »
    »
    18 месяцев назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    A maximal region with nodes of the same color.

    • »
      »
      »
      18 месяцев назад, # ^ |
      Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

      Can you further explain why the "connected component" has $$$loss \geq \displaystyle\frac{k(k+1)}{2}$$$, please?

      • »
        »
        »
        »
        18 месяцев назад, # ^ |
          Проголосовать: нравится 0 Проголосовать: не нравится

        It's because every path within a connected component contains exactly one color, so the mex is surely not 2.

»
14 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Sorry for necroposting.

I tried doing Div1 C, and I found an $$$O(k\log^{*}k)$$$ solution and a linear one, so I thought that maybe someone would find it interesting.

If you look at the prefix sums where open parentheses are $$$+1$$$ and closed parentheses are $$$-1$$$, then a restriction $$$[l, r]$$$ says that $$$s_r = s_{l - 1}$$$, and every $$$s$$$ in between must be greater than or equal to $$$s_{l - 1}$$$ (that is, $$$\forall i, l - 1 \leq i \leq r, s_{l - 1} \leq s_i$$$). If we look at how ranges overlap, if they do so partially, then we get pretty much the same as in the editorial: $$$s_{l_1 - 1} = s_{r_1} = s_{l_2 - 1} = s_{r_2}$$$. The equality $$$s_{l_1 - 1} = s_{l_2 - 1}$$$ is obtained with an $$$a \leq b \land b \leq a \implies a = b$$$ argument.

The idea of the solution is to have the ranges, do a sweep line from left to right, and try to store the equivalence classes, while also trying to compute the answer.

The way I store the equivalence classes is with a DSU (the linear solution is obtained from the DSU one after figuring out that the DSU is actually useless) and a stack. For each equivalence class I store the following: the position of the last element, the position of the first element, the number of open ranges (used to figure out when you're done with a class and no longer need to store it), and the number of already processed elements (that is, the total length of the included sub-intervals). The classes in the stack are ordered, from bottom to top, increasingly by the position of their last element.

When you open a range, you add it as a standalone equivalence class. When you close an interval, you merge all the equivalence classes from the stack that can be merged by the current interval: you pop from the stack as long as the last element of each class is included in the current range, and merge the popped classes. While popping, you also update the answer: suppose the last elements of the popped classes are $$$x_1, x_2, x_3, ..., x_n$$$; then you multiply by $$$f(x_2 - x_1 - e_1) \cdot f(x_3 - x_2 - e_2) \cdot ... \cdot f(x_n - x_{n - 1} - e_{n - 1})$$$, where $$$f(N)$$$ is the number of bracket sequences of length $$$N$$$ and $$$e_i$$$ is the number of erased elements from the corresponding equivalence class.

The count is used to see when you're done with an equivalence class for good. That is, after you merged all the classes and decreased the count with 1, you see if you push it back to the stack, or not, and then update the erased elements for the top element in the stack.

I think that this solution is correct, but I'm not 100% sure. When it comes to cases like "what if you have multiple ranges that share the same endpoints? Don't you want them sorted by the other end, or by their length, or something similar?", it doesn't matter, because if you insert the smaller element first, then you will merge it with the larger ones and they will get trimmed to the last element, and if you do it the opposite way, then you will add the smaller one to the answer and subtract it from the larger range.

Also, to precompute the function $$$f$$$, you can just compute the factorials, take the inverse of the largest factorial, then derive all the other inverses from it, everything in $$$O(maxN)$$$.

I feel like I rambled here about a stack and a DSU, because the underlying cases are cumbersome and I'm not entirely sure why my solution works, so if you understood what I was trying to say and have an easier way to explain, or any other cases that should be mentioned, please let me know.

Here are the two accepted solutions:

  • $$$O(maxN + n + klog^{*}k)$$$ (a little bit cheated, I precalculated the factorials for each test, so there is also a $$$tlogn$$$ term): link
  • $$$O(maxN + n + k)$$$: link
»
4 месяца назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

In the solution of "1830A — Copil Copac Draws Trees", the spelling "Intially" at the start of the third line is wrong; it should be "Initially".

»
3 месяца назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Hey everyone, could anyone walk me through why the answer for this test case is expected to be 3 by the jury and not 2? 6 2 2 1 2 2 1 1 2 1 1 2 1. This is the 4th case in hidden testcase 2.