MikeMirzayanov's blog

By MikeMirzayanov, 4 years ago, translation, In English

Hello!

ICPC Southern and Volga Russian Regional Contest (Northern Eurasia) 2020 ended on November 15. This year the competition took place online. 108 teams competed, many of which received an invitation based on their results in the Qualification Contest.

On Dec/25/2020 14:35 (Moscow time), the online mirror 2020-2021 ICPC, NERC, Southern and Volga Russian Regional Contest (Online Mirror, ICPC Rules) will start.

Hope you enjoy the problems. In this contest I play the role of Chief Judge, and the jury consists of ex-participants of the ICPC from Saratov and jury members from other cities: adedalic, ashmelev, BledDest, DmitryKlenov, DStepanenko, elena, KAN, kuviman, MikeMirzayanov, pashka, PavelKunyavskiy, Дмитрий Мещеряков, Герман Наркайтис. Many thanks to all of them!

I send special rays of gratitude to testers: Merkurev, Um_nik, romanasa, josdas, budalnik, Perforator, Oleg_Smirnov, IlyaLos, Supermagzzz, Stepavly, Igor_Kudryashov, HolkinPV, Edvard, le.mur!

I invite ICPC teams and individual participants of Codeforces competitions to take part!

Of course, the competition will be unrated. I ask the participants of the official competition to refrain from participating in the mirror and from discussing the problems until it ends.

Since the audience of participants is wider this year (due to the online format), we also selected problems that are a little more accessible than in previous years. If you are from a top team aiming for a medal in the ICPC Finals, it may be more fun for you to take part in this contest individually.

Good luck!


»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Wow! I always wanted to take part in an ICPC-level contest in real time. Thank you so much, MikeMirzayanov. Good luck to all.

»
4 years ago, # |
  Vote: I like it -12 Vote: I do not like it

What should the size of the team be? I am assuming it is 3.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it -11 Vote: I do not like it

    In such contests it is not interesting to participate alone.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it -11 Vote: I do not like it

      Yes, I am forming a team with some friends... I wanted to know what the size should be. I guess there should be 3 people in a team... Am I right?

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it -11 Vote: I do not like it

        Yes, the ideal team size is three people, but you can also participate as an incomplete team (2 people) or individually. More than 3 people is not allowed.

»
4 years ago, # |
  Vote: I like it -35 Vote: I do not like it

Something went wrong in the virtual contests section!

Please fix it.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    It is very hard to run a website this big, and they are handling it very well. You should wait for some time before posting this kind of comment.

»
4 years ago, # |
  Vote: I like it +33 Vote: I do not like it

Hey,

I'm wondering if there is a strict limit on the number of teammates allowed? My friends and I are looking to do this contest for fun, and since it's unrated, we're wondering if a group larger than 3 people is okay.

Thanks!

»
4 years ago, # |
  Vote: I like it +60 Vote: I do not like it

Maybe a bit irrelevant, but Merry Christmas, Codeforces community!!!

»
4 years ago, # |
  Vote: I like it +21 Vote: I do not like it

Stuck in the queue with H :(

»
4 years ago, # |
  Vote: I like it +187 Vote: I do not like it

Mike: If you are from the top team that claims a medal in the ICPC Finals, then it may be more fun for you to take part in this contest personally.

Chinese LGMs: Naah, let's solve it all together in 2 hours

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +26 Vote: I do not like it

    On the other hand, the blog post title suggests doing it as a team (in Russian), and it is a bit optimistic to expect people to put their time into reading the post itself :)

»
4 years ago, # |
  Vote: I like it +31 Vote: I do not like it

Maybe the contest could have been extended by 10 minutes due to the long queue :(

»
4 years ago, # |
  Vote: I like it +2 Vote: I do not like it

How to solve J?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +18 Vote: I do not like it

    Find the minimum spanning tree.

    There are 3 cases for its maximum edge weight: equal to K, less than K, and greater than K.

    If the maximum edge weight is greater than K, then all MST edge weights greater than K have to be considered (each such edge must be lowered to K).

    Otherwise, in the other 2 cases, the answer is min(abs(edge weight - K)) over all edge weights.
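A minimal Kruskal sketch of this case analysis (assuming the problem J setup where each unit change of an edge weight costs one operation; the class and function names here are my own, not from a reference solution):

```python
# Hypothetical sketch: build an MST with Kruskal, then either pay to lower
# every tree edge above k, or (if the whole MST already fits under k) pay to
# bring the single closest edge in the graph to exactly k.

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def min_operations(n, edges, k):
    """edges: list of (w, u, v) with 0-based vertices; graph is connected."""
    dsu = DSU(n)
    cost = 0
    max_in_mst = 0
    for w, u, v in sorted(edges):          # Kruskal: smallest weights first
        if dsu.union(u, v):
            max_in_mst = max(max_in_mst, w)
            cost += max(0, w - k)          # lowering tree edges above k is forced
    if max_in_mst > k:
        return cost
    # Whole MST is <= k: some edge must be raised or lowered to exactly k,
    # and we may pick the cheapest adjustment over *all* edges.
    return min(abs(w - k) for w, _, _ in edges)
```

On the counterexample given further down this thread (4 vertices, k = 10, edges 1-2 and 1-3 of weight 1, 2-3 of weight 11, 3-4 of weight 12), this returns 2: the MST takes both weight-1 edges plus the weight-12 edge, which is lowered by 2.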

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I first joined the edge which is closest to k and then applied MST... but it's giving a wrong answer. Why?

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

      Because edges that are smaller than k basically won't be counted, you should use them as much as possible.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Can you give an example graph on which it will fail, please?

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Check this test:

          1
          4 4 10
          1 2 1
          1 3 1
          2 3 11
          3 4 12
          

          If you join the edge 2 => 3 with cost $$$11$$$, the answer will be $$$3$$$. The correct answer is just $$$2$$$.

»
4 years ago, # |
  Vote: I like it +13 Vote: I do not like it

When will the editorial be available?

»
4 years ago, # |
  Vote: I like it +6 Vote: I do not like it

What is the solution idea of problem J (Road Reform)?

thanks in advance & Merry Christmas.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +4 Vote: I do not like it

    The trick from Prim's MST + some greedy.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it -8 Vote: I do not like it

      Stuck with Kruskal the whole time :(

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +11 Vote: I do not like it

        I solved it using Kruskal.

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it +11 Vote: I do not like it

        We can also solve it using Kruskal.

        You just need to find the minimum spanning tree and apply some greedy conditions.

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it +4 Vote: I do not like it

    There is a property that the Maximum Edge Weight in the MST of a graph is the minimum among all possible spanning trees of that graph. (Not sure if it is true when you find the MST using Prim's Algorithm, but this is definitely true when you find it using Kruskal's Algorithm)

    More Details
»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Solution for M?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +4 Vote: I do not like it

    Divide the sets into "big" and "small" sets (more or less than $$$\sqrt{N}$$$ elements). Then come up with three different methods for checking whether two big sets are similar, whether two small sets are similar and whether a big and a small set are similar.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      How do you do the "small vs small" part without introducing an additional log factor to the complexity? My idea was to generate all pairs of elements for each set and then find a pair that occurs more than once, but IIUC this can't be done in linear time unless we use some hashsets.

      I have a slight suspicion that if the difference between my TL solution and my AC solution is "let's optimize by discarding all unique input numbers right away", then I didn't implement the model solution :)

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 2   Vote: I like it +26 Vote: I do not like it

        but IIUC this can't be done in linear time unless we are using some hashsets.

        Recall the data structure that behaves like an array but has the additional operation of making all entries 0 in amortized $$$O(1)$$$ time. You can achieve this by memorizing the number of times the array has been reset and, for each entry, the number of resets when that entry was last accessed.

        Basically you have an array of pairs $$$(u, v)$$$ and you want to check whether there are duplicates in linear time. For every $$$u$$$, make a list of the $$$v$$$s such that the pair $$$(u, v)$$$ exists. Use one global "array with resets": for each $$$u$$$, first reset the array, then scan through the pairs containing $$$u$$$ to check which $$$v$$$ occurs multiple times.

        Credit goes to .I. for inventing this part of the solution.
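A minimal sketch of the described structure and its use for finding a duplicate pair in linear time (the class and function names are mine, not from the referenced solution):

```python
# Sketch of an "array with O(1) amortized reset": entries carry a version
# stamp, and resetting just bumps a global epoch counter.

class ResettableArray:
    def __init__(self, n):
        self.values = [0] * n
        self.stamp = [0] * n   # epoch when the entry was last written
        self.epoch = 0

    def reset(self):           # O(1): no entry is actually cleared
        self.epoch += 1

    def get(self, i):
        return self.values[i] if self.stamp[i] == self.epoch else 0

    def set(self, i, x):
        self.values[i] = x
        self.stamp[i] = self.epoch

def find_duplicate_pair(pairs, max_v):
    """Return some (u, v) occurring twice in `pairs`, or None, in O(len(pairs))."""
    by_u = {}
    for u, v in pairs:
        by_u.setdefault(u, []).append(v)
    seen = ResettableArray(max_v + 1)
    for u, vs in by_u.items():
        seen.reset()                 # wipe the marks left by the previous u
        for v in vs:
            if seen.get(v):
                return (u, v)
            seen.set(v, 1)
    return None
```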

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it +1 Vote: I do not like it

          Can't you solve it in O(n) using radix sort?

          • »
            »
            »
            »
            »
            »
            4 years ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            Probably? I am not sure whether the constant is good enough though (in particular, whether it is faster than hashsets).

            • »
              »
              »
              »
              »
              »
              »
              4 years ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              https://codeforces.net/contest/1468/submission/102338964 Can you please tell me why it is giving a runtime error on test 46? I am not able to figure out my mistake.

              • »
                »
                »
                »
                »
                »
                »
                »
                4 years ago, # ^ |
                  Vote: I like it +2 Vote: I do not like it

                Have you tried the following:

                • Write a very naive and simple solution to the problem (maybe exponential complexity) (if you can, you may also just copy someone else's solution)
                • Write a program to generate thousands of small (!) test cases
                • Using those two, find a test case where your program gives the wrong answer
                • Use print statements or a debugger to see where exactly your program does the wrong thing.

                98% of WAs and REs can be resolved this way. People here don't have the time to delve into every code posted here, it's much harder to debug somebody else's code and being able to debug your own code is a valuable skill. It is also a very routine process that can be learned much faster than problem solving and algorithms.
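The checklist above can be sketched as a small stress-testing loop; `fast`, `brute`, and the generator here are placeholders you would replace with your own solutions:

```python
# Hypothetical stress-testing loop: run a fast (possibly wrong) solution
# against a slow brute force on small random tests until they disagree.
import random

def gen_small_test(rng):
    # Small tests make the found counterexample easy to debug by hand.
    n = rng.randint(1, 6)
    return [rng.randint(1, 10) for _ in range(n)]

def stress(fast, brute, iterations=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(iterations):
        test = gen_small_test(rng)
        if fast(test) != brute(test):
            return test        # a counterexample to debug with prints
    return None                # no disagreement found
```

For example, `stress(max, lambda a: sorted(a)[-1])` finds no disagreement, while stressing a genuinely wrong `fast` quickly returns a failing test.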

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Nice, thank you!

          Right, I stopped thinking in this direction after hitting "I don't have enough memory for an N^2 bucket sort array".

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      I came up with this, but didn't figure out how to do the "big vs small" part.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +14 Vote: I do not like it

    I have an $$$O(n \sqrt{\frac{n}{64}})$$$ bitset solution. You can find an implementation of this algorithm in Python here: 102487844.

    The key idea is to handle the most commonly occurring elements first (let's say occurring $$$B=100$$$ times or more) using a bitset. After you have taken care of the common elements, you can forget about them and keep only the uncommon elements.

    Step 0. Use a hashtable or sorting to split the elements into uncommon and common elements.
    Step 1. The algorithm to handle common elements (runs in $$$O(\frac{n ^ 2}{64 B})$$$)

    Let $$$m$$$ be the number of common elements. For each set make a bitset of size $$$m$$$ that marks the commonly occurring elements in that set. Now go over every element $$$a$$$ (both common and uncommon), and for each $$$a$$$ go over all sets that $$$a$$$ is in. You then use the bitsets to find some common element $$$b \neq a$$$ that occurs in 2 or more of these sets.

    Step 2. The algorithm to handle only uncommon elements (runs in $$$O(n B)$$$)

    Go over all sets $$$A$$$, and then go over every element $$$a$$$ in $$$A$$$. Finally, go over all sets $$$B$$$ that contain $$$a$$$ and check whether $$$A$$$ and $$$B$$$ contain a common element (other than $$$a$$$) using an array of size $$$n$$$.

    Finally, note that by letting $$$B \approx \sqrt{n/64}$$$ you get time complexity $$$O(n \sqrt{\frac{n}{64}})$$$.

»
4 years ago, # |
  Vote: I like it +2 Vote: I do not like it

Is M something related to bitsets or bipartite graphs?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +24 Vote: I do not like it

    You basically need to find a cycle of length $$$4$$$ in a bipartite graph, and finding a cycle of length $$$4$$$ in a general graph can be solved in $$$O(M\sqrt{M})$$$ by enumerating in a certain way based on degree of vertices.

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 2   Vote: I like it +8 Vote: I do not like it

      We reduced it down to exactly this. Can you share some resource for learning how to find a cycle of length 4 in $$$O(M\sqrt{M})$$$?

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 4   Vote: I like it +72 Vote: I do not like it

        We consider a stronger problem: given an undirected simple graph $$$G_0$$$, count the number of cycles of length $$$4$$$.

        Sort the vertices in degree-nonascending order, say $$$u_1, u_2, \ldots , u_n$$$ from left to right (that is, $$$u_1$$$ has the largest degree, and $$$u_n$$$ has the smallest).

        Create a new directed graph $$$G_1$$$ whose edges are identical to those of the original undirected graph ($$$G_0$$$), but all pointing to the right (from the larger-degree vertex to the smaller-degree vertex).

        Maintain an array $$$b[]$$$ whose elements are initially $$$0$$$, and a variable $$$\mathrm{answer}$$$, initially $$$0$$$ too.

        enumerate all $$$x \in V$$$,
        $$$\qquad$$$ enumerate all $$$y$$$ which $$$x$$$ can reach it in the directed graph ($$$(x \to y) \in G_1$$$),
        $$$\qquad \qquad$$$ enumerate all $$$z$$$ which $$$y$$$ can reach it in the original graph ($$$(y \to z) \in G_0$$$),
        $$$\qquad \qquad \qquad$$$ if $$$z$$$ is to the right of $$$x$$$ (in the sequence $$$u_1, u_2, \ldots , u_n$$$),
        $$$\qquad \qquad \qquad \qquad$$$ increase $$$\mathrm{answer}$$$ by $$$b[z]$$$,
        $$$\qquad \qquad \qquad \qquad$$$ and increase $$$b[z]$$$ by $$$1$$$.
        $$$\qquad$$$ clear all modified $$$b[\ast]$$$s.

        The idea of the procedure above is: $$$x$$$ is the leftmost vertex in the cycle, and $$$z$$$ is the vertex opposite $$$x$$$ in the cycle. If two $$$y$$$s ($$$y_1, y_2$$$) both reach the same $$$z$$$, the two halves of the $$$4$$$-cycle merge into one full cycle: $$$x \to y_1 \to z$$$ and $$$x \to y_2 \to z$$$.

        About the time complexity: $$$\deg(y)$$$ is added to the total time every time $$$y$$$ is enumerated in the $$$x \to y$$$ loop. Two cases:

        1. $$$\deg(y) \le \sqrt{m}$$$: there are at most $$$m$$$ valid pairs $$$(x, y)$$$ (one per directed edge), each costing at most $$$\sqrt{m}$$$.
        2. $$$\deg(y) > \sqrt{m}$$$: we must have $$$\deg(x) \ge \deg(y) > \sqrt{m}$$$, and because $$$\displaystyle \sum_{i=1}^{n} \deg(i) = 2 m$$$, the number of valid $$$x$$$s is at most $$$2 \sqrt{m}$$$.

        Adding up the two cases, the total is $$$\mathcal O (m \sqrt{m})$$$.

        To solve the original problem (just finding one cycle is enough, but you need to print it), you only need to slightly modify the $$$b[z]$$$ part. See 102311532 for the other details.
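The counting procedure above can be transcribed as a short sketch (my own transcription, not the linked submission; variable names follow the comment):

```python
# Sketch: orient edges from higher-degree to lower-degree vertices, then for
# each leftmost vertex x count pairs of two-step walks x -> y -> z meeting at
# the same opposite vertex z.

def count_four_cycles(n, edges):
    """Count 4-cycles in a simple undirected graph on vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # rank[v] = position in degree-nonascending order (leftmost = largest degree)
    order = sorted(range(n), key=lambda v: -len(adj[v]))
    rank = [0] * n
    for i, v in enumerate(order):
        rank[v] = i
    # directed graph G1: every edge points to the larger rank (to the right)
    out = [[u for u in adj[v] if rank[u] > rank[v]] for v in range(n)]
    b = [0] * n
    answer = 0
    for x in range(n):
        touched = []
        for y in out[x]:            # (x -> y) in G1
            for z in adj[y]:        # (y -> z) in G0
                if rank[z] > rank[x]:
                    answer += b[z]  # pair this half-cycle with earlier ones
                    b[z] += 1
                    touched.append(z)
        for z in touched:           # clear only the modified entries of b
            b[z] = 0
    return answer
```

A 4-vertex cycle has exactly one 4-cycle, and the complete graph $$$K_4$$$ has three.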

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

          Hi! Can you please explain how, in case 1, the number of valid $$$x$$$s is at most $$$m$$$? I think I am confused there. Thanks in advance.

          UPD: Got it, but I think the number of valid $$$x$$$s in case 1 is $$$\deg(y)$$$, and $$$\sum_{\deg(y)<\sqrt{m}} \deg(y)^{2}$$$ is $$$O(m\sqrt{m})$$$. Please correct me if I am wrong.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +27 Vote: I do not like it

    If you consider the values as the other part of the bipartite graph, the problem becomes finding a four-edge cycle.

    And that's a classic problem which takes $$$\mathcal O (m\sqrt{m})$$$ time.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +6 Vote: I do not like it

    You can speed things up using a bitset to run in $$$O(n \sqrt{n/64})$$$ time, see here. But I doubt a brute-force $$$O(n^2 / 64)$$$ bitset solution would run in $$$< 1$$$ s.

»
4 years ago, # |
  Vote: I like it +16 Vote: I do not like it

How to solve A?

»
4 years ago, # |
  Vote: I like it -18 Vote: I do not like it

In J, I first joined the edge which is closest to k (meaning abs(k - edge weight) is minimum) and then applied a normal MST... but it's giving a wrong answer. Why? Any ideas, please?

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it +59 Vote: I do not like it

    Have you tried the following:

    • Write a very naive and simple solution to the problem (maybe exponential complexity) (if you can, you may also just copy someone else's solution)
    • Write a program to generate thousands of small (!) test cases
    • Using those two, find a test case where your program gives the wrong answer
    • Use print statements or a debugger to see where exactly your program does the wrong thing.

    98% of WAs and REs can be resolved this way. People here don't have the time to delve into every code posted here, it's much harder to debug somebody else's code and being able to debug your own code is a valuable skill. It is also a very routine process that can be learned much faster than problem solving and algorithms.


    The question you should be asking is "why should this be correct?". You should be satisfied with a solution once you have found a proof that it is correct, not when you have no obvious counterexample. I see no reason this should be correct. Why do you think it is? Yeah, it seems that it should give good trees most of the time, but there is nothing about this construction that makes it obvious that it always gives the best solution.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it -25 Vote: I do not like it

      This guy has not posted his code here or asked anyone to debug it. He is just explaining his idea and asking whether it's correct or not :P. Dude, you could have understood what he was saying and told him what's wrong with his idea/approach.

      • »
        »
        »
        »
        4 years ago, # ^ |
        Rev. 2   Vote: I like it +39 Vote: I do not like it

        This method can be used equally well to debug ideas.

        I don't think there is any meaningful way to tell "what is wrong with this idea" (because I can't understand why they thought this would be correct) other than giving a countertest. And I have outlined a method of obtaining a countertest above.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |
            Vote: I like it -19 Vote: I do not like it

          Cool. If I try a problem hard enough (write naive solutions, write test cases) and still can't get it and feel something is wrong with my idea, I do not hesitate to ask here (I get valuable suggestions from some very good coders to "debug my idea" here on Codeforces). That's what the mission of Codeforces discussion is.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    The same happened with me as well. The problem with this approach is that the edge you selected as nearest to $$$k$$$ may originally be greater than $$$k$$$. Because of this, some edges whose weight is originally less than or equal to $$$k$$$, which might have given an optimal answer if used, may remain unused.

    The correct solution is:

    1. Construct the MST only from the edges whose weight is less than or equal to $$$k$$$. Let $$$e$$$ be the edge used in the MST whose weight is closest to $$$k$$$.

    2. If everything is connected, you need to check whether there exists an edge whose weight is greater than $$$k$$$ but closer to $$$k$$$ than $$$e$$$. If so, it is better to add this edge to the generated MST as well (even though the original MST was already connected).

    3. If the obtained MST is not connected, consider edges whose weight is greater than $$$k$$$.

    Hope it helps!!

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

How to solve A?

  • »
    »
    4 years ago, # ^ |
    Rev. 3   Vote: I like it +3 Vote: I do not like it

    It is obvious that for any almost increasing sequence (AIS) of size 3, $$$b_1,b_2,b_3$$$, we have two cases:

    1. $$$b_3 \geq b_2$$$
    2. $$$b_3 < b_2, b_3 \geq b_1$$$

    Let us consider the first case. We can iterate $$$a_i$$$ from $$$i = 1$$$ to $$$n$$$, and at each point, find $$$a_j$$$ such that $$$j < i$$$, $$$a_j \leq a_i$$$ and the length of the AIS ending at that point is maximum. If that value of length is $$$L$$$, the new value is $$$L+1$$$.

    Now consider the second case. To do this, we find the greatest $$$j$$$ such that $$$j < i$$$ and $$$a_j > a_i$$$. Now we just need to find $$$a_k$$$ such that $$$k < j$$$, $$$a_i \geq a_k$$$, and the length of the AIS ending at that point is maximum. If that length is $$$L$$$, the new value is $$$L+2$$$.

    Both of these operations can be done by finding the last greater element with a stack, and using a persistent segment tree to find the best length.

    My submission

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 2   Vote: I like it +6 Vote: I do not like it

      I did it without persistence and still in O(NlogN). An almost increasing sequence is just a non-decreasing subsequence with some additional elements: something like indices $$$inc_1 < add_1 < inc_2 < add_2 < inc_3 < inc_4 < inc_5 < add_3 < inc_6$$$ (ordered indices), where the $$$inc$$$ indices correspond to a non-decreasing subsequence and each additional element $$$add$$$ is bigger than both of its neighbours in the ordered index sequence.

      Then we just do DP on the increasing subsequence: dp[n] = max(case 1: no additional element, case 2: there is an additional element j) = max(max(dp[i] + 1, with a[i] <= a[n]), max(dp[i] + 2, with a[i] <= a[n] and i <= j, where j is the biggest index such that a[j] > a[n]; j is the additional element)).

      Then we observe that if there is a $$$k$$$ between $$$i$$$ and $$$j$$$ such that a[i] <= a[k] <= a[n], then $$$i$$$ doesn't need to be accounted for in the second case. Now let a[k] be the biggest value such that k >= j and a[k] <= a[i], and we have dp[n] = max(max(dp[i] + 1, with a[i] <= a[n]), max(dp[i] + 2, with a[k] < a[i] <= a[n])), which can be optimized using a segment tree over the values of dp[i], with a[i] as the position.

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

How to solve H?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

    Each operation removes $$$k-1$$$ elements, so the count of the numbers you want to remove must be divisible by $$$k-1$$$. While this count is greater than $$$k-1$$$, you can bring it down to $$$k-1$$$ by applying operations among those numbers. For the last removal of $$$k-1$$$ elements, some $$$b[i]$$$ must have $$$(k-1)/2$$$ of them on its left and $$$(k-1)/2$$$ on its right, so ultimately you just need to check that at least one $$$b[i]$$$ has at least $$$(k-1)/2$$$ removable elements on the left and on the right. Sorry, my English is not so good :(

»
4 years ago, # |
Rev. 2   Vote: I like it +56 Vote: I do not like it

I is such a troll problem; it could be solved in 5 minutes. Browsing through various solutions, it seems that many people significantly overcomplicated it (e.g. the top-2 teams), so here's my solution (which took me a long time to realize).

Check if the absolute value of the cross product of (dx1, dy1) and (dx2, dy2) is n. If it is, then print $$$[0, g) \times [0, n/g)$$$, where $$$g=gcd(dx1, dx2)$$$.
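A sketch of this check (following the comment's claim, not an official editorial; the function name is my own):

```python
# Hypothetical sketch: the two vectors tile the plane with n distinct residue
# classes exactly when the absolute cross product equals n, and then the
# axis-aligned box [0, g) x [0, n/g) with g = gcd(dx1, dx2) works as the
# answer set of points.
from math import gcd

def solve(n, dx1, dy1, dx2, dy2):
    cross = abs(dx1 * dy2 - dx2 * dy1)
    if cross != n:
        return None                      # "NO": no valid set of n points
    g = gcd(dx1, dx2)
    # "YES": all lattice x-coordinates are multiples of g, so points in this
    # box are pairwise distinct modulo the lattice of the two vectors.
    return [(x, y) for x in range(g) for y in range(n // g)]
```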

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +59 Vote: I do not like it

    When I got that accepted during testing, the authors thought that the tests (or the checker) were wrong.

»
4 years ago, # |
  Vote: I like it +12 Vote: I do not like it

Will we get an editorial?

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

Why am I getting WA on J? I found an edge that is close to k and then used Kruskal. Is this approach wrong?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    This may help. Previous Comment

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      I have checked the minimum of abs(a[i]-k), so I think I already covered what your solution implies. I am not looking only at greater edges; I am looking at all 3 scenarios (>, =, <). Is that wrong?

      • »
        »
        »
        »
        4 years ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Yeah. The edges whose weight is less than or equal to $$$k$$$ don't make the answer any worse (except for the tricky part given in #2 of the previous comment), so we have to use them first.

»
4 years ago, # |
  Vote: I like it +11 Vote: I do not like it

Is someone getting AC on G (Hobbits) with doubles and binary search?

I am getting TLE; expected time complexity is $$$O(n * log(max(A_i)))$$$. Are doubles really that slow, or was this intended not to pass?

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +16 Vote: I do not like it

    102317815 runs in 0.140 s and probably can be optimized further.

    Initially I was getting TLE because of cin >> input straight into a double instead of using the much faster int :)

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

      Whoa, I did the same and got AC; I failed to realise IO was the bottleneck.

      Thanks a lot :)

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it +5 Vote: I do not like it

      I had the same issue with G for an hour, until I randomly decided to submit a version with int in the last 5 minutes, and it worked. During the contest I generated random max tests and ran my solution locally to check for such bottlenecks; it seemed to take only about 66 ms on my Linux machine. Thus, I thought the test case was handcrafted to have a lot of intersections and that I needed to convert the double operations to float. This sadly gave WA, and I was left flailing about a lot.

      After the contest, errorgorn also ran a max test on his Windows machine, and it took well over 2000 ms. Can anyone explain the discrepancy? I have no idea about these things.

  • »
    »
    4 years ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    I copied your code and wanted to see whether the verdict would remain the same if you had used scanf instead of cin. And that's when it happened:

    your code submitted in C++14 gives WA on test 10;

    your code submitted in C++17 passes that test case.

    I don't understand why that happened.

    • »
      »
      »
      4 years ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Actually, it can be due to the architecture: operations on doubles behave differently in C++17 (64 bit) than in 32-bit C++14. I used doubles because I was getting TLE and thought that could speed things up.

      I submitted the same code in C++14 with long doubles and it worked as expected.

      Code

»
4 years ago, # |
  Vote: I like it +5 Vote: I do not like it

Problem D reminds me of Tablecity

»
4 years ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

Can someone give a hint for B? I reduced the problem to finding a substring of maximum length with $$$sum - len \cdot k > 0$$$.

  • »
    »
    4 years ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Never mind, I didn't notice that the first element of the substring must be greater than $$$k$$$; greedily extending good substrings works once you know that.

    However, I'm curious about solutions with D&C / segment tree. Is anyone ready to explain one?

»
4 years ago, # |
  Vote: I like it 0 Vote: I do not like it

How do you reach the conclusion that only collinear but oppositely facing vision vectors make eye contact in problem F? How do you prove that any other vision vectors won't?

  • »
    »
    16 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    It's late, but if you draw two vectors which make eye contact and try to rotate them clockwise, you will see that they are always collinear vectors with opposite directions.