BledDest's blog

By BledDest, 16 months ago, translation

1849A - Morning Sandwich

Idea: BledDest

Tutorial
Solution (awoo)

1849B - Monsters

Idea: BledDest

Tutorial
Solution (Neon)

1849C - Binary String Copying

Idea: BledDest

Tutorial
Solution (vovuh)

1849D - Array Painting

Idea: BledDest

Tutorial
Solution (BledDest)

1849E - Max to the Right of Min

Idea: awoo

Tutorial
Solution (awoo)

1849F - XOR Partition

Idea: BledDest

Tutorial
Solution (BledDest)

»
16 months ago, # |

Is there any way to solve C as follows: 1) store all prefixes of the string, 2) store all suffixes of the string, 3) count the 1s and 0s and store the counts in a prefix array?

Now when we get a query (l, r), we create a temp string such that temp = the prefix up to l-1 + a block of zeroes whose length is the number of zeroes in [l, r] + the same with ones + the suffix starting at r+1.

And put that temp string into a set.

Will this work?

  • »
    »
    16 months ago, # ^ |

    The time complexity of constructing such a string is like $$$O(n)$$$, so I think this will get a TLE.

    However, I managed to solve this problem using string hashing, which makes it possible to calculate the hash of the modified temp string in $$$O(1)$$$.

    To avoid hash collision, use a double hash.

    Here's my submission 216020500. (Note that the time complexity is $$$O(m+n\log n)$$$, because we need a set to count all the different hashes.)

    • »
      »
      »
      16 months ago, # ^ |

      If I solve it by constructing the string and suppose the constructed string is temp, how can I use your hashing on this temp?

      • »
        »
        »
        »
        16 months ago, # ^ |

        Your approach of using std::string and concatenating the four precomputed strings PREFIX+ZEROS+ONES+SUFFIX is still $$$O(n)$$$ (concatenation takes $$$O(\text{len})$$$ time).

        To solve the problem more efficiently, you need to manually implement a hash function.

        Let's say the hash function $$$f(s)=\displaystyle\sum_{i=1}^{\ell}s[i]\times b^{\ell-i} \pmod M$$$, where $$$b$$$ and $$$M$$$ are constants.

        If you already have the hash of string $$$s_1$$$ and $$$s_2$$$, then $$$f(s_1+s_2)=f(s_1)\cdot b^{\operatorname{len}(s_2)}+f(s_2)\pmod M$$$, which means we can concat two strings in $$$O(1)$$$ time (precalculate all the powers of $$$b$$$).

        So: 1) calculate the hash of each prefix, 2) calculate the hash of each suffix, 3) calculate the hash of each run of ones (and zeroes).

        Then you can calculate the hash of the modified string in $$$O(1)$$$ by concatenating the four parts with the method above.
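
        For illustration, here is a minimal, self-contained sketch of this concatenation idea (my own code, not the linked submission; the base, modulus, and names such as concat_hash are assumptions):

~~~~~
#include <bits/stdc++.h>
using namespace std;

// Polynomial hash f(s) = sum_{i=1..len} s[i] * b^(len-i) (mod M), as defined above.
const long long B = 131, MOD = 1000000007LL;

// hash of the concatenation s1 + s2, given f(s1), f(s2) and len(s2)
long long concat_hash(long long h1, long long h2, long long len2, const vector<long long>& powB) {
    return (h1 * powB[len2] + h2) % MOD;
}

int main() {
    string s = "0101101";  // sample string
    int n = s.size();
    vector<long long> powB(n + 1), pre(n + 1), suf(n + 2), ones(n + 1), zeros(n + 1), pz(n + 1);
    powB[0] = 1;
    for (int i = 1; i <= n; i++) powB[i] = powB[i - 1] * B % MOD;
    for (int i = 1; i <= n; i++) pre[i] = (pre[i - 1] * B + s[i - 1]) % MOD;            // prefix hashes
    for (int i = n; i >= 1; i--) suf[i] = (s[i - 1] * powB[n - i] + suf[i + 1]) % MOD;  // suffix hashes
    for (int i = 1; i <= n; i++) {
        zeros[i] = (zeros[i - 1] * B + '0') % MOD;  // hash of a run of i zeros
        ones[i]  = (ones[i - 1] * B + '1') % MOD;   // hash of a run of i ones
        pz[i] = pz[i - 1] + (s[i - 1] == '0');      // prefix count of zeros
    }
    // hash of s with s[l..r] (1-indexed) replaced by its sorted version, in O(1)
    auto query_hash = [&](int l, int r) {
        long long cnt0 = pz[r] - pz[l - 1], cnt1 = (r - l + 1) - cnt0;
        long long h = pre[l - 1];                     // part before l
        h = concat_hash(h, zeros[cnt0], cnt0, powB);  // block of zeros
        h = concat_hash(h, ones[cnt1], cnt1, powB);   // block of ones
        h = concat_hash(h, suf[r + 1], n - r, powB);  // part after r
        return h;
    };
    cout << query_hash(2, 5) << "\n";
    return 0;
}
~~~~~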

  • »
    »
    16 months ago, # ^ |

    I have implemented the same thing but got memory limit exceeded on test 4.

    • »
      »
      »
      16 months ago, # ^ |

      You didn't precompute the prefixes.

      You used the substr() function to get the prefix and suffix; its time complexity is O(size of the substring), which in the worst case runs up to N, so somewhere it will lead to a TLE.

      I was talking about how we can store the prefixes beforehand,

      like, for example:

      map<int, string> prefix_at_index;
      string pf = "";
      for (int i = 0; i < (int)s.length(); i++) {
          pf.push_back(s[i]);
          prefix_at_index[i] = pf;
      }

      Something like this.

      At least O(log n) complexity to get a prefix and a suffix.

      • »
        »
        »
        »
        16 months ago, # ^ |

        Storing all prefixes would also lead to MLE:

        What is the total length of all prefixes of a string of length $$$n$$$? It is $$$1 + 2 + 3 + 4 + 5 + \dots + n$$$, which is equal to $$$\displaystyle\frac{n(n + 1)}{2}$$$.

        If $$$n = 200\,000$$$, $$$\displaystyle\frac{n(n + 1)}{2} = \frac{200\,000 \cdot 200\,001}{2} = 20\,000\,100\,000$$$.

        The total length of all prefixes would be $$$20\,000\,100\,000$$$ and since each character is $$$1$$$ byte, your code would need to store $$$20\,000\,100\,000$$$ bytes in memory. $$$20\,000\,100\,000$$$ bytes is about $$$20\,000$$$ megabytes, which is more than the memory limit of $$$256$$$ megabytes.

»
16 months ago, # |

Hello everybody!

The competition was very interesting, but despite this, I will still get negative points.

Thanks for the competition: adedalic, awoo, BledDest, Neon and MikeMirzayanov

»
16 months ago, # |

Any problems similar to C?

»
16 months ago, # |

Is there any way to solve C by constructing a new string for each query? I am getting memory limit exceeded.

  • »
    »
    16 months ago, # ^ |

    Highly doubt it. Constructing a new string is O(n) (not accounting for any sorting you might want to do), so if you do that for each query, that's too slow.

  • »
    »
    16 months ago, # ^ |

    If you construct a new string each query (and store all of them), your code will store $$$200\,000 \cdot 200\,000 = 40\,000\,000\,000 = 4 \cdot 10^{10}$$$ bytes of data (each character is $$$1$$$ byte, each string has $$$200\,000$$$ characters, there are $$$200\,000$$$ strings).

    $$$4 \cdot 10^{10}$$$ bytes is about $$$4 \cdot 10^{4} = 40\,000$$$ megabytes which is quite a bit larger than the memory limit of $$$256$$$ megabytes.

  • »
    »
    16 months ago, # ^ |

    Instead, you can construct a new hash for every query. Let (L, R) be the query; then Hash[1 to (L - 1)] and Hash[(R + 1) to N] remain unchanged. So we only need to reconstruct Hash[L to R], which is pretty straightforward:
    Hash[L to R] = PrimePowerSum[L to (L + zeroes in this range - 1)] + 2 * PrimePowerSum[(L + zeroes in this range) to R].

  • »
    »
    15 months ago, # ^ |

    You can use string hashing to do what you want to do.

»
16 months ago, # |

What is that simpler solution for problem F that most of the participants wrote?

  • »
    »
    16 months ago, # ^ |

    I'll try to explain my simpler solution.

    The basic idea is that, if I have at least 3 numbers with a common prefix in the binary representation, then no matter how I partition the sets, at least 2 of those will be in the same set, and all bits of the common prefix will be 0 upon xorring those numbers. So the idea is to find the longest prefix such that there are at least 3 elements with that prefix.

    n = 2 is a corner case, which can be solved independently. Now assume n >= 3.

    It is guaranteed that there exists some B such that right-shifting all elements by B results in 3 or more equal elements; let's find the smallest such B.

    Since I have picked the smallest such B, one can observe that there won't be more than 4 equal elements upon right shifting every element by B. (If there were 5 or more elements, then B-1 also would've worked.)

    After making some cases manually, [VERY IMPORTANT:] one can observe that the minimum xor will be obtained by xorring the elements that become equal after right bitshifting by B. (because any other 2 numbers will have the xor >= $$$2^B$$$)

    So, the solution is simple:

    • find B,

    • then make groups, such that all elements of the group become equal upon right shifting by B (size of all such groups <= 4)

    • within each group, try every way of partitioning them into 2 sets. (can actually be narrowed down to much fewer cases, but this much is enough for a complete solution)

    Here is my solution with some slightly ugly casework at the end, but one can get the general idea from it: https://codeforces.net/contest/1849/submission/216361206
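
    For what it's worth, here is a minimal sketch of only the first two bullets (finding B and forming the groups); it is my own illustration, not the code from the submission above, and the final casework over partitions within each group is omitted:

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<long long> a = {5, 1, 7, 2, 3};  // sample input, n >= 3 assumed
    int n = a.size();
    // find the smallest B such that some value of (a[i] >> B) occurs at least 3 times
    int B = 0;
    for (;; B++) {
        map<long long, int> cnt;
        bool found = false;
        for (long long x : a)
            if (++cnt[x >> B] >= 3) { found = true; break; }
        if (found) break;
    }
    // group indices whose values become equal after the shift; every group has size <= 4
    map<long long, vector<int>> groups;
    for (int i = 0; i < n; i++) groups[a[i] >> B].push_back(i);
    for (auto& [pref, idx] : groups) {
        if (idx.size() < 2) continue;  // a singleton never realizes the minimum XOR
        cout << "common prefix " << pref << ":";
        for (int i : idx) cout << ' ' << a[i];
        cout << '\n';
        // here one would try every way of splitting idx into the two sets
    }
    return 0;
}
~~~~~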

»
16 months ago, # |

Problem E can also be solved using the trick from 1156E - Special Segments of Permutation. Firstly, precalculate the "borders" in which $$$p_i$$$ is the minimum or the maximum element using monotonic stacks. Let's fix the maximum element of the segment. As in the problem mentioned above, go to the closest end of the maximum border. If it is to the left of $$$p_i$$$, just calculate the number of good segments ($$$minpos(l, r) < maxpos(l, r)$$$) directly. Otherwise, calculate the number of bad segments ($$$minpos(l, r) \geq maxpos(l, r)$$$) and subtract it from the total number of segments. You can see the details in my submission (with comments!): 216141125
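
For reference, a minimal sketch of the "borders" precomputation with monotonic stacks (my own illustration; the counting part from the linked submission is not shown, and the array names are assumptions):

~~~~~
#include <bits/stdc++.h>
using namespace std;

// For each index i (0-based), compute the largest segment [L[i], R[i]] on which
// p[i] is the maximum element. The borders for the minimum are symmetric.
int main() {
    vector<int> p = {3, 1, 4, 2};  // sample permutation
    int n = p.size();
    vector<int> L(n), R(n);
    stack<int> st;
    for (int i = 0; i < n; i++) {  // previous greater element
        while (!st.empty() && p[st.top()] < p[i]) st.pop();
        L[i] = st.empty() ? 0 : st.top() + 1;
        st.push(i);
    }
    while (!st.empty()) st.pop();
    for (int i = n - 1; i >= 0; i--) {  // next greater element
        while (!st.empty() && p[st.top()] < p[i]) st.pop();
        R[i] = st.empty() ? n - 1 : st.top() - 1;
        st.push(i);
    }
    for (int i = 0; i < n; i++)
        cout << "p[" << i << "] = " << p[i] << " is the maximum on [" << L[i] << ", " << R[i] << "]\n";
    return 0;
}
~~~~~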

  • »
    »
    16 months ago, # ^ |

    Can you elaborate on the complexity of your algorithm? It doesn't seem obvious why it isn't quadratic in the worst-case scenario

    UPD: I followed the link to the similar problem that you provided; it is explained there very well. It is indeed a very educational problem, thanks!

»
16 months ago, # |

Why, in the author's solution to problem E, in the line len += it->first - prv->first; (in the first while loop), do we subtract from it->first and not from me->first? As I understand it, in the case it->second == 0 we should first subtract the value it->first - me->first from len and then add the value it->first - prv->first.

More surprisingly, both versions get accepted: 216151073 and 216150501.

  • »
    »
    16 months ago, # ^ |

    The line len += it->first - prv->first is completely redundant in the min stack: since we are continuously removing minimums, it is impossible for it->second to be 0. Hence, you can even delete that part completely and still get AC.

    • »
      »
      »
      16 months ago, # ^ |

      Oh, that's true. I initially had the solution for min > max, so I hastily moved the lines around to work for the actual problem. That's why it's like that.

      UPD: Apparently, it was just as redundant in the first version.

»
16 months ago, # |

Can someone explain C in another way? I don't really understand the editorial's solution.

  • »
    »
    16 months ago, # ^ |

    When we try to sort [l, r], if str[l] to str[mid] are all '0' and str[mid+1] to str[r] are all '1', then the string doesn't change at all.

    Let pos_0[i] denote the position of the first '0' at position i or to its left, and pos_1[i] denote the position of the first '1' at position i or to its right.

    So for [l, r], if pos_0[r] < pos_1[l], then the string doesn't change at all; otherwise, the string will change.

    But we can see that if there are several '0's right before position l and several '1's right after position r, that is, str[ll] to str[l-1] are all '0' and str[r+1] to str[rr] are all '1', then sorting [l, r] and sorting [ll, rr] actually give the same string.

    In fact, pos_0[r] = pos_0[rr] (because str[r+1] to str[rr] are all '1') and pos_1[l] = pos_1[ll]. So if sorting [l1, r1] and sorting [l2, r2] give different strings, then { pos_1[l1], pos_0[r1] } is different from { pos_1[l2], pos_0[r2] }.

    My English is not very good. Please forgive my grammar errors and poor typesetting.
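
    A small self-contained sketch of this counting idea (my own code, assuming the usual input format of 1849C: t test cases, each with n, m, the string, and m queries l r):

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        int n, m;
        string s;
        cin >> n >> m >> s;
        // pos1[i]: first position >= i holding '1' (n+1 if none); pos0[i]: last position <= i holding '0' (0 if none)
        vector<int> pos1(n + 2), pos0(n + 1, 0);
        pos1[n + 1] = n + 1;
        for (int i = n; i >= 1; i--) pos1[i] = (s[i - 1] == '1') ? i : pos1[i + 1];
        for (int i = 1; i <= n; i++) pos0[i] = (s[i - 1] == '0') ? i : pos0[i - 1];
        set<pair<int, int>> distinct;
        while (m--) {
            int l, r;
            cin >> l >> r;
            if (pos0[r] < pos1[l])
                distinct.insert({0, 0});              // already sorted: the result is the original string
            else
                distinct.insert({pos1[l], pos0[r]});  // the result is fully determined by this pair
        }
        cout << distinct.size() << '\n';
    }
    return 0;
}
~~~~~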

    • »
      »
      »
      16 months ago, # ^ |

      Thanks for the explanation. Your explanation along with some of the editorial helped me understand :)

»
16 months ago, # |

Why not add the editorial to "Contest Materials"?

Upd: Added.

»
16 months ago, # |

I am having a hard time understanding problem C. While I can see that my approach isn't optimized at all, I would have expected either a TLE or an MLE, but instead I got WA on test 2. My approach: for each of the m queries, sort the string on [l, r], store it in an unordered set, and finally output the size of the set. I understand that if there is no change in the string after applying the operation, adding the string ti (where ti == s) to the set means I have added the original string only, and that is counted once since the set stores unique values. What am I understanding wrong here? My submission that got WA: 215938621

  • »
    »
    16 months ago, # ^ |

    Deleted comment

    • »
      »
      »
      16 months ago, # ^ |

      Brother, I am very stupid. I still don't understand what this means. Is there some part of that statement that needs deciphering? In my code each copy (ti) of the string s is affected by only one operation, that is sort(ti.begin()+l-1, ti.begin()+r-1). Is adding the string's copy to the set counted as another operation? If yes, then how does that matter? Thanks

      • »
        »
        »
        »
        15 months ago, # ^ |

        Ignore my previous comment. Your code is giving the wrong answer because you are sorting the string in the wrong way. Instead of sort(ti.begin()+l-1, ti.begin()+r-1), you should do sort(ti.begin()+l-1, ti.begin()+r). You can understand it yourself if you think about it for a few seconds.

        Suppose you want to sort [l, r] = [1, 2] of some string. sort(ti.begin()+l-1, ti.begin()+r-1) reduces to sort(ti.begin(), ti.begin()+1), which sorts only a single character and is therefore wrong.

»
16 months ago, # |

I tried double hashing for problem C; I'm still sometimes getting 1 more than the correct answer.

216112844

»
16 months ago, # |

For what it's worth, for problem E there is another (overly complicated, but perhaps more straightforward) solution.

Let $$$[l_i, r_i]$$$ be the range where $$$p_i$$$ is the maximum element, $$$a_i$$$ be the position of the closest element to the left of $$$p_i$$$ which is less than $$$p_i$$$, or $$$a_i = 0$$$ if it does not exist. Now, let's fix $$$i$$$ and look at the subarrays where $$$p_i$$$ is the maximum element, i.e. subarrays $$$p_l, \dots, p_r$$$ such that $$$l_i \le l \le i$$$ and $$$i \le r \le r_i$$$.

Here comes the main observation: the subarray $$$p_l, \dots, p_r$$$ is good ($$$minpos_{l,r} < i$$$) iff $$$\min(a_i, \dots, a_r) \ge l$$$. The proof should be trivial so I will omit it. Therefore, for fixed $$$r$$$ we have $$$\max(0, \min(a_i, \dots, a_r) - l_i + 1)$$$ good subarrays.

In order to get rid of the $$$\max(0, \dots)$$$ part let's find the rightmost index $$$r_i'$$$, such that $$$r_i' \le r_i$$$ and $$$\min(a_i, \dots, a_{r_i'}) \ge l_i$$$. Since the array $$$a$$$ can be calculated beforehand, this part can be done in $$$O(\log n)$$$ using a sparse table or some other data structure. Then, the number of good subarrays containing $$$p_i$$$ is

$$$ \sum_{r=i}^{r_i'} (\min(a_i, \dots, a_r) - l_i + 1) = \sum_{r=i}^{r_i'} \min(a_i, \dots, a_r) - (r_i' - i + 1) \cdot (l_i - 1). $$$

Subtracting the $$$(r_i' - i + 1) \cdot (l_i - 1)$$$ part is easy; let's focus on the sum. We have the following problem: given an array $$$a_1, \dots, a_n$$$ and $$$n$$$ queries of the form $$$(l, r)$$$, for each query find $$$\sum_{i=l}^r \min(a_l, \dots, a_i)$$$. There must be an easier way to solve it, but here is what I came up with. Let's answer these queries offline by iterating $$$l$$$ from right to left.

We need to support 2 kinds of operations on the array $$$b$$$ which is initially empty: 1) append an element to the left of $$$b$$$, 2) query $$$\sum_{i=1}^{r} \min(b_1, \dots, b_i)$$$. We can use a segment tree for that. Operation 1 can be done using range assignment: when we are appending $$$a_i$$$ all we need is to assign $$$a_i$$$ on the range $$$[i, q_i)$$$, where $$$q_i$$$ is the closest position of an element less than $$$a_i$$$ to the right of $$$a_i$$$ (or $$$n + 1$$$, if it does not exist). And operation 2 is simply a range sum query.

That's it, not the easiest implementation, but it felt very natural to come up with. The final complexity is $$$O(n \log n)$$$ and my submission is 215978080.
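
As an illustration of operations 1) and 2), here is a minimal sketch of a segment tree with range assignment and range sum, together with a toy driver (my own code, not the linked submission; the array q of next-smaller positions is assumed to be precomputed with a monotonic stack):

~~~~~
#include <bits/stdc++.h>
using namespace std;

struct SegTree {
    int n;
    vector<long long> sum, lazy;  // lazy: value assigned to the whole node, -1 = none
    SegTree(int n) : n(n), sum(4 * n, 0), lazy(4 * n, -1) {}
    void push(int v, int len) {   // propagate a pending assignment to the children
        if (lazy[v] == -1) return;
        lazy[2 * v] = lazy[2 * v + 1] = lazy[v];
        sum[2 * v] = lazy[v] * ((len + 1) / 2);
        sum[2 * v + 1] = lazy[v] * (len / 2);
        lazy[v] = -1;
    }
    void assign(int v, int tl, int tr, int l, int r, long long x) {
        if (r < tl || tr < l) return;
        if (l <= tl && tr <= r) { sum[v] = x * (tr - tl + 1); lazy[v] = x; return; }
        push(v, tr - tl + 1);
        int tm = (tl + tr) / 2;
        assign(2 * v, tl, tm, l, r, x);
        assign(2 * v + 1, tm + 1, tr, l, r, x);
        sum[v] = sum[2 * v] + sum[2 * v + 1];
    }
    long long query(int v, int tl, int tr, int l, int r) {
        if (r < tl || tr < l) return 0;
        if (l <= tl && tr <= r) return sum[v];
        push(v, tr - tl + 1);
        int tm = (tl + tr) / 2;
        return query(2 * v, tl, tm, l, r) + query(2 * v + 1, tm + 1, tr, l, r);
    }
    void assign(int l, int r, long long x) { assign(1, 0, n - 1, l, r, x); }  // inclusive range
    long long query(int l, int r) { return query(1, 0, n - 1, l, r); }
};

int main() {
    // toy data (0-indexed): q[i] = closest position to the right with a smaller value, or n
    vector<long long> a = {2, 5, 1, 4, 3};
    vector<int> q = {2, 2, 5, 4, 5};
    int n = a.size();
    SegTree st(n);
    for (int l = n - 1; l >= 0; l--) {
        st.assign(l, q[l] - 1, a[l]);  // operation 1: "append a[l] to the left of b"
        // operation 2: now st.query(l, r) equals the sum of min(a[l..i]) over i in [l, r]
        cout << "sum of prefix minima on [" << l << ", " << n - 1 << "] = " << st.query(l, n - 1) << '\n';
    }
    return 0;
}
~~~~~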

»
16 months ago, # |

Can someone please explain why, in C, we can't just sort for each query? Won't that be n log n?

That is, m sorts over all the queries and then a log factor for storing in the set:

This gives WA on the 2nd test case.

C++

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    long t, m, n;
    string s;
    cin >> t;
    while (t > 0) {
        cin >> n >> m;
        cin >> s;
        long l, r;
        unordered_set<string> tempstore;
        for (int i = 0; i < m; i++) {
            cin >> l >> r;
            l--;
            r--;
            string temp = s;
            if (l <= r) sort(temp.begin() + l, temp.begin() + r);
            tempstore.insert(temp);
        }
        cout << tempstore.size() << endl;
        t--;
    }
    return 0;
}
~~~~~


Python

~~~~~
t=int(input())

while t>0:
    n,m=map(int,input().split())
    strs=set()
    s=input()

    for i in range(m):
        l,r=map(int,input().split())
        l-=1
        r-=1
        temp=s[:l]+''.join(sorted(s[l:r+1]))+s[r+1:]
        strs.add(temp)
    print(len(strs))
    t-=1
~~~~~

I understood why I got TLE in Python, but why did the C++ solution get WA?

TLE reason: the sort has time complexity n log n, while inserting into the set is about log n, so the extra factor of n is what causes the TLE (as we know, n > log n).

  • »
    »
    16 months ago, # ^ |

    I had the exact same doubt. This was my original question. Did you understand how the solution works?

    • »
      »
      »
      15 months ago, # ^ |

      sort works on [begin; end) in the C++ STL, so if (l<=r) sort(temp.begin()+l,temp.begin()+r); should be if (l<=r) sort(temp.begin()+l,temp.begin()+r + 1); (or just don't decrement r beforehand).

»
16 months ago, # |

Problem E was a very interesting problem. My initial idea was to use a lazy segtree for an NlogN solution, but it turns out that the O(N) solution is much cleaner and simpler.

https://codeforces.net/contest/1849/submission/216349588

  • »
    »
    15 months ago, # ^ |

    I did it with divide and conquer, using an ordered multiset when merging the segments.
    Time: O(n log^2 n); it passes in 2932 ms :))))
    
»
16 months ago, # |

I've tried C with string hashing. My idea was basically: for every query we first subtract the hash of the part (l, r) from the full string hash, and then add the "sorted" hash of (l, r) by counting the number of 1s and 0s in (l, r) using prefix sums. I am able to do it in O(1) as I am precomputing the powers of "p" from 1 to n, the prefix sums, and the prefix hashes of the string.

Although I think my logic is sound, my solution fails on test case 4: link. I am not able to find my mistake; any help would be appreciated.

My implementation is based on this source: https://cp-algorithms.com/string/string-hashing.html

»
16 months ago, # |

Why do I think D is easier than C?

»
16 months ago, # |

Can anyone help me understand the editorial of problem E? I tried, but I failed to understand it.

»
16 months ago, # |

Has anybody done D with DP? If yes, could you please explain and share the approach?

  • »
    »
    16 months ago, # ^ |

    I upsolved it with DP. Note that coloring a $$$0$$$ doesn't affect any other elements. Also, coloring a $$$1$$$ or $$$2$$$ may affect other $$$0$$$'s and other nonzero elements beside it. So, a possible strategy is to color some nonzero elements first, then color the $$$0$$$'s that are unaffected in the end.

    The idea is to consider contiguous segments of nonzero elements such that to their right is either a boundary or a $$$0$$$, and to their left is either a boundary or a $$$0$$$. If there is a $$$2$$$ in a segment, then we may color the $$$2$$$ once and use the $$$2$$$nd operation on it twice, so that it affects the nonzero elements in both directions until it hits a boundary or a $$$0$$$. Otherwise, we can color from either end of the segment: if we color the right end, choose the left elements and we may affect a zero to its left; if we color the left end, choose the right elements and we may affect a zero to its right. So, there are $$$2$$$ or $$$3$$$ optimal ways to color a segment.

    Suppose these segments are stored as a tuple $$$(l_i, r_i, t_i)$$$ where $$$t_i = 0$$$ if the segment is only composed of $$$1$$$'s and $$$t_i = 1$$$ if there exists a $$$2$$$ in the segment.
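
    For illustration, a small sketch of extracting these segments (my own code; the input array is a made-up example and the DP itself is not shown):

~~~~~
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> a = {0, 1, 2, 1, 0, 0, 1, 1, 0};  // sample array with values in {0, 1, 2}
    int n = a.size();
    // segments (l, r, t) of consecutive nonzero elements; t = 1 iff the segment contains a 2
    vector<array<int, 3>> segs;
    for (int i = 0; i < n; ) {
        if (a[i] == 0) { i++; continue; }
        int j = i, t = 0;
        while (j < n && a[j] != 0) { t |= (a[j] == 2); j++; }
        segs.push_back({i, j - 1, t});
        i = j;
    }
    for (auto& [l, r, t] : segs)
        cout << "segment [" << l << ", " << r << "], contains a 2: " << t << '\n';
    return 0;
}
~~~~~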

    Let $$$dp_{i,j}$$$ be defined as the maximum number of zeroes you can affect processing the first $$$i$$$ segments such that the $$$i$$$th segment is colored the $$$j$$$th way. If $$$j=0$$$, then we color the right end; if $$$j=1$$$, then we color the left end; and if $$$j=2$$$, then we color the two if it is possible.

    Transitions are pretty straightforward but messy. You have to consider whether a segment contains a $$$2$$$, whether it is right along a boundary, and what happens if consecutive segments are only one $$$0$$$ apart. Here's my submission: 216927717.

    Let $$$m$$$ be the number of segments and $$$c$$$ be the number of zeroes. The answer is $$$m + c - \max{(dp_{m-1,0}, dp_{m-1,1}, dp_{m-1,2})}$$$.

    But yeah, editorial solution is so much simpler.

»
16 months ago, # |

There is another solution to F, a bit harder to code but easy to understand. Let's do a binary search on the answer; now we need to check whether an answer of X is achievable. Then go from left to right, maintaining a binary trie. Then, on each level of the trie, check the following: if the i-th bit is set in X, then all of the numbers with which the XOR would have that bit have to be in a different component. Otherwise we can ignore the bigger component. Since we can unite components in $$$O(sz)$$$, and the sum of all sizes is $$$O(NlogMAX)$$$, one check works in $$$O(NlogMAX)$$$. Since the DFS constant is really high, use DSU.

Working code: https://codeforces.net/contest/1849/submission/216600784

»
16 months ago, # |

Can someone help me find the issue in my code? 216602874 For each l[i], r[i], I subtract the hash of the section from the hash of the string and add the hash of the sorted section, which is calculated using the formula for the sum of a geometric series. I add the resulting hash to a set and output the size of the set as the answer.

  • »
    »
    16 months ago, # ^ |

    The issue with the code appears to be the GSS function and the bigInt (__int128) type, as far as I understood. In the GSS function, when overflow occurs, the division by r-1 gives a wrong result. And the problem with the __int128 type is that the hash stored in it will be calculated incorrectly if GSS returns a negative number. So instead I used the unsigned long long type.

»
16 months ago, # |

For problem E, I have a cool idea, but unfortunately my poor ability can't make it happen. Is there anyone willing to take a look and tell me if this is impossible?

My idea is: for each index $$$i$$$, we can use a monotonic stack to compute $$$Lmin_i, Rmin_i$$$, meaning that when the interval's left endpoint is in $$$[Lmin_i, i]$$$ and the right endpoint is in $$$[i, Rmin_i]$$$, the minimum value of this interval is $$$a[i]$$$. The maximum is similar and can be described by $$$Lmax, Rmax$$$.

Then we try to construct a rectangle $$$A$$$ on a two-dimensional plane. Its lower left coordinate is $$$(Lmin, i)$$$, and its upper right coordinate is $$$(i, Rmin)$$$.

That is to say, this rectangle spans $$$[Lmin_i, i]$$$ on the x-axis and $$$[i, Rmin_i]$$$ on the y-axis. The area of this rectangle is the number of intervals whose minimum value is $$$a[i]$$$.

If we add another rectangle $$$B$$$ in the same way with $$$Lmax_j, Rmax_j$$$, then the area of the intersection of rectangles $$$A$$$ and $$$B$$$ is the number of intervals whose minimum value is $$$a[i]$$$ and whose maximum value is $$$a[j]$$$.

If we additionally require $$$i<j$$$, then the sum over all such intersecting rectangle pairs is the answer to this problem.

So my plan is: index $$$i$$$ goes from left to right; first query the total area of all $$$Lmin, Rmin$$$ rectangles within the $$$Lmax_i, Rmax_i$$$ rectangle, add it to the answer, and then add the $$$Lmin_i, Rmin_i$$$ rectangle.

The key to the problem is this task of dynamically adding rectangles and computing the total covered area.

My idea may be naive; please don't laugh at me too hard XD

»
16 months ago, # |

Please explain the solution to problem E to me; I don't quite understand the idea described in the editorial.

»
15 months ago, # |

Can I use string hashing to solve problem C?

  • »
    »
    14 months ago, # ^ |

    Yes, I think it may work. If you use string hashing, you can simply calculate the hash of every resulting string with some math. I would not recommend it, though. It's better to solve easier problems the intended way instead of forcing your way through with advanced data structures.

»
14 months ago, # |

C and D should be swapped in my opinion

»
11 months ago, # |

https://codeforces.net/contest/1849/submission/239744150 For problem E, does anyone know what this verdict means? I believe my code is correct, and it passed 35 test cases.

  • »
    »
    11 months ago, # ^ |

    Same problem bro. There is an error in generating test 36.

  • »
    »
    11 months ago, # ^ |

    It looks like the issue was caused by a test generator submitted during the hacking phase (it was generating the test too slowly). I've rewritten that generator in C++ and rejudged all submissions receiving that verdict, so the problem should be fixed.

    UPD: somehow that issue still persists; I'm not sure what is happening, but hopefully I can resolve it.

    UPD2: okay, now it definitely works.

»
11 months ago, # |

In problem D, why can't I first BFS from all the 2s and then from all the 1s, and count the rest of the unvisited indices? Please give a test case where this fails.

»
2 months ago, # |

Problem B monsters solution video editorial link: https://youtu.be/C1v2IkfKaEM?si=vfMBsIDWsvGQCO51