By MohammadParsaElahimanesh, 6 weeks ago, In English

👋 Salam Codeforces!

Here are the step-by-step solutions. We hope you enjoy them! 😊

The announcement blog is available here and you can also explore all blogs related to Rayan here!

2034A - King Keykhosrow's Mystery

Idea: ArshiaDadras. Preparation: AmirrzwM

Solution
Implementation

2034B - Rakhsh's Revival

Idea: MohammadParsaElahimanesh and AmirrzwM. Preparation: AmShZ

Solution
Implementation

2034C - Trapped in the Witch's Labyrinth

Idea: MohammadParsaElahimanesh. Preparation: AmirrzwM

Solution
Implementation

2034D - Darius' Wisdom

Idea and Preparation: MohammadParsaElahimanesh

Solution
Implementation (first approach)
Implementation (second approach)

2034E - Permutations Harmony

Idea: AmirrzwM and MohammadParsaElahimanesh. Preparation: MohammadParsaElahimanesh

Solution
Implementation

2034F1 - Khayyam's Royal Decree (Easy Version)

Idea and Preparation: TheScrasse

Solution
Implementation

2034F2 - Khayyam's Royal Decree (Hard Version)

Idea and Preparation: TheScrasse

Solution
Implementation

Note: Thanks to peti1234 and _istil for suggesting the idea of including this subtask!

2034G1 - Simurgh's Watch (Easy Version)

Idea: ArshiaDadras. Preparation: AmShZ and ArshiaDadras

Solution
Implementation

Note: Special thanks to Prof. Mohammad Ali Abam for proposing this problem!

Fun fact: He wasn’t able to solve it himself, but when he presented it to me [Arshia is talking...], he said, “I don’t know how to solve it yet, but I guess its difficulty would be at most around E…”

2034G2 - Simurgh's Watch (Hard Version)

Idea and Preparation: ArshiaDadras

Solution
Implementation

Note: Thanks to _istil and Error_Yuan for suggesting the idea of including this subtask!

2034H - Rayan vs. Rayaneh

Idea and Preparation: MohammadParsaElahimanesh

Solution
Implementation
  • +132

»
5 weeks ago, # | Rev. 3, +67

Will $$$O(1000 \cdot 1000 \cdot 100)$$$ pass for problem A?
The constraints for problem A were a bit misleading: as soon as I saw them, I decided to just write brute force (since lcm $$$< 10^6$$$), and pretests passed.

But in system testing, my solution failed.

  • »
    »
    5 weeks ago, # ^ | +40

    This is an unfortunate coincidence.

    • The Polygon rules say that it's good to set $$$n \leq 100$$$ and $$$a_i \leq 1000$$$ in problem A.
    • We wrote a brute force, but using int instead of long long; it was much faster and passed comfortably, so we did not worry at all.
    • »
      »
      »
      5 weeks ago, # ^ | Rev. 2, +6

      I'm sure the Polygon guide also specifies testing in Python for easy problems, though; does it pass easily too? Also, did none of the testers try it with long long?

      • »
        »
        »
        »
        5 weeks ago, # ^ | +8

        We tested the brute force and we tested Python, but we forgot to test the brute force in Python. And yes, exactly one tester tried long long and got AC in 827 ms, because there was an "almost max test" but not the max test consisting only of "1000 999".

        Yes, this could have been fixed in time. Fortunately we avoided other (more critical) issues, such as both the model solution and the checker being wrong in problem G.

      • »
        »
        »
        »
        5 weeks ago, # ^ | Rev. 2, 0

        I used long long int, and my code is somewhat similar to yours. It passed the pretests but got TLE on the main tests. If I had gotten the TLE verdict in time, I could have tried to fix my code. BTW, congrats on becoming IM :)

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ | Rev. 2, +6

          Yes, it was actually lucky for me to get TLE on pretests; without it, my solution could have failed miserably on the main tests.

          Also, thanks :) It's surprising that I got 7 more rating points than carrot's prediction.

    • »
      »
      »
      5 weeks ago, # ^ | Rev. 2, 0

      What is the Polygon rule?

    • »
      »
      »
      5 weeks ago, # ^ | 0

      Unfortunately, it cost me becoming Master :(

  • »
    »
    5 weeks ago, # ^ | 0

    My brute-force solution passed.

  • »
    »
    5 weeks ago, # ^ | -9

    Misleading for sure. Most would assume $$$O(10^8)$$$ operations would pass, but forget that the modulo operation isn't exactly fast. One of my friends also used brute force and failed on test 6. Another friend tried brute force but implemented it poorly, so he could catch his mistake on pretest 3.

    • »
      »
      »
      5 weeks ago, # ^ | +6

      I think using long long is also a factor. Sometimes changing long long to int has cut my running time by two-thirds. Anyway, I just tested his solution with t = 100 and "999 1000" for each test case, and it ran for more than 2 seconds on C++11 and C++14. I think I learned something, as I had never considered this a major problem :)

  • »
    »
    5 weeks ago, # ^ | 0
  • »
    »
    4 weeks ago, # ^ | 0

    I don't know about checking all possible solutions, but this simple solution works:

        int m = max(a, b);
        while (m % a != m % b) { m++; }
    
»
5 weeks ago, # | 0

I solved 4 problems with no WA... super happy!!!

  • »
    »
    5 weeks ago, # ^ | 0

    Bhai, I'm new; I was only able to solve 2 and I'm sad. What did you use for D, greedy? Any approach you can tell me about?

    • »
      »
      »
      5 weeks ago, # ^ | 0

      Yes, I iterated from right to left, and if an element could be made bigger, I borrowed from the left.

»
5 weeks ago, # | +4

Feeling bad for tourist.

»
5 weeks ago, # | 0

Can someone tell me if my intuition is right or wrong? I reasoned from the condition mentioned in the problem that m must be at least max(a, b), since m can't be smaller than both a and b; so I take the maximum of a and b and then use a simple while loop with increments, which leads to the answer!! Am I right??

#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int t;
    cin >> t;
    while (t--) {
        int a, b;
        cin >> a >> b;
        int m = min(a, b);
        while ((m % a)!=(m % b)) {
            m++;
        }
        cout<<m<<endl;
    }

    return 0;
}

  • »
    »
    5 weeks ago, # ^ | 0

    I think your code's okay. I just submitted your exact code and it got an Accepted verdict.

    • »
      »
      »
      5 weeks ago, # ^ | +1

      Yeah, I know. I just want to know if my intuition is also right, because I also saw the solution with a*b/gcd(a,b). And I still wasn't able to solve at least two problems in this contest!!

      • »
        »
        »
        »
        5 weeks ago, # ^ | 0

        Yeah, that's what I used to solve A too. Don't worry, it's a combined Div1 + Div2 contest, so just practice more and you'll get used to Div2B problems. 6 months ago Div2B problems were still challenging for me :) Have a nice day. I got a negative delta today too, not a good performance.

»
5 weeks ago, # | +11

Hi, can anyone help me understand how to intuitively come up with the solution for the k = 3 case in problem E? I worked out the other cases but didn't get the idea for this one. Thanks!

  • »
    »
    5 weeks ago, # ^ | 0

    Try writing a brute force and noticing a pattern, or just try random construction strategies and find one that works. That's what I did, but it's probably a lot slower than just noticing the pattern.

»
5 weeks ago, # | +39

Bruh, I managed to get an Idleness limit exceeded on a non-interactive problem.

294097379

»
5 weeks ago, # | +30

Top 60, excluding profiles with no countries.

Top 60 participants
»
5 weeks ago, # | Rev. 4, +6

The editorial for E is quite unclear. I understand that if $$$k$$$ is even, we can form $$$k/2$$$ pairs $$$[p_i, \text{reverse}(p_i)]$$$; as each pair sums to a constant row, the table is good. Now let $$$k \bmod 2 = 1$$$; the solution then suggests that we can represent $$$k$$$ as $$$k = 2 + \ldots + 2 + 3$$$.

Also, the solution constructs an example of three permutations $$$\{p_1, p_2, p_3\}$$$ such that $$$p_1 + p_2 + p_3$$$ is a constant row.

What I don't understand is why we can choose the remaining pairs in a way that they don't intersect with the set $$$\{p_1, p_2, p_3\}$$$.

  • »
    »
    5 weeks ago, # ^ | +11

    Note that $$$k$$$ cannot be $$$n! - 1$$$ (similar as it cannot be $$$1$$$). Therefore, $$$k \leq n! - 3$$$. An answer always exists since there are three forbidden permutations (the reverses of $$$p_1, p_2,$$$ and $$$p_3$$$).

»
5 weeks ago, # | Rev. 2, 0

In problem D, with a bit of optimization, a "brute force" solution using 3 sets to store the indices of the 2s, 1s, and 0s is accepted. Is this intended?

  • »
    »
    5 weeks ago, # ^ | 0

    I used the same approach and got Accepted.

  • »
    »
    5 weeks ago, # ^ | 0

    I think using one variable to store the index of one "1" is enough.

  • »
    »
    5 weeks ago, # ^ | 0

    Why do you consider this brute force?

    • »
      »
      »
      5 weeks ago, # ^ | 0

      Oh, I guess it's just the nature of my solution, lol. I used a loop while the array is not sorted, swapping 2s with 1s and 1s with 0s. It turned out to be accepted, without TLE.

      • »
        »
        »
        »
        5 weeks ago, # ^ | 0

        I think it depends more on how you determine which swaps to do. The sets on their own give you log(n) access and removal, which is already better than nothing.

        I would think that if you used more than n swaps, your solution wouldn't be accepted; and if you're applying at most n swaps, then as long as determining each swap takes less than O(n) time, it would be accepted, and I'd consider that not brute force.

»
5 weeks ago, # | 0

I fell from around rank 4k to 7k just because I used long long instead of int. Can't believe it.

»
5 weeks ago, # | +3

Just another approach for problem D, which is clearer to me than the editorial's one.

Iterate over the array and notice that if there is a 2 at the i-th position, then either it is already sorted (like 0 1 2), or we need to swap it with some 1 to its right (because the difference between the i-th and j-th elements must be exactly 1), as in 0 2 1 -> 0 1 2. It is clearly better to swap it with the 1 that has the biggest index, so we make sure it ends up sorted (2 1 1 -> 1 1 2). The same logic then applies to every 1 we have, except that we swap it with the rightmost 0 instead.

My solution 294080860 does this by keeping a priority_queue of positions for the 1s and for the 0s (maybe we don't even need those and could use just 2 variables instead). When I meet a 2, I swap it with the top of my priority_queue (the index of the rightmost 1), so I decrease my number by 1 and increase the rightmost 1 by 1 (2 2 1 0 1 2 -> 1 2 2 0 1 2), and pop the value from the priority_queue (in case that 1 becomes a 2 after the swap). Doing the same for every 1 I meet, don't forget to update the queue of 1s, because when we swap a 1 with the rightmost 0, the new 1 can become the rightmost 1 for some 2 to swap with (for example 1 2 1 0 -> 0 2 1 1 -> 0 1 1 2). If you are sure that we don't need these queues and can store only 2 values instead, or if this solution has a counter-test, please let me know :)

»
5 weeks ago, # | Rev. 2, +3

Problem D can also be solved in a simple way by just comparing the original array with its sorted version. Traverse array A from the back and handle cases greedily (B is the sorted version of A):

  1. A[i] = 0, B[i] = 1 => in A, swap this 0 with the first 1 available from the beginning.
  2. A[i] = 1, B[i] = 2 => in A, swap this 1 with the first 2 available from the beginning.
  3. A[i] = 0, B[i] = 2 => special case: change A[i] 0 -> 1 -> 2, as in steps 1 and 2.

AC solution: 294056505

  • »
    »
    5 weeks ago, # ^ | 0

    Yes, I used a similar approach 294077012

  • »
    »
    5 weeks ago, # ^ | 0

    Can you give an intuitive explanation or proof for why this always works in <= n operations?

    • »
      »
      »
      5 weeks ago, # ^ | 0

      Note that in case 3 we have to swap with the leftmost 2 in array A.

      Since you can do at most n operations, you can "assign" each operation to some index so that every index has at most 1 operation assigned to it. For cases 1 and 2, we assign the operation to index i. The catch is case 3. Let's assign the swap operation from 0 to 1 to the index of the 0, and the swap operation from 1 to 2 to the index of the 2. Notice that we would need to perform a swap on the index of that 2 (say, x) only if there is some 2 to the left of x (because A[x] becomes 1 after swapping it). That means if we swap the leftmost 2 in case 3, we will be fine.

»
5 weeks ago, # | 0

Can anyone tell me why this code gives TLE on TC 7 for problem C: 294109488.

Thanks!

  • »
    »
    5 weeks ago, # ^ | 0

    Maybe this part makes it $$$O((nm)^2)$$$:

    for (int i = 0; i < n; ++i) {
                for (int j = 0; j < m; ++j) {
                    if (g[i][j] == '?' && !visited[i][j]) {
                        vector<pii> component = dfs_count_connected(i, j);
                        if (component.size() > 1) {
                            for (auto [cx, cy] : component) {
                                v[cx][cy] = 1; // Mark all cells in the component as processed
                                loop.insert({cx, cy});
                            }
                        }
                    }
                }
            }
    
    
  • »
    »
    5 weeks ago, # ^ | Rev. 3, 0

    To help you with local debugging: you can run your code on this test and see that it is slow.

    I think the problem is here (I had to replace the dollar sign with # as it was breaking the markup):

                   if (is_valid(nx, ny, n, m)) {
                        if (g[nx][ny] == '#') {
                            // Path exits the maze
                            for (auto el : path) g[el.first][el.second] = '#', v[el.first][el.second] = 1; // Mark all as exit paths
                            return; // Exit path
                        }
                        if (visited_nodes.count({nx, ny})) return; // Loop detected
                        q.push({nx, ny});
                        visited_nodes.insert({nx, ny});
                    } else {
                        // Path exits the maze
                        for (auto el : path) g[el.first][el.second] = '#', v[el.first][el.second] = 1; // Mark all as exit paths
                        return; // Exit path
                    }
    

    So inside the bfs you are following some paths, so each "recursive" bfs call is not $$$O(1)$$$ anymore but can be up to $$$O(|path|)$$$, and in bad cases a path can be as long as $$$O(n^2)$$$; thus the overall complexity is $$$O(n^3)$$$.

  • »
    »
    5 weeks ago, # ^ | Rev. 4, +3

    The only issue with the above code was that visited_nodes was local to the lambda function, causing the complexity to escalate to O((nm)^2): although we mark 'v' as well, if we call bfs from some other node (i', j'), it is possible that we check for loops again, which itself has a worst-case complexity of n*m. In my TLE code the iteration was constrained by the condition if (visited_nodes.count({nx, ny})) return;, and changing it to if (v[nx][ny]) return; fixed the code. Removing visited_nodes and using only the 'v' vector to mark visited nodes resolved the issue. Here is the AC submission: 294113385.

    Thanks a lot MohammadParsaElahimanesh and polka125, I really appreciate your insights!

»
5 weeks ago, # | +9

I like the problems; I will upsolve C and D for sure. And nice stories, thanks for the good problems!

»
5 weeks ago, # | +3

In the editorial for F2, where did the upper bound $$$k$$$ in the sum $$$i \leq j \leq k$$$ come from? There is no mention of what $$$k$$$ is.

  • »
    »
    5 weeks ago, # ^ | +3

    Hours later, I reread my comment and realized that k is the variable from the problem statement.

»
5 weeks ago, # | Rev. 2, +8

Could you explain F2 more intuitively? For example, what does $$$weight_i$$$ mean? I don't see anywhere that we process the doubling of the value.

Also, it seems that by calculating the sum of $$$dp_i$$$, we're summing up the contributions of the values before reaching the first special point. What about the values we gain after reaching that point?

I understand that these ideas are meant to solve F1 only. However, I just can't come up with how to optimize from F1 to F2. Or is F2 a completely different approach from F1?

  • »
    »
    5 weeks ago, # ^ | Rev. 2, +48

    The approach to F2 is very different and should not be seen as an optimization of F1 and I think it's not explained clearly in the editorial. We never consider values like "number of ways to go from scroll $$$i$$$ to scroll $$$j$$$ without passing through any other scroll on the way". The main idea uses the fact that $$$2^i$$$ is actually a very specific function — I don't think the approach can be generalized to even $$$3^i$$$ for example.

    The key insight is to relate $$$2^i$$$ with the number of subsets of an $$$i$$$-element set. If you consider a path that goes through scrolls $$$s_1, \ldots, s_c$$$ (and no other scrolls) and we consider a gem taken before entering $$$s_1$$$, it is counted with multiplicity $$$2^c$$$. But rather than considering it with the multiplicity $$$2^c$$$, we will count it once per an event "for that gem, there is a path that goes through some set of scrolls $$$q_1, \ldots, q_d$$$ after taking that gem (but possibly some other scrolls as well)". There are $$$2^c$$$ of ways you can choose $$$q_1, \ldots, q_d$$$ out of $$$s_1, \ldots, s_c$$$, so if you just ignore the "disjointness condition" (i.e. the "without passing through any other scroll on the way" part) then multiplicities will magically work out by themselves.

    But even after that point, getting the final result out of these is nontrivial as well. I used the identity that $$$2^c = 2^{c-1} + \ldots + 2^1 + 2^0 + 1$$$.

    • »
      »
      »
      5 weeks ago, # ^ | 0

      I think you can solve it with $$$3^i$$$, just add a factor of $$$2$$$ here:

      add(cnt_weighted[i], 2 * paths(i, j) * cnt_weighted[j]);

      We considered giving this version in the contest, would it have made sense?

      • »
        »
        »
        »
        5 weeks ago, # ^ | 0

        Right, that's true :). It crossed my mind when writing the previous comment, but I didn't have time to think it through. Nevertheless, it seems that to come up with the $$$3^i$$$ version one has to solve $$$2^i$$$ first as an intermediate step. It would have made sense to use it in the contest, but its score would have to be increased appropriately.

    • »
      »
      »
      5 weeks ago, # ^ | +3

      This is very cool; a pity that the person who wrote the editorial didn't understand the solution.

      • »
        »
        »
        »
        5 weeks ago, # ^ | Rev. 2, +8

        ⏳ This happened because we were in a hurry to publish it, juggling multiple tasks, and dealing with limited time. Apologies for the oversight!

    • »
      »
      »
      5 weeks ago, # ^ | +18

      Sorry for the earlier miswriting and for missing parts of the problem in the editorial. I’ve edited it, and I believe the current version is now clear and detailed enough for anyone to understand the solution steps.

      I’ve also tried to address the points raised in your comments and those from others as well. Please let me know if you still feel something is missing or needs further clarification.

      Apologies again, and thank you for your feedback! 💡

    • »
      »
      »
      5 weeks ago, # ^ | 0

      "The approach to F2 is very different and should not be seen as an optimization of F1"

      This is not true; mine is. You can optimize it through some coefficient magic.

      • »
        »
        »
        »
        5 weeks ago, # ^ | +10

        Elaborating slightly: note that you do inclusion-exclusion on the paths, so for a fixed i, and a fixed j where you are considering transitions from state j to i, we need to subtract the ways from i to k, then from k to j, for some intermediate state k.

        The core observation is that whenever k is included in the set of numbers, so is i; thus we can associate the subtracted coefficient of the path i -> k -> j with k (along with its normal coefficient, where you go directly from k -> j).

  • »
    »
    5 weeks ago, # ^ | -8

    Dang, if some red is struggling with it, then I have no chance.

  • »
    »
    5 weeks ago, # ^ | Rev. 3, +60

    I'll try to explain this the way I solved, which seems similar to the editorial solution but in a more intuitive manner.

    Observe that the process is equivalent to making the following change: Instead of multiplying the value of rubies in our satchel by 2 when a special condition is achieved, we spawn one identical ruby for each ruby currently in our satchel.

    Let $$$dp_{x,y}$$$ represent the expected value of rubies spawned at state $$$(x, y)$$$, where $$$x$$$ is the number of red rubies and $$$y$$$ is the number of blue rubies in the satchel when a special condition is achieved.

    The probability of transitioning from state $$$(x_1, y_1)$$$ to $$$(x_2, y_2)$$$ is given by the hypergeometric distribution:

    $$$ P(x_1, y_1, x_2, y_2) = \frac{\binom{n - x_1}{x_2 - x_1} \binom{m - y_1}{y_2 - y_1}}{\binom{n - x_1 + m - y_1}{x_2 - x_1 + y_2 - y_1}}, $$$

    where $$$x_2 \geq x_1$$$ and $$$y_2 \geq y_1$$$, otherwise, we cannot reach $$$(x_2, y_2)$$$ from $$$(x_1, y_1)$$$.

    In the DP transitions, we either spawn some rubies from those drawn from the chest, or we spawn rubies from those spawned at a previous state.

    For the first case, we have:

    $$$ dp_{x, y} += (2x + y) \cdot P(0, 0, x, y), $$$

    where we multiply by $$$2x + y$$$, which represents the value of rubies spawned for each ruby drawn from the chest.

    For the second case, we have:

    $$$ dp_{x_2, y_2} += dp_{x_1, y_1} \cdot P(x_1, y_1, x_2, y_2), $$$

    where we multiply the expected value of rubies from a previous state by the probability of transitioning to the new state.

    So, iterate over states in sorted order and calculate the DP values. In the end, the answer will be expected value of spawned rubies $$$+$$$ rubies drawn from chest $$$=$$$ (sum of $$$dp_i$$$ over all $$$i$$$) $$$ + $$$ $$$(2n + m)$$$.

    This method can be extended to multiply by any number $$$m$$$ by spawning $$$(m - 1)$$$ rubies for each ruby in the satchel.

  • »
    »
    5 weeks ago, # ^ | Rev. 2, 0

    Say for the sake of simplicity you have one gem, ever. In the end, if we define phantom conditions $$$(0,0)$$$ and $$$(m,n)$$$, you end up counting expressions of the form $$$p(0,i_1)p(i_1,i_2)\ldots p(i_j,k+1)$$$ where $$$p(a,b)$$$ is the number of ways to get from condition $$$a$$$ to condition $$$b$$$ without restrictions, with the conditions topologically sorted. A path that goes through exactly $$$x$$$ conditions will be counted $$$2^x$$$ times: for example, the path 1 -> 2 -> $$$k+1$$$ is counted twice, in 1 -> k+1 and 1 -> 2 -> k+1. Then, accounting for gems is straightforward because we can treat all the gems as spawning at once when we reach a condition, lazy prop style.

    As to how to motivate this: I agree that F1 is kinda a dead end. One scenario I can think of would be as follows:

    After doing F1, you stare at the formula for inclusion-exclusion. For getting from condition $$$0$$$ to condition $$$k+1$$$ without passing through anything else, you add expressions of the form $$$p(0,k+1),p(0,i_1)p(i_1,i_2)p(i_2,k+1),\ldots$$$ and $$$-p(0,i_1)p(i_1,k+1),-p(0,i_1)p(i_1,i_2)p(i_2,i_3)p(i_3,k+1),\ldots$$$

    You somehow figure that you likely have to do some algebraic manipulations that modify the inclusion-exclusion so that you do it once per $$$k$$$ instead of $$$O(k)$$$ times. Let's go back to the basics and consider the 1-gem case with $$$k=1$$$ (which is a subproblem of larger $$$k$$$ anyway): is there a way to count the path that meets the condition twice right away? You can just add $$$p(0,1)p(1,2)$$$ instead of subtracting it, and that can be tracked in the dp. Then you basically have the solution.

»
5 weeks ago, # | 0

Can someone help me figure out the reason for MLE despite the space complexity being O(n)?

https://codeforces.net/contest/2034/submission/294082426

The idea was to first swap all 1s and 2s using two pointers, then swap all 0s and 1s to sort them, followed by the 1s and 2s.

  • »
    »
    5 weeks ago, # ^ | 0

    I guess your code cannot sort some arrays and you have an infinite loop, which leads to many swaps and MLE. You can use an assertion like swaps.size() <= 2n to find the infinite loop. (I cannot test it myself right now, maybe in a few hours :/)

»
5 weeks ago, # | 0

294071265: this gives a runtime error on pretest 8 for problem C. Can anyone tell me the reason? I have not seen similar verdicts in other solutions, so I am curious what caused this error.

»
5 weeks ago, # | 0

can someone explain F2 in more detail?

»
5 weeks ago, # | +8

Why does the F1 solution work with non-equal probabilities of transitions from (r, b) to (r-1, b) and (r, b-1)? It seems to treat every path as having the same probability.

  • »
    »
    5 weeks ago, # ^ | +36

    Let's calculate the probability of going from $$$(0, 0)$$$ to $$$(n, m)$$$ following a specific path.

    • Numerator: whenever you jump from $$$i$$$ to $$$i-1$$$ red rubies, you multiply it by $$$i$$$. Same for the sapphires.
    • Denominator: whenever you jump from $$$i$$$ to $$$i-1$$$ gems in total, you multiply it by $$$i$$$.

    So the probability is $$$\binom{n+m}{n}^{-1}$$$ regardless of the path.

»
5 weeks ago, # | 0

For problem A, can we use binary search on the answer?

  • »
    »
    5 weeks ago, # ^ | 0

    I don't think so.

  • »
    »
    5 weeks ago, # ^ | 0

    You can give it a try. It works, but it is not the best: the answer must lie between 1 and m*n, so you can binary search in O(log(m*n)) time. But in fact we know that ans = lcm(m, n), which takes O(log(max(m, n))) time.

    • »
      »
      »
      5 weeks ago, # ^ | +8

      There’s nothing binary-searchable in this problem. You cannot determine whether the answer is greater or smaller than a specific value.

      • »
        »
        »
        »
        5 weeks ago, # ^ | 0

        I thought of it like this: if the condition is met for some m, then every value to its right also meets it, so just eliminate the right side and check the left for the least value.

»
5 weeks ago, # | +13

Wow, so the F1 editorial is just my explanation T_T

  • »
    »
    5 weeks ago, # ^ | Rev. 2, -11

    Yes, we used almost the same writing format. Your solution is more complete, because you also described how to compute $$$ways_{i,j}$$$!

»
5 weeks ago, # | Rev. 4, 0

Does anyone know why this submission gets a runtime error?

I am trying to greedily solve the problem in parts, first by the groups of 0s and 1s.

»
5 weeks ago, # | 0

This div was really discouraging.

  • »
    »
    5 weeks ago, # ^ | 0

    At which point? The tourist drama, or the "#define int long long" drama, or something else? =))

»
5 weeks ago, # | Rev. 3, +17

A way simpler solution for problem D: https://codeforces.net/contest/2034/submission/294029509

»
5 weeks ago, # | 0

In problem E, is there a proof that there's no solution for k = n! - 1?

  • »
    »
    5 weeks ago, # ^ | +24

    It's pretty obvious: if you use all $$$n!$$$ permutations, you get the same sum $$$S$$$ in each column. Then, if you remove any one permutation, the column sums become pairwise distinct, so no valid set remains.

    For example, if you remove $$$[1, 3, 4, 2]$$$, the column sums become $$$[S - 1, S - 3, S - 4, S - 2]$$$, which are all different.
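    The argument above can be checked by brute force for small $$$n$$$: all $$$n!$$$ permutations give equal column sums $$$S$$$, and dropping any single permutation $$$p$$$ leaves sums $$$S - p_j$$$, which are distinct because $$$p$$$ is a permutation. A small verification sketch:

    ```cpp
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <vector>

    // For each choice of one permutation to remove, check that the
    // remaining n! - 1 permutations have pairwise distinct column sums.
    bool columns_distinct_after_removal(int n) {
        std::vector<int> p(n);
        std::iota(p.begin(), p.end(), 1);  // start from sorted 1..n
        std::vector<std::vector<int>> all;
        do { all.push_back(p); } while (std::next_permutation(p.begin(), p.end()));
        for (size_t skip = 0; skip < all.size(); ++skip) {
            std::vector<long long> col(n, 0);
            for (size_t i = 0; i < all.size(); ++i)
                if (i != skip)
                    for (int j = 0; j < n; ++j) col[j] += all[i][j];
            std::sort(col.begin(), col.end());
            if (std::adjacent_find(col.begin(), col.end()) != col.end())
                return false;  // two columns collided, contradicting the claim
        }
        return true;
    }

    int main() {
        std::cout << (columns_distinct_after_removal(3) ? "ok" : "bad") << '\n';  // prints ok
    }
    ```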

»
5 недель назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

In problems F1 and F2, why is there a constraint like

It is guaranteed that the sum of n and the sum of m over all test cases do not exceed $$$2⋅10^5$$$

? It has nothing to do with the approach used; is there some other possible way to solve the problem?

  • »
    »
    5 недель назад, # ^ |
      Проголосовать: нравится +14 Проголосовать: не нравится

    Probably just a general technique to hide intended solution.

  • »
    »
    5 недель назад, # ^ |
      Проголосовать: нравится +6 Проголосовать: не нравится

    I got into this trap and tried to come up with some $$$\mathcal{O}((n+m) \cdot k)$$$ solutions but failed :(

»
5 недель назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can anyone provide a proof of the greedy solution in G1? I came up with some similar greedy strategies to check whether the answer is 2, but they all turned out to be incorrect.

  • »
    »
    5 недель назад, # ^ |
      Проголосовать: нравится +18 Проголосовать: не нравится

    Think of it this way:

    • Case $$$I = 0$$$: This case is trivial and we could assign any color we want to that interval!
    • Case $$$I = 1$$$: At this point, we are at $$$x$$$, and we need to assign an opposite color to one of the intervals starting at $$$x$$$ to maintain validity. Here’s why the interval with the largest endpoint is the best choice:
      • Suppose we have an interval $$$[x, x+k]$$$, which is the largest interval among those starting at $$$x$$$. In our greedy algorithm, we always assign this interval the opposite color from the previous one.
      • Now, consider if an “efficient” algorithm chose a smaller interval $$$[x, x+r]$$$ where $$$r < k$$$ to have the opposite color and instead assigned the same color as the previous one to $$$[x, x+k]$$$.
      • In the efficient algorithm, some other interval would need to have a unique color at $$$x + r + \epsilon$$$:
        • If this interval is $$$[x, x+k]$$$, then $$$[x, x+k]$$$ could have been assigned the unique color in $$$[x, x+r]$$$ instead, making it unnecessary to assign $$$[x, x+r]$$$ a unique color.
        • Otherwise, this interval must be another one, such as $$$[x+r-s, x+r+t]$$$. This interval would already have the opposite color compared to $$$[x, x+r]$$$. However, this creates a contradiction because both $$$[x, x+k]$$$ and $$$[x+r-s, x+r+t]$$$ would have the opposite color of $$$[x, x+r]$$$ and both would cover $$$x + r + \epsilon$$$.
      • Thus, choosing the largest interval $$$[x, x+k]$$$ to have the opposite color ensures validity and avoids any contradictions.
    • Case $$$I = 2$$$: In this case, none of the upcoming intervals can have a unique color at this point because all colors are already used by earlier intervals. For any upcoming interval, there will always be another interval of the same color starting earlier that has a unique color at this point. To resolve this, we assign the interval with the smallest endpoint among the remaining intervals. This approach follows the same logic as the previous case: ensuring that we minimize conflicts and maintain validity.
»
5 недель назад, # |
  Проголосовать: нравится +2 Проголосовать: не нравится

A solution whose complexity is $$$O(T\cdot M\cdot \log M\cdot K + M\cdot \sqrt{M} + \sum_{i=1}^Tn_i\cdot \log n_i)$$$ cannot easily pass H. I think this solution might not be worse than the tutorial. It may be better if it is guaranteed that the sum of $$$M$$$ doesn't exceed $$$3\cdot 10^6$$$.

»
5 недель назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

I don't know what is happening, but I am stuck on a weird problem: I get the correct answer on my local system, but after submission I get a totally different answer.

294374002

In the above submission, for the second test case, specifically the input

4

2 0 0 1

I am getting a weird answer: 5 1 4 2 1 2 3 3 4 4 3

but on my local system I get the correct answer (here is an image of the execution).

Need help.

  • »
    »
    5 недель назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    These types of differences arise due to the varying behavior of compilers on different systems. For instance, if you access an out-of-bounds index in an array (e.g., a[-1]), your local compiler might return a default value like 0 due to its specific behavior, while on the judge system, it could return a completely random value from memory (e.g., 1948329449) or even result in a runtime error (RTE) if the compiler detects the invalid access.

    In your case, I suspect something similar is happening, which is why you’re seeing different results on your local system compared to the judge output. The good news is that you’ve identified the specific test case causing the issue, and it’s a small test. This makes it easier to debug your code by carefully checking for instances where you’re accessing out-of-bounds elements in your arrays.
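    One quick way to surface such bugs locally is to switch raw indexing to bounds-checked access while debugging. A minimal sketch (the container and sentinel value here are illustrative, not the commenter's actual code): `vector::at()` throws `std::out_of_range` deterministically, instead of the silent undefined behavior of `operator[]`.

    ```cpp
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    // Bounds-checked read: vector::at() throws on an invalid index,
    // so the bug shows up the same way on every system.
    int safe_get(const std::vector<int>& v, long long i) {
        try {
            return v.at(i);  // throws for i < 0 or i >= v.size()
        } catch (const std::out_of_range&) {
            return -1;       // sentinel for "invalid index" in this demo
        }
    }

    int main() {
        std::vector<int> a{10, 20, 30};
        std::cout << safe_get(a, 1) << ' ' << safe_get(a, 5) << '\n';  // prints 20 -1
    }
    ```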

»
5 недель назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Can anyone point out the logic error in my code? It gives a wrong answer on test case 7. I am marking inescapable cells as infected and then doing a flood fill from them. https://codeforces.net/contest/2034/submission/294627595

»
5 недель назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Submission: 294661590

Anyone, please help! My submission uses the same memory on every test case; it passed the first 6 test cases, so why did it fail on the 7th?

  • »
    »
    5 недель назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    These types of issues occur due to exceeding the stack memory size as a result of making too many recursive function calls (almost infinitely), which ultimately leads to MLE (Memory Limit Exceeded). Even if you manage to fix the MLE issue, I believe you might still encounter TLE (Time Limit Exceeded) because of the sheer number of recursive calls being made.

    MLE tends to happen faster than TLE, which is why, in some cases where you’re using a fixed amount of memory for each test, the program runs into MLE first due to excessive recursion before it can hit the time limit.
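    A common fix is to replace the deep recursion with an explicit stack, which moves the bookkeeping from the limited call stack onto the heap. A minimal iterative flood-fill sketch on a grid (the grid layout and function name are illustrative, not the commenter's code):

    ```cpp
    #include <iostream>
    #include <stack>
    #include <string>
    #include <utility>
    #include <vector>

    // Counts the 4-connected component of (sr, sc) consisting of cells equal
    // to grid[sr][sc], using an explicit heap-allocated stack: no recursion,
    // so no risk of overflowing the call stack on huge components.
    int flood_fill_count(const std::vector<std::string>& grid, int sr, int sc) {
        int n = grid.size(), m = grid[0].size();
        char target = grid[sr][sc];
        std::vector<std::vector<bool>> seen(n, std::vector<bool>(m, false));
        std::stack<std::pair<int, int>> st;
        st.push({sr, sc});
        seen[sr][sc] = true;
        int cnt = 0;
        const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
        while (!st.empty()) {
            auto [r, c] = st.top();
            st.pop();
            ++cnt;
            for (int d = 0; d < 4; ++d) {
                int nr = r + dr[d], nc = c + dc[d];
                if (nr >= 0 && nr < n && nc >= 0 && nc < m &&
                    !seen[nr][nc] && grid[nr][nc] == target) {
                    seen[nr][nc] = true;
                    st.push({nr, nc});
                }
            }
        }
        return cnt;
    }

    int main() {
        std::vector<std::string> g = {"..#", "..#", "###"};
        std::cout << flood_fill_count(g, 0, 0) << '\n';  // prints 4
    }
    ```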

»
5 недель назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

https://codeforces.net/contest/2034/submission/294830979

Can someone please explain whether a BFS would work for problem C? I mark all the exitable cells, plus the '?' cells that are surrounded by exitable cells in all 4 directions, and then subtract their count from the total.

»
4 недели назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

My solution to F2:

Consider the expectations directly. Let $$$f_i$$$ stand for the expected value of the path from $$$(n, m)$$$ to condition $$$i$$$. Initially $$$f_i=2(n-r_i)+(m-b_i)$$$. Then we calculate the contribution of each condition separately; this is valid because of the linearity of expectation. If we pass a condition, the value is doubled at that moment; in other words, the contribution of that condition equals the value of the path before it. Therefore, for each condition $$$j$$$ from which we can reach $$$i$$$, let $$$p$$$ be the probability that a path from $$$(n, m)$$$ to $$$i$$$ passes through $$$j$$$, and update $$$f_i\gets f_i+p\,f_j$$$.

This works no matter what the coefficients of the conditions are.

I'm posting this because I'm not sure whether my idea is correct, or I just accidentally wrote code that happens to be right.

»
3 недели назад, # |
  Проголосовать: нравится -31 Проголосовать: не нравится

shit E :( and good F :)