awoo's blog

By awoo, history, 7 weeks ago, In Russian

1997A - Strong Password

Idea: BledDest

Tutorial
Solution (BledDest)

1997B - Make Three Regions

Idea: BledDest

Tutorial
Solution (Neon)

1997C - Even Positions

Idea: BledDest

Tutorial
Solution (adedalic)

1997D - Maximize the Root

Idea: BledDest

Tutorial
Solution (Neon)

1997E - Level Up

Idea: BledDest

Tutorial
Solution 1 (awoo)
Solution 2 (awoo)

1997F - Chips on a Line

Idea: BledDest

Tutorial
Solution (BledDest)
  • Vote: +89

»
7 weeks ago, # |
  Vote: +55

Problem D can be solved in $$$O(n)$$$ with just a basic DFS. The idea is to maximize the minimum in each subtree. Submission: 273547075

  • »
    »
    7 weeks ago, # ^ |
      Vote: 0

    Just what I did.

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: 0

    Thanks for the great solution. Can you please explain: when node == 0, why are we not using this part?

    Part
    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: 0

      You only care about the final value in node 0 (i.e. node 1 in 1-based indexing). So you maximize the minimum in node 0's subtree (excluding node 0 itself) and then apply the operation as many times as that minimum allows.

      • »
        »
        »
        »
        7 weeks ago, # ^ |
          Vote: 0

        Thanks. Can you provide the intuition for your algorithm and how and why it works? I was on almost the same path for a few minutes during the contest but didn't quite connect the dots. Can you please explain?

        • »
          »
          »
          »
          »
          7 weeks ago, # ^ |
          Rev. 2   Vote: +3

          This was my thought process. I want to use the operation on node 1 as many times as possible. The number of times I will be able to do it is equal to the minimum value in the subtree of node 1 (excluding node 1 itself). To extend the logic, the maximum number of times I will be able to do it is equal to the minimum value over the subtrees of node 1's children. This logic naturally looks like dynamic programming.

          DP transitions: if the value of the current node is greater than the value of $$$dp[node]$$$, we don't change anything; otherwise we use the operation while $$$a[node] < dp[node]$$$, which in formula form gives $$$dp[node] = \lfloor\frac{a[node] + dp[node]}{2} \rfloor$$$.
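          In case a concrete version helps, below is a hedged sketch of the DFS/DP described in this thread (it is not the official editorial code, and the per-test-case input layout of n, then $$$a_1 \dots a_n$$$, then $$$p_2 \dots p_n$$$ is taken from the example discussed further down in the comments). Here $$$f(v)$$$ is the best achievable minimum inside the subtree of $$$v$$$ using operations only inside that subtree.

          #include <bits/stdc++.h>
          using namespace std;

          int main() {
              int t;
              scanf("%d", &t);
              while (t--) {
                  int n;
                  scanf("%d", &n);
                  vector<long long> a(n + 1);
                  for (int v = 1; v <= n; v++) scanf("%lld", &a[v]);
                  vector<vector<int>> ch(n + 1);
                  for (int v = 2; v <= n; v++) {
                      int p;
                      scanf("%d", &p);
                      ch[p].push_back(v);
                  }

                  // process vertices children-first without recursion (avoids deep call stacks)
                  vector<int> order = {1};
                  for (size_t i = 0; i < order.size(); i++)
                      for (int c : ch[order[i]]) order.push_back(c);

                  vector<long long> f(n + 1, 0);
                  for (int i = n - 1; i >= 0; i--) {
                      int v = order[i];
                      long long m = LLONG_MAX;
                      for (int c : ch[v]) m = min(m, f[c]);
                      if (ch[v].empty()) f[v] = (v == 1 ? 0 : a[v]); // leaf: its own value is the minimum
                      else if (v == 1 || a[v] >= m) f[v] = m;        // no reason to push more value up
                      else f[v] = (a[v] + m) / 2;                    // balance a[v] against the subtree minimum
                  }
                  printf("%lld\n", a[1] + f[1]);                     // root takes as much as the minimum allows
              }
              return 0;
          }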

          • »
            »
            »
            »
            »
            »
            7 weeks ago, # ^ |
              Vote: 0

            Thank you so much, mate.

          • »
            »
            »
            »
            »
            »
            7 weeks ago, # ^ |
              Vote: 0

            Thank you for your solution. I have difficulty understanding the (a[node] + dp[node]) / 2 part.

            Is this because we try to evenly distribute values between a node and its parent, so that we can ensure the root node gets the maximum value?

            • »
              »
              »
              »
              »
              »
              »
              7 weeks ago, # ^ |
              Rev. 3   Vote: +1

              Yeah, it is just a faster way of doing:

              // balance a[node] against the current subtree minimum, one unit per operation
              while (a[node] < dp[node]) {
                    a[node]++, dp[node]--;
              }
              

              since if you don't do anything, the value in the current node will stay the minimum, while after balancing the new minimum is $$$\lfloor \frac{a[node] + dp[node]}{2} \rfloor \ge a[node]$$$.

          • »
            »
            »
            »
            »
            »
            7 weeks ago, # ^ |
              Vote: 0

            Can you please explain the dp state that you defined?

            • »
              »
              »
              »
              »
              »
              »
              7 weeks ago, # ^ |
                Vote: 0

              $$$dp[node]$$$ means the minimum value in the subtree of $$$node$$$.

  • »
    »
    7 weeks ago, # ^ |
      Vote: +7

    I created a video editorial for D: Maximize the Root.

  • »
    »
    7 weeks ago, # ^ |
      Vote: 0

    Same solution without using an additional dp array: https://codeforces.net/contest/1997/submission/273543097

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: 0

    Can you explain the input format in D? I didn't get who the parent is. Please explain using the first test case: who is whose parent?

    • »
      »
      »
      7 weeks ago, # ^ |
      Rev. 2   Vote: 0

      Input:

      4
      0 1 0 2
      1 1 3

      (tree diagram omitted)

      Note: The given array is $$$p_2, p_3, p_4, ..., p_n$$$; in this case the array is $$$1, 1, 3$$$, which means that node 2's parent is node 1, node 3's parent is node 1, and node 4's parent is node 3.

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: 0

    Can anyone tell me what is wrong with this solution? I got an overflow error even though everything is in long long: 273894619

  • »
    »
    7 weeks ago, # ^ |
      Vote: 0

    You are correct. But you don't even need the memoization; a single pass down the tree suffices. Here's a slightly cleaned-up version of my contest solution: 273914043

  • »
    »
    7 weeks ago, # ^ |
      Vote: 0

    Check out my clean solution: 273559093

»
7 weeks ago, # |
Rev. 2   Vote: 0

Why did my $$$O(n \log^2 n)$$$ solution time out on E?

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: -45

    I'm wondering a similar thing. Why do $$$O(n\cdot\log^4(n))$$$ solutions time out but $$$O(n \sqrt{n})$$$ solutions pass?

    $$$\log^4(200000) \approx 790$$$ and $$$\sqrt{200000} \approx 447$$$. They really aren't that different. Plus, $$$\log$$$ is a tiny thing compared to square root.

»
7 weeks ago, # |
Rev. 2   Vote: -12

I FSTed on D and E :(

I hope there will be strong pretests next time.

»
7 weeks ago, # |
  Vote: +1

I enjoyed all of the problems I saw in the contest.

A was easy but is still a good problem for a Div. 2 A.

My only complaint is that for problem B, it wasn't really emphasized that there was one connected component at the start. It was there in the text, but the diagram had more than one component. Ideally that part would have been in bold.

For problem C, I misread the problem (which was completely my fault) and thought that the data corruption affected both random even and odd characters. If anyone has a solution to that problem, I'd be interested. The greedy doesn't work, as it is possible that by adding a closing bracket at your current position you stop the bracket sequence from becoming a regular bracket sequence. (Funnily enough, I actually had the working code for this problem as a part of my attempt at a solution for the version where random even and odd characters are corrupted.)

Problem D was a classic tree DP problem that I almost solved but ran out of time because of how much time I wasted on problems B and C.

Very nice and educational contest, even though I completely under-performed.

  • »
    »
    7 weeks ago, # ^ |
      Vote: 0

    D was binary search, not DP, I thought.

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: +1

      You can solve it with both. To me DP was more intuitive, but the intended solution was binary search.

  • »
    »
    5 days ago, # ^ |
    Rev. 2   Vote: 0

    I solved it by recognising a pattern for the optimal filling of the brackets (for the given constraints).

    IDK why it works; maybe I'll prove it sometime.

    280823450

»
7 weeks ago, # |
  Vote: -11

B was basically an IQ problem.

  • »
    »
    7 weeks ago, # ^ |
      Vote: +2

    Why are you so obsessed with IQ? You also literally solved it in the contest.

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: 0

      I am not so obsessed with IQ. It is just an interest of mine.

      B being an IQ problem doesn't really have anything to do with whether or not I solved it. It is an IQ problem because you clearly can't train for such problems. I bet you can't even find a similar problem on Codeforces.

      Anyway, it seems like most Div. 2 A-C are IQ-ish problems. To me, it just felt like this B was a full-on IQ problem. I bet you could put that problem on an IQ test and no one would bat an eye.

      • »
        »
        »
        »
        7 weeks ago, # ^ |
          Vote: 0

        Isn't your whole point that you can never reach red because your IQ isn't high enough? Or am I mistaken? So if you can solve so-called "IQ problems", then why can't you reach red?

        Also, I think your claim that you can't train for such problems is clearly ridiculous. Many GMs were stuck at Newbie for months and couldn't solve Div. 2 A-C. Has their IQ changed?

        • »
          »
          »
          »
          »
          7 weeks ago, # ^ |
            Vote: -11

          Well, to be fair, it was an easier IQ problem. There are harder IQ problems out there, like that one B2 from a few contests ago. Though I'd classify that B2 as an IQ-ish problem, not a full-on IQ problem. I was just letting the authors know that they made (whether by accident or on purpose) an IQ problem.

          The sad thing is that the IQ-ish problems don't seem to go away completely as you move to higher-rated problems. There is no feeling like going to a problem's editorial and seeing that it was an IQ/IQ-ish problem all along. Of course, there are worse feelings, like getting sent to hell probably, but there is no feeling quite like failing to solve an IQ problem.

          Also I think that your claim that you can't train for such problems is clearly ridiculous. Many GMs were stuck at Newbie for months and couldn't solve div.2 A-C. Has their IQ changed?

          Probably, because most of them started Codeforces when they were around 12. On average, cognitive ability increases substantially from 12 to 25, though it mostly stops increasing by age 18. Show me the profile of one of these users, though. I'd be happy to be proven wrong.

          • »
            »
            »
            »
            »
            »
            7 weeks ago, # ^ |
              Vote: 0

            According to Emory School of Medicine, "IQ is an abbreviation for Intelligence Quotient. “Intelligence,” as measured by IQ tests is rather narrowly defined. An IQ is intended as a predictor of the level of abilities a child will need to be successful in school. In the general population this score becomes relatively stable after about four years of age." So no, IQ doesn't generally change when you're an adolescent.

            Here are two profiles that I think have inspiring rating graphs.

            1. Bakry
            2. adamant (his username fits his rating graph quite well, in my opinion)
            • »
              »
              »
              »
              »
              »
              »
              7 weeks ago, # ^ |
                Vote: 0

              What they mean by that is that individuals' IQ scores generally stabilize by the age of 4. This means that if a person scores 115 at age 4 (relative to other 4-year-olds), they will likely score around 115 at age 34 (relative to people around the age of 34). It doesn't mean that cognitive ability doesn't increase past 4. Of course a 34-year-old is going to be smarter than a 4-year-old, on average.

              That being said, I think both of the users you mentioned started way before they were 18, so they had lower cognitive ability when they started than they do now.

      • »
        »
        »
        »
        7 weeks ago, # ^ |
          Vote: 0

        These are called "ad-hoc problems."

  • »
    »
    7 weeks ago, # ^ |
      Vote: +11

    You are overusing the term "IQ problem".

»
7 weeks ago, # |
  Vote: +5

The given grid contains at most 1 connected region

I missed the above statement in B because my mind was occupied with the pictures in the statement; I assumed there could be multiple components, and a lot of time got wasted because of that.

»
7 weeks ago, # |
  Vote: 0

Amazing contest! But I think there is a problem: rating changes were applied to all participants, even those whose rating is greater than 2100. This could be a mistake, maybe.

»
7 weeks ago, # |
Rev. 2   Vote: +1

Problem B was not framed correctly; the ambiguity in the sentences was high.

»
7 weeks ago, # |
Rev. 3   Vote: +3

Alternative approach to problem E.

I observed that if we do not fight monster i for some k = x, then we will not fight it for k < x either. So I found, for each monster, the maximum k that would make it run away from the fight, and for each query, if x <= res[i] then we do not fight it. My proof was prayers, so anyone is welcome to either prove it right or wrong.

The rest is straightforward: we do a binary search for each monster. Take some fixed k. If we will not fight monster i, then we have already fought at least k*a[i] monsters out of the first i-1. If that's the case, then we should look for a greater k. Now we must be able to check how many of them we did not fight, i.e. how many of them had res[j] >= k for j < i. We can do that with a simple sum segment tree, keeping in the leaf nodes how many monsters had res = x.

The reason I cannot prove the first paragraph is that, when we check k < x, it's true that we will have more frequent level-ups; however, it's also true that we will skip more monsters, so if we create an equation, both sides would be decreasing.

Accepted solution: 273561881
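In case it helps to see the shape of this, here is a minimal sketch of the approach described above: compute res[i] with a binary search per monster over a Fenwick tree of the already-computed res values, then answer each query by comparing x with res[i]. It relies on the unproven monotonicity from the first paragraph, assumes the usual input layout for E (n, q, then $$$a_1 \dots a_n$$$, then q queries "i x"), and is not the linked submission.

#include <bits/stdc++.h>
using namespace std;

struct Fenwick {
    int n;
    vector<long long> t;
    Fenwick(int n) : n(n), t(n + 1, 0) {}
    void add(int i, long long v) { for (; i <= n; i += i & -i) t[i] += v; }
    long long pref(int i) { long long s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
    long long suff(int i) { return pref(n) - pref(i - 1); }   // how many recorded res values are >= i
};

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<long long> a(n + 1);
    for (int i = 1; i <= n; i++) scanf("%lld", &a[i]);

    Fenwick fw(n);                 // fw counts, per value x, how many processed monsters have res = x
    vector<int> res(n + 1, 0);     // res[i] = largest k for which monster i is skipped (0 if it never is)
    for (int i = 1; i <= n; i++) {
        // fights before monster i at parameter k = (i-1) - #{j < i : res[j] >= k};
        // monster i is skipped at parameter k iff that count is >= k * a[i]
        int lo = 1, hi = n, best = 0;
        while (lo <= hi) {
            int k = (lo + hi) / 2;
            long long fought = (i - 1) - fw.suff(k);
            if (fought >= (long long)k * a[i]) { best = k; lo = k + 1; }
            else hi = k - 1;
        }
        res[i] = best;
        if (best >= 1) fw.add(best, 1);   // record only monsters that are skipped for some k
    }

    while (q--) {
        int i, x;
        scanf("%d %d", &i, &x);
        puts(x <= res[i] ? "NO" : "YES"); // skipped for every k <= res[i], fought otherwise
    }
    return 0;
}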

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: 0

    A proof in plain language:

    Consider two values of k, k_0 < k_1, walking through the sequence simultaneously.

    When k_0 levels up, k_1 still stays on the old level because it needs more monsters. As a result, k_0 skips some monsters that k_1 fights.

    But that's fine, because either k_1's level never catches up with k_0's, or, if it does catch up, by that time k_0 has already made some progress at that level, and from then on they fight the same monsters.

    So k_1's level never exceeds k_0's, which means k_1 can never skip a monster that k_0 fights.

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: 0

    Another way to do this with a similar approach is to store an ordered multiset of all res[j] for j < i and find the order of key k.

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: 0

    Proof for the first paragraph:

    Proposition to be proved is $$$(\forall i \exists ans[i] $$$ such that $$$query(i, x) \leftrightarrow ans[i] < x)$$$. Assume that $$$i$$$ is the minimum index for which the statement does not hold. Hence $$$\exists k$$$ such that $$$query(i, k) \wedge \neg query(i, k + 1)$$$. Therefore $$$card(j < i, ans[j] = k) \geq a[i] + 1$$$. We define $$$t := max(j < i, ans[j] = k)$$$. $$$(k*a[t] \leq \Sigma_{k' = 0}^{k - 1} card(j < t, ans[j] = k') \wedge k*a[t] + a[t] > \Sigma_{k' = 0}^k card(j < t, ans[j] = k'))$$$ which means that $$$a[t] > a[i] \wedge \neg query(t, k)\Rightarrow \neg query(i, k)$$$. Thus we achieve proof by contradiction.

  • »
    »
    7 weeks ago, # ^ |
      Vote: 0

    My post-contest solution with explanatory comments. Your explanation helped. Thanks <3

    https://codeforces.net/contest/1997/submission/273966239

»
7 weeks ago, # |
  Vote: 0

In problem F, can anyone explain this line to me and why it is optimal:

this is just checking that the minimum number of Fibonacci numbers used to represent an integer is equal to m.

»
7 weeks ago, # |
Rev. 3   Vote: +1

I had a solution for problem E that's a bit simpler conceptually. (It doesn't require the harmonic-series observation or changing the timeframe from i to k.)

It's based on the following observation:

If a monster is fought when k = k_0, it will also be fought when k is larger.

Given the above, we can walk from position 1 to n and maintain the number of monsters fought for each k. For every new monster, based on the observation, we only need to increase the count by 1 on a suffix. That can be done with a BIT or your favorite range-query data structure.

Now we still need to decide the smallest k for each position, i.e. which suffix we should operate on.

Again based on the observation, this can be found by binary search. When checking whether we will fight that monster for a given k, we make a query to our BIT to calculate the current level for k.

The overall complexity is still $$$O(n \log^2 n)$$$.
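A minimal sketch of this idea, under my reading of the comment (names and the exact I/O layout are assumptions, not the commenter's code): cnt(k) = the number of monsters fought so far at parameter k, stored in a Fenwick tree supporting suffix add and point query, and for each monster we binary search the smallest k at which it is fought.

#include <bits/stdc++.h>
using namespace std;

struct Fenwick {                     // add on a suffix is done by adding at its left end;
    int n;                           // a point query is then a prefix sum
    vector<long long> t;
    Fenwick(int n) : n(n), t(n + 1, 0) {}
    void add(int i, long long v) { for (; i <= n; i += i & -i) t[i] += v; }
    long long query(int i) { long long s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
};

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<long long> a(n + 1);
    for (int i = 1; i <= n; i++) scanf("%lld", &a[i]);

    Fenwick cnt(n);                   // cnt.query(k) = monsters fought so far when the parameter is k
    vector<int> firstK(n + 1, n + 1); // smallest k for which monster i is fought
    for (int i = 1; i <= n; i++) {
        // fought at k  <=>  level 1 + floor(cnt(k)/k) <= a[i]  <=>  cnt(k) < k * a[i];
        // by the observation above this is monotone in k, so binary search the smallest such k
        int lo = 1, hi = n, best = n + 1;
        while (lo <= hi) {
            int k = (lo + hi) / 2;
            if (cnt.query(k) < (long long)k * a[i]) { best = k; hi = k - 1; }
            else lo = k + 1;
        }
        firstK[i] = best;
        if (best <= n) cnt.add(best, 1);   // +1 for every parameter in [best, n]
    }

    while (q--) {
        int i, x;
        scanf("%d %d", &i, &x);
        puts(x >= firstK[i] ? "YES" : "NO");
    }
    return 0;
}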

  • »
    »
    7 weeks ago, # ^ |
      Vote: 0

    You can also query the BIT directly (find-first), and the complexity will be $$$O(n \log n)$$$.

    • »
      »
      »
      7 weeks ago, # ^ |
        Vote: 0

      how to find pref[x] / x < a in log(n) with point updates?

    • »
      »
      »
      7 weeks ago, # ^ |
      Rev. 2   Vote: 0

      Hmm can you explain more? Thanks!

      IMO one log is from binary search and another is from BIT, it's not clear to me how we can drop one of the two?

      [Edit] I figured out how to do it... Basically you look at 1, 10, 100, 1000... and combine finding sum and checking the binary search condition.
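      For reference, a generic sketch of that "descend the Fenwick tree" trick (binary lifting over block sizes 2^j, combining the running prefix sum with the search condition), so the outer binary search and its inner prefix-sum calls collapse into one $$$O(\log n)$$$ pass. This is the textbook technique, not code from this thread.

      #include <bits/stdc++.h>
      using namespace std;

      struct Fenwick {
          int n;
          vector<long long> t;             // 1-based; t[i] covers the range (i - lowbit(i), i]
          Fenwick(int n) : n(n), t(n + 1, 0) {}
          void add(int i, long long v) { for (; i <= n; i += i & -i) t[i] += v; }

          // smallest position p with pref(p) >= target, or n + 1 if the total sum is too small
          int lowerBound(long long target) {
              int pos = 0;
              long long sum = 0;
              for (int step = 1 << (31 - __builtin_clz(max(n, 1))); step > 0; step >>= 1) {
                  // try to extend the current prefix by a whole block of length `step`
                  if (pos + step <= n && sum + t[pos + step] < target) {
                      pos += step;
                      sum += t[pos];
                  }
              }
              return pos + 1;   // invariant: pref(pos) < target
          }
      };

      int main() {
          // toy usage: values 1..8, find where the prefix sum first reaches 10
          Fenwick fw(8);
          for (int i = 1; i <= 8; i++) fw.add(i, i);
          printf("%d\n", fw.lowerBound(10));   // prints 4, since 1 + 2 + 3 + 4 = 10
          return 0;
      }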

»
7 weeks ago, # |
  Vote: 0

A simpler implementation of C without a stack/vector: 273589519

I thought of the brackets reducing in pairs of 2 and guessed by looking at the pretests. Although I am not very clear on why this works, and it would be great if someone could explain it.

»
7 weeks ago, # |
  Vote: +3

Problem C can be solved in a single pass with O(1) extra space using the formula len(s)/2 + 2*s.count('(') for an input string s: just keep a counter of open parentheses and update it as the input comes in.

If anyone has a good proof, I would be interested in seeing it.
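A tiny sketch that just evaluates the claimed closed form in one streaming pass (whether the formula itself is correct is exactly the proof being asked for above; the t/n/s input layout is assumed):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int t;
    cin >> t;
    while (t--) {
        long long n;
        string s;
        cin >> n >> s;
        long long open = count(s.begin(), s.end(), '(');   // known '(' characters
        cout << n / 2 + 2 * open << "\n";                  // len(s)/2 + 2 * count of '('
    }
    return 0;
}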

»
7 weeks ago, # |
Rev. 2   Vote: 0

I'm really curious why my linear $$$O(n + m)$$$ solution passes in ~$$$500ms$$$ while, at the same time, the binary search solutions with time complexity $$$O(n\log_2 n + m\log_2 n)$$$ pass in ~$$$200ms$$$.

»
7 weeks ago, # |
  Vote: 0

Solution for C

Spoiler
void solve(int T)
{
    int n;cin>>n;string s;cin>>s;
    int temp = 0;
    int ans = 0;
    for(int i=0;i<n;i++){
        temp++;
        if(s[i]==')'){
            ans += temp-1;
            ans += temp/2-1;
            temp = 0;
        }
    }
    cout<<ans<<endl;
}

»
7 weeks ago, # |
  Vote: 0

Can someone please explain why my solution for D doesn't work? I am just using a basic DFS and trying to find the maximum value I can add to vertex 1. Submission: https://codeforces.net/contest/1997/submission/273756060

»
7 weeks ago, # |
  Vote: 0

Can someone please explain problem B? I really don't understand it from either the tutorial or the video solution.

  • »
    »
    7 weeks ago, # ^ |
      Vote: 0

    There are only two layouts in which blocking a cell creates 3 regions. Try brute forcing every cell and you will notice that only two layouts can be added to the answer.
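    A hedged sketch of that brute-force check, under my reading of the two layouts (a cell in an all-free row whose opposite cell is free but walled in from both sides; '.' is free, 'x' is blocked, and the standard t/n/two-rows input layout is assumed):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int t;
        cin >> t;
        while (t--) {
            int n;
            cin >> n;
            vector<string> g(2);
            cin >> g[0] >> g[1];
            long long ans = 0;
            for (int r = 0; r < 2; r++)
                for (int c = 1; c + 1 < n; c++)
                    // pattern:  row r     : . . .   (block the middle cell of this row)
                    //           row 1 - r : x . x   (its middle cell then becomes isolated)
                    if (g[r][c - 1] == '.' && g[r][c] == '.' && g[r][c + 1] == '.' &&
                        g[1 - r][c - 1] == 'x' && g[1 - r][c] == '.' && g[1 - r][c + 1] == 'x')
                        ans++;   // blocking (r, c) isolates (1-r, c) and splits left from right
            cout << ans << "\n";
        }
        return 0;
    }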

»
7 weeks ago, # |
  Vote: 0
»
7 weeks ago, # |
  Vote: 0

For problem E, can someone explain this part:

int nxt = x + k - 1 >= int(alive.size()) ? n : *alive.find_by_order(x + k - 1);

»
7 weeks ago, # |
  Vote: 0

I just realised B says at most 1 connected component, and here I was thinking of using articulation points and all, lol.

»
7 weeks ago, # |
  Vote: +11

E can be solved in $$$O(n \log n)$$$ using the "binary search on Fenwick tree" approach. You can see 273651754.

  • »
    »
    7 weeks ago, # ^ |
    Rev. 3   Vote: 0

    Another solution for E is a parallel binary search or divide-and-conquer approach, also in $$$O(n \log n)$$$. My solution: 273896973

»
7 weeks ago, # |
  Vote: 0

Can anyone tell me why my submission for D overflows the stack? As far as I can tell, I only visit each node once.

»
7 weeks ago, # |
  Vote: 0

Problem E can be solved in $$$O(n \log n)$$$. I've written a blog for it.

»
7 weeks ago, # |
  Vote: 0

Can anybody explain how to find the k-th element greater than x with a Fenwick tree? :)

  • »
    »
    6 weeks ago, # ^ |
      Vote: +3

    Process the queries offline. Sort the queries by decreasing x.

    Make an array of [value, index] pairs from the original array and sort it by decreasing value. Maintain a pointer into that array, initially at 0. Now, to solve a query [x, k] in $$$\log^2(n)$$$ time:

    1. While the pointer is pointing at a value greater than x, update the Fenwick tree at that index from 0 to 1.
    2. Binary search for the correct position.
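    A self-contained sketch of that offline scheme (illustrative names, generic version of the technique rather than code from this thread); the inner binary search can be replaced by a Fenwick-tree descent to shave one log:

    #include <bits/stdc++.h>
    using namespace std;

    struct Fenwick {
        int n;
        vector<int> t;
        Fenwick(int n) : n(n), t(n + 1, 0) {}
        void add(int i, int v) { for (; i <= n; i += i & -i) t[i] += v; }
        int pref(int i) { int s = 0; for (; i > 0; i -= i & -i) s += t[i]; return s; }
    };

    // For each query (x, k): 1-based index of the k-th array position holding a value > x,
    // or -1 if there are fewer than k such positions.
    vector<int> solveOffline(const vector<int>& a, const vector<pair<int,int>>& queries) {
        int n = a.size();
        vector<int> byValue(n);
        iota(byValue.begin(), byValue.end(), 0);
        sort(byValue.begin(), byValue.end(), [&](int i, int j) { return a[i] > a[j]; });

        vector<int> qid(queries.size());
        iota(qid.begin(), qid.end(), 0);
        sort(qid.begin(), qid.end(), [&](int i, int j) { return queries[i].first > queries[j].first; });

        Fenwick fw(n);
        vector<int> ans(queries.size(), -1);
        size_t ptr = 0;
        for (int id : qid) {
            auto [x, k] = queries[id];
            while (ptr < byValue.size() && a[byValue[ptr]] > x)
                fw.add(byValue[ptr++] + 1, 1);              // activate positions whose value is > x
            // binary search the smallest position p with pref(p) >= k
            int lo = 1, hi = n, res = -1;
            while (lo <= hi) {
                int mid = (lo + hi) / 2;
                if (fw.pref(mid) >= k) { res = mid; hi = mid - 1; }
                else lo = mid + 1;
            }
            ans[id] = res;
        }
        return ans;
    }

    int main() {
        vector<int> a = {5, 1, 7, 3, 7};
        vector<pair<int,int>> qs = {{4, 2}, {6, 1}, {0, 5}};
        for (int v : solveOffline(a, qs)) printf("%d ", v);  // prints: 3 3 5
        puts("");
        return 0;
    }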

    • »
      »
      »
      6 weeks ago, # ^ |
      Rev. 2   Vote: 0

      Do you know any $$$O(\log n)$$$ way to do it with a Fenwick tree? :( The trick with the binary search is classic, but then the complexity is $$$O(n \log^3(n))$$$, not $$$O(n \log^2(n))$$$ (for problem E).

»
7 weeks ago, # |
  Vote: 0

Hello,

Can someone help me find the error in my submission: 273811843? It shows a runtime error on test 6.

»
7 weeks ago, # |
Rev. 4   Vote: 0

Maybe my solution for problem D is the simplest one, with only 30 lines. Just use dfs(p) to compute how much can be contributed by all of p's children. This solution is very fast. Code

  • »
    »
    7 weeks ago, # ^ |
    Rev. 2   Vote: 0

    Bro used a one-liner condition so that it looks pretty short; however, it doesn't count as the "simplest one", as it is nothing more than the greedy approach most people already know.

»
7 weeks ago, # |
  Vote: 0

I solved problem D with a basic dfs traversal: https://codeforces.net/contest/1997/submission/273877479

»
7 weeks ago, # |
  Vote: 0

I have a doubt in F. Can you please explain where I am wrong? After placing all chips on the line and doing some operations, we always end up with 1 or 2 chips, so the cost is always 1 or 2.

Proof: let's say some point i > 2 has an even number of chips, say 4. By operation 1 it becomes 1 3 1; by operation 2: 0 2 2; operation 2 again: 1 1 1; operation 2 again: 0 0 2. Like this, every even pile can be reduced to 2 and every odd pile to 1. After this, two ones at different positions can be combined into 1 or 2. 10001 goes to 110111 after a few operations of the first type, and again this can be combined and made into 1 or 2.

So how can there be a solution for m > 2?

»
7 weeks ago, # |
  Vote: +3

E and F are so difficult, I had no idea how to approach them ;-;

»
7 weeks ago, # |
  Vote: 0

Can anyone tell me what is wrong with this solution for problem D? I got an overflow error even though everything is in long long: 273894619

»
7 weeks ago, # |
  Vote: 0

I have a doubt regarding problem E. When I used a segment tree, my solution exceeded the time limit on test case 12, but when I did the same with a Fenwick tree it passed. Can anyone tell me the reason, given that the update and query time complexity for both data structures is the same, O(log N)? 273965828 Fenwick tree, 273964980 segment tree.

»
7 weeks ago, # |
  Vote: 0

Problem D can also be solved in $$$O(n)$$$ using basic divide and conquer. Submission: 274014881

»
6 weeks ago, # |
  Vote: 0

Thank you for such well detailed E problem solution BledDest.

»
6 weeks ago, # |
  Vote: 0

I have one doubt about the C editorial:
if the input string is "_ _ ) )", will it assign ())) ?

»
5 weeks ago, # |
  Vote: 0

I just wanted to know whether problem E allows $$$O(n (\log \log n) \log^2 n)$$$, because I am getting TLE on test case 7.