TheScrasse's blog

By TheScrasse, 2 months ago, In English

Today I've taken part in AtCoder Regular Contest 186, solved problem E and spent the remaining 90 minutes not solving anything.

Then I've taken part in Codeforces Global Round 27 and spent 90 minutes solving 2035D - Yet Another Real Number Problem.

How to stop being stupid?

  +194

»
2 months ago, # |
Rev. 2 | +116

I'd gladly exchange my skill of solving easy problems for your stupidity.

»
2 months ago, # |
Rev. 2 | -31

lol xxD

»
2 months ago, # |
  -39

Same here. In today's global contest I spent 18 minutes solving A; that gave me emotional damage.

  • »
    »
    2 months ago, # ^ |
      +2

    Your DP is enough to give Emotional Damage to any horse.

»
2 months ago, # |
  -17

I would like to understand your thought process.

Were you stuck finding the core logic?

Or were you stuck on the implementation?

In my case, I found the logic within the first 30 minutes of working on D, but I had to think a lot during implementation to avoid getting into large numbers, because there would have been integer overflows. I had to split each number into two parts: 1) its odd part, and 2) the power of 2 contained in that number. Doing this fully took me more than 40 minutes.
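
Roughly, the split looks like this (a minimal sketch, not my exact contest code; the helper name is illustrative):

#include <cstdint>
#include <utility>

// Split v into (odd, e) with v == odd * 2^e, so the odd part and the
// exponent can be handled separately without overflowing 64-bit values.
std::pair<int64_t, int> splitPowerOfTwo(int64_t v) {
    int e = 0;
    while (v % 2 == 0) {
        v /= 2;
        ++e;
    }
    return {v, e};
}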

  • »
    »
    2 months ago, # ^ |
      -10

    In the end you did it, that's what matters.

  • »
    »
    2 months ago, # ^ |
      -24

    You've been doing CP for the past 8-9 years, sir; it's inspiring. Btw, can we connect on LinkedIn?

  • »
    »
    2 months ago, # ^ |
      +9

    I was stuck mainly because I found out that the stack is small (length $$$\leq 8$$$?), and I was trying to iterate over subsets to remove instead of just removing the last element.

»
2 months ago, # |
  0

I've taken part in Codeforces Global Round 27 and spent too much time solving 2035C, so I solved 2035D instead.

  • »
    »
    2 months ago, # ^ |
      0

    This actually seems like a good idea. I wasted a lot of time on C and then solved it in 20 minutes of clear thinking after the contest. So the tip is: if you're struggling to find good ideas for a problem, try another problem.

    • »
      »
      »
      2 months ago, # ^ |
      Rev. 2 | 0

      My bug was the following:

      usedNumbers[1<<i - 1] = true
      

      should've been

      usedNumbers[(1<<i) - 1] = true
      

      I'm DUMB

      • »
        »
        »
        »
        2 months ago, # ^ |
          0

        That's why I always use brackets when it comes to bit operators.
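
        For reference, a tiny self-contained check of the precedence pitfall above (C++; most C-like languages parse it the same way):

        #include <cassert>

        int main() {
            int i = 3;
            // '-' binds tighter than '<<', so 1<<i - 1 parses as 1 << (i - 1).
            assert((1 << i - 1) == (1 << (i - 1)));  // == 4, not 7
            assert(((1 << i) - 1) == 7);             // the intended mask
        }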

»
2 months ago, # |
Rev. 2 | +100

I think it's not uncommon that a problem's point value or number of solves doesn't correspond to how difficult the problem is for you. At least you solved D in the end. :)

For example, I thought that problem A from today's ARC was harder than all the others, and that F from today's global round was comparable to or easier than D.

»
2 months ago, # |
  +32

I found today's D crazy hard; here is my thought process:

Firstly, no immediate property comes to mind. So I start with the next line of generic thinking:

Start from an optimal configuration and (after a decent amount of thinking) figure out a condition for every pair;

then (a decent amount of thinking later) lift this condition to the global configuration;

then (even more thinking later) find out that a stack can indeed maintain the optimal global configuration and that it is correct to do so.

Perhaps the idea is that it's far easier if you just guess a solution and verify that it works. I am also curious how anyone could figure this out considerably more easily.

In contrast, E and F were basically trivial: they fall to the first step, since an immediate property did come to mind. SQRT for E, monotonicity in steps of $$$+2n$$$ for F.

TLDR: I think 2035D is crazy hard, and I used the maximum amount of thinking on it, not unlike problems in the $$$\geq 2700$$$ range.

  • »
    »
    2 months ago, # ^ |
      +8

    It's very strange because it was the exact opposite for me — D felt easy enough and E seemed extremely hard.

    • »
      »
      »
      2 months ago, # ^ |
        +1

      Maybe I should also describe my thought process for E, so we can compare and infer the difference.

      First, by intuition, I know that problems of this type usually fall to SQRT, and the constraints do look good for SQRT. To execute the SQRT, I just need to correctly understand what it means to have $$$k$$$ upgrades, or $$$k$$$ attacks, for small $$$k$$$.

      I then naturally (OK, I just realized this isn't actually that natural, but anyway) looked at the subcases of fixing the number of attacks and upgrades, where the unique best arrangement is clear. I also quickly saw that the damage dealt is monotone in both arguments. Together this is enough to give the SQRT + binary search solution.

      (I made a mistake in contest: it is only monotone for valid pairs of parameters.)

      Compared to D, far fewer "rewritings/calculations" and "problem-solving decisions" are needed. All the statements I used are short and simple, and so are their justifications. The properties I wanted to look for turned out to be exactly right.

      • »
        »
        »
        »
        2 months ago, # ^ |
        Rev. 3 | +1

        My thought process for the problems, as a CM:

        For D, I first thought we should carry all the 2's to the end, and wrote the code. It failed. I understood why that happened. Then I thought of the problem as distributing 2's to elements on their right. I thought of DPs but couldn't find anything, since the number of 2's can be $$$n \log n$$$. Then I thought about a stack and realized that, to optimize the current answer, we can keep switching the 2's of the last element in the stack onto the current element, or do nothing. Proved it as: the values in the stack will be monotonically decreasing, so if the value we pushed is a better value, it only makes sense to pop things from the top of the stack. Got AC. (A rough sketch of this approach is at the end of this comment.)

        For E I was just dumb; I didn't think about anything and did random stuff in the hope that it would pass. I should have used my brain.

        For F, I thought the problem was an easy binary search + greedy problem. I reduced the problem to ranges by transforming the tree into its Euler tour. Wrote the code, then realized it fails because we can't skip any range; we need to do either -1 or +1. Couldn't find a fix for it.
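
        A rough sketch of that stack solution for D (reconstructed from the descriptions in this thread, not my exact submission; names, capping constant, and I/O details are mine). Each number is split as odd * 2^e; a new element absorbs the 2's of every stack top whose odd part it dominates, and prefix sums are maintained modulo 10^9+7:

        #include <bits/stdc++.h>
        using namespace std;

        const long long MOD = 1e9 + 7, CAP = 2e18;

        long long modpow(long long b, long long e) {
            long long r = 1;
            for (b %= MOD; e; e >>= 1, b = b * b % MOD)
                if (e & 1) r = r * b % MOD;
            return r;
        }

        // Multiply v by 2^k, capping at CAP so comparisons never overflow.
        long long shiftCap(long long v, long long k) {
            for (; k > 0 && v < CAP; k--) v <<= 1;
            return min(v, CAP);
        }

        int main() {
            int t;
            scanf("%d", &t);
            while (t--) {
                int n;
                scanf("%d", &n);
                vector<array<long long, 3>> st;      // {odd part, exponent, value mod MOD}
                long long settled = 0, onStack = 0;  // the answer is their sum mod MOD
                for (int i = 0; i < n; i++) {
                    long long a, e = 0;
                    scanf("%lld", &a);
                    while (a % 2 == 0) a /= 2, e++;
                    long long val = shiftCap(a, e);
                    // Moving the top's 2's onto the current element changes the sum
                    // by (val - top.odd) * (2^top.e - 1), so it pays off exactly
                    // when the current value is at least the top's odd part.
                    while (!st.empty() && st.back()[0] <= val) {
                        e += st.back()[1];
                        val = shiftCap(val, st.back()[1]);
                        settled = (settled + st.back()[0]) % MOD;
                        onStack = (onStack - st.back()[2] + MOD) % MOD;
                        st.pop_back();
                    }
                    long long m = a % MOD * modpow(2, e) % MOD;
                    st.push_back({a, e, m});
                    onStack = (onStack + m) % MOD;
                    printf("%lld ", (settled + onStack) % MOD);
                }
                printf("\n");
            }
        }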

        • »
          »
          »
          »
          »
          2 months ago, # ^ |
            0

          I see. I think for both D and F I rejected the simple wrong approaches very quickly because "it couldn't be that easy" (so I only looked for disproofs and not proofs). I guess if I were following your thought process, this point alone would probably make it a lot smoother.

    • »
      »
      »
      2 months ago, # ^ |
      Rev. 2 | +1

      For $$$E$$$, you just naturally consider the final damage $$$d$$$, then the cost is

      $$$ dx + (t+\lceil\frac{z'}{d}\rceil)y, $$$

      where $$$t = \lfloor \frac{d-1}{k} \rfloor$$$ is the number of mandatory hits that you need to make before reaching damage $$$d$$$, and $$$z' = z - \frac{kt(t+1)}{2}$$$ is the enemy health that remains after dealing these mandatory hits.

      Then, clearly, either $$$d$$$ or $$$\lceil \frac{z'}{d} \rceil$$$ is smaller than $$$\sqrt z$$$, so you try all small $$$d$$$, and also for each possible value of $$$\lceil \frac{z'}{d}\rceil$$$ you find the smallest $$$d$$$ which delivers it e.g. via binary search.

      It barely requires any thinking or guessing at all, just carefully writing down the mathematical model of the problem, and then applying a standard trick.
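
      In code, the formula transcribes directly (a sketch; the function name and the handling of the $$$z' \leq 0$$$ edge case are mine):

      // Cost of finishing with final damage d, per the formula above:
      // d upgrades at cost x each, plus (t + ceil(z'/d)) attacks at cost y
      // each, where t = floor((d-1)/k) and z' = z - k*t*(t+1)/2.
      long long cost(long long d, long long x, long long y,
                     long long z, long long k) {
          long long t = (d - 1) / k;
          long long zp = z - k * t * (t + 1) / 2;
          long long extra = zp > 0 ? (zp + d - 1) / d : 0;  // ceil(z'/d), if health remains
          return d * x + (t + extra) * y;
      }

      Minimizing this over the candidate values of $$$d$$$ described above gives the answer.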

  • »
    »
    2 months ago, # ^ |
      0

    I think for D, if you notice that it's never worse to operate only on pairs $$$(i,i+1)$$$, then you'd realize that adding an element $$$i$$$ simply means doing some operations $$$(*,i)$$$ on top of the previous optimal solution. With this, I think it's quite easy to get the greedy solution using stacks.

    However, I found F really hard (even harder than G2, I think) because I had no idea about the period thing, and though I knew how to check for a fixed value from the beginning, I was only able to notice the period after a long time. Was this some sort of trick or something?

    • »
      »
      »
      2 months ago, # ^ |
        0

      I don't quite understand what you mean for D. It seems that what you said makes the situation worse: now you can pass through some intermediate construction that is worse than the starting setup?

      Though I see the form Igor_Parfenov provided below is indeed what I am looking for. I just wasn't able to find that conclusion, but the proof feels much easier once it is found.

      For F, I think the idea of "problem feels probably monotone" -> "OK, it is not" -> "but it is monotone if we group differently" is sufficiently common. Though most of the time it is about grouping odd/even due to parity constraints only. I think I saw this mostly in technical math problems, though.

    • »
      »
      »
      2 months ago, # ^ |
        0

      Somehow, for F it was immediately obvious to me that doing $$$2n$$$ operations allows us to loop back into the same state, but the vast number of similar problems had conditioned me to believe that they're solved by some kind of quadratic tree DP, so the thought of considering each remainder modulo $$$2n$$$ individually as the first linear factor in the complexity hadn't crossed my mind at all.

      But there is no trick to noticing the period: you just do the dumb thing of adding $$$1$$$ to each vertex on the first cycle and subtracting $$$1$$$ from each vertex on the second cycle. It's more that you don't perceive this trivial fact as relevant? I think there were some problems that relied on considering remainders separately, but I don't recall any case where the modulus is something larger than $$$2$$$ (like in palindrome decomposition).

  • »
    »
    2 months ago, # ^ |
    Rev. 2 | 0

    My reasoning for D: Assume that you want to solve it for a fixed $$$n$$$, rather than for all prefixes. You can go from right to left, and at each step decide what to do with the current $$$i$$$. Locally, it's always optimal either to keep its powers of $$$2$$$, or to move them to the currently largest number on the suffix.

    This, in turn, means that when we add a number, it's only reasonable for us to move some powers of $$$2$$$ to the new number; otherwise we would have done it on a previous step. This also justifies using a stack: if it's beneficial to move $$$2$$$'s from the earlier number but not from the later one, we would have previously moved them onto the later number, so we first need to consider later numbers.

    Overall, since I'm not that good at quickly proving things rigorously, I start implementing the solution when my intuition tells me it's plausible enough and I have a vague idea of how I would approach a proof, rather than when I actually have a complete proof. I think having full feedback generally incentivizes this.

    • »
      »
      »
      2 months ago, # ^ |
        +8

      About starting to implement the solution once intuition exists:

      I tend to prove everything rigorously (or at least give intuitive proofs). Even though there is full feedback, implementing a wrong solution can still cost at least 10 minutes (on problems in the 2000 range; more when they are more difficult). Additionally, I lose all momentum and break my thinking flow, as now I have to solve the problem again when my mind has already moved on to implementation. I personally find the cost-benefit strongly favours proving.

»
2 months ago, # |
  +8

Same :(

It seems like a common theme for me in contests: I often approach easy problems the wrong way and lose a ton of time, or even get completely stuck.

»
2 months ago, # |
  0

In my last contest I solved C slower than F1&F2...

»
2 months ago, # |
Rev. 2 | +29

I think it's just that A-D require an almost entirely different skillset compared to anything from E onwards.

So if you keep practicing the harder problems, naturally you will get a bit rusty on those.

»
2 months ago, # |
  0

Btw, how do you prove this greedy, which is heavily used by my solution?

Here is how to solve the "simple" problem. Assume number $$$a_i$$$ is even. Then, before the start, we can divide it by two and remember one operation: choose a number $$$j \ge i$$$ and multiply $$$a_j$$$ by two. Let's apply all those divisions; in the end we will have an array of only odd numbers and a set of suffixes, on each of which we choose an element and multiply it by two.

Now, to maximize the final sum, use the following greedy: go from right to left (i.e. from small suffixes to big) and apply each multiplication to the currently largest number. (A brute-force check of this greedy is sketched below.)

It sounds quite correct to me, but I can't prove it.
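
A quick sanity check, short of a proof: compare the greedy against exhaustively assigning every factor of two on small random arrays (my own test harness, illustrative names):

#include <bits/stdc++.h>
using namespace std;

// Greedy from above: process suffixes right to left; every factor of 2 that
// originated at position i multiplies the currently largest element of a[i..].
long long greedyBest(vector<long long> odd, const vector<int>& e) {
    int n = odd.size();
    for (int i = n - 1; i >= 0; i--)
        for (int k = 0; k < e[i]; k++)
            *max_element(odd.begin() + i, odd.end()) *= 2;
    return accumulate(odd.begin(), odd.end(), 0LL);
}

// Exhaustive search: assign every single factor of 2 to some position >= its
// origin, trying all combinations.
long long bruteBest(vector<long long>& odd, const vector<int>& e, int i, int k) {
    int n = odd.size();
    if (i == n) return accumulate(odd.begin(), odd.end(), 0LL);
    if (k == e[i]) return bruteBest(odd, e, i + 1, 0);
    long long best = 0;
    for (int j = i; j < n; j++) {
        odd[j] *= 2;
        best = max(best, bruteBest(odd, e, i, k + 1));
        odd[j] /= 2;
    }
    return best;
}

int main() {
    mt19937 rng(12345);
    for (int iter = 0; iter < 2000; iter++) {
        int n = rng() % 3 + 1;
        vector<long long> odd(n);
        vector<int> e(n);
        for (int i = 0; i < n; i++) {
            odd[i] = 2 * (rng() % 8) + 1;  // random odd value
            e[i] = rng() % 3;              // random number of 2's
        }
        vector<long long> copy = odd;
        long long g = greedyBest(odd, e), b = bruteBest(copy, e, 0, 0);
        if (g != b) {
            printf("mismatch on iteration %d: greedy %lld vs brute %lld\n", iter, g, b);
            return 1;
        }
    }
    puts("greedy matched brute force on all random tests");
}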

  • »
    »
    2 months ago, # ^ |
      +5

    Let the current max be $$$A$$$ and the current second max be $$$B$$$ (so $$$A > B$$$).

    If we don't apply the current suffix's operation to $$$A$$$ but apply it to $$$B$$$ instead, we simply get a worse answer, because:

    1) all future suffixes can also do operations on $$$A$$$ and $$$B$$$;

    2) it is more optimal for a set of operations that can be done on both $$$A$$$ and $$$B$$$ to be done entirely on $$$A$$$ (since $$$A$$$ is the larger one).

    Thus, for all operations that we applied to $$$B$$$ but could have applied to $$$A$$$, we should shift them from $$$B$$$ to $$$A$$$ to make the answer more optimal.

    I don't know how formal this is, but I think it counts as a proof.

»
2 months ago, # |
  0

I am wondering the same thing :(

»
2 months ago, # |
  0

Bro suffering from success.

»
2 months ago, # |
  0

I'm actually wondering about problem E from the Global Round. I tried different ways to bruteforce it, but got either TL or WA. Then I additionally bruteforced around $$$\sqrt{zy/x}$$$, which is the optimal point of $$$f(u) = xu + \frac{yz}{u}$$$, where $$$u$$$ is the number of upgrades used. As you can see, this function is simplified and doesn't consider the restriction given by $$$k$$$. Surprisingly, it still worked, and I don't know why. Maybe the optimal point doesn't shift that much, or maybe it's just hard to make strong tests for this problem.
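
(For reference, that optimal point follows from AM-GM: $$$xu + \frac{yz}{u} \ge 2\sqrt{xyz}$$$, with equality exactly when $$$xu = \frac{yz}{u}$$$, i.e. $$$u = \sqrt{zy/x}$$$.)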

»
2 months ago, # |
  +13

Everyone bricks easy problems sometimes; you're just taking it too seriously.

»
2 months ago, # |
  0
  1. Solve A and B

  2. Skip C and D

  3. Solve E and F

  4. ???

  5. Profit

I understand you, bro. Every time I get stuck on a problem, it's always the *1700-rated one. Lord, I hate *1700 problems so much.