Imagine you're trying to solve problem C. The first time you see the statement you're like, damn, this is interesting. And 30 minutes later you have no clue how to solve it, even though you've thought about the problem as hard as if you were solving world hunger and world peace. Then you see the editorial and it says:
just split it into 2 numbers and you're basically done.
Happens to no one except me, I guess.
Happens to literally everyone. When you practice problems of the right difficulty, you should feel like you're constantly struggling, because that's a sign you're actually improving.
You can even struggle on problems far below your skill level. As a red I'm still not immune to struggling on the occasional div2A (for example, edu round 100).
Do you think struggling == improving? For me, improving == struggling, but struggling != improving. You may struggle a lot, but that doesn't mean you improve (which explains my rating constantly bouncing up and down in the 600-800 range). Even reaching 1000 is a distant dream for me.
My point is that struggling is necessary for improvement, so you just have to keep pushing through and learning new ideas as you go. The more you genuinely love the process, the less it will feel like "struggling" when you're stuck on a problem.
Don't you remember I hate 1111 (which, ironically, was div2A in testing)? :holyfuck:
As the author of the problem in question I wanted to make some comments.
The editorial for the problem is devilishly simple, misleading people into thinking the problem was easy. I don't think it was that easy in practice, as the submission count shows. While I didn't expect it to be quite this hard (indeed, most testers solved it), it was definitely on the hard side for a div2C.
During testing I compared it to 1450C1 - Errich-Tac-Toe (Easy Version). As you can see from the blog comments, that was by no means an easy problem, but after one simple observation (spoiler!) it becomes a lot easier (in fact, the observation is similar in some ways). This is also why 1. the constraints for my problem are so small, to allow other solutions, and 2. problem D was only 1750 on the scoring distribution, to encourage people to try it as well (I think it's a lot easier to make progress on that problem, FWIW).
It is easy to miss these observations, but just because the solution is short doesn't mean the process of getting there has to be, too. There are so many things you can try in both problems; how do you know which will work out and which won't? That is problem-solving: building intuition for these types of problems so that you spot the same ideas later.
Contrary to what you suggest, everyone goes through the phase where they just can't solve a problem, read the editorial, and go "how did I not see that?!". This is part of the natural process of problem solving.
So TL;DR don't beat yourself up about it, short solution $$$\neq$$$ easy to come up with.
To tell the truth, the idea is really hard to come up with, but once you see it, the problem is quite easy. I think that's why people call C an easy problem.
I used dfs when I solved this problem and didn't find the editorial's approach... Maybe that's where the real fun of CP lies.
dfs? what
Basically it's top-down DP.
Hey, I just wanted to say this problem was amazing, because even though I couldn't solve it, I was finally motivated to practice bitmasks. Practicing bitmasks also helped me visualise DP states much better. The fact that there are 3 ways (maybe more) to solve it is what makes it so valuable. So thank you and your co-author for the problem and the contest.
easy to implement != easy to think
Solving Div2C problems in less than 30 minutes would already put you at around specialist level on Codeforces. I think you are giving up too early and maybe don't have enough patience; spending more than an hour on a problem isn't uncommon. A better question is: what did you use those 30 minutes for? Were you just staring at the text of the problem statement and thinking?
I didn't have enough time to solve problem C during the contest, but I upsolved it later without checking the editorial. How did that work?
The solution wasn't immediately obvious, so I just started by implementing what I could:
That's the "addition" operation, done exactly as described in the problem statement, followed by the "subtraction" operation, which undoes the effect of an "addition" (the two functions can be debugged by validating them against each other). Each of these operations has $$$O(\log N)$$$ time complexity. After completing this, a few opportunities open up:
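Roughly, such a pair of operations might look like the sketch below. This is only an illustration: I'm assuming the problem's "addition" is ordinary column-wise addition except that each carry skips one column (lands two positions to the left), which would produce exactly the even/odd digit independence observed further down; the names `wrong_add`/`wrong_sub` are mine.

```python
def wrong_add(a, b):
    """Hypothetical 'addition': column-wise, but each carry skips one
    column (it lands two positions to the left instead of one)."""
    da = [int(d) for d in str(a)[::-1]]  # digits, least significant first
    db = [int(d) for d in str(b)[::-1]]
    n = max(len(da), len(db)) + 4        # room for carries spilling past the top
    da += [0] * (n - len(da))
    db += [0] * (n - len(db))
    out = [0] * n
    carry = [0] * (n + 2)
    for i in range(n):
        s = da[i] + db[i] + carry[i]
        out[i] = s % 10
        carry[i + 2] += s // 10          # the carry jumps two positions left
    return int("".join(map(str, reversed(out))))

def wrong_sub(c, b):
    """Inverse of wrong_add: finds a with wrong_add(a, b) == c.
    The borrow also skips one column; returns None if the result
    would be negative (a borrow falls off the top)."""
    dc = [int(d) for d in str(c)[::-1]]
    db = [int(d) for d in str(b)[::-1]]
    n = max(len(dc), len(db)) + 4
    dc += [0] * (n - len(dc))
    db += [0] * (n - len(db))
    out = [0] * n
    borrow = [0] * (n + 2)
    for i in range(n):
        s = dc[i] - db[i] - borrow[i]
        if s < 0:
            s += 10
            borrow[i + 2] = 1
        out[i] = s
    if borrow[n] or borrow[n + 1]:       # borrowed past the top: negative
        return None
    return int("".join(map(str, reversed(out))))
```

The two functions cross-check each other: `wrong_sub(wrong_add(a, b), b)` should give back `a` for any positive `a` and `b`.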
It's immediately possible to implement a naive $$$O(N \log N)$$$ solution by just iterating over $$$1, 2, 3, ..., N-1$$$ and "subtracting" each from $$$N$$$ in a loop. If a "subtraction" doesn't produce a negative number or zero, then we have found a good pair and can count it. This solves the samples from the problem statement, but the time complexity is not good enough for the problem constraints.
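A standalone sketch of that brute force, under the same guess that the problem's "subtraction" is the skip-one-column borrow (the helper is repeated here so the snippet runs on its own, and its name is hypothetical):

```python
def wrong_sub(c, b):
    """Hypothetical 'subtraction' where a borrow skips one column;
    returns None if the result would be negative."""
    dc = [int(d) for d in str(c)[::-1]]  # digits, least significant first
    db = [int(d) for d in str(b)[::-1]]
    n = max(len(dc), len(db)) + 4
    dc += [0] * (n - len(dc))
    db += [0] * (n - len(db))
    out, borrow = [0] * n, [0] * (n + 2)
    for i in range(n):
        s = dc[i] - db[i] - borrow[i]
        if s < 0:
            s, borrow[i + 2] = s + 10, 1
        out[i] = s
    if borrow[n] or borrow[n + 1]:
        return None
    return int("".join(map(str, reversed(out))))

def count_pairs_naive(n):
    """O(N log N): try every a in [1, N-1]; (a, b) is a good pair
    when 'subtracting' a from N leaves a positive b."""
    return sum(1 for a in range(1, n)
               if (b := wrong_sub(n, a)) is not None and b > 0)
```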
Playing around with "addition" and "subtraction" in different ways and printing the results let me look for various patterns. For example, I wanted to see whether the naive $$$O(N \log N)$$$ solution could be improved with a binary search or something similar. This eventually led me to a test which just printed a counter value "decrementing" by 1 after each iteration. Its output showed a very clear pattern: the odd digits change while the even digits remain the same, which is basically the key observation mentioned in the editorial.
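One way such an exploratory test might look (the exact script isn't shown in the comment, and this again assumes the skip-one-column borrow; `wrong_sub` is a hypothetical helper, repeated so the snippet is self-contained):

```python
def wrong_sub(c, b):
    """Hypothetical 'subtraction' with the borrow skipping one column."""
    dc = [int(d) for d in str(c)[::-1]]
    db = [int(d) for d in str(b)[::-1]]
    n = max(len(dc), len(db)) + 4
    dc += [0] * (n - len(dc))
    db += [0] * (n - len(db))
    out, borrow = [0] * n, [0] * (n + 2)
    for i in range(n):
        s = dc[i] - db[i] - borrow[i]
        if s < 0:
            s, borrow[i + 2] = s + 10, 1
        out[i] = s
    if borrow[n] or borrow[n + 1]:
        return None
    return int("".join(map(str, reversed(out))))

# Repeatedly "subtract" 1 and watch the digits.
c = 1234
for _ in range(6):
    c = wrong_sub(c, 1)
    print(c)
# prints 1233, 1232, 1231, 1230, 1139, 1138:
# every other digit behaves like a counter (24, 23, ..., 20, 19, ...)
# while the remaining digits stay fixed.
```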
After that, implementing the $$$O(\log N)$$$ solution and comparing its output on small testcases with the naive $$$O(N \log N)$$$ reference was easy.
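Under the same assumed operation, the final counting function might look like this (splitting $$$N$$$'s digits into the two interleaved numbers; the function name and the exact formula are my reconstruction, not a quote from the editorial):

```python
def count_pairs_fast(n):
    """O(log N): split N's digits (counting from the least significant
    end) into the two interleaved numbers e and o. The two digit streams
    add independently, so each can be split between the two summands in
    any way: (e + 1) * (o + 1) ordered pairs, minus the two pairs where
    one summand would be 0."""
    rev = str(n)[::-1]
    e = int(rev[0::2][::-1])           # digits at even positions
    o = int(rev[1::2][::-1] or "0")    # digits at odd positions
    return (e + 1) * (o + 1) - 2
```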
Please note that I wasn't just staring at the screen for X minutes until a sudden revelation: "Oh, yeah, that's the necessary observation". The time was spent writing and testing different pieces of code, each of which provided extra information and moved me closer to a working solution.
I actually enjoy Google Kick Start contests, because they motivate participants to at least try implementing solutions for small constraints rather than giving up completely when a problem seems too difficult. A naive solution with bad time complexity can sometimes serve as a foothold for implementing a better one. There's an old blog post https://www.joelonsoftware.com/2002/01/06/fire-and-motion/ which makes a similar point: "Maybe this is the key to productivity: just getting started. Maybe when pair programming works it works because when you schedule a pair programming session with your buddy, you force each other to get started."
That said, just being able to solve problems in principle is not enough; if you want a good rating, you need to solve them fast, before the contest ends. And speed improves with experience, because many competitive programming problems are solved by similar tricks.