Hi there! The recent Good Bye 2019 was a very important event for us, and thank you for all the positive feedback we got, especially from Radewoosh! As a follow-up, we would like to share some background on its preparation.
Problems:
A: This one was the hardest to come up with. We wanted it to be super easy but still include some idea, and this is something like the $$$10$$$th version of the problem.
B: Initially there was another problem at this position. However, it turned out to be too hard for B (one orange couldn't do it during testing), and people didn't like it in general, so we came up with another task.
C: Funny, but before the contest we didn't know that it's always possible to make the array good by adding at most $$$1$$$ element.
D: This problem was created just 3 days before the contest to make it more balanced :O. We are glad that the contest did turn out to be well-balanced.
E: Initially we wanted this problem to be at position C, but the coordinator told us it was too hard for that and we moved it to D. However, testing showed that we had to move it even further!
F: This problem caused a lot of trouble because we wanted to cut off the $$$O(n^2)$$$ brute-force solution, which was working very fast. Luckily, nobody managed to pass in $$$O(n^2)$$$ during the contest, but in upsolving savinov managed to!
G: There used to be another problem at this position, but this problem from a recent round by hugopm turned out to be too similar to it :(, so we had to replace it.
Initially, I wanted to send the current problem G to IMO (asking to prove that there exists a nonempty set with zero sum), and I think it would make a great P2, but we wanted to make this contest the best we could, so I decided to use it here. We are glad you also found it beautiful!
I think it may be interesting how this problem was created. I wanted to come up with some problem with permutations, and the first (and obvious) version was:
Prove that for any permutation $$$p$$$ of $$$n$$$ elements there exists some nonempty set $$$S$$$ such that $$$\sum_{i \in S} p_i = \sum_{i \in S} i$$$.
Of course, this is a very bad problem, as even $$$S = \{1, 2, \dots, n\}$$$ would suffice. So I decided that the permutation wasn't necessary, and the next version of the problem was:
Prove that for any array $$$a$$$ of $$$n$$$ elements such that $$$1\le a_i \le n$$$ there exists some nonempty set $$$S$$$ such that $$$\sum_{i \in S} a_i = \sum_{i \in S} i$$$.
But this one was too obvious and too well known: any cycle of the functional graph $$$i \to a_i$$$ gives such a set...
I just decided to regroup terms, so that the new version became:
Prove that for any array $$$a$$$ of $$$n$$$ elements such that $$$1\le a_i \le n$$$ there exists some nonempty set $$$S$$$ such that $$$\sum_{i \in S} (i - a_i) = 0$$$.
All that was left was to set $$$b_i = i - a_i$$$ and ask to find a subset with zero sum! Funny that this simple process led to such a problem O_O
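For the curious, the existence proof can be turned into a short constructive sketch (this is one well-known cycle-based approach, not necessarily the intended solution): with $$$b_i = i - a_i$$$ and $$$1 \le a_i \le n$$$, the map $$$i \mapsto i - b_i = a_i$$$ always stays inside $$$\{1, \dots, n\}$$$, so walking it from any start eventually enters a cycle, and on a cycle the values $$$a_i$$$ are exactly the cycle's own indices, so the $$$b$$$-sum over it is zero.

```python
def zero_sum_subset(b):
    """b[i-1] = i - a_i with 1 <= a_i <= n (1-indexed view).

    Returns a nonempty list of indices whose b-values sum to zero:
    the first cycle reached when repeatedly following i -> i - b_i.
    """
    n = len(b)
    nxt = [0] + [i - b[i - 1] for i in range(1, n + 1)]  # nxt[i] = a_i, in [1, n]
    seen = {}            # node -> position along the walk
    path, i = [], 1
    while i not in seen:
        seen[i] = len(path)
        path.append(i)
        i = nxt[i]
    # On the cycle, {a_i} is exactly the cycle itself, so sum(i - a_i) = 0.
    return path[seen[i]:]
```

For example, with $$$a = (2, 3, 1, 1)$$$ we get $$$b = (-1, -1, 2, 3)$$$, and the walk $$$1 \to 2 \to 3 \to 1$$$ yields the zero-sum subset $$$\{1, 2, 3\}$$$.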
Creating tests was really hard for this problem (you can read about one testcase here), but we are glad that not many heuristic solutions managed to pass!
H: We needed some programming task (or otherwise some people would kill us). Initially we came up with the $$$q = 0$$$ version of the problem and considered it Div2B — Div1A level, but then we thought about the queries, and Mediocrity managed to find an $$$O(n\log{n})$$$ solution!
We had trouble with the TL for this problem as well: we wanted to cut off $$$O(n\sqrt{n\log{n}})$$$ solutions, but didn't manage to, unfortunately.
Fun fact: initially we considered G and H to be of about the same difficulty. The contest proved that we were very wrong :D
I: I loved this task a lot (this is my favorite problem in general) and wanted to create something with a similar idea (Idea: count some function for every cell such that some operation changes values of functions nicely).
At first I wanted to send it to IOI, but then I decided that it's really hard to come up with meaningful subtasks for this problem, so I decided to use it here.
I would like to point out that problems E, G, and I were created in the direction solution -> problem. For some reason people keep saying that this way too often produces bad problems, and I want to persuade others that this way of creating problems is great :D
During the contest
We were really surprised when the first 2 accepted solutions of problem G came from people with ratings 1900 and 1700 :o
At the end of the contest, we thought that the decision to make it $$$3$$$ hours long was wrong, as the number of attempts to pass some heuristics in G and some $$$O(n\sqrt{n\log{n}})$$$ solutions in H was increasing in the last $$$30$$$ minutes :(.
Post-contest impressions
We expected people to complain about the bad TL in F and H, about bad tests in G, and about the contest being too Mathforces in general (yes, that's why I wrote Mathforces Thinkforces in the announcement). We were really surprised by the amount of positive feedback though, thank you a lot!
There was a very interesting post-contest discussion involving Radewoosh and Swistakk. We intended to make the problems heavily thinking oriented and less coding oriented, and we think that we succeeded. However, we would like people to share their opinions: do you think a lot of ad-hoc problems in one contest are bad for Codeforces?
Thank you for the amazing year, and we hope to return with more good contests!
Happy New Year!
(You will see my cyan hair in the next few days)
I have no issue with ad hoc problems (D was nice!) but I do not like it when a contest has multiple problems that could just as well appear in a mathematics contest (E, G) as tasks like this don't involve what I think is the core of competitive programming: thinking how to do something (asymptotically) fast.
Then, do you also dislike problems like interactives and game theory? They're hardly about asymptotic analysis and hence, by your definition, are against the core of competitive programming.
Interactive problems are actually one of my favourite types of problems. In them you optimise the number of queries like you usually optimise complexity, but uninteresting things like constant optimization cannot make unintended solutions pass.
I also want to make clear that I do not hate purely mathematical problems as long as they appear infrequently. I think both E and G were great problems, but should probably not have appeared in the same round.
Interactives are hardly about asymptotic analysis? Why? That's the complete reverse of how I think.
I've always thought of interactives as more of a "find a strategy" type of problem than a "reduce time complexity" type, similar to game theory problems. I'd argue that if we start to categorize these as asymptotic-analysis problems, there will be no problem which isn't included in this category.
For example, I can argue that Good Bye 2019 E and G were both "reduce time complexity" types of problems, since we could naively check all possible subsets for both problems, which would clearly TLE.
You should solve more interactive problems, especially from OI, where the format originally began.
I think the number of attempts to pass some heuristics would increase in the last 30 minutes with any contest duration :)
Problems A, B, and C can't be hard implementation-wise, so they were good. Problems D and G are two difficult ad-hocs and I love them so much. Imho, only problem E is not good enough for this contest. The other problems stand in their places. Maybe some problem with data structures would have been better at E (and I would not have had to solve it).
Sorry for my poor English :( Waiting for your cyan hair!!
I don't think there were too many ad-hoc problems. Let's put A and B aside. Interactive problems are so underdeveloped that anything that doesn't involve binary search counts as ad-hoc. F and H were sqrt. E didn't use a particularly unusual idea: splitting by some sort of even and odd coordinates should be the first thing that comes to mind, and the solution untangles from there. C and G were ad-hoc, and I haven't tried I yet, but overall we have several semi-kinda-ad-hoc problems that are pretty much just "don't copy-paste standard algorithms". This is fine.
Usually, I don't like contests where I perform badly, but this one was an exception: my performance was very shitty, but I still liked the contest very much. Cute C, an interactive D which took me a lot of time, and a beautiful E which I couldn't do in contest time. G was also nice. Thanks for the beautiful contest. You've become my favorite problem setter after this round and SEERC 2019. Keep up the good work.
What's the single-number solution for problem C?
Check this comment and submission 67895496
I think that for problem E there is an issue with deceptive constraints: they make us search for some $$$O(N^2)$$$ algorithmic solution rather than an ad-hoc idea. Personally, I spent a lot of time on the wrong solution, and only when I looked at the consumed time of other participants' solutions (thanks CF for allowing this) did I throw out the algorithmic solution and start thinking about simple ideas.
Besides this issue, the contest was very good, in my opinion.
Well, the simplest solution with recursion passes.
Lol, I like your F solution. I'm a fan of you.
It's not mine =) https://codeforces.net/blog/entry/72611?#comment-569602
I feel it's more your mistake to assume the solution should be $$$O(n^2)$$$. I feel that generally one tries to put tight constraints only when there are slow solutions which can pass.
We had to make $$$n \le 1000$$$ because of the checker.
Are deceptive constraints actually an issue that needs to be fixed?
In my opinion, the assumption that the best possible solution fits the constraints precisely is generally incorrect. I can see multiple reasons to set less strict limits for a problem:
Not giving a hint about the complexity of the required solution. This is actually good in such ad-hoc problems, where the implementation is generally easier than coming up with the solution. It also opens up a challenge: how far can you push beyond the acceptable border, and how good is the best possible solution you can come up with? (The one-number solution for problem C is great.)
Making the CP world a better place for slow languages. For sure, Java and Python coders will curse you for setting $$$n = 2 \cdot 10^6$$$ when the desired solution is $$$\mathcal{O}(n \log n)$$$ and there are no $$$\mathcal{O}(n \log^2 n)$$$ solutions that you want to cut off.
Answering the question "Why did the authors put such constraints here?", we should consider all sides of problemsetting. Just as in this case: the checker is also required to be fast. In this problem, is it even possible to check the correctness of an answer faster than $$$\mathcal{O}(n^2)$$$?
I understand all the reasons, but still, 90%+ of problems do give a hint about the complexity of the required solution, so the assumption is generally incorrect but still the most probable one.
I didn't mean it is an issue that should be fixed for sure; I was just explaining the main reason (imho) why the authors wanted this problem to be at position C but had to move it to position E. Oddly enough, with smaller constraints this task becomes trickier.
Well, these are just my impressions, maybe I'm wrong.
I'd disagree here: with enough searching, you can find a convoluted $$$O(N^2)$$$ solution that still retains the character of the original solution. I'm glad the authors chose to let it pass. Otherwise I think the parity solution would have been more apparent, and how would one write a checker?
We're still waiting right here!