The task I want to discuss is 739E - Gosha is hunting. While the official solution is a greedy algorithm sped up enough to pass the time limit, I recently came upon another solution. The main idea is to speed up the obvious dp approach, where we define dp[i][x][y] as the maximum expected number of caught pokemon in the prefix of the first i pokemon, if we throw at most x A-pokeballs and at most y B-pokeballs. The computation of each state is O(1), so the complexity of this solution is O(n^3). There is no obvious way to speed up this dp, because the transition between states is already O(1), and that's where dp optimization techniques usually cut the complexity. It also doesn't help to use some other definition of the dp, since any of them will still take O(n^3) time to compute. But what we can do is use the same trick used to solve the task Aliens from IOI 2016, or 674C - Levels and Regions in O(n log k) as Radewoosh described on his blog, and completely kick out a dimension from our dp!
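To make the starting point concrete, here is a minimal sketch of that baseline dp. It is far too slow for n = 2000, but it shows the states and the O(1) transition. The array names p[] and u[] (catch probabilities of the A- and B-pokeballs) are my own, and the i-dimension is rolled away to save memory:

```cpp
// Baseline O(n^3) dp sketch; dp[x][y] = maximum expected number of caught
// pokemon among the pokemon processed so far, using at most x A-pokeballs
// and at most y B-pokeballs. Iterating x and y in decreasing order keeps the
// values of the previous pokemon available when they are read.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, a, b;
    cin >> n >> a >> b;
    vector<double> p(n), u(n);
    for (double &x : p) cin >> x;
    for (double &x : u) cin >> x;

    vector<vector<double>> dp(a + 1, vector<double>(b + 1, 0.0));
    for (int i = 0; i < n; i++)
        for (int x = a; x >= 0; x--)
            for (int y = b; y >= 0; y--) {
                double best = dp[x][y];                                   // throw nothing
                if (x > 0) best = max(best, dp[x - 1][y] + p[i]);         // A-pokeball only
                if (y > 0) best = max(best, dp[x][y - 1] + u[i]);         // B-pokeball only
                if (x > 0 && y > 0)                                       // both balls
                    best = max(best, dp[x - 1][y - 1] + p[i] + u[i] - p[i] * u[i]);
                dp[x][y] = best;
            }
    cout << fixed << setprecision(10) << dp[a][b] << '\n';
}
```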
Kicking out the 3rd dimension:
By kicking out the 3rd dimension, we're left with dp[i][x]. This is now defined as the highest expected number of caught pokemon in the prefix of i pokemon if we throw at most x A-pokeballs and any number of B-pokeballs. Obviously this will always use the maximum number of B-pokeballs. But what's really cool is that we can actually simulate this last dimension: we define some C as a "cost" we have to pay every time we want to take a B-pokeball. This is essentially adding the functions f(x) = dp[n][a][x] and g(x) = -Cx. The cool thing is, f(x) is concave, i.e. f(x+1) - f(x) <= f(x) - f(x-1). This is intuitive because whenever we get a new B-pokeball, we will always throw it at the best possible place, so as we get more and more of them, our expected number of caught pokemon increases more and more slowly.

And why is it useful that f(x) is concave? Well, h(x) = f(x) + g(x) has a non-trivial maximum that we can find. And if h(x) is maximal, it means that for this C it's optimal to throw x B-pokeballs. Now it's pretty obvious that we can binary search on C to find a value for which it's optimal to throw exactly b B-pokeballs, as given in the input. Inside our binary search we just run the O(n^2) algorithm, and when we finish, we reconstruct the solution to see how many B-pokeballs we've used, and use that information to continue the binary search. This gives us complexity O(n^2 log n), which is good enough to get AC. This trick was shown to us at our winter camp, which ended yesterday.
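Here is a rough sketch of how that could look in code. It is not the exact solution from the post: instead of a full reconstruction it just counts the B-pokeballs inside the dp, the tie handling is deliberately simplified (see the discussion of ties near the end of the comments), and the helper solve() and all names are mine:

```cpp
// wqs/Aliens-style sketch: solve(C) runs the O(n^2) dp over
// (pokemon, A-pokeballs used) where every B-pokeball costs C, and also
// counts how many B-pokeballs one optimum uses; C is then binary searched.
#include <bits/stdc++.h>
using namespace std;

int n, a, b;
vector<double> p, u;

// Returns {best penalized value, number of B-pokeballs used by that optimum}.
pair<double, int> solve(double C) {
    vector<double> dp(a + 1, 0.0);   // dp[x]: best value with at most x A-pokeballs
    vector<int> cntB(a + 1, 0);      // cntB[x]: B-pokeballs used by dp[x]
    for (int i = 0; i < n; i++)
        for (int x = a; x >= 0; x--) {
            double bestV = dp[x];
            int bestC = cntB[x];                                   // throw nothing
            auto upd = [&](double v, int c) { if (v > bestV) { bestV = v; bestC = c; } };
            upd(dp[x] + u[i] - C, cntB[x] + 1);                    // B-pokeball only
            if (x > 0) {
                upd(dp[x - 1] + p[i], cntB[x - 1]);                // A-pokeball only
                upd(dp[x - 1] + p[i] + u[i] - p[i] * u[i] - C, cntB[x - 1] + 1); // both
            }
            dp[x] = bestV;
            cntB[x] = bestC;
        }
    return {dp[a], cntB[a]};
}

int main() {
    cin >> n >> a >> b;
    p.resize(n); u.resize(n);
    for (double &x : p) cin >> x;
    for (double &x : u) cin >> x;

    // 0 <= f(x) - f(x-1) <= 1, so C can be searched in [0, 1].
    double lo = 0.0, hi = 1.0;
    for (int it = 0; it < 100; it++) {
        double C = (lo + hi) / 2;
        if (solve(C).second > b) lo = C;   // too many B-pokeballs: raise the cost
        else hi = C;
    }
    // Undo the penalty: dp[n][a][b] = h_max + C*b (assuming b is an optimal count for this C).
    cout << fixed << setprecision(10) << solve(hi).first + hi * b << '\n';
}
```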
Kicking out another dimension?
But is this all? Can we do better? Why can't we kick out the 2nd dimension in the same way we kicked out the 3rd one? It turns out that in this task, we actually can! We just define D as the cost that we deduct each time we use an A-pokeball, then use binary search to find the C for which we use exactly enough B-pokeballs, and reconstruct the solution to see whether we've used too many or too few A-pokeballs. The function is again concave, so the same trick works! Using this I was able to get AC in O(n log^2 n), which is pretty amazing for a Div1 E task with N <= 2000. My friends vilim_l, jklepec, lukatiger and I are still amazed that this can be done!
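Below is a hedged sketch of this doubly nested version, under the same caveats as above. With a cost D per A-pokeball and C per B-pokeball, every pokemon is decided independently, so one evaluation is O(n); the outer binary search tunes D against the A-pokeball count and the inner one tunes C against the B-pokeball count. eval() and all names are mine, and exact ties between the four options are broken naively here, which is exactly the failure mode discussed in the last comments of this thread:

```cpp
// Doubly nested wqs binary search sketch for 739E.
#include <bits/stdc++.h>
using namespace std;

int n, a, b;
vector<double> p, u;

struct Result { double val; int usedA, usedB; };

// With both penalties applied there is no constraint coupling the pokemon,
// so each one independently picks the best of four options.
Result eval(double D, double C) {
    Result r{0.0, 0, 0};
    for (int i = 0; i < n; i++) {
        double none  = 0.0;
        double onlyA = p[i] - D;
        double onlyB = u[i] - C;
        double both  = p[i] + u[i] - p[i] * u[i] - C - D;
        double best  = max({none, onlyA, onlyB, both});
        r.val += best;
        if (best == both)       { r.usedA++; r.usedB++; }
        else if (best == onlyA) r.usedA++;
        else if (best == onlyB) r.usedB++;
    }
    return r;
}

// Inner search: the B-pokeball cost C for a fixed A-pokeball cost D.
double findC(double D) {
    double lo = 0.0, hi = 1.0;
    for (int it = 0; it < 60; it++) {
        double C = (lo + hi) / 2;
        if (eval(D, C).usedB > b) lo = C; else hi = C;
    }
    return hi;
}

int main() {
    cin >> n >> a >> b;
    p.resize(n); u.resize(n);
    for (double &x : p) cin >> x;
    for (double &x : u) cin >> x;

    double loD = 0.0, hiD = 1.0;
    for (int it = 0; it < 60; it++) {
        double D = (loD + hiD) / 2;
        if (eval(D, findC(D)).usedA > a) loD = D; else hiD = D;
    }
    double C = findC(hiD);
    Result r = eval(hiD, C);
    // Undo both penalties: answer = penalized optimum + D*a + C*b
    // (exact-tie cases need the extra care described in the comments below).
    cout << fixed << setprecision(10) << r.val + hiD * a + C * b << '\n';
}
```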
Can you please elaborate on why f(x) is concave? When we get a new B-ball, the set of pokemon at which we throw A-balls may change, and it is not clear to me why f(x+1) - f(x) <= f(x) - f(x-1).
I can't, since I don't actually know how to prove it, but I wrote a generator and a checker and it never fails.
Isn't it convex instead of concave?
If it has a maximum, it's concave. It's different in mathematics than it is for lenses or in physics.
Thanks.
Thanks.
I think the reason f(x) is concave is the following:
There are 2 cases: f(x) < f(x + 1) or f(x) = f(x + 1). The first one occurs when there is some pokémon i such that no ultra ball was thrown at it in f(x) and U_i > 0; i.e. the same optimal state of f(x) is maintained and the new ultra ball is thrown at the pokémon i which gives the greatest improvement to the answer, i.e. to f(x + 1). The second one occurs when there is no such pokémon i, and the answer f(y) will remain the same as f(x) for every y > x.
Now f(x + 2) - f(x + 1) <= f(x + 1) - f(x) holds because the optimal choice was already made at f(x + 1), as I said before. Thus, the best improvement available for f(x + 2) is at most equal to the improvement made at f(x + 1); otherwise, that improvement would have been made at f(x + 1) instead.
I agree, up to a point. You don't really cover the case where, when you get an extra ultra ball, you change the set of pokemon you throw your pokeballs at, and then it's no longer obvious whether it's still concave.
This idea is called "wqs binary search" here in China..
But still it's quite impressive to kick out both dimensions using this trick...
What does wqs stand for? And do you have other problems that use this kind of technique?
Sorry for the late reply. I have been away from OI for some reason. wqs stands for Qingshi Wang, who introduced this method to Chinese participants. I remember that IOI 2016 Day 2 task 3 used a similar method, though my memory may not be reliable.
Thanks
Were you taught this method before IOI 2016? Is there any good resource for all the cool things you learn in China? (I'm asking this because whenever the rest of the world discovers something cool, you guys come out and say "Oh, we call that ***", like wqs, Li Chao tree, etc.)
s-quark introduced this trick and YuukaKazami set similar problems about 6 years ago. However, this trick didn't become popular until IOI 2016.
Here is the article by s-quark (in Chinese).
I should learn Chinese. Urgently.
im sorry
The provided article link does not work anymore :(, is the article still available online?
https://web.archive.org/web/20191113042858/http://tsinsen.com/resources/Train2012-sol-wqs.pdf
Thanks! <3
Would you mind elaborating on the significance of g(x)? How does this simulate the third dimension?
g(x) works as a "limiting" function: thanks to it, it's no longer worth it to throw an ultraball at every pokemon. Imagine that you aren't using this cost function. Then dp[i-1][a] + pb[i] > dp[i-1][a] and dp[i-1][a-1] + pa[i] + pb[i](1 - pa[i]) > dp[i-1][a-1] + pa[i], so it would always be beneficial to throw an ultraball. With g(x), your new function h(x) has a maximum, so if y maximizes h(x), then h(y) + C*y = dp[n][a][y] is maximal. Now all you have to do is find the correct C such that y = b, and then you will have maximized dp[n][a][b] and have the answer to the problem. Hope this messy text was helpful.
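To restate the relationship described above compactly in the post's notation (with f(y) = dp[n][a][y] and cost C per B-pokeball), this is the identity the binary search relies on:

```latex
h(y) = f(y) - C\,y, \qquad y^{*} = \arg\max_{y} h(y)
\quad\Longrightarrow\quad f(y^{*}) = h(y^{*}) + C\,y^{*}.
```

Since f is concave, y* only moves down as C grows, so one can binary search on C until y* = b and then read off dp[n][a][b] = h(y*) + C*b.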
I don't understand the concave function part in dp[i][a][x].
The cool thing is, f(x) is concave, i.e. f(x+1) - f(x) <= f(x) - f(x-1).
Are you sure about this? I think there will be a point of inflexion in f(x). Up to the point of inflexion we have f(x+1) - f(x) >= f(x) - f(x-1), and after it f(x+1) - f(x) <= f(x) - f(x-1). After sufficiently large x, f(x) becomes constant or asymptotic.
omg wow
imposter
How did you arrive at the conclusion that the bounds for C should be [0, 1]?
Because 0 <= f(x) - f(x-1) <= 1: an extra B-pokeball never hurts (we can simply not throw it), and it can increase the expected number of caught pokemon by at most 1.
Does anybody have a convincing proof that f(x) is concave?
Thanks for the post linkret! There is one caveat I should mention with the doubly nested binary search solution: you need to carefully handle cases where only choosing A and only choosing B are both tied for the lowest cost.
Here's a test case:
Both your code above and my first submission output 31.71, but the answer is 31.65.
The optimal costs AC and BC come out to be 0.34 and 0.46. With these values, there are three elements where choosing only A and choosing only B are tied. In order to satisfy the constraints, exactly one of them must choose A and exactly two of them must choose B.
This is my modified solution that covers these cases. It handles all computations in integers and checks a more complete condition with the outer binary search.