Could some of you who participated in Google Hash Code 2017 be kind enough to share some of your ideas/solutions to the problem?
Idea 1 (too slow): while we can still add something, calculate for each (cache server, video) pair how our score would change if we added the video to that server, and greedily pick the pair with the largest improvement.
Idea 2: adding a (video, server) pair changes the "improvement score" only for the pairs (same video, other server), so we can maintain the scores in a set instead of recalculating everything each time.
Idea 3: the cost of adding a video is not constant; the bigger the video, the fewer other videos we can still add. So divide each pair's improvement by the video's size.
Idea 4: add some random constants and brute-force their values (by hand or otherwise). This changes the results a bit.
Idea 5: on the smaller tests, instead of always taking the best pair, pick randomly among the top few pairs and repeat the whole process many times.
Implementing all of this gives 2.63-2.65M. If I remember correctly, the first 3 ideas alone give you 2.63M (a rough sketch of those three follows below).
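Roughly, the first three ideas could look like the following (a minimal C++ sketch, not anyone's actual contest code; all structure names are illustrative and the input parsing is omitted). Instead of the set from idea 2, it uses a lazily re-evaluated priority queue keyed by gain/size, which has the same effect because gains can only decrease as caches fill up:

```cpp
#include <bits/stdc++.h>
using namespace std;

int V, E, C;                                    // #videos, #endpoints, #cache servers
long long X;                                    // cache capacity
vector<long long> videoSize;                    // size of each video
vector<long long> dcLatency;                    // endpoint -> data-centre latency
vector<unordered_map<int, long long>> latTo;    // cache -> (endpoint -> latency)
vector<vector<pair<int, long long>>> reqs;      // video -> list of (endpoint, #requests)
vector<vector<long long>> best;                 // best[e][v] = current best latency of v at e

// Idea 1: how much the score improves if we add video v to cache c right now.
long long gainOf(int c, int v) {
    long long g = 0;
    for (auto [e, n] : reqs[v]) {
        auto it = latTo[c].find(e);
        if (it != latTo[c].end() && it->second < best[e][v])
            g += n * (best[e][v] - it->second);
    }
    return g;
}

int main() {
    // ... read the input here and fill V, E, C, X and the vectors above ...
    best.assign(E, vector<long long>(V));
    for (int e = 0; e < E; ++e)
        fill(best[e].begin(), best[e].end(), dcLatency[e]);

    // Idea 3: key the queue by gain / size. Entries may become stale; we
    // re-check them lazily instead of maintaining a set (idea 2), which is
    // valid because gains only ever decrease over time.
    priority_queue<tuple<double, long long, int, int>> pq;  // (gain/size, gain, cache, video)
    for (int c = 0; c < C; ++c)
        for (int v = 0; v < V; ++v) {
            long long g = gainOf(c, v);
            if (g > 0) pq.emplace((double)g / videoSize[v], g, c, v);
        }

    vector<long long> freeSpace(C, X);
    vector<vector<int>> stored(C);
    vector<vector<char>> inCache(C, vector<char>(V, 0));
    while (!pq.empty()) {
        auto [ratio, g, c, v] = pq.top();
        pq.pop();
        if (inCache[c][v] || videoSize[v] > freeSpace[c]) continue;
        long long cur = gainOf(c, v);
        if (cur <= 0) continue;
        if (cur != g) {                          // stale entry: re-insert with the fresh gain
            pq.emplace((double)cur / videoSize[v], cur, c, v);
            continue;
        }
        inCache[c][v] = 1;                       // commit the currently best pair
        stored[c].push_back(v);
        freeSpace[c] -= videoSize[v];
        for (auto [e, n] : reqs[v])              // this video is now closer to these endpoints
            if (auto it = latTo[c].find(e); it != latTo[c].end())
                best[e][v] = min(best[e][v], it->second);
    }
    // ... write stored[] in the required output format ...
}
```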
Maybe you can provide some code? It would be great. It's unclear to me how to implement these ideas with a decent running time.
Our code, which gave the maximum points for "videos_worth_spreading"
I've implemented the first 3 ideas and they gave me only 2.22M. Story of my life.
Could you share your scores on the individual test cases? Thanks :)
Did anyone try to do a local optimization after the greedy step, and what were the results?
We tried it, without much success. The state transition function was to remove a random video from some cache and try adding one or more videos to it. This gave very little improvement, as score recalculation was slow and perhaps our greedy solution was already a deep local optimum. Or maybe it was just some implementation issue.
Solving for a single cache, assuming all others are fixed, is simple: compute, for every video X, the overall latency saved if you add video X to this cache, and then solve a 0/1-knapsack problem.
Our solution: Shuffle the caches, and solve the knapsacks one-by-one. Overall (with some tiny optimisations/randomisations here and there): 2.578M
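To illustrate the per-cache step, here is a self-contained sketch of the 0/1 knapsack on toy data (not the repository code; in practice the values would be the latency gains computed against the current partial assignment, only positive-gain candidates would be kept, and the reconstruction table would need to be handled more carefully at the real memory limits):

```cpp
#include <bits/stdc++.h>
using namespace std;

// 0/1 knapsack over one cache: size[v] and value[v] describe the candidate
// videos, X is the cache capacity; returns the indices of the chosen videos.
vector<int> fillOneCache(int X, const vector<int>& size, const vector<long long>& value) {
    int n = size.size();
    vector<long long> dp(X + 1, 0);                         // dp[w] = best gain with weight <= w
    vector<vector<char>> take(n, vector<char>(X + 1, 0));   // take[v][w]: item v improved dp[w]
    for (int v = 0; v < n; ++v)
        for (int w = X; w >= size[v]; --w)
            if (dp[w - size[v]] + value[v] > dp[w]) {
                dp[w] = dp[w - size[v]] + value[v];
                take[v][w] = 1;
            }
    vector<int> chosen;                                      // reconstruct the chosen subset
    int w = X;
    for (int v = n - 1; v >= 0; --v)
        if (take[v][w]) { chosen.push_back(v); w -= size[v]; }
    return chosen;
}

int main() {
    // Toy example: capacity 10, three candidate videos.
    vector<int> size = {3, 6, 5};
    vector<long long> value = {40, 70, 50};
    for (int v : fillOneCache(10, size, value)) cout << v << ' ';  // picks videos 1 and 0
    cout << '\n';
}
```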
Could you share your code, please?
Sure, it's on my GitHub: https://github.com/PetarV-/HashCode2017
Hi PetarV,
Can you please explain what you mean by "assuming all others are fixed"?
Assuming you've fully decided which videos go in all the other caches.
(Of course, in reality, while executing this algorithm you usually haven't, and that's why the solution obtained at the end is only approximate.)
Thanks for your reply,
So if I have the list of all videos with their weights and values, should I fill the caches one by one, assuming that there are no other caches?
How will the order of the caches affect the result?
Sounds about right. To be fully precise, "assuming that there are no other caches" is incorrect, since you do take into account all the assignments you made to other caches when you compute the utilities of the videos (e.g. if some cache already has video X and serves it to every relevant endpoint with lower latency than the cache you're currently filling, then X will have zero value in this knapsack).
The order in which this is done is clearly important — it should be simple to construct a case with only two caches, where choosing which cache to fill first will impact the result significantly. You can think of optimising each cache as a local ("greedy") solution, which might not yield the best results in the global case.
Lastly, I don't think the word "should" is the best word to use. This is indeed the strategy we used, and we managed to (barely) get into the finals (ranked 53rd). As some other people have said, it seems that it's possible to outperform us with simpler randomised greedy strategies (that do not involve solving several dynamic programming problems as a subtask).
So in this case I can't put the same video in multiple caches, right?
Of course you can, if it would provide a benefit to any user with respect to the (partial) solution you have built up before.
If it doesn't, there's really no need to (the algorithm will assign a value of zero).
Two of the input files can be solved exactly:
In "me_at_the_zoo", the limits are small, a formulation as a mixed-integer-program can be solved by an appropriate MIP-solver in a few seconds.
In "trending_today", every endpoints is connected to every cache sever and all the latencies are the same. Also the total size of all videos is the total size of all cache servers.
This reduces the problem to fitting each video into exactly one cache server, which is some variation of a bin-packing problem. To solve it, consider the videos in decreasing size and solve a subset-sum problem for each cache server (one server after another).
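A sketch of that reduction (illustrative names, input parsing omitted, and ignoring running time; for the full limits one would presumably cap how many of the largest remaining videos are fed into each subset-sum DP):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int V = 0, C = 0;          // #videos and #cache servers (filled by the omitted parser)
    long long X = 0;           // cache capacity
    vector<long long> sz;      // video sizes; in "trending_today" only these matter
    // ... read the input here ...

    vector<int> order(V);
    iota(order.begin(), order.end(), 0);
    sort(order.begin(), order.end(), [&](int a, int b) { return sz[a] > sz[b]; });

    vector<char> used(V, 0);
    vector<vector<int>> cacheContent(C);
    for (int c = 0; c < C; ++c) {
        vector<int> cand;                        // still-unplaced videos, largest first
        for (int v : order) if (!used[v]) cand.push_back(v);

        // Subset-sum DP: reach[w] = can we fill exactly w of this cache,
        // from[w] = index into cand of the video that first reached weight w.
        vector<char> reach(X + 1, 0); reach[0] = 1;
        vector<int> from(X + 1, -1);
        for (int i = 0; i < (int)cand.size(); ++i)
            for (long long w = X; w >= sz[cand[i]]; --w)
                if (!reach[w] && reach[w - sz[cand[i]]]) { reach[w] = 1; from[w] = i; }

        long long w = X;
        while (w > 0 && !reach[w]) --w;          // best achievable fill (ideally exactly X)
        while (w > 0) {                          // reconstruct and mark the chosen subset
            int i = from[w];
            cacheContent[c].push_back(cand[i]);
            used[cand[i]] = 1;
            w -= sz[cand[i]];
        }
    }
    // ... write cacheContent[] in the required output format ...
}
```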
What's the optimal score for "me at the zoo"?
516557