I have been trying to solve this problem with Mo's algorithm, but my solution was getting TLE. Could somebody share an approach, if it is possible to solve it with Mo's algorithm? Link to problem: http://codeforces.net/contest/588/problem/E
Auto comment: topic has been updated by papa-ka-para.
Keeping the set of people on the current path in a `set<int>` or similar structure gives a time complexity of roughly $O(m \sqrt{q} \log m)$, which is unlikely to pass. In a set, inserting, deleting, and finding the minimal $a$ elements each cost $O(\log m)$. However, you insert and delete far more often than you find the minimal elements. There exists a data structure which lets you insert and delete elements in $O(1)$, while taking $O(\sqrt{m} + a)$ to find the $a$ minimal elements. For this, divide the range of legal values into $O(\sqrt{m})$ equal blocks and for each block keep a hash table containing all the added elements in that block. To insert or delete, simply access the relevant hash table; to find the minimal elements, iterate through all possible values while skipping the blocks whose hash table is empty. Using this, the complexity is $O(m \sqrt{q} + q(\sqrt{m} + a))$.
It is also worth noting that in the case of this problem, after you flatten the tree into an array, instead of dividing into blocks of cities, you should divide into blocks of people, otherwise Mo's algorithm won't optimize as intended.
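As a sketch of that ordering: assuming the people have been laid out along the Euler tour (so query endpoints index people, not cities), a standard Mo comparator over blocks of people might look like the following; `BLOCK` is an assumed tuning constant around $\sqrt{m}$.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Query { int l, r, idx; };

const int BLOCK = 350; // ~ sqrt(number of people); tune for the input size

// Sort queries by block of the left endpoint (in the people array),
// alternating the direction of r inside consecutive blocks to save moves.
bool moCmp(const Query& a, const Query& b) {
    int ba = a.l / BLOCK, bb = b.l / BLOCK;
    if (ba != bb) return ba < bb;
    return (ba & 1) ? a.r > b.r : a.r < b.r;
}
```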
This is still quite slow: the constant factor is large, and hash tables only guarantee $O(1)$ on average, not in the worst case. I would prefer a method that runs in $O((n + q)\,a \log n)$ time. Centroid decomposition works really well here; HLD is not bad either.
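The core primitive shared by the centroid-decomposition and HLD solutions is combining two sorted candidate lists while keeping only the $a$ (here 10) smallest IDs. A small sketch, with `a` as a parameter:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Merge two sorted ID lists, keeping only the a smallest values overall.
// Each list is assumed to already hold at most a sorted IDs.
vector<int> mergeSmallest(const vector<int>& x, const vector<int>& y, int a = 10) {
    vector<int> res;
    size_t i = 0, j = 0;
    while ((int)res.size() < a && (i < x.size() || j < y.size())) {
        if (j == y.size() || (i < x.size() && x[i] < y[j]))
            res.push_back(x[i++]); // next smallest comes from x
        else
            res.push_back(y[j++]); // next smallest comes from y
    }
    return res;
}
```

In a centroid decomposition you would precompute the smallest-10 list from each centroid to every node in its part, then answer a query for the path $u \to v$ by merging the two lists through their common centroid.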
Hi, sorry for late reply.
I learnt something new from what you mentioned in the second and third paragraphs. Thank you for your reply.
I will try it with centroid decomposition.