190829557 This is my submission, which gives me TLE. I think using map is preferable to unordered_map, since the worst-case time complexity of unordered_map is O(n^2), compared to O(log n) for map!!
Rehash your hash table:
m.rehash(n);
will allocate space in memory for n elements (like vector::reserve(n)).
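A minimal sketch of what that looks like, assuming the keys are ints and an upper bound n on the number of elements is known in advance:

```cpp
#include <unordered_map>

int main() {
    const int n = 1'000'000;          // expected number of keys (placeholder)
    std::unordered_map<int, int> m;

    // Pre-allocate so the loop below never triggers a rehash.
    // rehash(b) asks for at least b buckets; reserve(n) sizes the table
    // so that n elements fit without exceeding the max load factor.
    m.reserve(n);

    for (int i = 0; i < n; ++i)
        m[i] = i;
}
```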
This still doesn't pass the time limit in the mentioned problem (191032622). The correct solution is to use a custom hash function that makes unordered_map run in $$$O(1)$$$ on average (191032878). Take a look at this blog post by neal.
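For reference, a sketch in the spirit of that blog post; the struct name custom_hash and the splitmix64-style mixer are just one common way to do it:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Randomized hash, so a fixed anti-hash test cannot force everything
// into the same bucket. splitmix64 is a strong 64-bit mixing function,
// and FIXED_RANDOM makes the hash differ between runs.
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    std::size_t operator()(uint64_t x) const {
        static const uint64_t FIXED_RANDOM =
            std::chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};

// Usage: std::unordered_map<long long, int, custom_hash> m;
```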
It's not quite that simple.
A map is a binary search tree (in practice a self-balancing one; std::map is typically a red-black tree). A binary search tree is a binary tree where every node in a node's left subtree is less than it, and every node in its right subtree is greater. Lookups and updates are O(log n) because a well-balanced BST cuts the remaining tree in half at each level of depth, akin to binary search; once the right position is found, attaching the new node is constant time (just make a new node and edge).
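For concreteness, a small sketch of map as an ordered container (the frequency count here is just an illustration):

```cpp
#include <cstdio>
#include <map>

int main() {
    std::map<int, int> freq;                    // balanced BST under the hood

    // Each insert/lookup walks at most O(log n) nodes from the root.
    for (int x : {5, 1, 9, 1, 5, 5})
        ++freq[x];

    // Unlike a hash table, iteration visits keys in sorted order.
    for (auto [key, count] : freq)
        std::printf("%d -> %d\n", key, count);  // 1 -> 2, 5 -> 3, 9 -> 1
}
```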
An unordered_map is a hash table. Hash tables can't be iterated in any meaningful order. They work by running what is called a hash function on each inserted key to generate a number, which is used to index the stored value. They can be added to and accessed in O(1) time. The complexity of building an unordered_map of n elements is not actually O(n^2), but O(n) amortized. The reason O(n^2) is theoretically possible is hash collisions: a collision is when the hash function sends two different keys to the same bucket. When too many keys pile up and the load factor grows too large, a rehash has to be performed, which takes O(n) time. If a rehash happened on every insert into an unordered_map, then yes, building it would take O(n^2) time. In practice this is basically impossible unless the data inserted into the hash table was specifically designed to mess up the hash function. This is why we call the time complexity O(n) amortized.
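A small sketch that makes the occasional rehash visible; the exact points at which the table grows are implementation-defined:

```cpp
#include <cstdio>
#include <unordered_map>

int main() {
    std::unordered_map<int, int> m;
    auto buckets = m.bucket_count();

    // Each rehash re-inserts every element (O(n) work), but rehashes get
    // exponentially rarer as the table grows, so the total cost over n
    // inserts is O(n), i.e. O(1) amortized per insert.
    for (int i = 0; i < 100; ++i) {
        m[i] = i;
        if (m.bucket_count() != buckets) {
            buckets = m.bucket_count();
            std::printf("rehash after %d inserts, bucket_count = %zu\n",
                        i + 1, buckets);
        }
    }
}
```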
Note: BST maps have amortized time complexity too, actually. To keep lookups fast, they occasionally have to rebalance themselves, which for some balancing schemes means rebuilding (part of) the tree and can take O(n) time.
Unordered_map is O(1) on average, not amortized.
However, map is O(log n) amortized.
You should look up the difference between the two, but the main point is that an amortized bound is guaranteed while an average bound is not. So map is guaranteed to take O(log n) time in the worst case, but unordered_map is not guaranteed to take O(1) time, which is why O(n^2) blowups exist.
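A minimal sketch of the amortized side of that distinction, using vector::push_back as the textbook amortized-O(1) operation (the reallocation count depends on the library's growth factor):

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v;
    auto cap = v.capacity();
    int reallocations = 0;

    // A push_back that reallocates costs O(current size), but the total
    // over n push_backs is O(n) for every possible input, so the O(1)
    // amortized bound is a guarantee, not an expectation.
    for (int i = 0; i < 1'000'000; ++i) {
        v.push_back(i);
        if (v.capacity() != cap) { cap = v.capacity(); ++reallocations; }
    }
    std::printf("reallocations: %d\n", reallocations);  // a few dozen, not a million

    // unordered_map's O(1), by contrast, holds only on average: it assumes
    // the keys don't all land in the same bucket, which an adversarial
    // input can violate.
}
```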