C++ STL: Order of magnitude faster hash tables with Policy Based Data Structures


TL;DR

The policy hash table is 3-6x faster for insertion/deletion and 4-10x faster for reads/writes than unordered_map. As far as I can tell, there are no downsides. The policy hash table (specifically the open-addressing version, gp_hash_table) beats unordered_map in all my benchmarks.

Background

I've often been irritated by how slow unordered_map is in C++. Too often, I have something that runs fast enough in terms of complexity, but the constant factor from unordered_map slows down the solution too much. Yesterday though, after using the useful order statistics tree from https://codeforces.net/blog/entry/11080, I was curious if there were any other useful data structures hiding in the Policy STL. And lo and behold, I found a hash table.

Benchmarks

Well, enough backstory, let's look at some numbers. All benchmarks below were compiled with -std=c++14 -O2.

unordered_map linear insertion:                 0.689846
cc_hash_table linear insertion:                 0.408233
gp_hash_table linear insertion:                 0.256131

unordered_map linear read/write:                1.69783
cc_hash_table linear read/write:                0.202474
gp_hash_table linear read/write:                0.26842

unordered_map random insertion:                 2.90184
cc_hash_table random insertion:                 3.15129
gp_hash_table random insertion:                 0.56553

unordered_map random read/write:                2.02336
cc_hash_table random read/write:                0.333415
gp_hash_table random read/write:                0.403486

While the policy hash table gives only modest improvements for linear insertions, it blows unordered_map out of the water when it comes to reads/writes. These are order-of-magnitude improvements that make hash tables usable where they previously weren't. "Official" benchmarks done by the DS authors can also be found here
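
For reference, here is a minimal sketch of the kind of timing loop these numbers come from. This is my reconstruction, not the author's benchmark code; the value of N and the exact access pattern are assumptions.

#include <chrono>
#include <cstdio>
#include <unordered_map>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;

// Times n sequential insertions into an empty map; result is in seconds.
template <class Map>
double bench_linear_insertion(int n) {
    Map m;
    auto start = chrono::high_resolution_clock::now();
    for (int i = 0; i < n; i++) m[i] = i;
    chrono::duration<double> elapsed = chrono::high_resolution_clock::now() - start;
    return elapsed.count();
}

int main() {
    const int N = 1000000; // hypothetical size, not taken from the post
    printf("unordered_map linear insertion: %f\n", bench_linear_insertion<unordered_map<int, int>>(N));
    printf("gp_hash_table linear insertion: %f\n", bench_linear_insertion<gp_hash_table<int, int>>(N));
}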

Example

Benchmarks, of course, don't always reflect the real world, so here's an example where the policy hash table allows a solution to be accepted that TLEs with unordered_map.

Example problem (5000 ms time limit)
Solution with unordered_map: (TLE on test case 8)
Solution with policy hash table directly substituted in: (TLE on test case 26)
Solution with unordered_map, rewritten to not use clears: (TLE on test case 26)
Solution with policy hash table and rewritten to not use clears: (AC with max time of 3180 ms; one way of avoiding clears is sketched below)
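
The linked solutions aren't reproduced here, but as a hypothetical illustration of the "no clears" idea: rather than calling clear(), whose cost can grow with the table's capacity, erase exactly the keys you inserted during the current phase. The helper names add/reset below are mine, not from the solutions.

#include <vector>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;

gp_hash_table<int, int> table;
std::vector<int> inserted; // keys added since the last reset

void add(int key, int value) {
    table[key] = value;
    inserted.push_back(key);
}

void reset() {
    // Cost is proportional to what was actually inserted, not to the table's capacity.
    for (int key : inserted) table.erase(key);
    inserted.clear();
}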

Usage

To use this data structure:

#include <ext/pb_ds/assoc_container.hpp> // defines the pb_ds associative containers, including gp_hash_table
using namespace __gnu_pbds;
gp_hash_table<int, int> table; // open-addressing hash table mapping int -> int

From there, the API seems to be almost exactly the same as unordered_map's.
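
For instance, here is a quick sketch of drop-in usage (my example, not from the post):

#include <cstdio>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;

int main() {
    gp_hash_table<int, int> table;
    table[1] = 10;                    // insert/update, just like unordered_map
    table[2] = 20;
    if (table.find(2) != table.end()) // lookup
        table.erase(2);               // erase by key
    for (auto &kv : table)            // iterate over (key, value) pairs
        printf("%d -> %d\n", kv.first, kv.second);
}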

Defeating Anti-Hash tests

One weakness of hash tables is that mean people can find hash collisions offline and blow up the complexity of your hashmap. In my opinion, the easiest way of solving this is below: there's no need for an elaborate custom hash function, just XOR the key with a random constant chosen at runtime before hashing it.

// RANDOM is chosen at program start, so precomputed anti-hash tests no longer collide.
const int RANDOM = chrono::high_resolution_clock::now().time_since_epoch().count();
struct chash {
    int operator()(int x) const { return hash<int>{}(x ^ RANDOM); }
};
gp_hash_table<int, int, chash> table;

Why is it so much faster?

See my comment here. In short, gp_hash_table uses open addressing (probing within a flat array), which is much more cache-friendly than the node-based separate chaining that std::unordered_map is effectively required to use.

A better hash function

There is one issue with using policy hash tables as-is: the default hash function for integers in C++ is just the identity. This is especially problematic when using hash tables for something like a Fenwick tree, since Fenwick tree indices are highly structured in their low bits, and the default bucket structure for policy hash tables is based on powers of 2 rather than primes, so it looks only at those low bits. If you're using policy hash tables for Fenwick trees, you have 2 options:

1. Use prime table sizes, as done here.
2. Use a better hash function for integers, as done here (a sketch of this option follows below).
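
As a sketch of option 2 (my own example: the mixing constants are the standard splitmix64 finalizer, and the names chash64 and safe_table are mine, not from the linked code):

#include <chrono>
#include <cstdint>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;

struct chash64 {
    // splitmix64 finalizer: mixes all bits of the input into the output.
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15ULL;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
        x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        // Randomized per run, so fixed anti-hash inputs don't transfer between runs.
        static const uint64_t FIXED_RANDOM =
            chrono::high_resolution_clock::now().time_since_epoch().count();
        return splitmix64(x + FIXED_RANDOM);
    }
};

gp_hash_table<long long, int, chash64> safe_table;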

Thanks to adamant for his post that revealed to me the existence of policy data structures, and thanks to ed1d1a8d for the discussion.

PS: In other posts about unordered_map, I've seen people claim that reserve and max_load_factor can increase performance drastically. They didn't seem to do much for me. However, if you want to do something similar for these hash tables, check out the example here (a rough pb_ds equivalent is also sketched below).
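
The sketch below is my own reconstruction, assuming I have the pb_ds policy template parameters right (the typedef name and the initial size are made up): choosing a resize policy with external size access exposes a resize() call that plays the role of reserve().

#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/hash_policy.hpp>
using namespace __gnu_pbds;
using namespace std;

// gp_hash_table spelled out with explicit policies; the final "true" enables
// external size access, which is what makes resize() callable.
typedef gp_hash_table<
    int, int,
    hash<int>, equal_to<int>,
    direct_mask_range_hashing<>, linear_probe_fn<>,
    hash_standard_resize_policy<hash_exponential_size_policy<>,
                                hash_load_check_resize_trigger<>,
                                true>>
    resizable_gp_table;

int main() {
    resizable_gp_table table;
    table.resize(1 << 16); // request roughly 2^16 slots up front, like reserve()
    for (int i = 0; i < 1000; i++) table[i] = i;
}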

Code for the benchmarks can be found here

EDIT: I realized that gp_hash_table is the way to go, not cc_hash_table. gp_hash_table sacrifices ~10% reading/writing speed to gain 3-6x in insertion/deletion/clearing. I updated the post to reflect the new numbers.

Tags: hashtable, c++, policy based, unordered_map
