I was looking over some C++ solutions written by top CodeForces competitors when I realized that most of them use global arrays declared with a large fixed size instead of instantiating a vector inside the main method. Is there any benefit to doing it the former way other than (maybe) saving a few keystrokes?
Here are some examples of what I mean.
vector <int> graph[200010];
int a[200010];
ll dp[200010][2];
ll dp2[200010][2];
const int Maxn = 1e6 + 9;
typedef long long ll;
const ll oo = (ll)1e18;
vector<int> al[Maxn];
ll val[Maxn];
vector<pair<ll,int> > sub[Maxn];
ll dp[Maxn][2],ans[2],tmp[2];
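For contrast, here is a minimal sketch of the vector-in-main style I'm comparing against. The problem shape (n nodes, m undirected edges, a dp array) is just an assumption for illustration, not taken from any particular solution:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int n, m;
    cin >> n >> m;
    // Containers sized at runtime from the input; elements are value-initialized.
    vector<vector<int>> graph(n + 1);
    vector<ll> dp(n + 1, 0);
    for (int i = 0; i < m; ++i) {
        int u, v;
        cin >> u >> v;
        graph[u].push_back(v);
        graph[v].push_back(u);
    }
    // ... run the actual algorithm on graph and dp ...
}
```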
Upd: Thanks to everyone who responded! The main points from the comments below have been compiled here:
In general, C++ arrays…
- are faster/more efficient
  - There's a smaller runtime constant.
  - Vectors require an extra indirection and are stored on the heap.
  - The difference in performance becomes more obvious if you use a lot of small vectors.
- use less memory (useful under tight memory limit (ML) constraints)
- are more convenient to use
  - Easy to instantiate and clear (with `memset`); see the sketch after this list.
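To illustrate that last point, here is a minimal sketch of the usual pattern: declare the arrays once globally and wipe only the used prefix with `memset` between test cases. The bound MAXN and the multi-test-case loop are hypothetical, not taken from the quoted solutions:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const int MAXN = 200010;   // hypothetical upper bound on n

ll dp[MAXN][2];            // global arrays live in static storage and start out zeroed

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        // Wipe only the rows the previous test case may have touched;
        // memset is valid here because the array is contiguous POD storage.
        memset(dp, 0, sizeof(dp[0]) * (n + 1));
        // ... read the test case and fill dp[0..n] ...
    }
}
```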
So overall, I guess vectors are fine in most cases, but you can use arrays just to be on the safe side.