I recently found out that inserting a vector in a map is possible:

```cpp
map<vector<int>, int> mp;
vector<int> vec = {1, 2, 3, 4, 5};
mp[vec] = 5;
cout << mp[vec]; // prints 5
```
- If there are $$$N$$$ vectors already present in `mp` as keys, what is the time complexity of inserting a new vector of size $$$M$$$ into `mp`? Is it $$$O(M \cdot \log N)$$$, or something else depending on the sizes of the vectors already present in `mp`?
- What would be the time complexity of a query operation such as `mp[vec]`?

Please help.
Both are $$$O(M \cdot \log N)$$$.
Any type that has `operator<` defined can be inserted into a `map`. `vector` has this operator defined, which is why a vector can be used as a `map` key. A `map` is implemented as a self-balancing tree (usually a red-black tree). The time taken to insert or search for a key in a self-balancing tree is $$$O(\log(N) \cdot T(\texttt{operator<}))$$$. For a vector, `operator<` takes time linear in the size of the vector, since it may have to traverse all of the vector's elements. Hence it will be $$$O(\log(N) \cdot M)$$$.
Can I ask a similar question?
If $$$N$$$ is the size of the vector: is inserting a vector into a map $$$O(N)$$$? And if I sort a vector, is the complexity $$$O(N \log N)$$$?
Yes. For `map<int, vector<int>>`, insertion has complexity `O(log(map_size) + vector_size)`. `sort(vec.begin(), vec.end())` has complexity `O(vector_size * log(vector_size) * complexity_of_comparing)`.
What is the worst case of a comparison? I mean, does it affect things much?
It depends on the vector's content. For example, for `vector<int>` each element comparison is $$$O(1)$$$, so `complexity_of_comparing = O(1)`.
Oh, ok. And can I ask something more: if we want to store the sorted adjacent vertices of each node, which is best to use, `vector<set>` or `vector<map<int, bool>>`?
Well, strictly speaking, the complexity of calling `mp[vec]` or `insert(vector, int)` can be as high as `log(N) * max_vector_length_in_mp`, as the C++ standard only guarantees $$$O(\log N)$$$ comparisons, while not specifying which elements will be compared.

This is also related to how the standard library implements `<` between vectors. If comparing A and B costs $$$O(|A|+|B|)$$$ instead of $$$O(\min(|A|,|B|))$$$ (as we would normally implement it), the complexity can reach `log(N) * max_vector_length_in_mp` as well.

However, the two scenarios mentioned above are not so likely, so I guess for most compilers that complexity will be $$$O(M \cdot \log N)$$$. Don't rely on that too much though, as `priority_queue` in GCC has already turned out to use a rather strange (and counter-intuitive) implementation, which makes the complexity of inserting a vector into it `log(N) * max_vector_length_in_mp`.
Follow-up: another thing I was interested in at some point is why `std::sort` on a vector of `std::string` elements would be linearithmic in the total size of the strings. I struggled to come up with a guarantee of any sort.

Moreover, I couldn't figure out any algorithm that would sort a list of $$$N$$$ strings in the given complexity that would be easier than shuffling the $$$N$$$ strings and then inserting them into a BST one string at a time.

In particular, I'm not sure how a regular binary max-heap would guarantee (even amortized) $$$O(\text{sum\_of\_lengths} \cdot \log(N))$$$ for $$$N$$$ operations.

Even harder follow-up: in some harder string problems, the solution is to sort strings with the comparator defined by the relationship $$$a \prec b \Leftrightarrow a + b < b + a$$$. This comparator has $$$O(|a| + |b|)$$$ complexity instead of $$$O(\min(|a|, |b|))$$$. Why is it fast enough to `std::sort` using this comparator? Moreover, which sorting algorithm would guarantee linearithmic complexity with this comparator?

Any thoughts on these?
On sorting of strings: I think quicksort can handle the job in $$$O(\text{sum(length)} \cdot \log n)$$$ in expectation, even if comparing strings A and B costs $$$O(|A|+|B|)$$$. Mergesort can give the same complexity (without randomness) if a comparison costs $$$O(\min(|A|,|B|))$$$.

On the complexity of a binary heap (under the assumption that a comparison costs $$$O(\min(|A|,|B|))$$$), let's consider two cases:

1. `pop`: after each comparison, one of the operands takes a step up in the heap; let's charge the cost to the element that goes up.
2. `push`: similarly, with each comparison one operand goes up. The only exception is the last comparison. We charge all these costs to the inserted element.

It's not hard to prove that each element can only be charged $$$O(\log)$$$ times.

On the $$$|A| + |B|$$$ problem: even though it might go well in expectation, quicksort on strings can act quadratically with a probability around $$$1/n$$$, which is a lot higher than for "normal" quicksort, and that is probably unwanted. For example, this might be problematic if you have one long string and (not too many) short strings.