I have observed a fact while solving: the complexity of lower_bound varies with the type of iterator passed to it.
But a weirder fact is:
1. The lower_bound below takes O(log(n)) time:
~~~~~
multiset<ll> set1;
// some insert operations on the multiset
it = set1.lower_bound(val);
~~~~~
The problem is Vasiliy's Multiset.
Here is my submission, in which it took O(log n) when I used it in the above form: here
2. The lower_bound below works in O(n) time:
~~~~~
multiset<ll> set1;
// some insert operations on the multiset
it = lower_bound(set1.begin(), set1.end(), val);
~~~~~
Here is my submission, in which it took O(n) when I used it in the above form: here
[cut] I don't understand why this happens, because both iterators here are of the same type.
Can someone specify all the places where the lower_bound function has O(log n) complexity?
Auto comment: topic has been updated by teja349 (previous revision, new revision, compare).
set::lower_bound takes advantage of knowing it's operating on a set and can exploit its properties, ensuring log N time.
std::lower_bound, on the other hand, only promises log N time if the iterators are random-access. A set iterator is not a random-access one (as far as I know; I don't mainly use C++), so that explains the running time.
This looks good.
Auto comment: topic has been updated by teja349 (previous revision, new revision, compare).
Click
I think you solved it the same way and got TLE too, right?
The lower_bound method of the std::multiset class works with the internal implementation: it searches for the suitable node in the self-balancing tree.
The free function std::lower_bound uses an ordinary binary search, which requires random-access iterators so that it can reach any element inside the range in O(1) time.
Just change it to the member call it = set1.lower_bound(val);
This is strictly O(log(n)).
How can I implement lower_bound on a vector of pairs in O(log n)?
I assume you mean that you have something like a vector<pair<int, int>> which is sorted by the standard comparator, and you want to do a lower_bound call on the first coordinate only. If, say, you want to get the first element of the form {4, x}, then you should do a lower_bound call with the value {4, INT_MIN} (or whatever the corresponding minimum is). Note that this will be a different value if you are not using the standard comparator.
Thanks for the blog. I was getting TLE on a LeetCode hard problem just because I was doing
auto it = lower_bound(s.begin(), s.end(), val);
but when I changed it to
auto it = s.lower_bound(val);
the code got accepted. So does that mean the first one takes O(n) and the latter takes O(log n) time?
The first one takes $$$\mathcal{O}(n \log n)$$$ which is even worse than a linear search.
Am I correct in my perception?
In the first one, it first sorts the set, which takes O(n log n) time, and then applies binary search to find the lower bound, which is O(log n),
so O(n log n + log n) approximates to O(n log n).
Am I correct, purplesyringa?
I was actually mistaken a bit; it's $$$\mathcal{O}(n)$$$ still.

std::lower_bound doesn't sort the set; the set is already sorted. std::lower_bound is literally binary search on an array, except that instead of using indexes it uses iterators. So it starts with l = a.begin(), r = a.end() and uses m = l + (r - l) / 2 as the pivot. For arrays this works in $$$\mathcal{O}(\log n)$$$ because you can subtract iterators to get the distance between them and add an integer to an iterator in $$$\mathcal{O}(1)$$$. For sets you can't do that directly, so to calculate r - l you have to increment l one step at a time until you get l == r, which works in $$$\mathcal{O}(r - l)$$$, approximately. Then l + (r - l) / 2 increments l the required number of times, in $$$\mathcal{O}(r - l)$$$ too. So all in all std::lower_bound runs in $$$\mathcal{O}(n + \frac{n}{2} + \frac{n}{4} + \dots + 1) = \mathcal{O}(n)$$$.

It's actually a bit harder than that, because incrementing an arbitrary iterator of a set by x is not $$$\mathcal{O}(x)$$$ but $$$\mathcal{O}(x \log n)$$$; however, it can be shown that if the set is implemented as a red-black tree (it usually is), a stronger complexity asymptotic holds for the operations binary search performs on the iterators. std::set::lower_bound, on the other hand, is simply a tree walk, so it's $$$\mathcal{O}(h)$$$ where $$$h$$$ is the height of the tree, and the height of a red-black tree is logarithmic in $$$n$$$.

I understood it. Thank you for clarifying.
$$$\mathcal{O}(n \log{n})$$$ doesn't mean it's worse than a linear search.
And I'm aware that it could be implemented as $$$T(n) = n + T(\frac{n}{2}) = \mathcal{O}(n)$$$.
I'm not sure why it's better than a linear search. You're right about $$$\mathcal{O}(n)$$$, that was an overestimate on my side.
That's not an overestimate: since $$$f(n) = \mathcal{O}(n) \implies f(n) = \mathcal{O}(n \log{n})$$$, you were right about the $$$\mathcal{O}(n \log{n})$$$ bound. But $$$\mathcal{O}(n \log{n})$$$ doesn't mean it is slower than linear search; that's what I wanted to state.
Got it. I usually have to write something along the lines of 'the algorithm runs in exactly $$$\mathcal{O}(T(n))$$$' which is obviously an oxymoron, the right way to state this is 'the algorithm runs in $$$\Theta(T(n))$$$' but unfortunately many people don't know anything about complexity except big-O notation and don't know big-O is only an upper bound. Sorry for confusion.
I like using $$$\Omega$$$.
If you want to say that a program runs faster than "something", you use $$$\mathcal{O}$$$; if you want to say that a program runs slower than "something", you use $$$\Omega$$$.
$$$\Theta$$$ doesn't carry that "message"; it's just a statement, even though it implies both. Plus, it's rare that you can say a program runs in exactly "something", and for many algorithms it's not easy to show that the lower and upper bounds for the worst case coincide.
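For reference, a sketch of the standard definitions being discussed here:

$$$f(n) = \mathcal{O}(g(n)) \iff \exists\, c > 0, n_0 : f(n) \le c \cdot g(n) \text{ for all } n \ge n_0$$$

$$$f(n) = \Omega(g(n)) \iff \exists\, c > 0, n_0 : f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0$$$

$$$f(n) = \Theta(g(n)) \iff f(n) = \mathcal{O}(g(n)) \text{ and } f(n) = \Omega(g(n))$$$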
Can anyone point out why Maximum Subarray Sum — II is getting TLE?
Hey, I did something similar and also got TLE; most likely the time limits are just strict.