teja349's blog

By teja349, history, 8 years ago, In English

I have observed something while solving a problem: the complexity of lower_bound varies with the type of iterator passed to it.

But what is even stranger is the following.
1. The lower_bound below takes O(log n) time:

multiset<ll> set1;
// some insert operations on the multiset
it = set1.lower_bound(val);

The problem is Vasiliy's Multiset.

Here is my submission, which ran in O(log n) per query when I used the form above.

2. The lower_bound below works in O(n) time:

multiset<ll> set1;
// some insert operations on the multiset
it = lower_bound(set1.begin(), set1.end(), val);

Here is my submission, which took O(n) per query when I used the form above.

I don't understand why this happens, because the iterators passed in both cases are of the same type.

Can someone specify in which situations the lower_bound function has O(log n) complexity?



»
8 years ago, # |

set::lower_bound takes advantage of knowing that it is operating on a set and can use its properties, ensuring O(log n) time.

std::lower_bound, on the other hand, only promises O(log n) time if the iterators are random access. A set iterator is not a random-access one (as far as I know, I don't mainly use C++), so that explains the running time.
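
For illustration, a minimal sketch (my own example; the container contents and the names it1, it2 are made up):

#include <bits/stdc++.h>
using namespace std;

int main() {
    multiset<long long> set1 = {1, 4, 4, 9, 16};

    // member function: walks down the balanced tree, O(log n)
    auto it1 = set1.lower_bound(4);

    // free function: generic binary search, but multiset iterators are only
    // bidirectional, so advancing them makes the search degrade to O(n)
    auto it2 = lower_bound(set1.begin(), set1.end(), 4LL);

    cout << *it1 << " " << *it2 << "\n"; // both print 4
}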


»
8 years ago, # |
  • »
    »
    8 years ago, # ^ |

    I think even you solved it the same way and got TLE, right?

»
8 years ago, # |

The lower_bound method of the std::multiset class works with the internal implementation: it searches for the appropriate node in the self-balancing tree.

The free function std::lower_bound, on the other hand, uses an ordinary binary search, which requires random-access iterators to reach an element inside the range in O(1) time.
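
As a contrast, here is a rough sketch (my own example; the vector and values are made up) of the same free function on a container with random-access iterators:

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<long long> v = {1, 3, 3, 7, 10}; // already sorted

    // vector iterators are random access, so the middle of any subrange is
    // reachable in O(1) and the whole search takes O(log n)
    auto it = lower_bound(v.begin(), v.end(), 3LL);

    cout << it - v.begin() << "\n"; // prints 1, the first position of 3
}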

»
8 years ago, # |

Just change it to

it = set1.lower_bound(val);

This is guaranteed to be O(log n).

»
5 years ago, # |

How can I implement lower_bound on a vector of pairs in O(log n)?

  • »
    »
    5 years ago, # ^ |

    I assume you mean that you have something like vector<pair<int, int>> which is sorted by the standard comparator, and you want to do a lower_bound call but only on the first coordinate. If, say, you want to get the first element that is of the form {4, x}, then you should do a lower_bound call with the value {4, INT_MIN}, or whatever the corresponding minimum is. Note that this will be a different value if you are not using the standard comparator.
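
    A small sketch of that idea (my own example; the vector contents are made up):

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        // sorted with the default pair comparator: by first, ties broken by second
        vector<pair<int, int>> v = {{1, 5}, {3, 2}, {4, 7}, {4, 9}, {6, 1}};

        // first element whose first coordinate is >= 4:
        // {4, INT_MIN} compares <= every pair of the form {4, x}
        auto it = lower_bound(v.begin(), v.end(), make_pair(4, INT_MIN));

        if (it != v.end())
            cout << it->first << " " << it->second << "\n"; // prints "4 7"
    }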

»
4 years ago, # |

Thanks for the blog. I was getting TLE on a LeetCode hard problem just because I was doing auto it = lower_bound(s.begin(), s.end(), val), but when I changed it to auto it = s.lower_bound(val) the code got accepted. So does that mean the first one takes O(n) and the latter one takes O(log n) time?

  • »
    »
    4 years ago, # ^ |

    The first one takes $$$\mathcal{O}(n \log n)$$$ which is even worse than a linear search.

    • »
      »
      »
      4 years ago, # ^ |

      Am I correct in my understanding?

      In the first one, it first sorts the set, which takes O(n log n) time, and then applies binary search to find the lower bound, which is O(log n),

      so O(n log n + log n) approximates to O(n log n).

      Am I correct, purplesyringa?

      • »
        »
        »
        »
        4 years ago, # ^ |

        I was actually mistaken a bit, it's $$$\mathcal{O}(n)$$$ still.

        std::lower_bound doesn't sort the set, the set is already sorted. std::lower_bound is literally binary search on an array except that instead of using indexes, it uses iterators. So it starts with l = a.begin(), r = a.end() and uses m = l + (r - l) / 2 as pivot. For arrays this works in $$$\mathcal{O}(\log n)$$$ because you can subtract iterators to get the distance between them and add an integer to an iterator in $$$\mathcal{O}(1)$$$. For sets, you can't do that directly, so to calculate r - l you have to increment l one by one until you get l == r, which works in $$$\mathcal{O}(r - l)$$$, approximately. Then l + (r - l) / 2 increments l the required count of times in $$$\mathcal{O}(r - l)$$$ too. So all in all std::lower_bound runs in $$$\mathcal{O}(n + \frac{n}{2} + \frac{n}{4} + \dots + 1) = \mathcal{O}(n)$$$.

        It's actually a bit subtler than that, because advancing an arbitrary set iterator by x is not $$$\mathcal{O}(x)$$$ but $$$\mathcal{O}(x \log n)$$$; however, it can be shown that if the set is implemented as a red-black tree (it usually is), a stronger complexity bound holds for the operations binary search performs on the iterators.

        std::set::lower_bound, on the other hand, is simply a tree walk so it's $$$\mathcal{O}(h)$$$ where $$$h$$$ is the height of the tree and the height of a red-black tree is logarithmic in $$$n$$$.
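
        Roughly, the generic algorithm looks like the sketch below (a simplified illustration of the idea, not the actual library source):

        #include <iterator>

        // simplified generic lower_bound; with set/multiset iterators both
        // std::distance and std::advance walk element by element
        template <class It, class T>
        It genericLowerBound(It first, It last, const T& value) {
            auto count = std::distance(first, last); // O(r - l) for non-random iterators
            while (count > 0) {
                auto step = count / 2;
                It mid = first;
                std::advance(mid, step);             // O(step) for bidirectional iterators
                if (*mid < value) {
                    first = ++mid;                   // continue in the right half
                    count -= step + 1;
                } else {
                    count = step;                    // continue in the left half
                }
            }
            return first;
        }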

    • »
      »
      »
      4 years ago, # ^ |

      $$$\mathcal{O}(n \log{n})$$$ doesn't mean it's worse than a linear search.

      And I'm aware that it could be implemented as $$$T(n) = n + T(\frac{n}{2}) = \mathcal{O}(n)$$$.

      • »
        »
        »
        »
        4 years ago, # ^ |

        I'm not sure why it's better than a linear search. You're right about $$$\mathcal{O}(n)$$$, that was an overestimate on my side.

        • »
          »
          »
          »
          »
          4 years ago, # ^ |

          That's not an overestimate: since $$$f(n) = \mathcal{O}(n) \implies f(n) = \mathcal{O}(n \log{n})$$$, you were right about the $$$\mathcal{O}(n \log{n})$$$ bound. But $$$\mathcal{O}(n \log{n})$$$ doesn't mean it is slower than linear search; that's what I wanted to state.

          • »
            »
            »
            »
            »
            »
            4 years ago, # ^ |

            Got it. I usually have to write something along the lines of 'the algorithm runs in exactly $$$\mathcal{O}(T(n))$$$', which is obviously an oxymoron; the right way to state this is 'the algorithm runs in $$$\Theta(T(n))$$$', but unfortunately many people don't know anything about complexity except big-O notation and don't realize big-O is only an upper bound. Sorry for the confusion.

            • »
              »
              »
              »
              »
              »
              »
              4 years ago, # ^ |

              I like using $$$\Omega$$$.

              Because if you want to say that a program runs faster than "something" you use $$$\mathcal{O}$$$, and if you want to say that a program runs slower than "something" you use $$$\Omega$$$.

              $$$\Theta$$$ doesn't carry that "message"; it's just a statement, even though it implies both. Plus it's rare that you can say a program runs in exactly "something", and for many algorithms it's not easy to show that the lower and upper bounds for the worst case are equal.
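
              For instance, taking my own illustration from the example discussed in this thread, std::lower_bound over multiset iterators satisfies, for the worst case,

              $$$T_{\text{worst}}(n) = \mathcal{O}(n), \quad T_{\text{worst}}(n) = \Omega(n) \quad\Rightarrow\quad T_{\text{worst}}(n) = \Theta(n),$$$

              whereas "$$$T_{\text{worst}}(n) = \mathcal{O}(n \log n)$$$" is also true but strictly weaker.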

»
9 months ago, # |

Can anyone point out why my solution to Maximum Subarray Sum — II is getting TLE?

Code
  • »
    »
    9 months ago, # ^ |
    My code

    Hey, I did something similar and got TLE too; most likely the time limits are just strict.