Arpa's blog

By Arpa, 8 years ago, in English

(This blog has been restored, as Errichto and some other users requested.)

Hi!

Here is an implementation of Tarjan's offline algorithm for solving RMQ (Range Maximum / Minimum Query).

It's very simple to implement, and its time complexity is O((n + q)·α(n)), where α() is the inverse Ackermann function that appears in the DSU analysis.

Problem: Given an array a of n integers and q queries, for each query print the maximum value in the range [L, R].

Solution: We need an array of vectors, called assigned; assigned[r] contains the queries whose R equals r. While reading the queries, push each query into assigned[R]. We need a DSU in which every i is initially its own parent, and a stack named st.

For i from 0 to n - 1, do:
	While st is not empty and a[st.top] <= a[i]:
		Set i as the parent of st.top in the DSU and pop that element from st.
	Push i onto st.
	For each query assigned to i:
		The answer of this query is a[root of the query's L in the DSU].
Code here.
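
A minimal C++17 sketch of the procedure described above (a reconstruction for illustration, not necessarily identical to the linked code; names such as par, assigned and ans are my own):

#include <bits/stdc++.h>
using namespace std;

vector<int> par; // DSU parent; initially par[i] = i

int root(int v) { // find with path compression
    return par[v] == v ? v : par[v] = root(par[v]);
}

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<int> a(n);
    par.resize(n);
    iota(par.begin(), par.end(), 0);
    for (int i = 0; i < n; ++i)
        scanf("%d", &a[i]);
    vector<vector<pair<int, int>>> assigned(n); // assigned[r] = {(L, query id)}
    for (int i = 0; i < q; ++i) {
        int l, r;
        scanf("%d %d", &l, &r); // 0-based, inclusive: maximum of a[l..r]
        assigned[r].push_back({l, i});
    }
    vector<int> ans(q);
    stack<int> st;
    for (int i = 0; i < n; ++i) {
        // link popped indices to i: a[i] is now their range maximum
        while (!st.empty() && a[st.top()] <= a[i]) {
            par[st.top()] = i;
            st.pop();
        }
        st.push(i);
        for (auto [l, id] : assigned[i])
            ans[id] = a[root(l)]; // maximum value in a[l..i]
    }
    for (int i = 0; i < q; ++i)
        printf("%d\n", ans[i]);
    return 0;
}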

Note that in the above code I used only the path-compression technique for the DSU; the size-comparing (union-by-size) technique can be used too (but it has lower performance).

It's obviously correct: right after processing index i, for any j ≤ i, a[root(j)] is the greatest value in the range [j, i]. This invariant is maintained because we set i as the parent of st.top exactly when a[st.top] ≤ a[i].

Performance test

I compared four implementations on tests produced by a common generator: this method (known as Arpa's trick), Vector + Binary search, Sparse table, and an O(n) method.

Here is the result:

Method \ Time (milliseconds)        | Strictly increasing array | Strictly decreasing array | Random
This method (known as Arpa's trick) | 2943                      | 2890                      | 2946
Sparse table                        | 3612                      | 3595                      | 3807
Vector + Binary search              | 3101                      | 6130                      | 3153
O(n) method                         | 3788                      | 3920                      | 3610

»
8 years ago, # |

I know this technique as "Arpa's trick". Are you sure you've got the right name?

  • »
    »
    8 years ago, # ^ |

    I discovered this technique myself, but it seems it existed before (as Zlobober said). So I can't name this technique "Arpa's trick", but I call it "Arpa's trick", and you can call it "Arpa's trick" as well.

  • »
    »
    8 years ago, # ^ |

    At least he's helping others ... You can also write tutorials and name techniques using your fake name. Are you sure you wrote "lucian bicsi" correctly?

    • »
      »
      »
      8 years ago, # ^ |

      Fake name?! In fact, "Arpa" is a compressed version of my name; my real name is "AmirReza PoorAkhavan". My friends call me Arpa at school, in chat, etc. I'm more familiar with "Arpa" than with my real name ("PoorAkhavan" or "AmirReza").

      There are many people who know me as "Arpa" and don't even know my real name.

      • »
        »
        »
        »
        8 years ago, # ^ |

        I was talking about I_love_Retrograd

        • »
          »
          »
          »
          »
          8 years ago, # ^ |

          Oh right! But your comment gave me an opportunity to talk about my nickname :D

          Edit: sorry anyway.

»
8 years ago, # |

You could also binary search on the stack instead of using DSU, which might be easier to code depending on the implementation, and still achieve reasonable performance (binary searches are pretty light).

  • »
    »
    8 years ago, # ^ |

    I'll add some statistics about which method is faster soon; please wait a few hours.

    Edit: Done.

    • »
      »
      »
      4 years ago, # ^ |

      Can anyone please explain the vector + binary search approach for this problem, or provide any source where I can learn it? Thank you.

      • »
        »
        »
        »
        4 years ago, # ^ |

        Iterate over the elements of the array and answer all queries that end at the current element. You maintain a decreasing vector of elements (and indices), on which you can binary search to find the answer.

        Think of it this way: if you fix the right border and look at the RMQs [0, R], [1, R], [2, R], ..., [R, R], you can notice that they are monotonically decreasing, i.e. RMQ(0, R) >= RMQ(1, R) >= RMQ(2, R) >= ... >= RMQ(R, R). In the vector you maintain the list of all values that appear in that RMQ list, together with their leftmost index.

        If you add a new element (so you go from R to R+1), you just need to look at how the RMQs change, since now you want the list to reflect the RMQs for [0, R+1], ..., [R+1, R+1]. If the new array element is smaller than the previous minimum of the list, you can simply append it to the vector, as the previous R+1 RMQs don't change and the last one is RMQ(R+1, R+1) = A[R+1]. But if the element is bigger, then you need to delete a few of the previous RMQs from the list, since adding that element makes their RMQs bigger.

        The example code from Arpa should explain the details.
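
        In case that code is not at hand, here is a hedged sketch of the vector + binary search approach just described (my own reconstruction; the names stk, assigned and ans are assumptions, and queries are read as 0-based inclusive pairs):

        #include <bits/stdc++.h>
        using namespace std;

        int main() {
            int n, q;
            scanf("%d %d", &n, &q);
            vector<int> a(n);
            for (int &x : a) scanf("%d", &x);

            vector<vector<pair<int, int>>> assigned(n); // assigned[r] = {(L, query id)}
            for (int i = 0; i < q; ++i) {
                int l, r;
                scanf("%d %d", &l, &r);
                assigned[r].push_back({l, i});
            }

            vector<int> ans(q);
            vector<pair<int, int>> stk; // (index, value); values strictly decreasing
            for (int r = 0; r < n; ++r) {
                // RMQs that would become bigger are dropped, as described above
                while (!stk.empty() && stk.back().second <= a[r]) stk.pop_back();
                stk.push_back({r, a[r]});
                for (auto [l, id] : assigned[r]) {
                    // the first entry whose index is >= l holds the maximum of a[l..r]
                    auto it = lower_bound(stk.begin(), stk.end(), l,
                                          [](const pair<int, int>& p, int x) { return p.first < x; });
                    ans[id] = it->second;
                }
            }
            for (int i = 0; i < q; ++i)
                printf("%d\n", ans[i]);
            return 0;
        }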

»
8 years ago, # |

You can outline the header of the table like this:

A/B 1 2 3 4
Arpa's trick 1 3 6 0
Sparse table 6 3 1 0
O(n) 3 1 6 0

Code:


A/B | 1 | 2 | 3 | 4
- | - | - | - | -
Arpa's trick | 1 | 3 | 6 | 0
Sparse table | 6 | 3 | 1 | 0
O(n) | 3 | 1 | 6 | 0
»
4 years ago, # |

OK, I found this post; it's quite old, but anyway, in case someone finds it in the future. It says:

"Note that in the above code I used only the path-compression technique for the DSU; the size-comparing (union-by-size) technique can be used too (but it has lower performance)."

But path compression alone works in $$$O(\log n)$$$ amortized time. So if you want it to run in $$$\alpha(n)$$$ time, you need to use both path compression and the size-comparing technique.

  • »
    »
    3 years ago, # ^ |

    That's true, but in this algorithm we force links rather than merging two sets, so the rank heuristic can't be applied directly, which leaves us with O(log n) amortized per query. In detail: https://cs.stackexchange.com/a/108793

    • »
      »
      »
      3 years ago, # ^ |

      Sorry, I didn't understand your point. Could you elaborate, please?

      • »
        »
        »
        »
        3 years ago, # ^ |

        All I wanted to say is that this approach can't use the full power of the DSU heuristics, so the O(α(n)) bound is unreachable.

        • »
          »
          »
          »
          »
          3 years ago, # ^ |

          Huh? It's easy to implement merge-by-rank or merge-by-size here. It might be slower at practical input sizes, though, especially for non-malicious input.

          Doing so just involves tracking the "intended" roots (the largest value in each merged range) as a summary statistic in the actual structural roots of each tree.
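
          For concreteness, here is a sketch of what that could look like: union by size plus path compression, with the position of the set's maximum kept at the structural root (the names DSU, unite and maxPos are mine, not from this thread):

          #include <bits/stdc++.h>
          using namespace std;

          // DSU where each structural root also stores the position of the maximum
          // value in its set (the "intended" root mentioned above).
          struct DSU {
              vector<int> par, sz, maxPos;
              const vector<int>& a; // the array, needed to compare values

              DSU(int n, const vector<int>& arr) : par(n), sz(n, 1), maxPos(n), a(arr) {
                  iota(par.begin(), par.end(), 0);     // every element is its own root
                  iota(maxPos.begin(), maxPos.end(), 0);
              }
              int find(int v) {                        // path compression
                  return par[v] == v ? v : par[v] = find(par[v]);
              }
              void unite(int u, int v) {               // structural union by size
                  u = find(u), v = find(v);
                  if (u == v) return;
                  if (sz[u] < sz[v]) swap(u, v);
                  if (a[maxPos[v]] > a[maxPos[u]]) maxPos[u] = maxPos[v];
                  par[v] = u;
                  sz[u] += sz[v];
              }
              int maxOf(int v) { return a[maxPos[find(v)]]; } // max value in v's set
          };

          int main() {
              vector<int> a = {2, 7, 3, 5};            // small demo
              DSU dsu((int)a.size(), a);
              dsu.unite(0, 1);
              dsu.unite(2, 3);
              printf("%d %d\n", dsu.maxOf(0), dsu.maxOf(2)); // prints "7 5"
              return 0;
          }

          In the sweep from the blog post, one would then call dsu.unite(st.top(), i) instead of forcing par[st.top()] = i, and answer a query [L, R] at the moment i = R with dsu.maxOf(L).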

        • »
          »
          »
          »
          »
          3 years ago, # ^ |

          I think I know how to implement this using both optimizations. I will do it after the Snackdown round, if nobody does it before and if I don't forget.

          • »
            »
            »
            »
            »
            »
            3 years ago, # ^ |

            Uh, I was going to write a blog post about this :D

            The idea is the following: each DSU set contains the segment of elements between two consecutive stack entries. Each time we pop something from the stack, we merge the segments that element was separating by merging the corresponding sets in the DSU.

            • »
              »
              »
              »
              »
              »
              »
              3 years ago, # ^ |

              I don't really understand what the difference is between this approach and the initial one.

              • »
                »
                »
                »
                »
                »
                »
                »
                3 years ago, # ^ |

                Idk, I wanted to say that the only difference should be that we also store the maximal value of the whole component in its DSU representative.

              • »
                »
                »
                »
                »
                »
                »
                »
                3 years ago, # ^ |

                The idea is the same: each DSU tree knows its rightmost element. Yet the order of linking doesn't actually matter, so one can just merge the corresponding components.