ismail-but-noob's blog

By ismail-but-noob, 11 months ago, In English

Motivation

Dynamic CHT with sets is hard to write. I haven't learned the Li Chao Tree, but I will share this technique anyway, as it can be applied to other data structures.

AbdelmagedNour explained this to me $$$8$$$ months ago, and it introduced a new way of thinking to me. I could not find any references to something similar on the internet, so I decided I'll give it my best shot here.

If this already has a name, please comment it.

Prerequisites:

There are no prerequisites; however, the problem I have posted for checking your implementation uses the Convex Hull Trick. Please read this blog if you don't know about it.

The problem:

Assume we have a data structure which can add an element in $$$O(A)$$$ and answer a query in $$$O(Q)$$$; however, elements have to be added in monotonic order with respect to some comparison function. If we receive the elements in monotonic order, we have no problem! But what if we don't? My claim is that it is possible to add all $$$N$$$ elements in $$$O(AN \log N)$$$ total (amortized $$$O(A \log N)$$$ each), and to query in $$$O(Q \log N)$$$. Usually we query for the maximum/minimum element, or for any property for which we can easily merge $$$2$$$ candidate answers.

A classical example of this is the convex hull trick (CHT) data structure. It allows us to insert lines of the form $$$y = mx + c$$$ and query the minimum/maximum value of $$$y$$$ for a given $$$x$$$. Adding all $$$n$$$ lines takes $$$O(n)$$$ in total (amortized $$$O(1)$$$ per line). However, this assumes that the lines are inserted in monotonic order of slope. What if they aren't?
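For reference, here is a minimal sketch of the monotone version that the construction below will build on (my own illustration, not code from the blog linked above): a minimum CHT whose lines must be added in decreasing order of slope.

#include <bits/stdc++.h>
using namespace std;

// Minimal monotone CHT (minimum queries): lines y = m*x + c must be
// added in decreasing order of slope m; queries use binary search.
struct CHT {
    vector<long long> M, C;  // slopes and intercepts, M strictly decreasing
    bool bad(int a, int b, int c) {
        // Line b is never strictly minimal between lines a and c:
        // the intersection of a and c lies left of that of a and b.
        return (__int128)(C[c] - C[a]) * (M[a] - M[b]) <=
               (__int128)(C[b] - C[a]) * (M[a] - M[c]);
    }
    void add(long long m, long long c) {  // slopes must arrive in decreasing order
        if (!M.empty() && M.back() == m) {
            if (C.back() <= c) return;    // existing parallel line is better
            M.pop_back(); C.pop_back();
        }
        M.push_back(m); C.push_back(c);
        while ((int)M.size() >= 3 &&
               bad((int)M.size() - 3, (int)M.size() - 2, (int)M.size() - 1)) {
            M.erase(M.end() - 2); C.erase(C.end() - 2);  // drop the middle line
        }
    }
    long long query(long long x) {  // minimum y at x; assumes at least one line
        int lo = 0, hi = (int)M.size() - 1;
        while (lo < hi) {  // values along the hull are unimodal in the index
            int mid = (lo + hi) / 2;
            if (M[mid] * x + C[mid] <= M[mid + 1] * x + C[mid + 1]) hi = mid;
            else lo = mid + 1;
        }
        return M[lo] * x + C[lo];
    }
};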

Initially we create an array $$$blk$$$ of size $$$\lfloor \log_2 N \rfloor + 1$$$. $$$blk[i]$$$ will store a version of our data structure holding exactly $$$2^i$$$ elements; initially, $$$blk[i]$$$ is empty for all $$$i$$$.

Now, to add the first element, we just insert it into $$$blk[0]$$$. For the second element, we take the contents of $$$blk[0]$$$, merge them (as in merge sort) with the new element, insert the result into $$$blk[1]$$$, and empty $$$blk[0]$$$. The third element is inserted into $$$blk[0]$$$; the fourth is merged with $$$blk[0]$$$ and $$$blk[1]$$$, the result is inserted into $$$blk[2]$$$, and $$$blk[0]$$$ and $$$blk[1]$$$ are emptied. In other words, the occupied cells of $$$blk$$$ behave exactly like the bits of a binary counter.

A simple code demonstrating it would look something like this:

DS merge(DS& a, DS& b) {
  // Merges two sorted data structures into one.
  // The implementation depends on the data structure used.
}

void insert(T element) {
  DS current;
  current.insert(element);
  int index = 0;
  // Like incrementing a binary counter: carry until an empty slot is found.
  while (!blk[index].empty()) {
    current = merge(current, blk[index]);
    blk[index].clear();
    index++;
  }
  blk[index] = current;
}

Why is the complexity amortized $$$O(AN \log N)$$$?

Notice that each element you add passes through at most $$$\log N$$$ positions of the $$$blk$$$ array (its index only ever increases), and every move re-inserts it into a data structure at a cost of $$$O(A)$$$. With $$$N$$$ elements this amortizes to $$$O(AN \log N)$$$ in total.
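Equivalently: the structure at index $$$i$$$ is rebuilt roughly $$$N/2^i$$$ times, and each rebuild inserts its $$$2^i$$$ elements at $$$O(A)$$$ apiece, so the total work is

$$$\sum_{i=0}^{\lfloor \log_2 N \rfloor} \frac{N}{2^i} \cdot O\left(A \cdot 2^i\right) = O(AN \log N).$$$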

Now, to query this data structure, we combine the best possible answer across all non-empty data structures in $$$blk$$$.

A simple code demonstrating it would look like this:

T query(U input) {
  T ans = identity;
  for (auto& ds : blk) {
    if (ds.empty()) continue;
    // better() returns whichever of the two answers is better, or merges
    // them; again, the details depend on the data structure.
    ans = better(ans, ds.query(input));
  }
  return ans;
}

The complexity of this is clearly $$$O((Q + X) \log N)$$$, where $$$O(X)$$$ is the complexity of the better function: each of the at most $$$\log N$$$ structures is queried and merged once.
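Putting the pieces together for the dynamic CHT, a minimal sketch might look like this (it reuses the CHT struct and merge function from the sketches above; insertLine and queryMin are names I made up):

// Dynamic CHT: lines may arrive in arbitrary order of slope.
// Assumes 2^LOG exceeds the number of inserted lines.
const int LOG = 20;
CHT blk[LOG];

void insertLine(long long m, long long c) {
    CHT current;
    current.add(m, c);
    int index = 0;
    while (!blk[index].M.empty()) {        // carry, like a binary counter
        current = merge(current, blk[index]);
        blk[index] = CHT();
        index++;
    }
    blk[index] = current;
}

long long queryMin(long long x) {
    long long ans = LLONG_MAX;             // identity for min
    for (int i = 0; i < LOG; i++)
        if (!blk[i].M.empty())
            ans = min(ans, blk[i].query(x));
    return ans;
}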

As practice for this technique, you can solve this problem.

Extra Problems

If you know any more problems which apply this technique (with any data structure), please comment and I'll add them to the list :)

P.S. This is my first educational blog; if there are any mistakes or improvements I can make, please criticize me in the comments :)


»
11 months ago, # |

I didn't see that coming :)

One thing I would like to add: I know only two useful applications of this (I don't know if there are any more). The first one is the CHT mentioned in the blog, as this method is much faster than just using sqrt decomposition.

The other one is making Aho-Corasick support insertions. Say I have a set of patterns and two types of queries: either insert one more pattern, or ask, for a given string, how many patterns from the set appear in it as substrings. Again, this method gives an $$$O(N \log N)$$$ solution, where $$$N$$$ is the total length of the input, which is much better than using sqrt decomposition.

  • »
    »
    11 months ago, # ^ |

    For CHT, a Li Chao Tree would be simpler, and without the second $$$O(\log n)$$$ factor.

    • »
      »
      »
      11 months ago, # ^ |

      Once, there was a problem where I couldn't use a Li Chao Tree because of double precision (whereas when comparing $$$2$$$ lines directly I can avoid the division), so I used sqrt decomposition that time. So this one is still useful to know, I think. Sadly, I don't have the link to the problem now; I only remember it was on DMOJ.

»
11 months ago, # |

This is idea 4 of General ideas by adamant!

  • »
    »
    11 months ago, # ^ |

    Interesting blog!

    I hope that mine is more beginner-friendly so I will keep it :)

    Thanks for the insight!

»
11 months ago, # |

One cool application of this trick: 710F - String Set Queries. I wrote an explanation about this in this blog.

»
11 months ago, # |

I don't really know what it's called, but I've heard "add to merge data structure".
The simplest application of this technique is a replacement for std::set without erasure, built from sorted arrays, with $$$O(\log n)$$$ amortized insertion and $$$O(\log^2 n)$$$ lookup; see the sketch below.
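For example, a minimal sketch of that sorted-array "set" (my own illustration; strictly speaking it is a multiset, since duplicates are kept):

#include <bits/stdc++.h>
using namespace std;

// blk[i] is either empty or a sorted vector of exactly 2^i elements.
vector<long long> blk[32];

void insert(long long x) {        // amortized O(log n)
    vector<long long> cur = {x};
    int i = 0;
    while (!blk[i].empty()) {     // carry, like a binary counter
        vector<long long> merged;
        merge(cur.begin(), cur.end(), blk[i].begin(), blk[i].end(),
              back_inserter(merged));
        cur = move(merged);
        blk[i].clear();
        i++;
    }
    blk[i] = move(cur);
}

bool contains(long long x) {      // O(log^2 n): one binary search per level
    for (auto& b : blk)
        if (binary_search(b.begin(), b.end(), x)) return true;
    return false;
}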

On a more serious note, we can make this technique a bit more general. Consider the following problem:
You're given $$$Q$$$ queries of the form:
- Add string $$$s$$$ to the dictionary.
- Query the number of occurrences of the dictionary words in a given text $$$t$$$.

The static problem (without additions) is solvable in $$$O(\sum|s| + |t|)$$$ via the Aho-Corasick algorithm.

Now, to solve the dynamic problem, let's build more Aho-Corasicks. Define the size of an Aho-Corasick as the sum of the lengths of the strings in its dictionary. We keep at most one of size in the range $$$[1, 2)$$$, one in $$$[2, 4)$$$, one in $$$[4, 8)$$$, and so on. If we ever need two Aho-Corasicks in the same range, we can merge them and get one in the next range. So by applying the same technique we maintain at most $$$O(\log \sum |s|)$$$ data structures, each of which can be queried in $$$O(|t|)$$$. The total complexity is therefore $$$O(\log(\sum|s|) \cdot (\sum |s| + \sum |t|))$$$.
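A hedged sketch of this scheme (my own code; identifiers such as insertPattern and queryText are made up). Instead of literally merging two automata, it rebuilds one automaton over the union of their dictionaries, which is also linear in their total size and gives the same amortized bound:

#include <bits/stdc++.h>
using namespace std;

// Standard Aho-Corasick counting total occurrences of all patterns in a text.
struct AhoCorasick {
    vector<array<int, 26>> go;
    vector<int> link;
    vector<long long> cnt;  // patterns ending here; summed over suffix links in build()
    AhoCorasick() { newNode(); }
    int newNode() {
        go.push_back({}); go.back().fill(-1);
        link.push_back(0); cnt.push_back(0);
        return (int)go.size() - 1;
    }
    void add(const string& s) {
        int v = 0;
        for (char ch : s) {
            int c = ch - 'a';
            if (go[v][c] == -1) go[v][c] = newNode();
            v = go[v][c];
        }
        cnt[v]++;
    }
    void build() {  // BFS computes suffix links and automaton transitions
        queue<int> q;
        for (int c = 0; c < 26; c++) {
            if (go[0][c] == -1) go[0][c] = 0;
            else q.push(go[0][c]);
        }
        while (!q.empty()) {
            int v = q.front(); q.pop();
            cnt[v] += cnt[link[v]];  // link[v] is shallower, so already final
            for (int c = 0; c < 26; c++) {
                int u = go[v][c];
                if (u == -1) go[v][c] = go[link[v]][c];
                else { link[u] = go[link[v]][c]; q.push(u); }
            }
        }
    }
    long long count(const string& t) const {  // O(|t|)
        long long res = 0;
        int v = 0;
        for (char ch : t) { v = go[v][ch - 'a']; res += cnt[v]; }
        return res;
    }
};

const int B = 40;
vector<string> bucket[B];  // bucket[i]: patterns of total length in [2^i, 2^(i+1))
AhoCorasick ac[B];         // automaton built over bucket[i]
long long len_[B];         // total length of the patterns in bucket[i]

void insertPattern(string s) {
    long long total = s.size();
    vector<string> cur = {move(s)};
    int i = 63 - __builtin_clzll(total);  // floor(log2(total))
    while (!bucket[i].empty()) {          // two buckets in one range: merge them
        total += len_[i];
        for (auto& p : bucket[i]) cur.push_back(move(p));
        bucket[i].clear(); len_[i] = 0;
        i = 63 - __builtin_clzll(total);  // total at least doubled, so i increases
    }
    bucket[i] = move(cur); len_[i] = total;
    ac[i] = AhoCorasick();                // rebuild from scratch in O(len_[i])
    for (auto& p : bucket[i]) ac[i].add(p);
    ac[i].build();
}

long long queryText(const string& t) {    // O(|t| log sum|s|)
    long long res = 0;
    for (int i = 0; i < B; i++)
        if (!bucket[i].empty()) res += ac[i].count(t);
    return res;
}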

Another cool thing about it is that we can perform deletions via the same technique: just store the deleted strings in a second copy of the structure and subtract its query value. The same constraint as in a Fenwick tree holds, though (existence of an inverse element).