The dynamic RMQ problem goes as follows:
Given an array of integers $$$a$$$ of length $$$n$$$ and $$$q$$$ queries of $$$2$$$ types:
- $$$l\ r$$$ — find the minimum on $$$a[l..r]$$$
- $$$i\ x$$$ — make $$$a_i = x$$$
If the number of updates $$$\gg$$$ the number of queries (which may occur in Mo's algorithm, for example, where we can have $$$O(n)$$$ min queries and $$$O(n\sqrt{n})$$$ updates), what is the most efficient way to solve the problem?
We obviously need something faster than $$$O(\log n)$$$ per update (which is achievable in a ton of ways), like $$$O(1)$$$ or maybe $$$O(\log\log n)$$$.
I thought about it for a while but wasn't able to come up with even a sublinear solution for querying the min while preserving $$$O(1)$$$ update time.
Not the most efficient for Mo's algorithm, but:
split the array into blocks of length $$$\log\log n$$$ and store the minimum of each.
To update, you naively recalculate the minimum inside the block.
Then you can answer in $$$O(n / \log\log n)$$$ and update in $$$O(\log\log n)$$$ (see the sketch below).
You are welcome.
You can also do $$$O(\log\log\log n)$$$ updates the same way (blocks of length $$$\log\log\log n$$$), by the way.
$$$O(1)$$$ I don't know, though.
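A minimal sketch of this block scheme in C++ (all identifiers are mine, not from the comment; $$$B$$$ plays the role of $$$\log\log n$$$, which for $$$n$$$ up to $$$10^6$$$ is a tiny constant, around 4 or 5):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the block idea above; BlockMin and its members are my own
// names. B plays the role of log log n (a tiny constant in practice).
struct BlockMin {
    int n, B;
    vector<int> a, blockMin;
    BlockMin(const vector<int>& init, int blockLen)
        : n(init.size()), B(blockLen), a(init),
          blockMin((init.size() + blockLen - 1) / blockLen, INT_MAX) {
        for (int i = 0; i < n; i++)
            blockMin[i / B] = min(blockMin[i / B], a[i]);
    }
    void update(int i, int x) { // O(B) = O(log log n): rescan one block
        a[i] = x;
        int b = i / B, lo = b * B, hi = min(n, lo + B);
        blockMin[b] = INT_MAX;
        for (int j = lo; j < hi; j++) blockMin[b] = min(blockMin[b], a[j]);
    }
    int query(int l, int r) const { // min on a[l..r], O(n / B + B)
        int res = INT_MAX;
        while (l <= r && l % B != 0) res = min(res, a[l++]); // left edge
        while (l + B - 1 <= r) { res = min(res, blockMin[l / B]); l += B; }
        while (l <= r) res = min(res, a[l++]);               // right edge
        return res;
    }
};
```

With $$$B = \sqrt{n}$$$ the same structure becomes the usual sqrt-decomposition mentioned further down the thread.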
I came up with one solution that works in $$$O(n \sqrt{n})$$$, but it's pretty unsatisfying, since it requires the queries to be offline and needs all queries of the second type sorted by $$$x$$$ in $$$O(n \sqrt{n})$$$ (which is possible, for example, if $$$x$$$ is in the range $$$[1, n]$$$).
The first idea that comes to mind is some kind of sqrt-decomposition: in each block we keep the minimum of all elements in that block. It's really easy to make queries of the first type $$$O(\sqrt{n})$$$, but it seems that updating the minimum of a block takes at least $$$O(\log n)$$$ time. But we can precalculate all these values.
For each query $$$k$$$ of the second type, find the next query that changes the same element $$$i_k$$$; call it $$$nxt_k$$$. Then $$$[k, nxt_k)$$$ is the time segment during which this query's value is in effect. Notice that for each query the exact index $$$i_k$$$ no longer matters; only its block, time segment, and value do.
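For concreteness, a small sketch of computing $$$nxt_k$$$ (a hypothetical helper with my own names; it assumes the type-2 queries are given as the array of changed indices $$$i_k$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical helper (names mine): idx[k] = i_k, the element changed by
// the k-th type-2 query. nxt[k] = index of the next type-2 query on the
// same element, or q if there is none (the value lives until the end).
vector<int> computeNxt(const vector<int>& idx) {
    int q = idx.size();
    vector<int> nxt(q, q);
    unordered_map<int, int> last; // element -> earliest query seen so far
    for (int k = q - 1; k >= 0; k--) {
        auto it = last.find(idx[k]);
        if (it != last.end()) nxt[k] = it->second;
        last[idx[k]] = k;
    }
    return nxt;
}
```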
So we want to compute something like $$$lazy_{b,i}$$$: the minimum in block $$$b$$$ after the first $$$i$$$ queries of the second type touching that block.
Now, let's sort all queries of the second type in ascending order of $$$x$$$ and push each of them into its respective block.
Then, solve for each block $$$b$$$ independently: compress all time segments of the queries of that block (this can be done by a kind of parallel scanline over all blocks). Finally, since the queries are sorted in ascending order of $$$x$$$, for each query $$$k$$$ we take its compressed segment $$$[k, nxt_k)$$$, set $$$lazy_{b,i} = x_k$$$ for every not-yet-assigned $$$i$$$ in $$$[k, nxt_k)$$$, and erase those positions from future updates, so each position is written only by the smallest $$$x$$$ covering it. This is done with a DSU in $$$O(n \sqrt{n})$$$ total.
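Here is a sketch of that DSU step under my own naming, with each block's update times already compressed to $$$0..m-1$$$; the invariant is that $$$nxt[t]$$$ points to the first time slot $$$\ge t$$$ whose lazy value hasn't been finalized yet:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the "assign and erase from future updates" DSU (names mine).
// nxt[t] = first time slot >= t whose lazy value is not finalized yet;
// slot m is a sentinel meaning "past the end".
struct PaintDSU {
    vector<int> nxt;
    PaintDSU(int m) : nxt(m + 1) { iota(nxt.begin(), nxt.end(), 0); }
    int find(int t) { return nxt[t] == t ? t : nxt[t] = find(nxt[t]); }
    // set lazy[t] = val for every still-unset t in [l, r)
    void paint(int l, int r, int val, vector<int>& lazy) {
        for (int t = find(l); t < r; t = find(t)) {
            lazy[t] = val;
            nxt[t] = t + 1; // slot t is done; skip it from now on
        }
    }
};

// Usage: process the block's updates in ascending order of x, so the
// first write into each slot is the true minimum there:
//   PaintDSU dsu(m); vector<int> lazy(m, INT_MAX);
//   for (auto [l, r, x] : updatesSortedByX) dsu.paint(l, r, x, lazy);
```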
As a result we have all the changes of all block minima over time, and everything else is trivial.
Hopefully that's good enough.
I see. But in practice a bottom-up segment tree actually worked fine, especially with the optimization of not going further up if the minimum in a vertex didn't change.
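For reference, a minimal sketch of such a bottom-up min segment tree with that early exit (my own code, not necessarily the commenter's exact version):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Iterative (bottom-up) segment tree for min, with the early-exit
// optimization: stop climbing once a parent's minimum is unchanged.
struct SegTree {
    int n;
    vector<int> t; // t[n..2n-1] = leaves, t[1] = root
    SegTree(const vector<int>& a) : n(a.size()), t(2 * a.size()) {
        copy(a.begin(), a.end(), t.begin() + n);
        for (int i = n - 1; i >= 1; i--) t[i] = min(t[2 * i], t[2 * i + 1]);
    }
    void update(int i, int x) {
        for (t[i += n] = x; i > 1; i >>= 1) {
            int m = min(t[i], t[i ^ 1]);
            if (t[i >> 1] == m) break; // parent already correct: stop early
            t[i >> 1] = m;
        }
    }
    int query(int l, int r) const { // min on a[l..r], inclusive
        int res = INT_MAX;
        for (l += n, r += n + 1; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = min(res, t[l++]);
            if (r & 1) res = min(res, t[--r]);
        }
        return res;
    }
};
```

The break is valid because each node's stored value was correct before the update, so an unchanged parent implies all higher ancestors are unchanged too.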
I think this is supposed to translate to "you can't beat amortized $$$O(\log n)$$$".
MIHAI PATRASCU MENTIONED 🇷🇴 🇷🇴 🇷🇴 🇷🇴 🇷🇴 🇷🇴 FIRST EVER COMMUNICATION PROBLEM IN IOI 🇷🇴 🇷🇴 🇷🇴 🇷🇴 🇷🇴 🇷🇴 WHAT EVEN IS LR BINARY SEARCH 🇷🇴 🇷🇴 🇷🇴 🇷🇴 🇷🇴
I think for the Mo's problem you can use the second idea listed here (i.e. replace the segment tree, which admits online $$$O(\log)/O(\log)$$$ update/query, with sqrt-decomposition, which admits $$$O(1)/O(\sqrt{n})$$$).
But how will you do updates in $$$O(1)$$$? You need to recalculate the minimum of the whole block.
Oh true. Who would've thought thinking before speaking is a good idea.
Where is the Chinese comment explaining how to do it?