Hello, Codeforces!↵
↵
Some time ago I created a blog post about [Sqrt-tree](http://codeforces.net/blog/entry/57046). If you didn't read this post, please do it now.↵
↵
But so far, we were only able to answer queries on a static array. Now we will make our structure more "dynamic" and add update queries.
↵
So, let's begin!↵
↵
# Update queries↵
↵
Consider a query $\text{update}(x, val)$ that does the assignment $a_x = val$. We need to perform this query fast enough.↵
↵
## Naive approach↵
↵
First, let's take a look at what changes in our tree when a single element changes. Consider a tree node with length $l$ and its arrays: $\text{prefixOp}$, $\text{suffixOp}$ and $\text{between}$. It is easy to see that only $O(\sqrt{l})$ elements of $\text{prefixOp}$ and $\text{suffixOp}$ will change (only inside the block with the changed element), while $\text{between}$ will change in $O(l)$ elements. Therefore, the total number of updated elements per node is $O(l)$.
↵
We remember that any element $x$ is present in exactly one tree node at each layer. Root node (layer $0$) has length $O(n)$, nodes on layer $1$ have length $O(\sqrt{n})$, nodes on layer $2$ have length $O(\sqrt{\sqrt{n}})$, etc. So the time complexity per update is $O(n + \sqrt{n} + \sqrt{\sqrt{n}} + \dots) = O(n)$.↵
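To make these costs concrete, here is a minimal one-layer sketch in C++ (the names `prefixOp`, `suffixOp` and `between` follow the previous post, but this simplified single-layer structure is an illustration, not the linked implementation; the operation is `min`). Its `update` rebuilds the affected block in $O(\sqrt{n})$ and then the whole `between` table in $O(n)$, which is exactly the bottleneck described above:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical one-layer sketch of a sqrt decomposition; op is min.
struct NaiveSqrtLayer {
    int n, bs, nb;                         // size, block size, block count
    std::vector<int> a, prefixOp, suffixOp;
    std::vector<std::vector<int>> between; // between[i][j]: op over blocks i..j

    NaiveSqrtLayer(const std::vector<int>& v) : n((int)v.size()), a(v) {
        bs = std::max(1, (int)std::sqrt((double)n));
        nb = (n + bs - 1) / bs;
        prefixOp.resize(n);
        suffixOp.resize(n);
        between.assign(nb, std::vector<int>(nb));
        for (int b = 0; b < nb; b++) rebuildBlock(b);
        rebuildBetween();
    }
    void rebuildBlock(int b) {             // O(sqrt(n)) per block
        int l = b * bs, r = std::min(n, l + bs);
        for (int i = l; i < r; i++)
            prefixOp[i] = (i == l) ? a[i] : std::min(prefixOp[i - 1], a[i]);
        for (int i = r - 1; i >= l; i--)
            suffixOp[i] = (i == r - 1) ? a[i] : std::min(suffixOp[i + 1], a[i]);
    }
    void rebuildBetween() {                // O(n) in total: the bottleneck
        for (int i = 0; i < nb; i++) {
            int cur = suffixOp[i * bs];    // aggregate of the whole block i
            between[i][i] = cur;
            for (int j = i + 1; j < nb; j++) {
                cur = std::min(cur, suffixOp[j * bs]);
                between[i][j] = cur;
            }
        }
    }
    void update(int x, int val) {          // naive: O(sqrt(n)) + O(n) = O(n)
        a[x] = val;
        rebuildBlock(x / bs);
        rebuildBetween();
    }
    int query(int l, int r) {              // inclusive borders
        int bl = l / bs, br = r / bs;
        if (bl == br) {                    // a real sqrt-tree recurses here
            int res = a[l];
            for (int i = l + 1; i <= r; i++) res = std::min(res, a[i]);
            return res;
        }
        int res = std::min(suffixOp[l], prefixOp[r]);
        if (bl + 1 <= br - 1) res = std::min(res, between[bl + 1][br - 1]);
        return res;
    }
};
```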
↵
But it's too slow. Can it be done faster?↵
↵
[cut]↵
↵
↵
## A sqrt-tree inside the sqrt-tree↵
↵
Note that the bottleneck of updating is rebuilding $\text{between}$ of the root node. To optimize the tree, let's get rid of it! Instead of the $\text{between}$ array, we store another sqrt-tree for the root. Let's call it $\text{index}$. It plays the same role as $\text{between}$: it answers queries on segments of blocks. Note that the rest of the tree nodes don't have $\text{index}$; they keep their $\text{between}$ arrays.
↵
A sqrt-tree is _**indexed**_ if its root node has $\text{index}$. A sqrt-tree with a $\text{between}$ array in its root node is _**unindexed**_. Note that $\text{index}$ **is _unindexed_ itself**.
↵
So, for the _indexed_ tree, we have the following algorithm for updating:↵
↵
* Update $\text{prefixOp}$ and $\text{suffixOp}$ in $O(\sqrt{n})$.↵
↵
* Update $\text{index}$. It has length $O(\sqrt{n})$, and we need to update only one item in it (the one that represents the changed block). Since $\text{index}$ is _unindexed_, we can use the naive algorithm described above, so this step takes $O(\sqrt{n})$.
↵
* Go into the child node that represents the changed block and update it in $O(\sqrt{n})$, again with the previously described algorithm.↵
↵
Note that the query complexity is still $O(1)$: we need to use $\text{index}$ in query no more than once, and this will take $O(1)$ time.↵
↵
So, the total time complexity for updating a single element is $O(\sqrt{n} + \sqrt{n} + \sqrt{\sqrt{n}} + \dots) = O(\sqrt{n})$. Hooray! :)
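As a quick sanity check of that sum (this is just illustrative arithmetic, not part of the structure), we can add up the per-layer costs numerically and see that the whole sum stays within a constant factor of $\sqrt{n}$:

```cpp
#include <cmath>

// Sum the per-layer update costs sqrt(n) + sqrt(sqrt(n)) + ... until the
// layer length drops to a constant; the result stays below 2 * sqrt(n).
double indexedUpdateCost(double n) {
    double total = 0.0, len = n;
    while (len > 2.0) {
        len = std::sqrt(len);  // length of the node on the next layer
        total += len;          // O(sqrt(previous length)) work at that layer
    }
    return total;
}
```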
↵
## Implementation↵
↵
It is just a slightly modified code of a static sqrt-tree. [Here it is](https://gist.github.com/alex65536/0f4e69f49481791a9088d16df196789d#file-sqrttree-hpp).↵
↵
# Lazy propagation↵
↵
Sqrt-tree also can do things like assigning an element on a segment. $\text{massUpdate}(x, l, r)$ means $a_i = x$ for all $l \le i \le r$.↵
↵
There are two approaches to do this: one of them does $\text{massUpdate}$ in $O(\sqrt{n}\cdot \log \log n)$, keeping $O(1)$ per query. The second one does $\text{massUpdate}$ in $O(\sqrt{n})$, but the query complexity becomes $O(\log \log n)$.↵
↵
We will do lazy propagation in the same way as it is done in segment trees: we mark some nodes as _lazy_, meaning that we'll push them when necessary. But one thing is different from segment trees: pushing a node is expensive, so it cannot be done during queries. On layer $0$, pushing a node takes $O(\sqrt{n})$ time. So, we don't push nodes in queries; we only check whether the parent or the current node is _lazy_ and take that into account while answering queries.
↵
## First approach↵
↵
In the first approach, we say that only nodes on layer $1$ (with length $O(\sqrt{n})$) can be _lazy_. When pushing such a node, it updates its whole subtree, including itself, in $O(\sqrt{n}\cdot \log \log n)$. The $\text{massUpdate}$ process is done as follows:
↵
* Consider the nodes on layer $1$ and blocks corresponding to them.↵
↵
* Some blocks are entirely covered by $\text{massUpdate}$. Mark them as _lazy_ in $O(\sqrt{n})$.↵
↵
* Some blocks are partially covered. Note that there are no more than two blocks of this kind. Rebuild them in $O(\sqrt{n}\cdot \log \log n)$. If they were _lazy_, take it into account.
↵
* Update $\text{prefixOp}$ and $\text{suffixOp}$ for partially covered blocks in $O(\sqrt{n})$ (because there are only two such blocks).↵
↵
* Rebuild the $\text{index}$ in $O(\sqrt{n}\cdot \log \log n)$.↵
↵
So we can do $\text{massUpdate}$ fast. But how does lazy propagation affect queries? The queries get the following modifications:
↵
* If our query lies entirely in a _lazy_ block, calculate it and take _lazy_ into account. $O(1)$.
↵
* If our query consists of many blocks, some of which are _lazy_, we need to take care of _lazy_ only on the leftmost and the rightmost block. The rest of the blocks are calculated using $\text{index}$, which already knows the answer even on _lazy_ blocks (because it's rebuilt after each modification). $O(1)$.
↵
The query complexity still remains $O(1)$.↵
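To illustrate the tagging idea, here is a hypothetical one-layer sketch in C++ (all names are my own, not from the linked implementation; the operation is `min`, and mass-assigned values are assumed non-negative). Whole blocks carry a _lazy_ assignment tag; queries never push, they just take the tag into account, and pushing happens only inside `massUpdate` for the partially covered blocks. For clarity the query loops over blocks, so it runs in $O(\sqrt{n})$ here instead of the $O(1)$ achieved with $\text{index}$:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// One-layer lazy sketch: op is min, lazy tag is "assign value on the block".
struct LazyBlocks {
    static constexpr int NONE = -1;  // assumes assigned values are >= 0
    int n, bs, nb;
    std::vector<int> a, blockMin;
    std::vector<int> lazy;           // lazy[b] = assigned value, or NONE

    LazyBlocks(const std::vector<int>& v) : n((int)v.size()), a(v) {
        bs = std::max(1, (int)std::sqrt((double)n));
        nb = (n + bs - 1) / bs;
        lazy.assign(nb, NONE);
        blockMin.resize(nb);
        for (int b = 0; b < nb; b++) rebuild(b);
    }
    void push(int b) {               // expensive: only done inside massUpdate
        if (lazy[b] == NONE) return;
        int l = b * bs, r = std::min(n, l + bs);
        std::fill(a.begin() + l, a.begin() + r, lazy[b]);
        lazy[b] = NONE;
    }
    void rebuild(int b) {
        int l = b * bs, r = std::min(n, l + bs);
        blockMin[b] = *std::min_element(a.begin() + l, a.begin() + r);
    }
    void massUpdate(int x, int l, int r) {    // a[i] = x for l <= i <= r
        for (int b = 0; b < nb; b++) {
            int bl = b * bs, br = std::min(n, bl + bs) - 1;
            if (l <= bl && br <= r) {         // fully covered: tag only
                lazy[b] = x;
                blockMin[b] = x;
            } else if (bl <= r && l <= br) {  // partially covered: push + rebuild
                push(b);
                for (int i = std::max(l, bl); i <= std::min(r, br); i++) a[i] = x;
                rebuild(b);
            }
        }
    }
    int get(int i) {                 // element access, tag taken into account
        int b = i / bs;
        return lazy[b] == NONE ? a[i] : lazy[b];
    }
    int query(int l, int r) {        // min on [l, r]; never pushes
        int res = get(l);
        for (int b = 0; b < nb; b++) {
            int bl = b * bs, br = std::min(n, bl + bs) - 1;
            if (l <= bl && br <= r) res = std::min(res, blockMin[b]);
            else if (bl <= r && l <= br)
                for (int i = std::max(l, bl); i <= std::min(r, br); i++)
                    res = std::min(res, get(i));
        }
        return res;
    }
};
```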
↵
## Second approach↵
↵
In this approach, each node can be _lazy_ (except the root). Even nodes in $\text{index}$ can be _lazy_. So, while processing a query, we have to check for _lazy_ tags in all the parent nodes, i.e., the query complexity will be $O(\log \log n)$.
↵
But $\text{massUpdate}$ will become faster. It works in the following way:
↵
* $\text{massUpdate}$ fully covers some blocks. So, _lazy_ tags are added to them. It is $O(\sqrt{n})$.↵
↵
* Update $\text{prefixOp}$ and $\text{suffixOp}$ for partially covered blocks in $O(\sqrt{n})$ (because there are only two such blocks).↵
↵
* Do not forget to update the index. It is $O(\sqrt{n})$ (we use the same algorithm as in the next item). ↵
↵
* For partially covered blocks, we go to the nodes representing them and call $\text{massUpdate}$ recursively (reminds you of something, doesn't it? :) ).
↵
Note that when we do the recursive call, we do a prefix or suffix $\text{massUpdate}$. But for prefix and suffix updates we can have no more than one partially covered child. So, we visit one node on layer $1$, two nodes on layer $2$ and two nodes on each deeper layer. So, the time complexity is $O(\sqrt{n} + \sqrt{\sqrt{n}} + \dots) = O(\sqrt{n})$. The approach here is similar to the segment tree mass update.
↵
## Implementation↵
↵
I am too lazy (like a node in a sqrt-tree :) ) to write the implementation of lazy propagation on a sqrt-tree. So, I leave it as an exercise. If someone provides an implementation for this, I'll post it here mentioning the author.
↵
# Conclusion↵
↵
As we can see, a sqrt-tree can perform update queries and do lazy propagation. So, it provides the same functionality as segment trees, but with faster queries. The disadvantage is slow update time, but sqrt-trees can be helpful if we have many ($\approx 10^7$) queries but not many updates.
↵
Thanks for reading this post. Feel free to write in comments if there are mistakes in it or something is not clear. I hope that this data structure will be helpful for you.↵
↵
I will publish another post about sqrt-tree soon.
↵
Thanks to [user:Vladik,2018-04-24] for inspiring me to write this post.
↵