Introduction:
This blog covers how a DSU can save all of its previous versions across union operations, a topic that seems to lack written resources. Thanks to MinaRagy06 for helping me write this blog!
A basic DSU can be implemented in the following way:
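A minimal sketch (union by size only, with no path compression, since the persistence tricks below rely on the forest keeping its shape):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct DSU {
    vector<int> parent, sz;
    DSU(int n) : parent(n), sz(n, 1) { iota(parent.begin(), parent.end(), 0); }
    // walk up the forest; no path compression, so the structure never changes
    int get_root(int v) { return parent[v] == v ? v : get_root(parent[v]); }
    bool unite(int a, int b) {
        a = get_root(a), b = get_root(b);
        if (a == b) return false;
        if (sz[a] < sz[b]) swap(a, b);  // union by size keeps the height O(log N)
        parent[b] = a;
        sz[a] += sz[b];
        return true;
    }
};
```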
Next, I will explain how to store more information to answer queries that require time travelling.
Main Idea:
Problem 1 (Easy) :
Let's start by solving a simple CSES Problem.
The problem reduces to binary searching for the first time nodes $$$a$$$ and $$$b$$$ become connected. Now let's learn how to check connectivity at any point in time using a persistent DSU.
We maintain an extra array $$$\text{time_changed}$$$, where $$$\text{time_changed}[v]$$$ stores the time at which node $$$v$$$ stops being a root as the edges are added in order (and $$$\infty$$$ if it never does). Now we can run $$$\text{get_root}$$$ with an extra parameter $$$\text{time}$$$ to find the root of a node at any given time, and inside the binary search check whether nodes $$$a$$$ and $$$b$$$ have the same root at that specific time.
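A sketch of this idea (function and variable names are my own choices for illustration; union by size, no path compression):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int INF = INT_MAX;
// time_changed[v] = index of the union that made v a non-root, INF while v is a root
vector<int> parent, sz, time_changed;

void init(int n) {
    parent.assign(n, 0);
    iota(parent.begin(), parent.end(), 0);
    sz.assign(n, 1);
    time_changed.assign(n, INF);
}

// root of v in the version of the DSU right after the first t unions
int get_root(int v, int t) {
    while (parent[v] != v && time_changed[v] <= t) v = parent[v];
    return v;
}

// process the t-th edge (1-indexed time stamp)
void unite(int a, int b, int t) {
    a = get_root(a, INF), b = get_root(b, INF);
    if (a == b) return;
    if (sz[a] < sz[b]) swap(a, b);
    parent[b] = a;
    sz[a] += sz[b];
    time_changed[b] = t;  // b stopped being a root at time t
}

// first time a and b are connected (binary search over the edge count), or -1
int connected_at(int a, int b, int m) {
    int lo = 1, hi = m, ans = -1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        if (get_root(a, mid) == get_root(b, mid)) ans = mid, hi = mid - 1;
        else lo = mid + 1;
    }
    return ans;
}
```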
Time Complexity: $$$O(N + M \log N + Q \log M \log N)$$$
Problem 2 (Medium) :
Consider the following problem: there is a graph of $$$N$$$ nodes and initially no edges. You are given $$$Q$$$ queries, which you have to answer online, of the following types:
Add an edge between node $$$A$$$ and node $$$B$$$.
Check whether node $$$A$$$ and node $$$B$$$ are in the same connected component after the $$$X$$$-th query.
Find the number of nodes in the connected component of node $$$A$$$ after the $$$X$$$-th query.
Constraints:
$$$1 \le N,Q \le 2 \cdot 10^5$$$
$$$1 \le A,B \le N$$$
$$$1 \le X_i \le i$$$
In this problem, we again use the $$$\text{time_changed}$$$ array. In addition, for every node we need to save the versions at which it was a root right after an edge was added in its component, along with the component size at each of those versions (or whatever other information a given problem asks us to maintain).
We save two vectors for each node, $$$\text{version}$$$ and $$$\text{size}$$$. Whenever we add an edge, we push back the current time and the new total size to the new root's vectors.
Now whenever we want the size of the component of $$$A$$$ at time $$$T$$$, we find the root $$$R$$$ of $$$A$$$ at time $$$T$$$, binary search for the largest index $$$pos$$$ such that $$$version[R][pos] \le T$$$, and return $$$size[R][pos]$$$ as the answer.
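A sketch of this bookkeeping (names are illustrative; `comp_size` plays the role of the $$$\text{size}$$$ vectors, and the $$$\text{time_changed}$$$ machinery is the one from Problem 1):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int INF = INT_MAX;
vector<int> parent, time_changed;
vector<vector<int>> version, comp_size;  // per-node history while it was a root

void init(int n) {
    parent.assign(n, 0);
    iota(parent.begin(), parent.end(), 0);
    time_changed.assign(n, INF);
    version.assign(n, {0});
    comp_size.assign(n, {1});
}

int get_root(int v, int t) {
    while (parent[v] != v && time_changed[v] <= t) v = parent[v];
    return v;
}

void unite(int a, int b, int t) {  // t-th union, 1-indexed
    a = get_root(a, t), b = get_root(b, t);
    if (a == b) return;
    if (comp_size[a].back() < comp_size[b].back()) swap(a, b);
    parent[b] = a;
    time_changed[b] = t;
    // record a new (time, size) snapshot for the surviving root
    version[a].push_back(t);
    comp_size[a].push_back(comp_size[a].back() + comp_size[b].back());
}

// size of the component of a right after the first t unions
int size_at(int a, int t) {
    int r = get_root(a, t);
    int pos = upper_bound(version[r].begin(), version[r].end(), t)
              - version[r].begin() - 1;
    return comp_size[r][pos];
}
```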
Time complexity: $$$O(N + Q \log N)$$$
Problem 3 (Hard) :
$$$\textbf{Prerequisites: Persistent segment tree}$$$
Consider the following problem: initially you have $$$M = 1$$$ graph of $$$N$$$ nodes with no edges, and you are given $$$Q$$$ queries, which you have to answer online, of the following types:
Copy the $$$k$$$-th graph, label the new graph $$$M + 1$$$, set $$$M = M + 1$$$, and then add a new edge connecting nodes $$$A$$$ and $$$B$$$ in the new graph.
Check whether node $$$A$$$ and node $$$B$$$ are in the same connected component in the $$$k$$$-th graph.
Find the number of nodes in the connected component of node $$$A$$$ in the $$$k$$$-th graph.
Constraints:
$$$ 1 \le N,Q \le 2 \cdot 10^5 $$$
$$$ 1 \le k \le M $$$
$$$ 1 \le A,B \le N $$$
We can't maintain the DSU the same way as in the previous problem, since that only lets us update the latest version, while here we may have to update an older one. To solve this, we use a persistent segment tree for each array instead: leaf $$$i$$$ stores the values of parent and size for index $$$i$$$, and non-leaf nodes store nothing except their left and right children. To update $$$parent_i$$$ and $$$size_i$$$ in a particular DSU version $$$k$$$, we go to leaf $$$i$$$ in the segment tree with root $$$k$$$, change the leaf values we need, and label the new root $$$M + 1$$$, which corresponds to DSU version $$$M + 1$$$.
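A sketch of this construction (helper names like `set_leaf` are my own; union by size, no path compression, so every `get_root` does $$$O(\log N)$$$ point lookups, each costing $$$O(\log N)$$$):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Persistent segment tree used as the DSU's memory.
// Leaf i of version k stores (parent, size) of node i in DSU version k.
struct Node { int l = 0, r = 0, par = 0, sz = 1; };

int N;                          // number of DSU nodes
vector<Node> pool = {Node()};   // pool[0] is a dummy node
vector<int> root;               // root[k] = segtree root of DSU version k

int build(int lo, int hi) {
    int id = pool.size(); pool.push_back(Node());
    if (lo == hi) { pool[id].par = lo; return id; }
    int mid = (lo + hi) / 2;
    int l = build(lo, mid), r = build(mid + 1, hi);
    pool[id].l = l, pool[id].r = r;
    return id;
}

pair<int,int> get(int id, int lo, int hi, int i) {  // (parent, size) of leaf i
    if (lo == hi) return {pool[id].par, pool[id].sz};
    int mid = (lo + hi) / 2;
    return i <= mid ? get(pool[id].l, lo, mid, i)
                    : get(pool[id].r, mid + 1, hi, i);
}

int set_leaf(int id, int lo, int hi, int i, int par, int sz) {  // path copying
    Node cp = pool[id];
    int nid = pool.size(); pool.push_back(cp);
    if (lo == hi) { pool[nid].par = par; pool[nid].sz = sz; return nid; }
    int mid = (lo + hi) / 2;
    if (i <= mid) { int c = set_leaf(pool[nid].l, lo, mid, i, par, sz); pool[nid].l = c; }
    else          { int c = set_leaf(pool[nid].r, mid + 1, hi, i, par, sz); pool[nid].r = c; }
    return nid;
}

void init(int n) { N = n; root.assign(1, build(0, n - 1)); }

int get_root(int rt, int v) {
    while (true) {
        int p = get(rt, 0, N - 1, v).first;
        if (p == v) return v;
        v = p;
    }
}

// copy version k, add edge (a, b) there, and return the new version's label
int unite(int k, int a, int b) {
    int rt = root[k];
    a = get_root(rt, a), b = get_root(rt, b);
    if (a != b) {
        int sa = get(rt, 0, N - 1, a).second, sb = get(rt, 0, N - 1, b).second;
        if (sa < sb) swap(a, b), swap(sa, sb);  // union by size
        rt = set_leaf(rt, 0, N - 1, a, a, sa + sb);
        rt = set_leaf(rt, 0, N - 1, b, a, sb);
    }
    root.push_back(rt);
    return root.size() - 1;
}

int comp_size(int k, int a) {
    int r = get_root(root[k], a);
    return get(root[k], 0, N - 1, r).second;
}
```

Each `unite` allocates only $$$O(\log N)$$$ fresh segment-tree nodes, so memory stays $$$O(N + Q \log N)$$$.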
Time complexity: $$$O(N + Q \log^2 N)$$$
Extra Problems:
https://codeforces.net/gym/104468/problem/B
https://oj.uz/problem/view/APIO20_swap
Thanks for reading!
Great read! It never crossed my mind to use a persistent seg-tree for dsu, the solutions here are quite good.
(By the way, I accidentally got the fastest code on that first CSES question with a different approach, which I believe works in O((m+q)*log(n)) — if you want I can explain it here)
Yes, please!
The idea is to use a regular DSU without any path compression and to store, with each edge, the time when it was added.
So to check when two nodes first became connected, you find the maximum value on the path between them in the "tree" of the DSU, because before that time the path wouldn't exist.
Since the height of the tree is bounded by O(logn), you can just "pop up" from both nodes until you reach the LCA, tracking the maximum time value along the way.
So you get O(logn) per query, plus O(mlogn) to build the "DSU tree", as I like to call it.
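A rough sketch of this idea (names are just for illustration; the small map over one root path adds a tiny extra factor but keeps the sketch simple):

```cpp
#include <bits/stdc++.h>
using namespace std;

// up_time[v] = time stamp of the edge from v to parent[v]
vector<int> parent, sz, up_time;

void init(int n) {
    parent.assign(n, 0);
    iota(parent.begin(), parent.end(), 0);
    sz.assign(n, 1);
    up_time.assign(n, 0);
}

int find(int v) { while (parent[v] != v) v = parent[v]; return v; }

void add_edge(int a, int b, int t) {  // t = index of this edge
    int ra = find(a), rb = find(b);
    if (ra == rb) return;
    if (sz[ra] < sz[rb]) swap(ra, rb);  // union by size: height stays O(log n)
    parent[rb] = ra;
    up_time[rb] = t;
    sz[ra] += sz[rb];
}

// first time a and b were connected, or -1 if they never were
int connected_at(int a, int b) {
    if (find(a) != find(b)) return -1;
    // record every ancestor of a with the max edge time on the way to it
    map<int,int> best;  // ancestor -> max up_time from a to that ancestor
    int mx = 0;
    for (int v = a; ; v = parent[v]) {
        best[v] = mx;
        if (parent[v] == v) break;
        mx = max(mx, up_time[v]);
    }
    // pop up from b until we hit a's root path: that meeting point is the LCA
    mx = 0;
    int v = b;
    while (!best.count(v)) {
        mx = max(mx, up_time[v]);
        v = parent[v];
    }
    return max(mx, best[v]);
}
```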
There's a way to remove the binary search in the CSES problem!
We can instead use two pointers: keep moving nodes A and B upward until they coincide, tracking the first time they must meet.
While they are not the same, if time_changed[a] < time_changed[b], we move a to parent[a] and set ans = time_changed[a]; otherwise we do the same for b. If they never become equal, we return -1.
This is $$$O(N + (M+Q) \log N)$$$ !
With some fast input/output and a few optimizations it runs in 0.07 seconds (the fastest solution)
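A sketch of that walk (time stamps along any root path are increasing, so this is just merging two sorted lists; the setup code mirrors the blog's time_changed construction):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int INF = INT_MAX;
vector<int> parent, sz, time_changed;

void init(int n) {
    parent.assign(n, 0);
    iota(parent.begin(), parent.end(), 0);
    sz.assign(n, 1);
    time_changed.assign(n, INF);  // INF = still a root
}

int find(int v) { while (parent[v] != v) v = parent[v]; return v; }

void unite(int a, int b, int t) {
    a = find(a), b = find(b);
    if (a == b) return;
    if (sz[a] < sz[b]) swap(a, b);
    parent[b] = a;
    sz[a] += sz[b];
    time_changed[b] = t;
}

// two-pointer walk: always advance the endpoint whose outgoing edge is older;
// the endpoints meet at the LCA, and the answer is the newest edge used
int first_connected(int a, int b) {
    int ans = 0;
    while (a != b) {
        if (time_changed[a] > time_changed[b]) swap(a, b);
        if (time_changed[a] == INF) return -1;  // two distinct roots: never connected
        ans = time_changed[a];
        a = parent[a];
    }
    return ans;
}
```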
Congrats, you've beaten me!
My turn to optimize now
It seems that you forgot that operations on a persistent segment tree cost O(log), resulting in O(log^2) operations for persistent DSU using them. I'm pretty sure you can't even claim O(logN * ackermann_inverse(N)) because amortization doesn't go well with persistent structures.
Thanks for your reply!
I believe that in the worst case the tree has height log(N), resulting in log^2(N) operations including the segtree, but this doesn't happen a lot, for the same reason the DSU is not N*log(N). (So I guess its average is log(N) * ackermann_inverse(N).)
I think another way to look at it is that it's just a normal DSU with compression, but with a few segment tree calls inside every function.
Please tell me if there's anything wrong, or how you would find its complexity.
Well you yourself said what's wrong. You "guessed" that the "average is (LogN * ackermann_inverse(N))". Amortization and persistent data structures don't go well together naturally. We can create a case where you have a tree in the dsu with depth O(log) then create many branches (for example doing operations that don't change that tree with depth O(log)) from that one and in each one you'll have to compress O(log) nodes resulting in O(log^2) cost per operation in average.
About your intuition of it "being just a normal DSU with compression" it's completely wrong. It's more like many normal DSUs with compression. Imagine if you used only path compression. Then, according to your wrong intuition, it should be O(log) seg tree operations in average, since that's the case when using path compression. But it's easy to see that we can create a case similar to what I've described above and make O(N) dsus that have some big path and then make one operation in each one, resulting in O(N^2) seg tree operations.
Yes, you are right, I didn't think about this case before, which is why I found a wrong upper bound. Thanks, I will edit it now.
Continuing on the idea of problem 3, you can make any data structure persistent if you just use a persistent segtree as its underlying memory.
I might be missing something simple, but where is the $$$O(N \cdot {log} N)$$$ coming from in the complexity of the last problem? Doesn't the preprocessing cost only $$$O(N)$$$?
Thanks for the nice blog btw. Learned something new.
oops my bad, I was probably confused with all the logs in the solution while calculating the time complexity. It is actually $$$O(N)$$$ not $$$O(N \cdot {log} N)$$$ preprocessing as we only build one segment tree. Thanks I will edit it now.