This blog post outlines the design of a very general data structure for associative range queries in the Rust programming language.
↵
In the "real world", self-balancing binary search trees can be augmented to handle a variety of range queries. However, for contest problems, statically allocated variants are much easier to code and usually suffice. The contest community has come to know these data structures as _segments trees_. Here, I will generalize most segment trees that you can find in the wild into one polymorphic data structure, that can easily be copy-pasted during online competitions. I will call it an _ARQ tree_. ARQ is pronounced "arc", which has a similar meaning to "segment", but also stands for "Associative Range Query". It supports highly customizable range queries, the main requirement being that the aggregation operation must be _associative_.↵
↵
### Associativity and Semigroups↵
↵
We begin with an array $a_0, a_1, a_2, \ldots, a_{n-1}$. Each $a_i$ belongs to a semigroup $(S, +)$, where $+$ stands for any associative binary operation on $S$. In formal notation:

**Associative Law:** $+: S \times S \rightarrow S$ satisfies $(a + b) + c = a + (b + c)$ for all $a, b, c \in S$.

Because $+$ is associative, we can drop the parentheses and talk about range aggregates of the form $a_l + a_{l+1} + \ldots + a_r$.
↵
### The ARQ Problem↵
↵
In the Associative Range Query problem, we wish to support two types of queries:↵
↵
- Given bounds $l$ and $r$, compute the aggregate $a_l + a_{l+1} + \ldots + a_r$.↵
↵
- Given bounds $l$ and $r$, and a function $f: S \rightarrow S$, replace $a_i$ with $f(a_i)$ for all $l \le i \le r$.↵
↵
In typical instances where computing $a + b$ or $f(a)$ takes $O(1)$ time, we wish to support each query in $O(\log n)$ time.
↵
### Identity and Monoids↵
↵
Perhaps you've heard of range queries over a monoid. A monoid is simply a semigroup with a special identity element:↵
↵
**Identity Law:** $id\in S$ satisfies $a + id = id + a = a$ for all $a \in S$.↵
↵
We represent `Semigroup` and `Monoid` using Rust `trait`s. The Rust compiler will not verify the associative and identity laws, so it's the programmer's job to check them when implementing these functions:↵
↵
~~~~~↵
trait Semigroup {
    fn compose(&self, other: &Self) -> Self;
}

trait Monoid: Semigroup {
    fn identity() -> Self;
}
~~~~~↵
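For example, here is one possible implementation on a hypothetical newtype (not used in the later code), with the minimum of two `i64`s as the operation and `i64::max_value()` as the identity:

~~~~~
// Hypothetical example: i64 under the minimum operation forms a monoid.
#[derive(Clone)]
struct MinI64(i64);

impl Semigroup for MinI64 {
    fn compose(&self, other: &Self) -> Self {
        MinI64(self.0.min(other.0))
    }
}

impl Monoid for MinI64 {
    fn identity() -> Self {
        // min(a, i64::max_value()) == a, so this is a valid identity
        MinI64(i64::max_value())
    }
}
~~~~~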
In practice, the trait bound `Monoid` turns out to be equivalent to `Semigroup + Clone`, and either of the two would suffice for our purposes. A `Semigroup` can trivially be extended into a `Monoid` by adding an identity element:↵
↵
~~~~~↵
impl<T: Semigroup + Clone> Semigroup for Option<T> {
    fn compose(&self, other: &Self) -> Self {
        match self {
            Some(ref a) => match other {
                Some(ref b) => Some(a.compose(b)),
                None => self.clone(),
            },
            None => other.clone(),
        }
    }
}

impl<T: Semigroup + Clone> Monoid for Option<T> {
    fn identity() -> Self {
        None
    }
}
~~~~~↵
Conversely, a `Monoid` is already a `Semigroup` and can implement `Clone` via composition with the identity element:↵
↵
~~~~~↵
// Note: Rust's coherence rules reject this blanket impl of the foreign
// trait Clone in practice; it's shown only to illustrate the equivalence.
impl<T: Monoid> Clone for T {
    fn clone(&self) -> Self {
        self.compose(&T::identity())
    }
}
~~~~~↵
↵
### Arq Tree API v1: pointwise updates↵
↵
Now that we understand `Semigroup + Clone` as equivalent to `Monoid`, the choice between the two becomes an implementation detail, which may have tradeoffs in performance and ergonomics depending on the application. Personally, I found it easier to work with the `Monoid` trait. Our first API will not support full range updates, but only pointwise updates:
↵
~~~~~↵
pub struct ArqTree<T> {
    val: Vec<T>,
}

impl<T: Monoid> ArqTree<T> {
    pub fn modify(&mut self, pos: usize, f: &dyn Fn(&T) -> T) {
        // implement modify
    }

    pub fn query(&self, l: usize, r: usize) -> T {
        // implement query
    }
}
~~~~~↵
I won't provide a full implementation here: you may use other segment tree guides as a reference. In summary, we build a complete binary tree on top of our array. `tree.modify(pos, f)` will replace $a_{pos}$ by $f(a_{pos})$, then recompute each of the ancestors of $a_{pos}$ by applying $+$ to its two children. This works with no restrictions on the function $f$. Its time complexity consists of one application of $f$ and $O(\log n)$ applications of $+$.
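Still, as a rough guide, here is a minimal sketch of how the stubs might be filled in (not the linked library's code), assuming the standard bottom-up layout with leaves stored at indices $n$ through $2n-1$ and the root at index $1$:

~~~~~
impl<T: Monoid> ArqTree<T> {
    pub fn new(size: usize) -> Self {
        // every leaf starts as the identity; index 0 is unused
        Self {
            val: (0..2 * size).map(|_| T::identity()).collect(),
        }
    }

    pub fn modify(&mut self, pos: usize, f: &dyn Fn(&T) -> T) {
        let mut p = pos + self.val.len() / 2;
        let new_leaf = f(&self.val[p]);
        self.val[p] = new_leaf;
        // recompute every ancestor from its two children
        while p > 1 {
            p /= 2;
            let combined = self.val[2 * p].compose(&self.val[2 * p + 1]);
            self.val[p] = combined;
        }
    }

    pub fn query(&self, l: usize, r: usize) -> T {
        let n = self.val.len() / 2;
        // work on the half-open leaf interval [l + n, r + n + 1)
        let (mut l, mut r) = (l + n, r + n + 1);
        // keep separate left and right accumulators so that
        // non-commutative operations are composed in array order
        let (mut res_l, mut res_r) = (T::identity(), T::identity());
        while l < r {
            if l % 2 == 1 {
                res_l = res_l.compose(&self.val[l]);
                l += 1;
            }
            if r % 2 == 1 {
                r -= 1;
                res_r = self.val[r].compose(&res_r);
            }
            l /= 2;
            r /= 2;
        }
        res_l.compose(&res_r)
    }
}
~~~~~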
↵
### Shortcomings of the v1 API↵
↵
Our simple v1 API can't support efficient range updates! In order to update an entire range efficiently, we will need to apply $f$ lazily, storing it in internal nodes of the tree to eventually be pushed toward the leaves. If multiple updates are performed, we may have to store a composition of updates for postponed application. While one may implement a composition operation $f \circ g$ which simply first calls $g$ and then calls $f$, this makes the cost of function application no longer $O(1)$!↵
↵
Thus, we must switch from function pointers to an implicit, composable representation for $f$. The composition of "add 5" and "add 7" is not "add 5 and then 7"; rather, it's "add 12": we should store the number 12 instead of the adding functions.↵
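In Rust terms, that means choosing a small data type whose values stand for the update functions; the names below are purely illustrative:

~~~~~
// "add c" is represented by the number c itself, so composing two updates
// is O(1) arithmetic instead of nesting closures.
type AddConst = i64; // stands for the function |x: i64| x + c

fn compose_add(f: AddConst, g: AddConst) -> AddConst {
    f + g // applying g and then f adds f + g in total
}

fn apply_add(f: AddConst, x: i64) -> i64 {
    x + f
}
~~~~~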
↵
To recap, now we have a monoid $(S, +)$ of array elements, as well as a second monoid $(F, \circ)$ whose set $F \subset S \rightarrow S$ consists of the update functions that we're interested in. Why is $F$ a monoid? Well, it's easy to check that function composition is associative, making it at least a semigroup. And then, just as before, we can choose whether to have `F: Monoid` or `F: Semigroup + Clone`. For $F$, I found the latter to be more ergonomic.
↵
However, these are not simply two independent monoids! The sets $F$ and $S$ interact, with functions from $F$ acting on elements of $S$ to produce the newly updated elements of $S$. While we're at it, I'm actually not too happy with the `Semigroup` and `Monoid` traits. There's more than one way for a type, say 32-bit integers, to be a monoid: the operation could be addition, multiplication, minimum, maximum, leftmost non-identity value, etc. With this design, we'd have to wrap our `i32`s in distinct wrappers for each `Monoid` implementation, and that's ugly.
↵
But remember that a `struct` is just a collection of data. A `struct`'s `impl` is a collection of functions, and possibly some associated types and constants. Typically, functions inside an `impl` take a special `self` parameter and are called methods, but this is not strictly necessary. So we can instead define a trait that packages the two types $S$ and $F$, alongside functions that act on these types!
↵
### Arq Tree API v2: range updates↵
↵
~~~~~↵
pub struct ArqTree<T: ArqSpec> {
    app: Vec<Option<T::F>>,
    val: Vec<T::S>,
}

impl<T: ArqSpec> ArqTree<T> {
    pub fn modify(&mut self, l: usize, r: usize, f: &T::F) {
        // implement modify
    }

    pub fn query(&mut self, l: usize, r: usize) -> T::S {
        // implement query
    }
}

pub trait ArqSpec {
    type S;
    type F: Clone;

    /// For eager updates, compose() can be unimplemented!(). For lazy updates:
    /// Require for all f,g,a: apply(compose(f, g), a) = apply(f, apply(g, a))
    fn compose(f: &Self::F, g: &Self::F) -> Self::F;
    /// For eager updates, apply() can assume to act on a leaf. For lazy updates:
    /// Require for all f,a,b: apply(f, op(a, b)) = op(apply(f, a), apply(f, b))
    fn apply(f: &Self::F, a: &Self::S) -> Self::S;
    /// Require for all a,b,c: op(a, op(b, c)) = op(op(a, b), c)
    fn op(a: &Self::S, b: &Self::S) -> Self::S;
    /// Require for all a: op(a, identity()) = op(identity(), a) = a
    fn identity() -> Self::S;
}
~~~~~↵
This version supports the previous setting of pointwise updates as well. In that case, `identity()` and `op()` must satisfy their respective monoid laws, but `apply()` can apply any arbitrary function, and `compose()` can remain unimplemented or even crash, because the implementation never calls `compose()` during a `modify()` with `l == r`.
↵
However, if we plan to do range modifications, i.e., with `l < r`, then $f$ might get applied to an internal node of the tree! To ensure consistency, we require two additional laws:↵
↵
**Composition Law:** $(f \circ g)(a) = f(g(a))$ for all $f, g \in F$, $a \in S$↵
↵
**Distributive Law:** $f(a + b) = f(a) + f(b)$ for all $f \in F$, $a, b \in S$↵
↵
The composition law implies that $F$ is a semigroup, and the distributive law ensures consistent interactions between $F$ and $S$ throughout the tree!
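Before specializing to a concrete example, here is a condensed sketch of the lazy-propagation helpers that these laws enable; the linked implementations below differ in their details. It assumes a 1-indexed complete binary tree, with `app` sized to cover only the internal nodes:

~~~~~
// A condensed sketch, not the linked implementations' exact code.
impl<T: ArqSpec> ArqTree<T> {
    /// Applies f to node i; if i is internal, also records f for its subtree.
    fn apply(&mut self, i: usize, f: &T::F) {
        let new_val = T::apply(f, &self.val[i]);
        self.val[i] = new_val;
        if i < self.app.len() {
            // compose with any update already pending at this node
            let combined = match self.app[i].take() {
                Some(g) => T::compose(f, &g),
                None => f.clone(),
            };
            self.app[i] = Some(combined);
        }
    }

    /// Pushes node i's pending update down to its two children.
    fn push(&mut self, i: usize) {
        if let Some(f) = self.app[i].take() {
            self.apply(2 * i, &f);
            self.apply(2 * i + 1, &f);
        }
    }

    /// Recomputes node i's aggregate from its two children.
    fn pull(&mut self, i: usize) {
        let combined = T::op(&self.val[2 * i], &self.val[2 * i + 1]);
        self.val[i] = combined;
    }

    // modify() and query() then recurse on node intervals, calling push()
    // on the way down and pull() on the way back up; the composition and
    // distributive laws guarantee that postponing updates this way is safe.
}
~~~~~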
↵
### Example: Range Minimum Query↵
↵
To see how to specialize this API, let's solve the following classic problem:
↵
- Given bounds $l$ and $r$, compute the minimum of $a_l, a_{l+1}, \ldots, a_r$.↵
↵
- Given bounds $l$ and $r$, and a number $f$, replace $a_i$ with $f + a_i$ for all $l \le i \le r$.↵
↵
The first monoid $S$ consists of the numerical array elements with the minimum operation. The second monoid $F$ consists of functions which add a constant: they are most conveniently represented by literally storing that constant. All four functions are one-liners:↵
↵
~~~~~↵
pub enum AddMin {}
impl ArqSpec for AddMin {
    type S = i64;
    type F = i64;
    fn compose(&f: &i64, &g: &i64) -> i64 {
        f + g
    }
    fn apply(&f: &i64, &a: &i64) -> i64 {
        f + a
    }
    fn op(&a: &i64, &b: &i64) -> i64 {
        a.min(b)
    }
    fn identity() -> i64 {
        i64::max_value()
    }
}
~~~~~↵
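Usage might then look like the following sketch; the constructor shown here is hypothetical, since this post leaves the tree's construction to the linked implementations below:

~~~~~
// Hypothetical usage; ArqTree::new(&[...]) is assumed for illustration only.
fn main() {
    let mut tree = ArqTree::<AddMin>::new(&[5, 3, 8, 2, 7]);
    tree.modify(1, 3, &10); // the array becomes [5, 13, 18, 12, 7]
    assert_eq!(tree.query(0, 4), 5); // minimum over the whole array
}
~~~~~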
↵
Note that the programmer must manually verify the four laws (only two if range updates are not used). In some cases, your operations may need access to the size or position of the subtree corresponding to the current node. This does not require an extension of the API: the monoid type $S$ can simply be made a tuple which contains this additional information. For examples of the ARQ tree in action, please see:↵
↵
[GitHub link](https://github.com/EbTech/rust-algorithms/blob/master/src/range_query/mod.rs)↵
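As one illustration of the tuple trick mentioned above, here is a sketch (not taken from the linked code) that supports range-add updates with range-sum queries, where each leaf is initialized to $(a_i, 1)$:

~~~~~
// Sketch of the tuple trick: each aggregate carries its segment's width,
// so apply() can add f once per element it covers.
pub enum AddSum {}
impl ArqSpec for AddSum {
    type S = (i64, i64); // (sum, width); initialize each leaf to (a_i, 1)
    type F = i64;
    fn compose(&f: &i64, &g: &i64) -> i64 {
        f + g
    }
    fn apply(&f: &i64, &(sum, w): &(i64, i64)) -> (i64, i64) {
        (sum + f * w, w)
    }
    fn op(&(s1, w1): &(i64, i64), &(s2, w2): &(i64, i64)) -> (i64, i64) {
        (s1 + s2, w1 + w2)
    }
    fn identity() -> (i64, i64) {
        (0, 0)
    }
}
~~~~~

Here the distributive law holds because adding $f$ to every element of a segment of width $w$ increases its sum by exactly $f \cdot w$.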
↵
### Static Implementation↵
↵
[GitHub link](https://github.com/EbTech/rust-algorithms/blob/master/src/range_query/static_arq.rs)↵
↵
To keep this blog post focused on the abstraction and general API, I'll leave the implementation details as a GitHub link. This is a binary-indexed ARQ tree implementation based on a [very cool blog post](http://codeforces.net/blog/entry/18051) by [user:Al.Cash,2019-07-25], so I recommend reading his explanations if you're interested!↵
↵
### Dynamic Implementation↵
↵
[GitHub link](https://github.com/EbTech/rust-algorithms/blob/master/src/range_query/dynamic_arq.rs)↵
↵
A dynamically allocated version of this data structure can be initialized in $O(1)$ instead of $O(n)$ time, enabling implicit representation of array sizes exceeding $10^{18}$! This data structure also has a persistence flag that can be toggled. When persistence is turned off, one should commit to one view of the data structure, and assume that all other views will be destroyed. ↵
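To give a rough idea of how $O(1)$ initialization is possible, here is a generic sketch of a lazily allocated node (not the linked implementation's actual layout): children only come into existence when an update or query first touches their range, so memory grows with the number of operations rather than with the nominal array size.

~~~~~
// Generic sketch only; the linked implementation's layout differs.
struct DynNode<T: ArqSpec> {
    val: T::S,
    app: Option<T::F>,
    left: Option<Box<DynNode<T>>>,  // allocated on first touch
    right: Option<Box<DynNode<T>>>, // allocated on first touch
}
~~~~~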
↵
### Conclusions↵
↵
This is a side project that I built in summer 2017, expanded upon in summer 2019, and only now, in 2020, have had the chance to write about. Please let me know if anything can be improved or explained more clearly :)
↵
In the "real world", self-balancing binary search trees can be augmented to handle a variety of range queries. However, for contest problems, statically allocated variants are much easier to code and usually suffice. The contest community has come to know these data structures as _segments trees_. Here, I will generalize most segment trees that you can find in the wild into one polymorphic data structure, that can easily be copy-pasted during online competitions. I will call it an _ARQ tree_. ARQ is pronounced "arc", which has a similar meaning to "segment", but also stands for "Associative Range Query". It supports highly customizable range queries, the main requirement being that the aggregation operation must be _associative_.↵
↵
### Associativity and Semigroups↵
↵
We begin with an array $a_0, a_1, a_2, \ldots, a_{n-1}$. Each $a_i$ belongs to a semigroup $(S, +)$, where
↵
**Associative Law:** $+: S \times X \rightarrow S$ satisfies $(a + b) + c = a + (b + c)$ for all $a, b, c \in S$.↵
↵
Because
↵
### The ARQ Problem↵
↵
In the Associative Range Query problem, we wish to support two types of queries:↵
↵
- Given bounds $l$ and $r$, compute the aggregate $a_l + a_{l+1} + \ldots + a_r$.↵
↵
- Given bounds $l$ and $r$, and a function $f: S \rightarrow S$, replace $a_i$ with $f(a_i)$ for all $l \le i \le r$.↵
↵
In typical instances where computing $a + b$ or $f(a)$ take $O(1)$ time, we wish to support each query in $O(\log n)$ time.↵
↵
### Identity and Monoids↵
↵
Perhaps you've heard of range queries over a monoid. A monoid is simply a semigroup with a special identity element:↵
↵
**Identity Law:** $id\in S$ satisfies $a + id = id + a = a$ for all $a \in S$.↵
↵
We represent `Semigroup` and `Monoid` using Rust `trait`s. The Rust compiler will not verify the associative and identity laws, so it's the programmer's job to check them when implementing these functions:↵
↵
~~~~~↵
trait Semigroup {↵
fn compose(&self, other: &Self) -> Self;↵
}↵
↵
trait Monoid: Semigroup {↵
fn identity() -> Self;↵
}↵
~~~~~↵
In practice, the trait bound `Monoid` turns out to be equivalent to `Semigroup + Clone`, and either of the two would suffice for our purposes. A `Semigroup` can trivially be extended into a `Monoid` by adding an identity element:↵
↵
~~~~~↵
impl<T: Semigroup + Clone> Semigroup for Option<T> {↵
fn compose(&self, other: &Self) -> Self {↵
match self {↵
Some(ref a) => match other {↵
Some(ref b) => Some(a.compose(b)),↵
None => self.clone()↵
},↵
None => other.clone()↵
}↵
}↵
}↵
↵
impl<T: Semigroup + Clone> Monoid for Option<T> {↵
fn identity() -> Self {↵
None↵
}↵
}↵
~~~~~↵
Conversely, a `Monoid` is already a `Semigroup` and can implement `Clone` via composition with the identity element:↵
↵
~~~~~↵
impl<T: Monoid> Clone for T {↵
fn clone(&self) -> Self {↵
self.compose(T::identity())↵
}↵
}↵
~~~~~↵
↵
### Arq Tree API v1: pointwise updates↵
↵
Now that we understand `Semigroup + Clone` as equivalent to `Monoid`, the choice between the two
↵
~~~~~↵
pub struct ArqTree<T> {↵
val: Vec<T>↵
}↵
↵
impl<T: Monoid> ArqTree<T> {↵
pub fn modify(&mut self, pos: usize, f: &dyn Fn(&T) -> T) {↵
// implement modify↵
}↵
↵
pub fn query(&self, l: usize, r: usize) -> T {↵
// implement query↵
}↵
}↵
~~~~~↵
I won't provide a full implementation: you may use other segment tree guides as a reference. In summary, we build a complete binary tree on top of our array. `tree.modify(pos, f)` will replace $a_{pos}$ by $f(a_{pos})$, then recompute each of the ancestors of $a_{pos}$ by applying $+$ to its two children. This will work with no restrictions on the function $f$. Its time complexity is comprised of one application of $f$ and $O(\log n)$ applications of $+$.↵
↵
### Shortcomings of the v1 API↵
↵
Our simple v1 API can't support efficient range updates! In order to update an entire range efficiently, we will need to apply $f$ lazily, storing it in internal nodes of the tree to eventually be pushed toward the leaves. If multiple updates are performed, we may have to store a composition of updates for postponed application. While one may implement a composition operation $f \circ g$ which simply first calls $g$ and then calls $f$, this makes the cost of function application no longer $O(1)$!↵
↵
Thus, we must switch from function pointers to an implicit, composable representation for $f$. The composition of "add 5" and "add 7" is not "add 5 and then 7"; rather, it's "add 12": we should store the number 12 instead of the adding functions.↵
↵
To recap, now we have a monoid $(S, +)$ of array elements, as well as a second monoid $(F, \circ)$ whose set $F \subset S\rightarrow S$ consists of the update functions that we're interested in. Why is $F$ a monoid? Well, it's easy to check that function composition is associative, making it at least a semigroup. And then, just as before, can choose whether to have `F: Monoid` or `F: Semigroup + Clone`. For $F$, I found the latter to be more ergonomic.↵
↵
However, these are not simply two independent monoids! The sets $
↵
But remember that a `struct` is just a collection of data. A `struct`'s `impl` is a collection of functions, and possibly some associated type and constants. Typically, functions inside an `impl` take a special `self` element, and are called methods, but this is not strictly necessary. So we can instead define a trait that packages the two types $S$ and $F$, alongside functions that act on these types!↵
↵
### Arq Tree API v2: range updates↵
↵
~~~~~↵
pub struct ArqTree<T: ArqSpec> {↵
app: Vec<Option<T::F>>,↵
val: Vec<T::S>,↵
}↵
↵
impl<T: ArqSpec> ArqTree<T> {↵
pub fn modify(&mut self, l: usize, r: usize, f: &T::F) {↵
// implement modify↵
}↵
↵
pub fn query(&mut self, l: usize, r: usize) -> T::S {↵
// implement query↵
}↵
}↵
↵
pub trait ArqSpec {↵
type S;↵
type F: Clone;↵
/// For eager updates, compose() can be unimplemented!(). For lazy updates:↵
/// Require for all f,g,a: apply(compose(f, g), a) = apply(f, apply(g, a))↵
fn compose(f: &Self::F, g: &Self::F) -> Self::F;↵
/// For eager updates, apply() can assume to act on a leaf. For lazy updates:↵
/// Require for all f,a,b: apply(f, op(a, b)) = op(apply(f, a), apply(f, b))↵
fn apply(f: &Self::F, a: &Self::
/// Require for all a,b,c: op(a, op(b, c)) = op(op(a, b), c)↵
fn op(a: &Self::
/// Require for all a: op(a, identity()) = op(identity(), a) = a↵
fn identity() -> Self::
}↵
~~~~~↵
This version supports the previous setting of pointwise updates as well. In that case, `identity()` and `op()` must satisfy their respective monoid laws, but `apply()` can apply any arbitrary function, and `compose()` can remain unimplemented or even crash because my `compose()` is never called during a `modify()` with `l == r`.↵
↵
However, if we plan to do range modifications, i.e., with `l < r`, then $f$ might get applied to an internal node of the tree! To ensure consistency, we require two additional laws:↵
↵
**Composition Law:** $(f \circ g)(a) = f(g(a))$ for all $f, g \in F$, $a \in S$↵
↵
**Distributive Law:** $f(a + b) = f(a) + f(b)$ for all $f \in F$, $a, b \in S$↵
↵
The composition law implies that $F$ is a semigroup, and the distributive law ensures consistent interactions between $
↵
### Example: Range Minimum Query↵
↵
To see how to specialize this API, let's see solve the following class problem:↵
↵
- Given bounds $l$ and $r$, compute the minimum of $a_l, a_{l+1}, \ldots, a_r$.↵
↵
- Given bounds $l$ and $r$, and a number $f$, replace $a_i$ with $f + a_i$ for all $l \le i \le r$.↵
↵
The first monoid $S$ consists of the numerical array elements with the minimum operation. The second monoid $F$ consists of functions which add a constant: they are most conveniently represented by literally storing that constant. All four functions are one-liners:↵
↵
~~~~~↵
pub enum AssignMin {}↵
impl ArqSpec for AddMin {↵
type S = i64;↵
type F = i64;↵
fn compose(&f: &i64, &g: i64) -> i64 {↵
f + g↵
}↵
fn apply(&f: &i64, &a: i64) -> i64 {↵
f + a↵
}↵
fn op(&a: &i64, &b: &i64) -> i64 {↵
a.min(b)↵
}↵
fn identity() -> i64 {↵
i64::max_value()↵
}↵
}↵
~~~~~↵
↵
Note that the programmer must manually verify the four laws (only two if range updates are not used). In some cases, your operations may need access to the size or position of the subtree corresponding to the current node. This does not require an extension of the API: the monoid type $S$ can simply be made a tuple which contains this additional information. For examples of the ARQ tree in action, please see:↵
↵
[GitHub link](https://github.com/EbTech/rust-algorithms/blob/master/src/range_query/mod.rs)↵
↵
### Static Implementation↵
↵
[GitHub link](https://github.com/EbTech/rust-algorithms/blob/master/src/range_query/static_arq.rs)↵
↵
To keep this blog post focused on the abstraction and general API, I'll leave the implementation details as a GitHub link. This is a binary-indexed ARQ tree implementation based on a [very cool blog post](http://codeforces.net/blog/entry/18051) by [user:Al.Cash,2019-07-25], so I recommend reading his explanations if you're interested!↵
↵
### Dynamic Implementation↵
↵
[GitHub link](https://github.com/EbTech/rust-algorithms/blob/master/src/range_query/dynamic_arq.rs)↵
↵
A dynamically allocated version of this data structure can be initialized in $O(1)$ instead of $O(n)$ time, enabling implicit representation of array sizes exceeding $10^{18}$! This data structure also has a persistence flag that can be toggled. When persistence is turned off, one should commit to one view of the data structure, and assume that all other views will be destroyed.
↵
### Conclusions↵
↵
This is a side project that I built in summer 2017, expanded upon in summer 2019, and only now in 2020 had the chance to write about. Please let me know if anything can be improved or explained more clearly :)