This blog post outlines the design of a very general data structure for associative range queries, in the Rust programming language.
In the "real world", self-balancing binary search trees can be augmented to handle a variety of range queries. However, for contest problems, statically allocated variants are much easier to code and usually suffice. The contest community has come to know these data structures as segment trees. Here, I will generalize most of the segment trees that you can find in the wild into one polymorphic data structure, that can easily be copy-pasted during online competitions. I will call it an ARQ tree. ARQ is pronounced "arc", which has a similar meaning to "segment", but also stands for "Associative Range Query". It supports highly customizable range queries, the main requirement being that the aggregation operation must be associative.
Associativity and Semigroups
We begin with an array $$$a_0, a_1, a_2, \ldots, a_{n-1}$$$. Each $$$a_i$$$ belongs to a semigroup $$$(S, +)$$$; that is, a set $$$S$$$ together with an associative binary operation $$$+$$$. In formal notation:
Associative Law: $$$+: S \times S \rightarrow S$$$ satisfies $$$a + (b + c) = (a + b) + c$$$ for all $$$a, b, c \in S$$$.
Because $$$+$$$ is associative, we can drop the parentheses without ambiguity and talk about range aggregates in the form $$$a_l + a_{l+1} + \ldots + a_r$$$.
The ARQ Problem
In the Associative Range Query problem, we wish to support two types of queries:
Given bounds $$$l$$$ and $$$r$$$, compute the aggregate $$$a_l + a_{l+1} + \ldots + a_r$$$.
Given bounds $$$l$$$ and $$$r$$$, and a function $$$f: S \rightarrow S$$$, replace $$$a_i$$$ with $$$f(a_i)$$$ for all $$$l \le i \le r$$$.
In typical instances, where computing $$$a + b$$$ or $$$f(a)$$$ takes $$$O(1)$$$ time, we wish to support each query in $$$O(\log n)$$$ time.
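To make the two operations concrete, here is a brute-force sketch (my own illustration, assuming $$$S$$$ is i64 under addition); each operation takes $$$O(n)$$$ time rather than the $$$O(\log n)$$$ we're aiming for:
// Brute-force baseline for the ARQ problem, with S = i64 and + = addition.
struct NaiveArq {
    a: Vec<i64>,
}

impl NaiveArq {
    /// Computes a[l] + a[l+1] + ... + a[r] in O(n) time.
    fn query(&self, l: usize, r: usize) -> i64 {
        self.a[l..=r].iter().sum()
    }

    /// Replaces a[i] with f(a[i]) for all l <= i <= r, in O(n) time.
    fn update(&mut self, l: usize, r: usize, f: impl Fn(i64) -> i64) {
        for x in &mut self.a[l..=r] {
            *x = f(*x);
        }
    }
}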
Identity and Monoids
Perhaps you've heard of range queries over a monoid. A monoid is simply a semigroup with a special identity element:
Identity Law: $$$id\in S$$$ satisfies $$$a + id = id + a = a$$$ for all $$$a \in S$$$.
We represent Semigroup and Monoid using Rust traits. The Rust compiler will not verify the associative and identity laws, so it's the programmer's job to check them when implementing these functions:
trait Semigroup {
    fn op(&self, other: &Self) -> Self;
}

trait Monoid: Semigroup {
    fn identity() -> Self;
}
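For instance, here's what an implementation might look like for the minimum operation on 64-bit integers (a hypothetical wrapper type, just for illustration):
// i64 under minimum: min is associative, and i64::max_value() is its identity.
#[derive(Clone, Debug, PartialEq)]
struct Min(i64);

impl Semigroup for Min {
    fn op(&self, other: &Self) -> Self {
        Min(self.0.min(other.0))
    }
}

impl Monoid for Min {
    fn identity() -> Self {
        Min(i64::max_value())
    }
}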
Equivalence
In practice, there is not much difference between a semigroup and a monoid, and either of the two would suffice for our purposes. This is because a semigroup can always be extended into a monoid by adding an identity element. In this Rust implementation, the Monoid's advantage is that it can clone (i.e., make copies of) elements by applying its operation with the identity. Thus, the trait bound Monoid turns out to be equivalent to Semigroup + Clone. To illustrate, here is the conversion from Semigroup + Clone to Monoid, using Option<T> to denote "T or the identity":
impl<T: Semigroup + Clone> Semigroup for Option<T> {
    fn op(&self, other: &Self) -> Self {
        match self {
            Some(ref a) => match other {
                Some(ref b) => Some(a.op(b)),
                None => self.clone(),
            },
            None => other.clone(),
        }
    }
}

impl<T: Semigroup + Clone> Monoid for Option<T> {
    fn identity() -> Self {
        None
    }
}
Conversely, a Monoid is already a Semigroup and can implement Clone by operating with the identity element:
// (Conceptual only: Rust's coherence rules forbid this blanket impl of the
// foreign trait Clone, but it conveys the idea.)
impl<T: Monoid> Clone for T {
    fn clone(&self) -> Self {
        self.op(&T::identity())
    }
}
ARQ API v1: Point Updates
Now that we understand Semigroup + Clone as equivalent to Monoid, the choice between the two becomes an implementation detail, with tradeoffs in performance and ergonomics depending on the application. Personally, I found it easier to work with the Monoid trait. Our first API will not support full range updates, but only point updates:
pub struct ArqTree<T> {
    val: Vec<T>,
}

impl<T: Monoid> ArqTree<T> {
    pub fn update(&mut self, pos: usize, f: &dyn Fn(&T) -> T) {
        // implement update
    }

    pub fn query(&self, l: usize, r: usize) -> T {
        // implement query
    }
}
I won't provide a full implementation: you may use other segment tree guides as a reference. In summary, we build a complete binary tree on top of our array. tree.update(pos, f) will replace $$$a_{pos}$$$ with $$$f(a_{pos})$$$, then recompute each of the ancestors of $$$a_{pos}$$$ by applying $$$+$$$ to its two children. This works with no restrictions on the function $$$f$$$. Its time complexity consists of one application of $$$f$$$ and $$$O(\log n)$$$ applications of $$$+$$$.
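For concreteness, here is one possible way to fill in those stubs. This is only a sketch, not the implementation linked later in this post; it uses the bottom-up array layout with the $$$n$$$ leaves stored at positions $$$n, \ldots, 2n-1$$$ of a length-$$$2n$$$ array, and the new() constructor is a hypothetical helper:
// Sketch of the v1 API on a bottom-up array layout: leaves at n..2n-1.
impl<T: Monoid> ArqTree<T> {
    /// Hypothetical constructor: builds the tree from initial leaf values.
    pub fn new(init: Vec<T>) -> Self {
        let n = init.len();
        let mut val: Vec<T> = (0..n).map(|_| T::identity()).chain(init).collect();
        for i in (1..n).rev() {
            val[i] = val[2 * i].op(&val[2 * i + 1]);
        }
        Self { val }
    }

    /// Replaces a[pos] by f(a[pos]): one application of f, then O(log n)
    /// recomputations along the path to the root.
    pub fn update(&mut self, pos: usize, f: &dyn Fn(&T) -> T) {
        let n = self.val.len() / 2;
        let mut i = pos + n;
        self.val[i] = f(&self.val[i]);
        while i > 1 {
            i /= 2;
            self.val[i] = self.val[2 * i].op(&self.val[2 * i + 1]);
        }
    }

    /// Aggregates a[l..=r] using O(log n) applications of op. Separate left
    /// and right accumulators keep the order correct for non-commutative ops.
    pub fn query(&self, mut l: usize, mut r: usize) -> T {
        let n = self.val.len() / 2;
        let (mut l_agg, mut r_agg) = (T::identity(), T::identity());
        l += n;
        r += n + 1;
        while l < r {
            if l % 2 == 1 {
                l_agg = l_agg.op(&self.val[l]);
                l += 1;
            }
            if r % 2 == 1 {
                r -= 1;
                r_agg = self.val[r].op(&r_agg);
            }
            l /= 2;
            r /= 2;
        }
        l_agg.op(&r_agg)
    }
}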
Shortcomings of the v1 API
Our simple v1 API can't support efficient range updates! In order to update an entire range efficiently, we will need to apply $$$f$$$ lazily, storing it in internal nodes of the tree to eventually be pushed toward the leaves. If multiple updates are performed, we may have to store a composition of updates for postponed application. While one may implement a composition operation $$$f \circ g$$$ which simply first calls $$$g$$$ and then calls $$$f$$$, this makes the cost of function application no longer $$$O(1)$$$!
Thus, we must switch from function pointers to an implicit, composable representation for $$$f$$$. The composition of "add 5" and "add 7" is not "add 5 and then 7"; rather, it's "add 12": we should store the number 12 instead of the adding functions.
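As a tiny illustration of the difference (my own example): composing boxed closures piles up indirection, while the implicit "store the added constant" representation composes in $$$O(1)$$$:
// Explicit composition: each pending update wraps another closure, so applying
// the composite costs time proportional to the number of compositions.
let f: Box<dyn Fn(i64) -> i64> = Box::new(|a| a + 5);
let g: Box<dyn Fn(i64) -> i64> = Box::new(|a| a + 7);
let fg = Box::new(move |a| f(g(a)));
assert_eq!(fg(100), 112);

// Implicit representation: "add c" is just the number c, and composition is
// ordinary addition, so composing and applying both stay O(1).
let (f, g) = (5i64, 7i64);
let fg = f + g; // "add 12"
assert_eq!(100 + fg, 112);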
To recap, we now have a monoid $$$(S, +)$$$ of array elements, as well as a second monoid $$$(F, \circ)$$$ whose set $$$F \subset (S\rightarrow S)$$$ consists of the update functions that we're interested in. Why is $$$F$$$ a monoid? Well, it's easy to check that function composition is associative, making it at least a semigroup. And then, just as with $$$S$$$, we can choose whether to have F: Monoid or F: Semigroup + Clone. For $$$F$$$, I found the latter to be more ergonomic.
However, these are not simply two independent monoids! The sets $$$S$$$ and $$$F$$$ interact, with functions from $$$F$$$ acting on elements of $$$S$$$ to produce the newly updated elements of $$$S$$$. While we're at it, I'm actually not too happy with the Semigroup and Monoid traits. There's more than one way for a type, say 32-bit integers, to be a monoid: the operation could be addition, multiplication, minimum, maximum, leftmost non-identity value, etc. With this design, we'd have to wrap our i32s in distinct wrappers for each Monoid implementation, and that's ugly.

But remember that a struct is just a collection of data. A struct's impl block is a collection of functions (and possibly some associated types and constants). Typically, functions inside an impl block take a special self argument and are called methods, but this is not strictly necessary. So we can instead define a trait that packages the two types $$$S$$$ and $$$F$$$, alongside functions that act on these types!
ARQ API v2: Range Updates
We scrap the Semigroup and Monoid traits, and instead define:
pub trait ArqSpec {
    type S;
    type F: Clone;
    /// Require for all a,b,c: op(a, op(b, c)) = op(op(a, b), c)
    fn op(a: &Self::S, b: &Self::S) -> Self::S;
    /// Require for all a: op(a, identity()) = op(identity(), a) = a
    fn identity() -> Self::S;
    /// For eager updates, compose() can be unimplemented!(). For lazy updates:
    /// Require for all f,g,a: apply(compose(f, g), a) = apply(f, apply(g, a))
    fn compose(f: &Self::F, g: &Self::F) -> Self::F;
    /// For eager updates, apply() can assume to act on a leaf. For lazy updates:
    /// Require for all f,a,b: apply(f, op(a, b)) = op(apply(f, a), apply(f, b))
    fn apply(f: &Self::F, a: &Self::S) -> Self::S;
}
pub struct ArqTree<T: ArqSpec> {
    val: Vec<T::S>,
    app: Vec<Option<T::F>>,
}

impl<T: ArqSpec> ArqTree<T> {
    pub fn update(&mut self, l: usize, r: usize, f: &T::F) {
        // implement update
    }

    pub fn query(&mut self, l: usize, r: usize) -> T::S {
        // implement query
    }
}
This version still supports the previous setting of point updates. In that case, op() and identity() must satisfy their respective monoid laws, but apply() can apply any arbitrary function, and compose() can remain unimplemented or even crash, because updates with l == r will never call compose().
However, if we plan to do range updates, i.e., with l < r, then we must be prepared to apply $$$f$$$ to internal nodes of the tree! To ensure consistency, we require two additional laws:
Composition Law: $$$(f \circ g)(a) = f(g(a))$$$ for all $$$f, g \in F$$$, $$$a \in S$$$
Distributive Law: $$$f(a + b) = f(a) + f(b)$$$ for all $$$f \in F$$$, $$$a, b \in S$$$
The composition law implies that $$$F$$$ is a semigroup, and the distributive law ensures consistent interactions between $$$S$$$ and $$$F$$$ throughout the tree!
Example: Range Minimum Query
To see how to specialize this API, let's use it to solve the following classical problem:
Given bounds $$$l$$$ and $$$r$$$, compute the minimum of $$$a_l, a_{l+1}, \ldots, a_r$$$.
Given bounds $$$l$$$ and $$$r$$$, and a number $$$f$$$, replace $$$a_i$$$ with $$$f + a_i$$$ for all $$$l \le i \le r$$$.
The first monoid $$$S$$$ consists of the numerical array elements with the minimum operation. The second monoid $$$F$$$ consists of functions which add a constant: their composition simply sums their respective constants. Thus, elements of $$$F$$$ are most conveniently represented by literally storing the constant in question. All four functions are one-liners:
pub enum RMQ {}

impl ArqSpec for RMQ {
    type S = i64;
    type F = i64;
    fn op(&a: &i64, &b: &i64) -> i64 {
        a.min(b)
    }
    fn identity() -> i64 {
        i64::max_value()
    }
    fn compose(&f: &i64, &g: &i64) -> i64 {
        f + g
    }
    fn apply(&f: &i64, &a: &i64) -> i64 {
        f + a
    }
}
// Instantiate with:
let mut rmq_tree = StaticArq::<RMQ>::new(&vec);
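And here is a hypothetical usage sketch (the update and query signatures follow the v2 API above):
// Hypothetical usage sketch, assuming update(l, r, &f) and query(l, r).
let vec = vec![10, 3, 7, 1, 5];
let mut rmq_tree = StaticArq::<RMQ>::new(&vec);
rmq_tree.update(1, 3, &4); // add 4 to each of a_1, a_2, a_3
assert_eq!(rmq_tree.query(0, 2), 7); // min(10, 3+4, 7+4) = 7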
Note that the programmer must manually verify the four laws (only two if range updates are not used). In some cases, your operations may need access to the size or position of the subtree corresponding to the current node. This does not require an extension of the API: the monoid type $$$S$$$ can simply be made a tuple which contains this additional information. For examples of the ARQ tree in action, please see: ARQ tree example usage on GitHub
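For instance, here is a hedged sketch of that tuple trick (my own example, not taken from the linked repository): range-add updates with range-sum queries, where each element of $$$S$$$ carries the width of its subtree so that apply() knows how many leaves $$$f$$$ acts on:
// Range-add updates, range-sum queries: S stores (sum, width of the subtree).
pub enum AddSum {}

impl ArqSpec for AddSum {
    type S = (i64, i64); // (sum, width)
    type F = i64;        // "add f to every element"
    fn op(&(s1, w1): &Self::S, &(s2, w2): &Self::S) -> Self::S {
        (s1 + s2, w1 + w2)
    }
    fn identity() -> Self::S {
        (0, 0)
    }
    fn compose(&f: &i64, &g: &i64) -> i64 {
        f + g
    }
    fn apply(&f: &i64, &(s, w): &Self::S) -> Self::S {
        (s + f * w, w) // distributive: f*(w1+w2) = f*w1 + f*w2
    }
}
// Leaves would be initialized to (a_i, 1).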
Static Implementation: ARQBIT
To keep this blog post focused on the abstraction and general API, I left the implementation details here as a GitHub link. Indeed, the key advantage of these abstractions is that I almost never have to think about segment tree code! I only have to worry about ensuring that my custom operations satisfy the four laws.
If you're interested in the details, this is a statically allocated binary-indexed ARQ tree with lazy propagation, which I like to call an ARQBIT. It's more heavy-weight than a standard BIT, but works on general semigroups. It's based on a very cool blog post by Al.Cash that you can check out for a better explanation!
Dynamic Implementation: Sparsity and Persistence
A dynamically allocated version of this data structure can initialize its leaves (potentially more than $$$10^{18}$$$ of them!) to the identity in $$$O(1)$$$ time, using a lazy construction. It supports some splitting and merging operations, as well as persistence. Most of its methods require an ArqView parameter, which determines which node to treat as the root of the tree. When the is_persistent flag is turned on, previously generated ArqView objects remain valid and immutable, thus preserving access to all earlier states of the data structure. When the flag is turned off, only the most recently generated ArqView should be considered valid, while the others may be destroyed.
Advanced Usage with push() and pull()
Typically, the data structure is only updated (and new ArqViews generated) by calls to update(). However, advanced users may directly make use of push()/pull() to dig inside the tree. For example, suppose we want the first (i.e., leftmost) negative element in the array. One approach is to binary search down from the root of an RMQ tree. Example binary search functions are provided for each of the static and dynamic implementations above. Here, we focus on some general aspects of the dynamic implementation.
Since changes are lazily propagated, only the root node is valid for read/write access at first. We gain access to its subtrees as follows:
let (lchild_view, rchild_view) = arq_tree.push(root_view);
// If we make changes to either or both child subtrees, we must pull them before accessing the root again:
arq_tree.pull(root_view);
The static and dynamic implementation files above demonstrate how this works for the binary search example. Since these functions leave the underlying array unchanged, they don't need to pull().
Once in a while, you'll come across a problem where you need range updates but can't satisfy the distributive law, not even if you store additional information such as subtree size and position. You might need custom break/tag conditions with intricate runtime analyses. In such cases, the provided algorithms will no longer function as-is. You may have to implement your own version of query() and/or update(). Nonetheless, the push()/pull() API may reduce the amount of work you have to do.
Conclusions
This is a side project that I built in summer 2017, expanded upon in summer 2019, and only now in 2020 had the chance to write about. Please let me know if you'd like something to be explained in more detail :)
Should Rust be used in CP?
I made the case for Rust in CP in a previous blog post. Obviously not everyone agrees: it takes a while to learn, and restricts you from doing unsafe things. On the other hand, I feel it expresses one's ideas very naturally, encourages good programming style, and it's surprisingly good at catching most bugs, while making the rest easier to spot! It's efficient, too.
I've probably written more Rust CP code than anyone, so feel free to use some of my more recent code as a template. At the very least, I proved that it's possible to perform at an orange level in Rust. Now let's see if someone can turn it up to red :P
A bit of a necropost, but thanks for this write-up! I've been kinda experimenting with porting some of my Kotlin DSA libraries to Rust and this was really helpful for me to get used to how Rust handles generics.
One thing I noticed is that it's sometimes helpful to change the Spec trait to accept &self for the oracle functions (e.g. fn op(&self, a: &S, b: &S) -> S;), and for the segment tree to hold a copy of the Spec. Most of the time, this makes little difference except that they should be implemented on a ZST / zero-sized type (struct RMQ;) instead of an uninhabited enum, but occasionally, this allows the oracle to depend on precomputed data (or even memoizable data if you put the mutable state in a RefCell). I can even implement the trait on a &MySpec instead of a MySpec if I need to share that precomputed data between several segment trees or other structures. Sure, there are other ways to do that, e.g. static objects, but Rust tends to hate those and require either unneeded thread-safety (from the point of view of remotely-run competitive programming) or clunky workarounds, like the horrible and unsound IO code in my examples below, lol. Certainly not something I'd like to write or modify mid-contest.
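A minimal sketch of that &self-based variant (hypothetical trait and type names, not the actual library API; the linked submissions below show the real thing):
// An &self-based spec: the tree would hold a copy of the spec, so op() can
// read runtime data such as a modulus chosen after reading the input.
pub trait ArqSpecWithState {
    type S;
    fn op(&self, a: &Self::S, b: &Self::S) -> Self::S;
    fn identity(&self) -> Self::S;
}

#[derive(Clone)]
pub struct ModSum {
    modulus: u64,
}

impl ArqSpecWithState for ModSum {
    type S = u64;
    fn op(&self, a: &u64, b: &u64) -> u64 {
        (a + b) % self.modulus
    }
    fn identity(&self) -> u64 {
        0
    }
}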
Examples (the most interesting code is at the bottom of these):
128169711 — implemented on ZST
AtCoder submission — precomputed powers of ten
128187512 — precomputed Fibonacci numbers, and helper functions