pavook's blog

By pavook, 2 years ago, In English

Sometimes I wonder which implementation of a segment tree to use. Usually I just handwave that one of them will do, and in most cases that works.

But today I decided to back up the choice with some data, so I ran benchmarks against four implementations:

  • Simple recursive divide-and-conquer implementation (a generic sketch of the recursive approach is given right after this list)
    Code
  • Optimized recursive divide-and-conquer, which does not descend into a priori useless branches
    Code
  • Non-recursive implementation (credits: https://codeforces.net/blog/entry/18051); a minimal sketch of this variant is given below, after the interface description
    Code
  • Fenwick Tree
    Code
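To make the comparison concrete, here is a minimal sketch of a recursive divide-and-conquer sum tree in the spirit of the first two variants. It is not the benchmarked code (that is hidden in the "Code" spoilers above), just an illustration of the structure; the element type and the 4n array layout are assumptions.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of a recursive sum segment tree; the benchmarked
// implementations may differ in layout and details.
struct RecSegTree {
    int n;
    std::vector<int64_t> t;  // 1-based heap layout, 4n nodes is enough

    explicit RecSegTree(int n) : n(n), t(4 * n, 0) {}

    // add(i, x): add x to the element at index i
    void add(int i, int64_t x) { add(1, 0, n, i, x); }

    // get(l, r): sum on the half-interval [l, r)
    int64_t get(int l, int r) { return get(1, 0, n, l, r); }

private:
    void add(int v, int vl, int vr, int i, int64_t x) {
        if (vr - vl == 1) { t[v] += x; return; }
        int vm = (vl + vr) / 2;
        if (i < vm) add(2 * v, vl, vm, i, x);
        else        add(2 * v + 1, vm, vr, i, x);
        t[v] = t[2 * v] + t[2 * v + 1];
    }

    int64_t get(int v, int vl, int vr, int l, int r) {
        if (r <= vl || vr <= l) return 0;     // no overlap with [l, r)
        if (l <= vl && vr <= r) return t[v];  // node fully inside [l, r)
        int vm = (vl + vr) / 2;
        return get(2 * v, vl, vm, l, r) + get(2 * v + 1, vm, vr, l, r);
    }
};
```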

All implementations support two operations:

  • get(l, r): get sum on a segment (half-interval) $$$[l; r)$$$
  • add(i, x): add $$$x$$$ to the element by index $$$i$$$
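For reference, here is also a minimal sketch of the non-recursive variant with exactly this interface, in the style of the linked blog entry. Again, this is an illustration under the same assumptions (64-bit sum elements), not the benchmarked code.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of an iterative (non-recursive) sum segment tree.
struct IterSegTree {
    int n;
    std::vector<int64_t> t;  // t[n..2n-1] are leaves, t[1..n-1] are internal nodes

    explicit IterSegTree(int n) : n(n), t(2 * n, 0) {}

    // add(i, x): add x to the element at index i and to all of its ancestors
    void add(int i, int64_t x) {
        for (i += n; i > 0; i >>= 1) t[i] += x;
    }

    // get(l, r): sum on the half-interval [l, r)
    int64_t get(int l, int r) {
        int64_t res = 0;
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res += t[l++];
            if (r & 1) res += t[--r];
        }
        return res;
    }
};
```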

Here are the results:

Note: I deliberately did not apply any addition-specific optimizations, so that the data structures could be reused, with minor modifications, for any other supported operation.

I generated queries the following way:

  • Update: generate a random index (rnd() % size) and a random value to add
  • Query: first generate a random length (rnd() % size + 1), then generate a left border that fits that length (a sketch of this generator is given below)
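A hypothetical sketch of this generation scheme (the RNG, the seed and the value range for updates are assumptions; the generator actually used in the benchmark may differ):

```cpp
#include <cstdint>
#include <random>

std::mt19937 rnd(12345);  // the seed here is arbitrary

struct Update { int i; int64_t x; };
struct Query  { int l, r; };  // half-interval [l, r)

Update random_update(int size) {
    // random index plus some random value; the value range is an assumption
    return { int(rnd() % size), int64_t(rnd() % 1000) };
}

Query random_query(int size) {
    int len = int(rnd() % size) + 1;        // random length in [1, size]
    int l = int(rnd() % (size - len + 1));  // left border so that [l, l + len) fits
    return { l, l + len };
}
```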

Benchmarking source code. Note that ideally you should disable frequency scaling, close any applications that might disrupt the benchmark (basically, the more you close, the better) and pin the benchmark process to a single CPU core.

Plot-generating Python script

My benchmarking data in case you want to do some more plotting/comparing.

I compiled the benchmark with #pragma GCC optimize("O3") on GNU GCC 11.3.0, and ran it with the CPU frequency fixed at 2.4 GHz, the process pinned to a single CPU core and the desktop environment closed.

This is probably my first useful contribution so any suggestions/improvements are welcome.


»
2 years ago, # |

All good, but the benchmark has one error: Fenwick != segment tree

  • »
    »
    2 years ago, # ^ |

    Although that's technically true, the Fenwick tree and the non-recursive segment tree are similar both in structure and in performance. It's also a frequent "dilemma" whether to implement a Fenwick tree or a segment tree, so including it felt appropriate.
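    For comparison, here is a minimal Fenwick tree sketch with the same get/add interface (not the benchmarked code). Structurally it is also a flat array walked with bit tricks, which is where the similarity argument comes from.

    ```cpp
    #include <cstdint>
    #include <vector>

    // Hypothetical Fenwick (binary indexed) tree sketch with the same interface.
    struct Fenwick {
        int n;
        std::vector<int64_t> t;  // 1-based internal indexing

        explicit Fenwick(int n) : n(n), t(n + 1, 0) {}

        // add(i, x): add x to the element at 0-based index i
        void add(int i, int64_t x) {
            for (++i; i <= n; i += i & -i) t[i] += x;
        }

        // get(l, r): sum on the half-interval [l, r)
        int64_t get(int l, int r) const { return prefix(r) - prefix(l); }

    private:
        // sum of the prefix [0, i)
        int64_t prefix(int i) const {
            int64_t res = 0;
            for (; i > 0; i -= i & -i) res += t[i];
            return res;
        }
    };
    ```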

    • »
      »
      »
      2 years ago, # ^ |

      I do not consider the Fenwick tree and the non-recursive segment tree to be similar in structure.

    • »
      »
      »
      2 years ago, # ^ |

      Even if that's true, you still can't perform many types of complex operations with a Fenwick tree, so imho the Fenwick tree is quite useless for regular contests... Anyway, thanks for the blog, it was really interesting :)

      • »
        »
        »
        »
        2 years ago, # ^ |

        A Fenwick tree is useful when the TL is tight, or (if the task allows it) when writing a segment tree would take too long.

»
2 years ago, # |

I think a pointer-based segment tree is missing.

»
2 years ago, # |

Thanks for the work you have already done!

However, in my opinion, it doesn't provide much useful information. It's more a toy research project, where you eventually learn different segment tree implementations, than a serious benchmark, because at a glance it simply says something like recursive > non-recursive > Fenwick, as expected.

To improve it, I list several possible directions here:

  • The asymptotic complexity is $$$O(n \log n)$$$; its derivative $$$\log n + 1$$$ is positive and increasing (the second derivative is $$$1/n > 0$$$), so the curve should be convex. However, the plot counter-intuitively shows a concave curve. I suggest plotting $$$\mathrm{time} / (n \log n)$$$ or using a log-scaled $$$n$$$-axis, which may either produce a more plausible picture or reveal a serious drawback of the existing plots.
  • Try to generate more test suites instead of just randomly generated data. For example, is there a carefully crafted input that causes a significant performance drop, for instance by introducing an unexpectedly high ratio of cache misses?
  • Try to compare different segment tree operations besides simple addition. How does the complexity of the basic operation affect the overall speed?
  • Simply fixing the CPU frequency and turning off the graphical environment is not necessarily sufficient. First of all, what's your CPU model, and what instruction set does it use? Did it run in full-power mode? Did you pin the process to a CPU core to avoid context switching? We also know that a segment tree performs a large number of memory accesses, so you should provide information about memory as well. I think it would be better to break the measurement down into CPU computation and memory access, and to carefully measure metrics like cache misses.
  • »
    »
    2 years ago, # ^ |

    Thank you very much for the suggestions.

    1. I think L2/L3 cache sizes had an impact on this result, along with other things like branch misses. Here are the plots divided by $$$\log n$$$:
      Plots

      Notably, non-recursive query has a remarkably constant constant :).

      The sudden jumps you can see in the update constant plot (at $$$N \approx 275000$$$ for the recursive implementations and at $$$N \approx 450000$$$ for the non-recursive one) align quite nicely with the trees beginning not to fit into my 8 MB L3 cache (a rough size estimate is given after this list).

      I couldn't figure anything else special about the plots, so any help/ideas would be appreciated.

    2. I don't even know how to approach this suggestion. I'm sure it's very hard to find performance-heavy tests purely at random, and evolutionary or simulated annealing approaches are beyond me.
    3. I think I will do this eventually. I'll try to post updates somewhere here.
    4. CPU model: Intel(R) Core(TM) i5-1135G7. I did run it in full-power mode (perf-bias set to 0 and perf-policy set to performance). I reran the benchmark with the process pinned to a CPU core using taskset; thank you for this advice.

      About cache misses and other advanced metrics: I feel like that information would be quite a pain in the butt to collect. I don't know how to measure those metrics except with tools like perf, but if I'm going to use perf or something similar, I'll need to run all the instances separately and collect the results separately. That would really blow up the complexity of running the benchmark.
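    As a rough back-of-the-envelope check of the cache explanation in point 1 (assuming 8-byte elements and the usual array sizes, both of which may differ from the benchmarked code): a non-recursive tree stores $$$2N$$$ values, so at $$$N \approx 450000$$$ it occupies about $$$2 \cdot 450000 \cdot 8 \approx 7.2$$$ MB, while a recursive tree with a $$$4N$$$ array at $$$N \approx 275000$$$ occupies close to $$$8.8$$$ MB; both are right around the 8 MB L3 capacity.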
