sparsetable's blog

By sparsetable, 3 days ago, In English

TL;DR: I shuffled the array, got 100 points, and have no idea why.
(every instance of BOI here refers to the Baltic Olympiad in Informatics.)

I was working on BOI23 d1p2 yesterday, and came up with the following solution for 50 pts:

spoiler

Here, there were 2 obvious optimizations:

  1. Since the value of $$$\min(b, c)$$$ is only needed when $$$\min(a, c) = \min(a, b)$$$, we can throw out some unnecessary queries.

  2. Might as well randomize the array since it might reduce such occurrences, idk ¯\_(ツ)_/¯
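
Just to illustrate optimization 2 in isolation — this is only a sketch, under the assumption that "randomizing the array" means visiting the positions in a random order, and the helper name is made up; the actual query logic stays in the spoiler below:

    // Sketch of optimization 2 only: visit the positions in a random order.
    // The query logic itself is problem-specific and lives in the spoiler below.
    #include <algorithm>
    #include <chrono>
    #include <numeric>
    #include <random>
    #include <vector>

    std::vector<int> shuffled_order(int n) {
        std::vector<int> order(n);
        std::iota(order.begin(), order.end(), 0);  // 0, 1, ..., n - 1
        std::mt19937 rng(std::chrono::steady_clock::now().time_since_epoch().count());
        std::shuffle(order.begin(), order.end(), rng);
        return order;
    }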

code

And to my surprise...it got 100 points!

Looking at the official code, this seems to be the intended solution. However, even after thinking about it, I'm not really sure why it works. Why is shuffling the array so effective? (Applying only optimization 1, without optimization 2, gets you 58 points.)


»
41 hours ago, # |

After thinking some more, this is what I came up with:

We only query the pair $$$(b, c)$$$ when $$$a < b, c$$$. So essentially, in every pair the smaller element has a lifetime — it lives until a bigger element is encountered (though this model disregards the fact that the smaller element also needs to be $$$a$$$ for an additional query to be spent). So the question becomes: if we iterate over the array while keeping a cumulative maximum, how many times do we expect this cumulative maximum to change?

Here, I imagined the process from back to front. Let's also pretend the numbers are a permutation of $$$[1, 2, \ldots, n]$$$. Once we encounter $$$n$$$ (the largest element), the maximum will never change again. The expected position of this element is $$$\frac{n+1}{2}$$$, so we expect our range to be reduced to about half its original size. Then we look at $$$n-1$$$: if it is positioned after $$$n$$$, it is irrelevant and we can skip it; otherwise, since we know it appears earlier than $$$n$$$, we again expect it to halve the remaining range. Obviously, since randomness is involved, it will almost never halve the range exactly, but this is the behavior we expect on average. So an additional 25 queries should be more than enough, since $$$1500 < 2^{11}$$$, and we also need $$$a < b$$$ to hold whenever we encounter such a "maximum" (I think that provides an additional factor of $$$\frac{1}{2}$$$?).

Though obviously, this is mostly intuition based and not rigorous at all.
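
For what it's worth, here is a quick standalone experiment (not part of the solution) that estimates how many times the prefix maximum of a shuffled permutation of size 1500 changes; the average comes out around $$$H_{1500} \approx 7.9$$$:

    // Estimate the expected number of prefix maxima (records) in a random
    // permutation of size n.  Theory says it is H_n = 1 + 1/2 + ... + 1/n ~ ln n.
    #include <algorithm>
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <vector>

    int main() {
        const int n = 1500, trials = 10000;
        std::mt19937 rng(12345);
        std::vector<int> a(n);
        std::iota(a.begin(), a.end(), 1);

        long long total = 0;
        for (int t = 0; t < trials; t++) {
            std::shuffle(a.begin(), a.end(), rng);
            int records = 0, mx = 0;
            for (int x : a)
                if (x > mx) { mx = x; records++; }  // prefix maximum changed
            total += records;
        }
        std::cout << "average records: " << double(total) / trials << '\n';  // ~7.9
        return 0;
    }

Under this model, the "expensive" positions are a subset of these records, so 25 extra queries leave plenty of slack.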

    »
    »
    41 hours ago, # ^ |

    Approaches from other people are still welcome :)

»
37 hours ago, # |

It is semi-well-known that the expected number of times the prefix maximum changes in a uniformly shuffled array is $$$O(\log N)$$$. See Blogewoosh #6 for a proof.
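
For reference, the standard argument: by symmetry, the $$$i$$$-th element of a uniformly shuffled array is the largest among the first $$$i$$$ elements with probability $$$\frac{1}{i}$$$, so by linearity of expectation the expected number of prefix maxima is

$$$\sum_{i=1}^{N} \frac{1}{i} = H_N = \ln N + O(1),$$$

which is about $$$7.9$$$ for $$$N = 1500$$$.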