sparsetable's blog

By sparsetable, history, 3 days ago, In English

TL;DR: I shuffled the array, got 100 points, and have no idea why.
(Every instance of BOI here refers to the Baltic Olympiad in Informatics.)

I was working on BOI23 d1p2 yesterday, and came up with the following solution for 50 pts:

spoiler (50-point solution code, collapsed)

Here, there were two obvious optimizations:

  1. Since I only need the value of $$$\min(b, c)$$$ when $$$\min(a, c) = \min(a, b)$$$, I can throw out some unnecessary queries

  2. Might as well randomize the array since it might reduce such occurrences, idk ¯\_(ツ)_/¯

code (collapsed)
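Since the submission above is collapsed, here is a minimal hypothetical sketch of how the two optimizations could fit together. Nothing below is from the actual code: `query(i, j)` is an assumed stand-in for the interactor's $$$\min$$$ query (faked with a local array so the snippet compiles), and the candidate bookkeeping is only schematic.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical sketch only -- NOT the hidden submission above.
// query(i, j) stands in for the interactor call returning
// min(a_i, a_j); it is faked with a local array here so the
// snippet is self-contained.
vector<int> hidden;
int query(int i, int j) { return min(hidden[i], hidden[j]); }

void solve(int n) {
    vector<int> order(n);
    iota(order.begin(), order.end(), 0);

    // Optimization 2: visit the indices in a random order.
    mt19937 rng(chrono::steady_clock::now().time_since_epoch().count());
    shuffle(order.begin(), order.end(), rng);

    int a = order[0]; // current candidate index
    for (int k = 1; k + 1 < n; k += 2) {
        int b = order[k], c = order[k + 1];
        int ab = query(a, b), ac = query(a, c);
        if (ab == ac) {
            // Optimization 1: min(b, c) is only needed when
            // min(a, b) == min(a, c), i.e. when a is the smallest
            // of the three; otherwise the third query is skipped.
            int bc = query(b, c);
            (void)bc; // ...would be used to pick the next candidate...
        }
        // ...update a / record answers as in the real solution...
    }
}

int main() {
    hidden = {3, 1, 4, 2, 8, 9, 5, 6, 7}; // toy hidden array
    solve((int)hidden.size());
}
```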

And to my surprise... it got 100 points!

Looking at the official code, this seems to be the intended solution. However, even after thinking about it, I'm not really sure why it works. Why is shuffling the array so effective? (Applying optimization 1 without 2 only gets you 58 points.)


»
2 days ago, # |

After thinking some more, this is what I came up with:

We only query the pair $$$(b,c)$$$ when $$$a < b$$$ and $$$a < c$$$. So basically, in every pair the minimum element has a lifetime: it lives until a bigger element is encountered (though this model disregards the fact that we also need the minimum element to be $$$a$$$ to spend an additional query). Now our question is: if we iterate over the array, keeping a cumulative maximum as we go, how many times do we expect this cumulative maximum to change?

Here, I imagined the process from back to front. Let's also pretend the numbers are a permutation of $$$[1, 2, \ldots, n]$$$. Once we encounter $$$n$$$ (the largest element), the max will never change again. The expected position of this element is $$$\frac{n+1}{2}$$$, so we expect our range to be reduced to about half its original size. Then we look at $$$n-1$$$: if it is positioned after $$$n$$$, it is useless and we can skip it; otherwise, since we know it appears earlier than $$$n$$$, we once again expect it to halve the range. Obviously, since randomness is involved, it will almost never halve the range exactly, but this is the behavior we expect on average, so the cumulative max should change around $$$\log_2 n$$$ times. An additional 25 queries should then be more than enough, since $$$1500 < 2^{11}$$$ and we also need $$$a < b$$$ to hold whenever we encounter such a "maximum" (I think that provides an additional factor of $$$\frac{1}{2}$$$?).

Though obviously, this is mostly intuition-based and not rigorous at all.
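As a sanity check, here is a minimal standalone simulation sketch (assuming nothing beyond the standard library): shuffle a permutation of $$$[1..n]$$$ and count how often the prefix maximum changes. For $$$n = 1500$$$ the average lands near the harmonic number $$$H_{1500} \approx 7.9$$$, well within the budget discussed above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Empirical check: how often does the prefix maximum change in a
// shuffled permutation of [1..n]? Theory says the expectation is
// the harmonic number H_n ~= ln n + 0.577 (about 7.9 for n = 1500).
int main() {
    const int n = 1500, trials = 100000;
    mt19937 rng(12345); // fixed seed for reproducibility
    vector<int> p(n);
    iota(p.begin(), p.end(), 1);

    long long total = 0;
    for (int t = 0; t < trials; ++t) {
        shuffle(p.begin(), p.end(), rng);
        int best = 0, changes = 0;
        for (int x : p)
            if (x > best) { best = x; ++changes; }
        total += changes;
    }
    printf("average prefix-max changes: %.3f\n", double(total) / trials);
}
```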

  • »
    »
    2 days ago, # ^ |

    Approaches from other people are still welcome :)

»
2 days ago, # |
Rev. 2

It is semi-well-known that the expected number of times the prefix maximum changes in a shuffled array is $$$O(\log N)$$$. See Blogewoosh #6 for proofs.
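Concretely, the standard indicator-variable argument: the element at position $$$i$$$ of a shuffled array becomes a new prefix maximum exactly when it is the largest among the first $$$i$$$ elements, which happens with probability $$$\frac{1}{i}$$$. By linearity of expectation, the expected number of changes is $$$\sum_{i=1}^{N} \frac{1}{i} = H_N = O(\log N)$$$.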