shash42's blog

By shash42, history, 7 years ago, In English

I have read in various posts on Codeforces (some very old, some more recent) that amortized-complexity algorithms lose their amortized guarantees when we perform rollbacks. An example: this comment by Retrograd.

By rollbacks I mean storing the changes in some data structure and reverting back to a previous state when required.

While I would love a general answer, I am specifically asking about DSU rollbacks with path compression. Is there any particular reason why it should not work? Intuitively, going back to the old state takes just as many operations as it took to reach the new state in the first place.

Also, some might point out that between a union and the moment we roll back to it, there could be a hundred-odd queries that cause path compression. We lose all of that compression after the rollback and have to recompute it for further queries. However, I am not sure this really messes up the complexity: every query is still of the same complexity, isn't it? Note that this is about rollbacks, not persistent DSU.

While I know union by size vs. path compression is just N log N vs. N·α(N), there's this particular problem that got me interested: Codechef — Chef and Graph Queries. I had implemented an N√N log N solution with Mo's algorithm and DSU rollbacks with union by size, which TLEs despite many small optimizations. On the other hand, the tester's solution does path compression with rollbacks. (Note: I am aware that a Mo's + DSU solution without any rollbacks also exists, and it is quite beautiful, but the question isn't about that.)

I am attaching a rough implementation just for the idea:

Code
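(The attached code is collapsed above. As a stand-in, here is a minimal sketch of the union-by-size rollback DSU the post talks about; it is an illustration written for this write-up, not the hidden implementation, and the names are arbitrary.)

```cpp
#include <bits/stdc++.h>
using namespace std;

// Rollback DSU with union by size only (no path compression): each union
// changes at most two array entries, and those entries are pushed onto a
// history stack so the union can be undone exactly.
struct RollbackDSU {
    vector<int> parent, sz;
    // (address, old value) for every modified entry; the pointers stay valid
    // because parent/sz are never resized after construction.
    vector<pair<int*, int>> history;

    RollbackDSU(int n) : parent(n), sz(n, 1) { iota(parent.begin(), parent.end(), 0); }

    // No path compression, so find is O(log n) worst case thanks to union by size.
    int find(int v) const {
        while (parent[v] != v) v = parent[v];
        return v;
    }
    bool unite(int a, int b) {
        a = find(a), b = find(b);
        if (a == b) return false;
        if (sz[a] < sz[b]) swap(a, b);
        history.push_back({&sz[a], sz[a]});
        history.push_back({&parent[b], parent[b]});
        sz[a] += sz[b];
        parent[b] = a;
        return true;
    }
    int snapshot() const { return (int)history.size(); }
    void rollback(int snap) {                 // undo everything after the snapshot
        while ((int)history.size() > snap) {
            *history.back().first = history.back().second;
            history.pop_back();
        }
    }
};
```

Rolling back X unions this way touches exactly the entries those unions wrote, which is the "2*X" intuition from the post; the question is what changes once path compression also starts writing entries during find.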

To sum it up: if we are really just reversing X operations, why would the total cost be more than 2*X? Why do amortized algorithms fail under rollbacks? Thanks.

  • +36

»
7 years ago, # | +42

If you revert to the old state, you may be asked to do all the work you've already done all over again. Most amortization arguments say something like: "this value can only increase while we perform operations, and in the end it is linear in N, therefore we have performed no more than N units of work, therefore we have constant amortized time". But if a rollback is performed, we are basically "going back in time", and not for free.

Say, a typical vector implementation keeps a buffer of 2^k elements, so every push_back either writes into the existing buffer right away or doubles the buffer (which takes linear time, since all elements have to be reallocated). If the vector only grows, that is linear time in total (hence constant amortized time per push). But once rollback (i.e. shrinking the buffer back) is allowed, one can repeat "rollback / push_back" many times right at a doubling boundary, and each iteration takes linear time, yielding quadratic time.
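A toy illustration of that pathology, assuming a hypothetical vector that also shrinks its buffer eagerly on rollback (std::vector itself never shrinks its capacity on pop_back, so this is a deliberately naive sketch):

```cpp
#include <cstdio>
#include <cstring>

// Toy dynamic array that, unlike std::vector, halves its buffer as soon as a
// rollback brings the size down to half the capacity. Repeating
// "push_back / rollback" right at a doubling boundary then costs O(n) per
// iteration instead of O(1) amortized.
struct ToyVector {
    int *buf = new int[1];
    int size = 0, cap = 1;

    void realloc_to(int new_cap) {              // copies all elements: O(size)
        int *nb = new int[new_cap];
        std::memcpy(nb, buf, size * sizeof(int));
        delete[] buf;
        buf = nb;
        cap = new_cap;
    }
    void push_back(int x) {
        if (size == cap) realloc_to(2 * cap);   // grow by doubling
        buf[size++] = x;
    }
    void rollback_push() {                      // undo the last push_back
        --size;
        if (cap > 1 && size <= cap / 2) realloc_to(cap / 2);  // eager shrink
    }
};

int main() {
    ToyVector v;
    const int n = 1 << 16;
    for (int i = 0; i < n; ++i) v.push_back(i); // ends with size == capacity
    // Sitting exactly at a doubling boundary, every push/rollback pair below
    // reallocates the whole buffer twice: Theta(n) per iteration, so the loop
    // is quadratic even though push_back alone is O(1) amortized.
    for (int i = 0; i < 1000; ++i) {
        v.push_back(42);
        v.rollback_push();
    }
    std::printf("size = %d, capacity = %d\n", v.size, v.cap);
    delete[] v.buf;
}
```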

As for DSU, it's harder to notice, because the ranking heuristic guarantees O(log n) time for each operation (in a non-amortized way).

On the other hand, if you use path compression only, one can do a lot of joins that build a very long chain (join 1 and 2, join 1 and 3, join 1 and 4, etc.), and then call get at the very end. That get takes linear time. It's fine for amortized analysis (only one heavy get over a linear number of operations), but then we can roll the structure back and call the same get again, and get linear time again. Sad sax.
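To make that concrete, here is a rough self-contained sketch (the class name and the exact join rule are my own choices, not taken from the comment) of a path-compression-only DSU in which every parent write is logged so it can be undone, together with the chain / get / rollback pattern described above:

```cpp
#include <bits/stdc++.h>
using namespace std;

struct PathCompressDSU {
    vector<int> parent;
    vector<pair<int, int>> history;            // (node, previous parent) per write

    PathCompressDSU(int n) : parent(n) { iota(parent.begin(), parent.end(), 0); }

    void set_parent(int v, int p) {            // every modification is logged
        history.push_back({v, parent[v]});
        parent[v] = p;
    }
    int find(int v) {                          // two-pass full path compression
        int root = v;
        while (parent[root] != root) root = parent[root];
        while (parent[v] != root) { int next = parent[v]; set_parent(v, root); v = next; }
        return root;
    }
    void join(int a, int b) {                  // naive: hang root of a under root of b
        a = find(a), b = find(b);
        if (a != b) set_parent(a, b);
    }
    int snapshot() const { return (int)history.size(); }
    void rollback(int snap) {
        while ((int)history.size() > snap) {
            parent[history.back().first] = history.back().second;
            history.pop_back();
        }
    }
};

int main() {
    const int n = 200000;
    PathCompressDSU dsu(n);
    // "join 1 and 2, join 1 and 3, ...": the old roots form a chain 2 -> 3 -> 4 -> ...
    for (int v = 2; v < n; ++v) dsu.join(1, v);

    int snap = dsu.snapshot();
    // Amortized analysis is fine with ONE expensive find(2): it compresses the
    // chain, so later finds would be cheap. But rolling the compression back
    // makes the next find(2) linear again, every single time.
    for (int it = 0; it < 5; ++it) {
        dsu.find(2);
        dsu.rollback(snap);
    }
}
```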

  • »
    »
    7 years ago, # ^ | 0

    Thanks a lot! Just one follow-up question. I get the point about linear time if we don't do union by size. But if we do union by size plus path compression, would it be (slightly) faster or slower than just union by size?

    I know it depends a lot on the implementation, but should path compression speed things up a little in general?

    • »
      »
      »
      7 years ago, # ^ | +3

      I haven't heard of any good bounds for DSU with rollbacks (I haven't looked for them, though). I think that using both heuristics instead of just one would perform slightly better "in the wild" even when rollbacks are present.

    • »
      »
      »
      7 years ago, # ^ | +3

      It's easy to create a test on which it will do more operations than the version without path compression (by a constant factor, not asymptotically, of course). Just build a perfect binary tree out of the nodes by union operations. Then repeatedly do the following: call find on a leaf of this tree, roll back the last union, and make that union again. We gain no speed-up from path compression, but we keep writing and rolling back all of the ~log n path-compression modifications every time.
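A rough reconstruction of that test (mine, not the commenter's code; the sizes, iteration count and tie-breaking are arbitrary choices): a DSU with union by size, optional path compression, and an undo log that counts entry writes, driven by the build-tree / find-leaf / rollback-last-union / redo-it loop:

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU with union by size, optional path compression, and an undo log.
// `writes` counts how many parent/size entries get modified (and logged),
// which is the overhead the comment above talks about.
struct DSU {
    vector<int> par, sz;
    vector<array<int, 3>> hist;   // {0 = parent / 1 = size, index, old value}
    bool compress;
    long long writes = 0;

    DSU(int n, bool compress_) : par(n), sz(n, 1), compress(compress_) {
        iota(par.begin(), par.end(), 0);
    }
    void write_par(int v, int p) { hist.push_back(array<int,3>{0, v, par[v]}); par[v] = p; ++writes; }
    void write_sz (int v, int s) { hist.push_back(array<int,3>{1, v, sz[v]});  sz[v]  = s; ++writes; }

    int find(int v) {
        int r = v;
        while (par[r] != r) r = par[r];
        if (compress)                          // second pass: compress, logging each write
            while (par[v] != r) { int next = par[v]; write_par(v, r); v = next; }
        return r;
    }
    bool unite(int a, int b) {
        a = find(a), b = find(b);
        if (a == b) return false;
        if (sz[a] < sz[b]) swap(a, b);
        write_sz(a, sz[a] + sz[b]);
        write_par(b, a);
        return true;
    }
    int snap() const { return (int)hist.size(); }
    void rollback(int s) {
        while ((int)hist.size() > s) {
            auto e = hist.back(); hist.pop_back();
            (e[0] == 0 ? par[e[1]] : sz[e[1]]) = e[2];
        }
    }
};

long long run(bool compress) {
    const int K = 17, N = 1 << K;
    DSU d(N, compress);

    // Build a perfect binary tree: merge blocks of size 1, then 2, then 4, ...
    // Remember the snapshot just before the last union so it can be redone.
    int snap_before_last = 0, last_a = 0, last_b = 0;
    for (int step = 1; step < N; step *= 2)
        for (int i = 0; i + step < N; i += 2 * step) {
            snap_before_last = d.snap();
            last_a = i; last_b = i + step;
            d.unite(i, i + step);
        }

    // Pick a deepest node (which one it is depends on tie-breaking, so scan).
    int leaf = 0, best = -1;
    for (int v = 0; v < N; ++v) {
        int depth = 0;
        for (int u = v; d.par[u] != u; u = d.par[u]) ++depth;
        if (depth > best) { best = depth; leaf = v; }
    }

    d.writes = 0;
    for (int it = 0; it < 100000; ++it) {
        d.find(leaf);                  // with compression: ~K extra logged writes
        d.rollback(snap_before_last);  // undoes that compression and the last union
        d.unite(last_a, last_b);       // redo the union: same tree shape as before
    }
    return d.writes;
}

int main() {
    printf("total logged writes with path compression:    %lld\n", run(true));
    printf("total logged writes without path compression: %lld\n", run(false));
}
```

With compression enabled, every iteration logs and then discards the ~log n compression writes on the leaf's path, while the version without compression only re-does the two entries of the union, which is exactly the constant-factor overhead described above.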