Okay, so I'm going to write about a new sorting algorithm which I discovered recently while playing with sqrt decomposition.
I've told 4 people I know about it and in 4 out of 4 cases their reaction was "man, you're a genius, how could nobody actually come up with this before?!".
Well, no, I'm just kidding, 4 out of 4 asked what it is for and told me that I'm just wasting my time coming up with useless crap, BUT I am going to write about it nevertheless.
The idea is simple: you just do sqrt decomposition for RMQ. Suppose $$$ \mathrm{BlockSize} $$$ is (wow) your block size and $$$ \mathrm{BlockMin}[k] $$$ is the index of the minimum element in the $$$ k $$$-th block. In every iteration, suppose the prefix $$$ [0, s) $$$ is already sorted. First, find the index of the minimum over the full blocks from $$$ \mathrm{B} := 1 + \left \lfloor \frac{s}{\mathrm{BlockSize}} \right \rfloor $$$ to the last one using $$$ \mathrm{BlockMin} $$$; call that index $$$ \mathrm{min}_1 $$$. The minimum could also lie in the block containing $$$ s $$$, so you also iterate over $$$ [s, \mathrm{BlockSize} \cdot \mathrm{B}) $$$, finding the index $$$ \mathrm{min}_2 $$$ of the minimum element there. Compare $$$ a[\mathrm{min}_1] $$$ with $$$ a[\mathrm{min}_2] $$$ and let $$$ \mathrm{min} $$$ be the index of the smaller of the two. Now swap $$$ a[\mathrm{min}] $$$ with $$$ a[s] $$$ and, if $$$ \left \lfloor \frac{\mathrm{min}}{\mathrm{BlockSize}} \right \rfloor \geq B $$$, rescan that block to recompute $$$ \mathrm{BlockMin}\left[\left \lfloor \frac{\mathrm{min}}{\mathrm{BlockSize}} \right \rfloor\right] $$$ (blocks before $$$ B $$$ are never consulted again, so they need no update). Now you have $$$ [0, s+1) $$$ sorted and all invariants maintained. $$$ \blacksquare $$$
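To make this concrete, here is a minimal C++ sketch of the whole algorithm (function and variable names like `sqrtSort` and `blockMin` are mine, nothing canonical):

```cpp
#include <bits/stdc++.h>
using namespace std;

void sqrtSort(vector<int>& a) {
    int n = a.size();
    if (n <= 1) return;
    int blockSize = max(1, (int)ceil(sqrt((double)n)));
    int numBlocks = (n + blockSize - 1) / blockSize;

    // blockMin[k] = index of the minimum element of block k
    vector<int> blockMin(numBlocks);
    auto rescan = [&](int k) {
        int lo = k * blockSize, hi = min(n, lo + blockSize);
        int m = lo;
        for (int i = lo + 1; i < hi; i++)
            if (a[i] < a[m]) m = i;
        blockMin[k] = m;
    };
    for (int k = 0; k < numBlocks; k++) rescan(k);

    for (int s = 0; s + 1 < n; s++) {
        int B = s / blockSize + 1;  // first fully unsorted block
        int best = -1;
        // min_1: best index among the full blocks B .. numBlocks-1
        for (int k = B; k < numBlocks; k++)
            if (best == -1 || a[blockMin[k]] < a[best]) best = blockMin[k];
        // min_2: scan the partial block containing s
        for (int i = s; i < min(n, B * blockSize); i++)
            if (best == -1 || a[i] < a[best]) best = i;

        swap(a[s], a[best]);
        // Blocks before B are never consulted again, so only a full
        // block's cached minimum needs to be recomputed after the swap.
        if (best / blockSize >= B) rescan(best / blockSize);
    }
}
```

Usage is the obvious one: fill a `vector<int>`, call `sqrtSort(v)`, and $$$ v $$$ comes out sorted in non-decreasing order.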
Setting $$$ \mathrm{BlockSize} = \left \lceil \sqrt{n} \right \rceil $$$ yields $$$ \mathcal{O}(n^{3/2}) $$$ running time: each of the $$$ n $$$ iterations inspects at most $$$ \mathcal{O}(\sqrt{n}) $$$ block minima plus at most $$$ \mathcal{O}(\sqrt{n}) $$$ elements of the partial block, and recomputing one block's minimum is another $$$ \mathcal{O}(\sqrt{n}) $$$. The algorithm is not in-place: $$$ \mathcal{O}(\sqrt{n}) $$$ additional memory is required for $$$ \mathrm{BlockMin} $$$.
Actually, there's an interesting implication of such a $$$ \left( \mathrm{data \; structure} \Rightarrow \mathrm{sorting \; algorithm} \right) $$$ argument. Suppose you have a data structure that uses only comparisons and can do updates and queries in true $$$ \mathcal{O}(f(n)) $$$. Then this instantly gives you an $$$ \mathcal{O}(n f(n)) $$$-time sorting algorithm, and if $$$ f(n) $$$ is asymptotically strictly smaller than $$$ \log n $$$, then such a data structure cannot exist, because of the $$$ \Omega(n \log n) $$$ comparison lower bound for sorting.
So this can actually be considered a proof of optimality of segment trees.
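For instance, here's a minimal sketch of the argument instantiated with the simplest such structure, `std::priority_queue` (so $$$ f(n) = \log n $$$); the resulting sort is essentially heapsort, and the function name is mine:

```cpp
#include <functional>
#include <queue>
#include <vector>

// Sketch: any comparison-based structure with O(f(n)) insert and
// extract-min sorts in O(n f(n)). Here the structure is a binary heap
// (std::priority_queue), f(n) = log n, so this is basically heapsort.
std::vector<int> sortViaMinStructure(const std::vector<int>& a) {
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq;
    for (int x : a) pq.push(x);        // n inserts: O(n log n)
    std::vector<int> out;
    out.reserve(a.size());
    while (!pq.empty()) {              // n extract-mins: O(n log n)
        out.push_back(pq.top());
        pq.pop();
    }
    return out;
}
```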
Marvelous description. Could you find tasks from CF or Timus where it's possible to use it?
Basically any problem that requires sorting with $$$ n \leq 3\cdot10^5 $$$ with a generous enough TL.
I think all sorting-based problems on Codeforces with $$$ n \leq 10^5 $$$ could be solved with it.
What is the significant difference between this algorithm and the well-known merge sort?
Well,
It uses $$$ \mathcal{O}(\sqrt{n}) $$$ additional memory, compared to mergesort's $$$ \mathcal{O}(n) $$$.
Its asymptotic behaviour is not $$$ \mathcal{O}(n \log n) $$$.
It is a data-structure-based algorithm, while mergesort does not use any data structure at all; its main and only idea is divide and conquer.
I think it is actually a heavily downgraded version of heapsort, but mergesort is completely different imo.
EDIT: but yes, I can see where you're going with this. It can be viewed as merging $$$ \sqrt{n} $$$ blocks together, so I guess it could be called an exotic variant of mergesort. Kind of like mergesort and heapsort came together, had a good time, and now you have an asymptotically worse offspring.
There is in-place merge sort.
Wow, Hope it will be a new good sort haha <3