After reading the editorial for RCC Elimination round problem E, I thought of an easier problem: merge two strings so that the result is lexicographically minimal. Formally, a merge of two strings a and b is a string s of length |a| + |b| such that there exist two strictly increasing sequences of indices i_1, i_2, ..., i_{|a|} and j_1, j_2, ..., j_{|b|} with a = s_{i_1} s_{i_2} ... s_{i_{|a|}}, b = s_{j_1} s_{j_2} ... s_{j_{|b|}}, and each position of s appears exactly once among i_1, ..., i_{|a|}, j_1, ..., j_{|b|}.
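To make the definition concrete, here is a brute-force reference (my own illustration, not from the editorial): it tries both choices at every step, so it is exponential and only usable for very short strings, but it pins down what the lexicographically minimal merge is.

```cpp
#include <iostream>
#include <string>
#include <algorithm>

// Exponential brute force over all merges; for illustration only.
std::string minMergeBrute(const std::string& a, const std::string& b) {
    if (a.empty()) return b;
    if (b.empty()) return a;
    // Either the next character of s comes from a, or it comes from b.
    std::string takeA = a[0] + minMergeBrute(a.substr(1), b);
    std::string takeB = b[0] + minMergeBrute(a, b.substr(1));
    return std::min(takeA, takeB);
}

int main() {
    std::cout << minMergeBrute("ab", "aa") << '\n';  // prints "aaab"
}
```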
The above-mentioned editorial provides an algorithm for this problem that works in O(n log n) time (where n = |a| + |b|) and uses hashes. Actually, the problem can be solved in linear time. The solution works roughly like this: maintain a current position pa in a and pb in b. At each step, lexicographically compare the suffix of a starting at pa with the suffix of b starting at pb, and take a character from the smaller suffix (for this to work, it is necessary to terminate each string with a character that is greater than any character in the strings, so that when one of the suffixes is a prefix of the other, the shorter suffix is considered larger, not smaller). The author proposes to compare the suffixes using binary search and hashing, which takes O(log n) time per comparison. However, this can be done in constant time.
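Here is a minimal sketch of that greedy rule, assuming the input consists of ASCII letters (the names greedyMerge, pa, pb are mine). The suffix comparison is done naively, so the worst case is quadratic; it only illustrates the rule and the sentinel trick, not the constant-time comparison discussed below.

```cpp
#include <iostream>
#include <string>

std::string greedyMerge(std::string a, std::string b) {
    // Terminate each string with a sentinel greater than any real character
    // (assumes ASCII letters), so that when one suffix is a prefix of the
    // other, the shorter one compares as larger.
    a += '\x7f';
    b += '\x7f';
    std::string s;
    std::size_t pa = 0, pb = 0;
    while (pa + 1 < a.size() || pb + 1 < b.size()) {
        // Compare the remaining suffixes; take a character from the smaller one.
        if (a.compare(pa, std::string::npos, b, pb, std::string::npos) < 0)
            s += a[pa++];
        else
            s += b[pb++];
    }
    return s;
}

int main() {
    std::cout << greedyMerge("ab", "aa") << '\n';  // prints "aaab"
}
```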
Actually, this is the well-known Longest Common Extension (LCE) problem. One constant-time solution is as follows: build a suffix tree of the strings, then preprocess it with one of the Lowest Common Ancestor (LCA) algorithms that answer queries in constant time. The lowest common ancestor of the two leaves corresponding to two suffixes gives the length of the longest common prefix of those suffixes: it is the string depth of that ancestor. From that, performing the lexicographical comparison is easy: just compare the first characters after the common prefix.
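Here is a sketch of that last step, assuming an LCE oracle lce(i, j) that returns the length of the longest common prefix of the suffixes of the concatenated string t starting at i and j (the names t, lce, suffixLess are mine; the naive lce in the demo is only a stand-in for the suffix tree + LCA structure).

```cpp
#include <iostream>
#include <string>
#include <functional>

// Decide whether the suffix of t starting at i is lexicographically smaller
// than the suffix starting at j, using one LCE query and one character compare.
bool suffixLess(const std::string& t, int i, int j,
                const std::function<int(int, int)>& lce) {
    int k = lce(i, j);  // length of the common prefix of the two suffixes
    // The last character of t is assumed to occur nowhere else, so no suffix
    // is a proper prefix of another and both t[i + k] and t[j + k] exist.
    return t[i + k] < t[j + k];
}

int main() {
    // Toy demo: t = a + '{' + b + '|'; both separators are greater than 'z',
    // so they also play the role of the terminating characters from above.
    std::string a = "ab", b = "aa";
    std::string t = a + '{' + b + '|';
    // Naive LCE, standing in for the constant-time suffix tree + LCA oracle.
    std::function<int(int, int)> lce = [&](int i, int j) {
        int k = 0;
        while (t[i + k] == t[j + k]) ++k;
        return k;
    };
    // Compare a's first suffix ("ab{aa|") with b's first suffix ("aa|"):
    std::cout << std::boolalpha
              << suffixLess(t, 0, (int)a.size() + 1, lce) << '\n';  // prints false
}
```

Here false means b's suffix is the smaller one, so the greedy step takes the next character from b.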
It is possible to build a suffix tree and preprocess it for LCA in linear time, so the overall running time is O(n), but the algorithm is quite complex. Does anyone know a simpler algorithm with (asymptotically) the same running time?