I was working on a different problem, and to solve it I needed to solve the following subproblem faster than $$$O(n^2)$$$.
Statement
You are given two arrays $$$a_{0}, a_{1}, ..., a_{n-1}$$$ and $$$b_{0}, b_{1}, ..., b_{n-1}$$$, both increasing, i.e. $$$\forall_{0 \leq i < n - 1}$$$ $$$a_{i} \leq a_{i+1}$$$ and $$$b_{i} \leq b_{i+1}$$$, and also $$$\forall_{i}$$$ $$$a_{i} \geq 0, b_{i} \geq 0$$$. Your task is to find the array $$$c$$$ such that:
$$$c_i = \max_{0 \leq j < n,\ 0 \leq i - j < n} (a_{j} + b_{i-j}), \quad 0 \leq i < 2n - 1.$$$
For example, for $$$a = [2, 3, 5, 8]$$$ and $$$b = [1, 4, 7, 8]$$$, the result would be $$$c = [3, 6, 9, 10, 12, 15, 16]$$$.
So the simple code to solve this is:
// c has 2 * n - 1 entries and must be initialized to a sufficiently small value
for (int i = 0; i < n; ++i) {
    for (int j = 0; j < n; ++j) {
        // the candidate a[i] + b[j] contributes to output index i + j
        c[i + j] = max(c[i + j], a[i] + b[j]);
    }
}
Tried approach
I had a few ideas for attacking this by drawing out the full $$$n \times n$$$ matrix of pairwise sums of $$$a$$$ and $$$b$$$ and trying to use the fact that the $$$c_{i}$$$'s form a path within this matrix from the top-left to the bottom-right corner, but I wasn't able to complete this. Is there a better solution than the brute-force one above?
It's called concave max-plus convolution.
https://cstheory.stackexchange.com/questions/19982/complexity-of-convolution-in-the-max-plus-ring
If both $$$a$$$ and $$$b$$$ are concave ($$$a_{i+1}-a_{i}$$$ is monotonically non-increasing, and likewise for $$$b$$$), then $$$c$$$ is concave and we can calculate $$$c$$$ in $$$O(n)$$$ time using SMAWK, or in $$$O(n\log n)$$$ time using divide and conquer; the divide-and-conquer solution rests on the observation that the optimal $$$j$$$ is monotonic in $$$i$$$.
I'm not sure if there is a solution when $$$a$$$ and $$$b$$$ are merely increasing; the general problem, max-plus convolution, currently has no known algorithm significantly faster than $$$O(n^2)$$$.
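For reference, a minimal sketch of that divide-and-conquer idea, assuming both inputs are concave (the names solveRange, maxPlusConcave, optLo, optHi are made up for illustration):

#include <bits/stdc++.h>
using namespace std;

// Compute c[mid] by scanning the allowed window of j, then recurse on both
// halves; concavity guarantees the optimal j is monotone in i, so the
// windows shrink and the total work is O((n + m) log(n + m)).
void solveRange(const vector<long long>& a, const vector<long long>& b,
                vector<long long>& c, int lo, int hi, int optLo, int optHi) {
    if (lo > hi) return;
    int n = (int)a.size(), m = (int)b.size();
    int i = (lo + hi) / 2;
    int jLo = max(optLo, i - (m - 1));   // need 0 <= i - j < m
    int jHi = min({optHi, i, n - 1});    // need 0 <= j < n and j <= i
    int best = jLo;
    for (int j = jLo; j <= jHi; ++j)
        if (a[j] + b[i - j] > a[best] + b[i - best]) best = j;
    c[i] = a[best] + b[i - best];
    solveRange(a, b, c, lo, i - 1, optLo, best);
    solveRange(a, b, c, i + 1, hi, best, optHi);
}

vector<long long> maxPlusConcave(const vector<long long>& a, const vector<long long>& b) {
    vector<long long> c(a.size() + b.size() - 1);
    solveRange(a, b, c, 0, (int)c.size() - 1, 0, (int)a.size() - 1);
    return c;
}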
If both $$$a$$$ and $$$b$$$ are concave, you can just use a simple two-pointer technique to compute the "convolution" (no need for D&C/SMAWK).
If you keep the difference arrays instead of the original ones, it's even simpler:
Just merge the two sorted difference arrays to obtain the difference array of the "convolution".
Could you elaborate more on the two-pointer technique? Thanks.
Start with $$$a_0 + b_0$$$ (pointers $$$i = j = 0$$$) and iteratively choose the better of $$$a_{i+1} + b_j$$$ and $$$a_{i}+b_{j+1}$$$, incrementing the corresponding pointer.
Note that this approach only works for concave functions (in the maximization scenario) or convex functions (in the minimization scenario).
Basically, merge the difference arrays and then take prefix sums.
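A short sketch of that greedy, again assuming both arrays are concave (the function name maxPlusTwoPointers is made up):

#include <bits/stdc++.h>
using namespace std;

// Two-pointer greedy for concave a and b: at each step extend whichever
// array offers the larger next sum. This is the same as merging the two
// non-increasing difference arrays and taking prefix sums.
vector<long long> maxPlusTwoPointers(const vector<long long>& a, const vector<long long>& b) {
    int n = (int)a.size(), m = (int)b.size();
    vector<long long> c(n + m - 1);
    int i = 0, j = 0;
    c[0] = a[0] + b[0];
    for (int k = 1; k < n + m - 1; ++k) {
        if (j == m - 1 || (i < n - 1 && a[i + 1] + b[j] >= a[i] + b[j + 1]))
            ++i;   // advancing in a gives at least as large a value
        else
            ++j;   // otherwise advance in b
        c[k] = a[i] + b[j];
    }
    return c;
}

For example, $$$a = [0, 5, 8]$$$ and $$$b = [0, 3, 4]$$$ (both concave) give $$$c = [0, 5, 8, 11, 12]$$$.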
You probably can't do much better than the brute-force solution unless the values are bounded. https://arxiv.org/abs/1811.12554 could be relevant.
To add to all that's been said, this is (almost) as difficult as general max-plus convolution, since you can do
a[i] += i * INF; b[i] += i * INF;
How is this not a perfect reduction and thus precisely as difficult?
I just meant the numbers can get $$$n$$$ times larger.
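To spell the reduction out: writing $$$M$$$ for the INF above (any constant larger than every $$$|a_{i+1}-a_i|$$$ and $$$|b_{i+1}-b_i|$$$), set $$$a'_i = a_i + i \cdot M$$$ and $$$b'_j = b_j + j \cdot M$$$, which makes both arrays increasing. Every candidate for a fixed output index picks up the same shift, $$$a'_j + b'_{i-j} = a_j + b_{i-j} + i \cdot M$$$, so $$$c'_i = c_i + i \cdot M$$$ and the original answers are recovered exactly; the only cost is that the entries grow by roughly a factor of $$$n$$$.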