Let $$$b=[a[i_1],a[i_2], \ldots, a[i_k]]$$$ be some subarray. If $$$b$$$ is constant, then $$$a[i_1]+1=a[i_2]+2= \ldots = a[i_k]+k$$$. Therefore, $$$b=[a[i_1],a[i_1]-1, \ldots, a[i_1]-(k-1)]$$$.
Let dp[i] be the maximum length of a constant subarray ending in $$$a[i]$$$. We'll also need maxdp[x] = max(dp[i] over all positions $$$i$$$ seen so far with $$$a[i]=x$$$).
Let $$$i$$$ be the current position. a[i] can be appended to any constant subarray ending in the value a[i]+1. Therefore, dp[i] = maxdp[a[i]+1] + 1. maxdp[a[i]] will also be updated accordingly: maxdp[a[i]] = max(maxdp[a[i]], dp[i]).
The maximum length of any constant subarray is equal to $$$k=\max_{i=1}^{n} dp[i]$$$.
Reconstructing a maximal constant subarray will also require prev[i], the last appearance of the value a[i]+1 to the left of $$$i$$$. Starting from some position $$$p$$$ where dp[p] is maximal, we repeatedly replace $$$p$$$ by prev[p], $$$k-1$$$ times. This traversal visits, in reverse, all the elements of a constant subarray ending in $$$a[p]$$$.
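Here is a minimal sketch of this approach, assuming the input is $$$n$$$ followed by $$$a[1..n]$$$ and that we print the length and the chosen indices (the exact I/O format is an assumption). maxdp and the last occurrence of each value are stored in hash maps, since the values may be large:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;                               // input format is an assumption
    vector<long long> a(n + 1);
    for (int i = 1; i <= n; i++) cin >> a[i];

    // maxdp[x] = max dp[i] among positions seen so far with a[i] == x
    // last[x]  = last position seen so far with a[i] == x (gives prev[])
    unordered_map<long long, int> maxdp, last;
    vector<int> dp(n + 1), prv(n + 1, 0);

    int k = 0, p = 1;
    for (int i = 1; i <= n; i++) {
        // a[i] extends the best constant subarray ending in the value a[i] + 1
        dp[i] = maxdp.count(a[i] + 1) ? maxdp[a[i] + 1] + 1 : 1;
        prv[i] = last.count(a[i] + 1) ? last[a[i] + 1] : 0;
        maxdp[a[i]] = max(maxdp[a[i]], dp[i]);
        last[a[i]] = i;
        if (dp[i] > k) { k = dp[i]; p = i; }
    }

    // reconstruction: follow prv[] from p, collecting the indices in reverse
    vector<int> idx;
    for (int q = p, step = 0; step < k; step++, q = prv[q]) idx.push_back(q);
    reverse(idx.begin(), idx.end());

    cout << k << "\n";
    for (int x : idx) cout << x << " ";
    cout << "\n";
}
```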
Is it ever optimal to pick a segment $$$[l,r]$$$ with $$$l \neq 1$$$ as the solution?
Let's say X is the last element of the solution. How can we find how many elements it overwrites?
Let's say the optimal solution is $$$[l,r]$$$ (where $$$[l,r]$$$ denotes the subarray between indices $$$l$$$ and $$$r$$$). We can prove that $$$l=1$$$ the following way: consider $$$[l-1,r]$$$ (which exists assuming $$$l \neq 1$$$). Then $$$sum[l-1,r] = sum[l,r] + \min(v[i] \mid l-1 \le i \le r)$$$, where sum[x,y] denotes the sum of the elements between indices $$$x$$$ and $$$y$$$ after the segment is gorbasorted. The second term of the sum is always positive (all values are positive), so $$$sum[l-1,r] > sum[l,r]$$$, contradicting the optimality of $$$[l,r]$$$.
Then, we propose the following solution: let gorba[i] = the sum of the elements in $$$[1,i]$$$ after this prefix is gorbasorted. If $$$X$$$ is the greatest position such that $$$X < i$$$ and $$$v[X] \le v[i]$$$, then gorba[i] = gorba[X] + (i-X)*v[i] (v[i] overwrites all elements between $$$X+1$$$ and $$$i-1$$$, so positions $$$X+1, \ldots, i$$$ all contribute $$$v[i]$$$; if no such $$$X$$$ exists, take $$$X=0$$$ with gorba[0] = 0). How do we find $$$X$$$ for each $$$i$$$? This is the classic previous smaller element question, which can be solved using a stack.
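A short sketch of this solution, assuming the values $$$v[1..n]$$$ are positive and read from standard input, and that the task asks for the maximum gorbasorted sum over all segments (which, by the argument above, is attained on some prefix):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;                               // input format is an assumption
    vector<long long> v(n + 1);
    for (int i = 1; i <= n; i++) cin >> v[i];

    // gorba[i] = sum of the prefix [1, i] after it is gorbasorted
    vector<long long> gorba(n + 1, 0);
    stack<int> st;                          // indices of a "previous smaller or equal" stack
    long long answer = 0;
    for (int i = 1; i <= n; i++) {
        // pop every element greater than v[i]; those positions get overwritten by v[i]
        while (!st.empty() && v[st.top()] > v[i]) st.pop();
        int X = st.empty() ? 0 : st.top();          // greatest X < i with v[X] <= v[i], or 0
        gorba[i] = gorba[X] + 1LL * (i - X) * v[i]; // positions X+1..i all become v[i]
        st.push(i);
        answer = max(answer, gorba[i]);             // best prefix; l = 1 is optimal (proved above)
    }
    cout << answer << "\n";                 // the required output is an assumption
}
```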
\huge{Subtask 1: $$$N,K \le 1000$$$}
Since $$$N,K \le 1000$$$, we can develop an $$$O(N \cdot K)$$$ algorithm. Let DP[i][j] store the maximum amount of money we can have after the first $$$i$$$ places, having helped $$$j$$$ homeless men. We have 2 cases:
- The value $$$A_i$$$ is non-negative (positive or $$$0$$$).
- The value $$$A_i$$$ is negative.
The first case is easy, because it is always optimal to withdraw the money (easy proof by contradiction). So in this case we can just say that DP[i][j] = DP[i-1][j] + $$$A_i$$$.
The second case is pretty easy as well. Just like in the well-known 0-1 knapsack algorithm, we have 2 choices: either help the i-th homeless man, or ignore him. The transitions are:
- DP[i][j] = max(DP[i][j], DP[i-1][j-1] - $$$|A_i|$$$), if $$$DP[i-1][j-1] - |A_i| \ge 0$$$
- DP[i][j] = max(DP[i][j], DP[i-1][j])
So, with $$$O(1)$$$ transitions and $$$O(N \cdot K)$$$ states, the time complexity will be $$$O(N \cdot K)$$$.
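A possible implementation of this DP follows; the input format and the final output (dp[N][K]) are assumptions, since the statement is not restated here:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, k;
    cin >> n >> k;                          // input format is an assumption
    vector<long long> a(n + 1);
    for (int i = 1; i <= n; i++) cin >> a[i];

    const long long NEG = LLONG_MIN / 2;    // marks an unreachable state
    // dp[i][j] = maximum money after the first i places, having helped exactly j homeless men
    vector<vector<long long>> dp(n + 1, vector<long long>(k + 1, NEG));
    dp[0][0] = 0;

    for (int i = 1; i <= n; i++)
        for (int j = 0; j <= k; j++) {
            if (a[i] >= 0) {
                // non-negative value: always take the money, j does not change
                if (dp[i - 1][j] != NEG) dp[i][j] = dp[i - 1][j] + a[i];
            } else {
                // ignore the i-th homeless man ...
                dp[i][j] = dp[i - 1][j];
                // ... or help him, if the money stays non-negative
                if (j > 0 && dp[i - 1][j - 1] != NEG && dp[i - 1][j - 1] + a[i] >= 0)
                    dp[i][j] = max(dp[i][j], dp[i - 1][j - 1] + a[i]);
            }
        }

    // what exactly must be printed depends on the statement; dp[n][k] is an assumption
    cout << (dp[n][k] == NEG ? -1 : dp[n][k]) << "\n";
}
```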
\huge{Subtask 2: $$$N,K \le 10^5$$$}
For this subtask, we need a better time complexity. First notice that an optimal solution that helps the maximum possible number of homeless men also contains an optimal solution for any smaller number of homeless men as a subsolution. So let's find the best solution in which we help the most homeless men. Assume we are currently at the i-th place and we know an optimal solution for the first i-1 places. We have two cases:
- The value $$$A_i$$$ is non-negative (positive or $$$0$$$).
- The value $$$A_i$$$ is negative.
If $$$A_i$$$ is non-negative, we will just take the money.
If $$$A_i$$$ is negative and we have enough money, we will help the i-th homeless man (we want the maximum number of homeless men). Otherwise, we look at the optimal solution for the first i-1 places: if it contains a homeless man who begged for more money than $$$|A_i|$$$, we add the homeless man at position $$$i$$$ and remove that other homeless man from the solution (the count stays the same and we keep more money). Otherwise, there is no reason for us to help this homeless man.
This can easily be simulated using a priority_queue: with at most $$$N$$$ removals and $$$N$$$ insertions, we get an $$$O(N \log N)$$$ time complexity.
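A sketch of this greedy follows; the input format and the final output are assumptions, and only the scan and the priority_queue swaps follow the description above:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, k;
    cin >> n >> k;                       // input format is an assumption
    long long money = 0;
    priority_queue<long long> helped;    // costs of the homeless men we currently help (max on top)

    for (int i = 0; i < n; i++) {
        long long a;
        cin >> a;
        if (a >= 0) {
            money += a;                  // non-negative value: always take the money
        } else {
            long long cost = -a;
            if (money >= cost) {
                // we can afford it: help him (we want the maximum number of helped men)
                money -= cost;
                helped.push(cost);
            } else if (!helped.empty() && helped.top() > cost) {
                // swap out the most expensive helped man for this cheaper one:
                // the count stays the same and we keep more money
                money += helped.top() - cost;
                helped.pop();
                helped.push(cost);
            }
            // otherwise there is no reason to help him
        }
    }
    // what must be printed depends on the statement; both quantities are available here
    cout << helped.size() << " " << money << "\n";
}
```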