Recently I came across this question; it would be great if anyone could solve it.
There are n stones in a row, numbered 0 to n-1. Every stone i has two values associated with it, a[i] and b[i]. You have to remove all the stones from the row one by one. Removing the i-th stone follows these rules:
If both the (i-1)-th and (i+1)-th stones are still present, then the cost of removing the i-th stone is b[i].
If exactly one of the (i-1)-th and (i+1)-th stones is present, then the cost of removing the i-th stone is a[i].
If neither the (i-1)-th nor the (i+1)-th stone is present, then the cost of removing the i-th stone is 0.
Find the minimum total cost of removing all the stones.
Constraints: 1 <= n <= 50000, 1 <= a[i], b[i] <= 1000.
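For small n, the rules can be sanity-checked with a brute force over all removal orders (a hypothetical helper for testing candidate solutions, obviously far too slow for the full constraints):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute force for tiny n: try every removal order and keep the cheapest.
long long bruteForce(const vector<long long>& a, const vector<long long>& b) {
    int n = a.size();
    vector<int> order(n);
    iota(order.begin(), order.end(), 0);
    long long best = LLONG_MAX;
    do {
        vector<bool> present(n, true);
        long long cost = 0;
        for (int i : order) {
            bool left  = i > 0     && present[i - 1];
            bool right = i + 1 < n && present[i + 1];
            if (left && right)      cost += b[i];  // both neighbors still there
            else if (left || right) cost += a[i];  // exactly one neighbor left
            // neither neighbor present: free
            present[i] = false;
        }
        best = min(best, cost);
    } while (next_permutation(order.begin(), order.end()));
    return best;
}
```

For example, with a = {5, 3} and b = {10, 10} the best order removes stone 1 first (cost a[1] = 3), then stone 0 for free.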
This method is not fast enough but might put you on the right track.
There is an $$$\mathcal{O}(n^3)$$$ dynamic programming solution which can be optimized to $$$\mathcal{O}(n^2 \log{n})$$$, or even $$$\mathcal{O}(n^2)$$$, with a proper data structure and technique.
Let $$$dp(i, j)$$$ be the optimal answer for the subarray of length $$$i$$$ which starts at index $$$j$$$. Removing a stone $$$k$$$ with two present neighbors splits the subarray into two smaller subarrays, so the cost of that choice is $$$b_k + dp(k - j, j) + dp(i - 1 - (k - j), k+1)$$$. There are also special cases at the ends of each subarray: removing the first or last element costs $$$a$$$, since only one neighbor is present inside the subarray. That gives $$$a_j + dp(i-1, j+1)$$$ for removing the first element and $$$a_{j+i-1} + dp(i-1, j)$$$ for removing the last element.
This gives us the following recurrence:
$$$dp(i, j) = \begin{cases} 0 & \text{if } i \leq 1 \\ \min\left(a_j + dp(i-1, j+1),\; a_{j+i-1} + dp(i-1, j),\; \min_{j+1 \leq k \leq j+i-2}\left(b_k + dp(k - j, j) + dp(i - 1 - (k - j), k+1)\right)\right) & \text{otherwise} \end{cases}$$$
So we have $$$\mathcal{O}(n^2)$$$ states and each state takes $$$\mathcal{O}(n)$$$ time to compute. We can speed up the min query to $$$\mathcal{O}(\log{n})$$$ using a segment tree, or to amortized $$$\mathcal{O}(1)$$$ with a minimum queue over a sliding window (also known as two pointers).
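A rough sketch of the plain $$$\mathcal{O}(n^3)$$$ version, bottom-up over subarray lengths (the function name is just illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp[len][j]: minimum cost to remove the isolated subarray of length len
// starting at index j (its outside neighbors are assumed already removed).
long long solveCubic(const vector<long long>& a, const vector<long long>& b) {
    int n = a.size();
    vector<vector<long long>> dp(n + 1, vector<long long>(n + 1, 0)); // len <= 1 costs 0
    for (int len = 2; len <= n; len++) {
        for (int j = 0; j + len <= n; j++) {
            // Remove an endpoint first: it has exactly one present neighbor.
            long long best = min(a[j] + dp[len - 1][j + 1],
                                 a[j + len - 1] + dp[len - 1][j]);
            // Remove an interior stone k first: both neighbors present,
            // and the subarray splits into two independent parts.
            for (int k = j + 1; k <= j + len - 2; k++)
                best = min(best, b[k] + dp[k - j][j] + dp[len - 1 - (k - j)][k + 1]);
            dp[len][j] = best;
        }
    }
    return dp[n][0];
}
```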
Hope this helps!
Hopefully this is correct:
Number the operations by the problem's rules: operation 1 removes a stone with exactly one neighbor present (cost $$$a[i]$$$), operation 2 with both present (cost $$$b[i]$$$), and operation 3 with neither present (cost $$$0$$$). Suppose instead that operation 1 costs $$$a[i]-b[i]$$$, operation 2 costs $$$0$$$, and operation 3 costs $$$-b[i]$$$ (i.e. subtract $$$b[i]$$$ from every stone's removal cost). We can add $$$\sum b[i]$$$ back to our result to get the actual answer.
Assume we know the set $$$S$$$ of stones we remove using operation $$$1$$$. Then consider the ranges between stone $$$0$$$ and the leftmost stone in $$$S$$$, between each pair of consecutive stones in $$$S$$$, and between the rightmost stone in $$$S$$$ and stone $$$n-1$$$. In each of these ranges we must remove exactly one stone with operation $$$3$$$, and it should be the one that minimizes $$$-b[i]$$$.
This observation immediately leads to an $$$O(n^2)$$$ dp of the form $$$dp[i]=\min(\min_{0<j<i-1}(dp[j]-\max_{j<k<i} b[k]),0)+a[i]-b[i]$$$, where $$$dp[i]$$$ represents the minimum cost of removing the stones up to stone $$$i$$$ if stone $$$i$$$ must be removed using operation $$$1$$$. That is still not fast enough. However, note that if $$$\max_{x<k<i} b[k]=\max_{y<k<i} b[k]$$$, then only $$$\min(dp[x],dp[y])$$$ matters. So we can compute $$$dp[i]$$$ from the values $$$dp2[m]=\min_{\max_{j<k<i}b[k] =m}dp[j]$$$ for each possible $$$m$$$, updating $$$dp2$$$ according to $$$b[i]$$$ as $$$i$$$ advances. This is $$$O(n\max b)$$$ and should pass with some optimizations. The answer can be recovered from the $$$dp$$$ array easily.
I don't understand why other people here are suggesting some complicated (and slow) dp solutions, when there exists a simple $$$O(n)$$$ dp solution:
Let $$$dp[i][0]$$$ equal the minimum cost of removing the elements $$$1..i$$$, if element $$$i+1$$$ is removed before element $$$i$$$. Note that the cost of removing element $$$i+1$$$ is not counted yet.
Similarly, let $$$dp[i][1]$$$ equal the minimum cost of removing the elements $$$1..i$$$, if element $$$i+1$$$ is removed after element $$$i$$$. Note that the cost of removing element $$$i+1$$$ is not counted yet.
The transitions are super simple:
Base case ($$$i = 1$$$):
$$$dp[1][0] = 0$$$
$$$dp[1][1] = a[1]$$$
Transitions ($$$2 \le i \le n$$$):
$$$dp[i][0] = \min(dp[i-1][0] + a[i], dp[i-1][1] + 0)$$$
$$$dp[i][1] = \min(dp[i-1][0] + b[i], dp[i-1][1] + a[i])$$$
The answer is clearly $$$dp[n][0]$$$, because element $$$n+1$$$ doesn't exist (and thus it is "removed" before element $$$n$$$ adding zero extra cost).
Explanation of the transitions (the base cases should be obvious):
We're trying to calculate the value of $$$dp[i][0]$$$. This means that element $$$i+1$$$ is removed before element $$$i$$$. Now there are two cases. We either remove element $$$i$$$ before element $$$i-1$$$ or we remove element $$$i$$$ after element $$$i-1$$$.
If we remove element $$$i$$$ before we remove element $$$i-1$$$, the cost of removing element $$$i$$$ will be $$$a[i]$$$ by definition. But removing element $$$i$$$ before element $$$i-1$$$ is the same thing as removing element $$$i-1$$$ after element $$$i$$$. This means that the cost of removing all elements $$$1..i-1$$$ is $$$dp[i-1][0]$$$ by definition. The total cost of this case is then $$$dp[i-1][0] + a[i]$$$.
If, on the other hand, we remove element $$$i$$$ after we remove element $$$i-1$$$, the cost of removing element $$$i$$$ will be $$$0$$$ by definition. But removing element $$$i$$$ after element $$$i-1$$$ is the same thing as removing element $$$i-1$$$ before element $$$i$$$. This means that the cost of removing all elements $$$1..i-1$$$ is $$$dp[i-1][1]$$$ by definition. The total cost of this case is then $$$dp[i-1][1] + 0$$$.
The value of $$$dp[i][0]$$$ is the smaller one of these, since these are the only two possible cases.
The calculations of $$$dp[i][1]$$$ work similarly, and should hopefully be clear by now.
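Putting the recurrence together, a minimal implementation might look like this (0-indexed arrays, and only the previous dp row kept, so it runs in $$$O(n)$$$ time and $$$O(1)$$$ extra memory; the function name is just illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;

// dp0 = dp[i][0]: min cost of removing elements 1..i if element i+1 is removed before element i.
// dp1 = dp[i][1]: min cost of removing elements 1..i if element i+1 is removed after element i.
// In both states the cost of element i+1 itself is not counted yet.
long long minRemovalCost(const vector<long long>& a, const vector<long long>& b) {
    int n = a.size();
    long long dp0 = 0;     // base case dp[1][0]
    long long dp1 = a[0];  // base case dp[1][1]
    for (int i = 1; i < n; i++) {
        long long n0 = min(dp0 + a[i], dp1 + 0);    // neighbor i+1 already gone
        long long n1 = min(dp0 + b[i], dp1 + a[i]); // neighbor i+1 still present
        dp0 = n0;
        dp1 = n1;
    }
    return dp0; // element n+1 doesn't exist, i.e. it is "removed" before element n
}
```

For instance, with a = {5, 3} and b = {10, 10} this returns 3 (remove stone 1 for a[1], then stone 0 for free), matching the rules by hand.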
Similar Problem