I was recently solving a problem (ongoing contest; I'll post the link after it ends) where I came up with a DP solution in which dp[i][j] depends only on dp[][j-1]. Clearly, one technique for this type of problem is to use a 1D array and compute dp[i] iteratively, as a space optimization.
However, in the problem I was solving, even a 2D array for dp[i][j] would not exceed the memory limit. So I solved it using a 2D array, and the verdict was Time Limit Exceeded.
But when I simply changed the 2D array to a 1D array and resubmitted, the verdict was Accepted.
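Since the actual contest problem isn't linked yet, here is a minimal sketch of the transformation being described, using a made-up recurrence dp[i][j] = dp[i][j-1] + dp[i-1][j-1] that, like the one in the post, reads only from column j-1:

```cpp
#include <cassert>
#include <vector>

// 2D version: keeps every column, even though only column j-1 is ever read.
long long solve_2d(int n, int m) {
    std::vector<std::vector<long long>> dp(n, std::vector<long long>(m, 0));
    for (int i = 0; i < n; ++i) dp[i][0] = 1;          // base column
    for (int j = 1; j < m; ++j)
        for (int i = 0; i < n; ++i)
            dp[i][j] = dp[i][j-1] + (i > 0 ? dp[i-1][j-1] : 0);
    long long sum = 0;
    for (int i = 0; i < n; ++i) sum += dp[i][m-1];
    return sum;
}

// 1D version: the same recurrence with a single column, overwritten in place.
// Iterating i downward guarantees dp[i-1] still holds its column j-1 value
// when dp[i] is updated.
long long solve_1d(int n, int m) {
    std::vector<long long> dp(n, 1);                   // base column
    for (int j = 1; j < m; ++j)
        for (int i = n - 1; i >= 0; --i)
            dp[i] = dp[i] + (i > 0 ? dp[i-1] : 0);
    long long sum = 0;
    for (int i = 0; i < n; ++i) sum += dp[i];
    return sum;
}
```

Both versions compute the same answers; only the memory footprint (and, in practice, the access pattern) differs.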
Why does changing a 2D array to a 1D array also improve running time?
Was your 2D solution implemented using recursion, or were both of them iterative?
Both of them were iterative. The only change between the two solutions was that I replaced the 2D array (whose second dimension was a dummy anyway) with a 1D array.
It's not a time complexity optimization in any way (constant factors don't count, remember?); it's more of a cache and constant-factor optimization.
A matrix a[n][m] is really just an array of length n * m, and indexing it as a[x][y] actually accesses the (x*m + y)-th place in that array. So the indexing itself is a little slower.
But this difference is really small. The complexity is the same; it just changes the constant factor by a tiny margin.
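To make the row-major layout concrete, here is the offset computation in a small helper (note the stride is m, the number of columns per row):

```cpp
// Row-major flattening: element (x, y) of an n-by-m matrix lives at
// offset x * m + y in the underlying contiguous array.
inline int flat_index(int x, int y, int m) {
    return x * m + y;
}
```

For example, in a matrix with 5 columns, element (2, 3) sits at offset 2*5 + 3 = 13. Writing the flat index by hand in a 1D array avoids the per-access multiply being hidden behind two subscripts and, with vector-of-vectors in particular, avoids chasing a second pointer per access.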