neal's blog

By neal, 5 years ago, In English

saketh and I were looking at 1272F - Two Bracket Sequences and challenged ourselves to solve it using as little memory as possible. From my initial submission using 118 MB, I was able to get down to 19 MB by using int16_t for the DP values and single bits for the previous-step markers: 66735626.
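For illustration, here is a minimal sketch of the "single bits for the previous-step markers" idea. It is not the layout from submission 66735626; it just assumes the markers are booleans indexed by the roughly $$$n^3$$$ DP states and packs one per bit into machine words (`std::vector<bool>` or `std::bitset` already do this; the hand-rolled version only makes the packing explicit):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct BitTable {
    vector<uint64_t> words;
    BitTable(size_t n) : words((n + 63) / 64, 0) {}

    void set(size_t i, bool v) {
        if (v) words[i >> 6] |=  1ULL << (i & 63);
        else   words[i >> 6] &= ~(1ULL << (i & 63));
    }
    bool get(size_t i) const { return (words[i >> 6] >> (i & 63)) & 1; }
};

int main() {
    int n = 200;
    // One marker bit per DP state (i, j, k), flattened as ((i * n) + j) * n + k.
    BitTable came_from_first(size_t(n) * n * n);   // 8e6 states -> ~1 MB instead of ~8 MB of bool
    came_from_first.set((size_t(3) * n + 5) * n + 7, true);
    cout << came_from_first.get((size_t(3) * n + 5) * n + 7) << '\n';   // prints 1
}
```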

Saketh then got it down to 3.2 MB by solving the DP with a BFS, removing the DP array entirely: 66740348 (according to him, the DP values are implicit in the BFS).
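As a toy illustration of "the DP values are implicit in the BFS" (this is not saketh's solution 66740348, just the general pattern on a small grid): process the states in layers, keep only a visited bit per state plus the current frontier, and read a state's DP value off as the layer index at which it is first reached, so no value array is ever allocated.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n = 8;                        // toy state space: an n x n grid, 4-directional moves
    vector<bool> visited(n * n, false);
    vector<int> frontier = {0};       // start at cell (0, 0)
    visited[0] = true;
    int target = n * n - 1, layer = 0;

    while (!frontier.empty()) {
        vector<int> next;
        for (int s : frontier) {
            if (s == target) {
                // The "DP value" of the target is simply the current layer index.
                cout << "DP value of target = " << layer << '\n';   // prints 14 for n = 8
                return 0;
            }
            int r = s / n, c = s % n;
            int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
            for (int d = 0; d < 4; d++) {
                int nr = r + dr[d], nc = c + dc[d];
                if (0 <= nr && nr < n && 0 <= nc && nc < n && !visited[nr * n + nc]) {
                    visited[nr * n + nc] = true;
                    next.push_back(nr * n + nc);
                }
            }
        }
        frontier = move(next);
        layer++;
    }
}
```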

Finally, we came up with a new idea that both eliminates the previous-step markers and stores the DP values efficiently, using the observation that each DP value is +1 or -1 compared to the previous value. This uses 1.4 MB: 66796767. It's actually still $$$n^3$$$ memory, but it uses only 1 bit per entry.
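A minimal sketch of that observation (not neal's actual code from 66796767): if consecutive values in a row differ by exactly ±1, it's enough to store the first value explicitly plus one bit per later entry, and walk the bits whenever a value is needed during backtracking.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct DeltaRow {
    int first = 0;
    vector<bool> up;   // up[i] == true  ->  value[i] = value[i-1] + 1, otherwise - 1

    void build(const vector<int>& values) {
        first = values.empty() ? 0 : values[0];
        up.assign(values.size(), false);
        for (size_t i = 1; i < values.size(); i++) {
            assert(abs(values[i] - values[i - 1]) == 1);   // the key observation
            up[i] = values[i] > values[i - 1];
        }
    }

    int value_at(size_t idx) const {   // O(idx) walk; fine when backtracking a single path
        int v = first;
        for (size_t i = 1; i <= idx; i++)
            v += up[i] ? 1 : -1;
        return v;
    }
};

int main() {
    DeltaRow row;
    row.build({4, 5, 4, 3, 4, 5, 6});
    cout << row.value_at(0) << ' ' << row.value_at(3) << ' ' << row.value_at(6) << '\n';   // 4 3 6
}
```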

Can anyone do better than this? Perhaps someone can even come up with an approach using sub-$$$n^3$$$ memory.

Update: dinosaurs managed to remove 80 KB. 66773915


»
5 years ago, # |

I think it's possible to solve the problem in $$$O(n^3 \log n)$$$ time and $$$O(n^2 \log n)$$$ memory using divide and conquer (I didn't implement it). However, I'm not sure whether bitsets can be used to optimize the memory of this solution further, so it may or may not give any actual memory reduction.

I don't know what this method is called (please tell me if you know), but it's similar to the sliding window optimization.

It assumes that:

  • If it's only necessary to compute the answer value and not to backtrack (reconstruct the sequence), the problem can be solved in $$$O(n^2)$$$ memory with sliding-window DP.
  • To reconstruct segment $$$i$$$ of the answer, given segments $$$i+1$$$ onward of the answer, it's enough to have the tables $$$dp[i]$$$ and $$$dp[i+1]$$$.
  • From table $$$dp[i]$$$ it's possible to compute $$$dp[i+1]$$$ in $$$f(n)$$$ time.

Then the algorithm is as follows: let $$$Compute(dp[l], dp[r], ans[r..n])$$$ denote the procedure that computes answer segment $$$[l..r]$$$ given answer segment $$$[r..n]$$$ and the two tables $$$dp[l]$$$ and $$$dp[r]$$$. Then $$$Compute(dp[l], dp[r], ans[r..n])$$$ does the following:

  • Handle the trivial case ($$$r-l \leq 1$$$)
  • Let $$$mid \leftarrow \lfloor (l+r)/2 \rfloor$$$
  • Compute $$$dp[mid]$$$ (in time $$$f(n) \times (mid-l)$$$)
  • $$$ans[mid..r] \leftarrow Compute(dp[mid], dp[r], ans[r..n])$$$
  • $$$ans[l..mid] \leftarrow Compute(dp[l], dp[mid], ans[mid..n])$$$

You can see that at any point in time there are only $$$\log n$$$ dp arrays being kept in memory, and the time complexity is $$$n \log n \times f(n)$$$.
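Here is a self-contained sketch of this divide-and-conquer pattern on a much simpler DP than the bracket problem (minimum-cost path down a grid with moves down, down-left, down-right); the toy problem and all names are just for illustration. $$$dp[i]$$$ is one row of the DP, and `compute(l, r, ...)` reconstructs the path's columns on rows $$$[l, r)$$$ while keeping only $$$O(\log R)$$$ rows in memory:

```cpp
#include <bits/stdc++.h>
using namespace std;

int R, C;
vector<vector<long long>> grid;   // grid[row][col]: cost of stepping on that cell
vector<int> path_col;             // reconstructed answer: column used in each row

const long long INF = 1e18;

// One DP step: layer[c] = min cost of a path ending at (row, c).
vector<long long> advance_layer(const vector<long long>& prev, int row) {
    vector<long long> cur(C, INF);
    for (int c = 0; c < C; c++)
        for (int d = -1; d <= 1; d++) {
            int pc = c + d;
            if (0 <= pc && pc < C && prev[pc] < INF)
                cur[c] = min(cur[c], prev[pc] + grid[row][c]);
        }
    return cur;
}

// Compute(dp[l], dp[r], ans[r..]): fills path_col[l..r-1], assuming path_col[r]
// is already known. Only O(log R) layers are alive on the recursion stack.
void compute(int l, int r, const vector<long long>& dp_l, const vector<long long>& dp_r) {
    if (r - l == 0)
        return;
    if (r - l == 1) {
        // Trivial case: which column of row l leads into path_col[r] optimally?
        for (int d = -1; d <= 1; d++) {
            int pc = path_col[r] + d;
            if (0 <= pc && pc < C && dp_l[pc] + grid[r][path_col[r]] == dp_r[path_col[r]]) {
                path_col[l] = pc;
                break;
            }
        }
        return;
    }
    int mid = (l + r) / 2;
    vector<long long> dp_mid = dp_l;
    for (int row = l + 1; row <= mid; row++)   // (mid - l) advances of f(n) each
        dp_mid = advance_layer(dp_mid, row);
    compute(mid, r, dp_mid, dp_r);             // fills path_col[mid..r-1]
    compute(l, mid, dp_l, dp_mid);             // fills path_col[l..mid-1]
}

int main() {
    R = 5, C = 4;
    grid = {{3, 1, 4, 1}, {5, 9, 2, 6}, {5, 3, 5, 8}, {9, 7, 9, 3}, {2, 3, 8, 4}};
    path_col.assign(R, -1);

    vector<long long> dp_first(grid[0].begin(), grid[0].end());   // layer for row 0
    vector<long long> dp_last = dp_first;
    for (int row = 1; row < R; row++)                             // layer for row R-1
        dp_last = advance_layer(dp_last, row);

    path_col[R - 1] = min_element(dp_last.begin(), dp_last.end()) - dp_last.begin();
    compute(0, R - 1, dp_first, dp_last);

    for (int row = 0; row < R; row++)
        cout << "row " << row << ": column " << path_col[row] << '\n';   // columns 1 2 1 1 0
}
```

The recursion rebuilds $$$dp[mid]$$$ from $$$dp[l]$$$, recurses on the right half (which fixes the path at row $$$mid$$$), and then on the left half, exactly as in the steps above.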

In this case, for each added character the matching length either increases by 1 or stays unchanged, so this algorithm should be applicable (slide the DP along one of the matching-length dimensions).

»
5 years ago, # |

Probably Hirschberg's algorithm. The idea is originally in 2D: view the problem as a weighted 2D grid (with very specific moves), and find the best path from the top-left corner to the bottom-right. Split the grid in half horizontally, compute the best result from the start to every cell of the middle row, and from every cell of the middle row to the end, without maintaining the actual path. Find the best breakpoint in the middle row and solve recursively; it ends up at $$$\mathcal{O}(nm)$$$ time and linear memory, since you only need to remember a constant number of rows per DP.
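For reference, here is what the 2D version looks like on the classic LCS problem (a standard Hirschberg implementation, not specific to this problem; it copies substrings for clarity, so an index-based version would be needed for strictly linear memory):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Last row of the LCS-length DP between a and b, using two rolling rows (linear memory).
vector<int> lcs_lengths(const string& a, const string& b) {
    vector<int> prev(b.size() + 1, 0), cur(b.size() + 1, 0);
    for (char ca : a) {
        for (size_t j = 0; j < b.size(); j++)
            cur[j + 1] = (ca == b[j]) ? prev[j] + 1 : max(prev[j + 1], cur[j]);
        swap(prev, cur);
    }
    return prev;
}

// Hirschberg: split a in half, find the best split point of b via forward and
// backward DP rows, then recurse on the two halves.
string hirschberg(const string& a, const string& b) {
    if (a.empty())
        return "";
    if (a.size() == 1)
        return b.find(a[0]) != string::npos ? a : string("");
    size_t mid = a.size() / 2;
    string a_left = a.substr(0, mid), a_right = a.substr(mid);
    string a_right_rev(a_right.rbegin(), a_right.rend());
    string b_rev(b.rbegin(), b.rend());
    vector<int> fwd = lcs_lengths(a_left, b);            // LCS of a_left vs prefixes of b
    vector<int> bwd = lcs_lengths(a_right_rev, b_rev);   // LCS of a_right vs suffixes of b
    size_t best_k = 0;
    int best = -1;
    for (size_t k = 0; k <= b.size(); k++)
        if (fwd[k] + bwd[b.size() - k] > best) {
            best = fwd[k] + bwd[b.size() - k];
            best_k = k;
        }
    return hirschberg(a_left, b.substr(0, best_k)) + hirschberg(a_right, b.substr(best_k));
}

int main() {
    cout << hirschberg("AGGTAB", "GXTXAYB") << '\n';   // prints GTAB
}
```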

Here it is 3D: we can view the problem as a cuboid, where we also move up and down depending on the bracket balance. However, we should be able to split the cuboid in the same way (not splitting along the balance dimension) and still obtain $$$\mathcal{O}(n^3)$$$ time and $$$\mathcal{O}(n^2)$$$ memory.