I've always found pure dynamic programming problems to be easier than pure greedy problems (at the same rating) and I don't really understand why. I wonder if other people feel the same.
When I have a solution for a DP problem, it just... makes sense. I don't really understand how the states and the recurrences come up in my mind, but it feels really intuitive. When I try to explain my solution to a student, for example, I just get stumped. I can't find an easy, step-by-step approach for explaining it (I'm a tutor and it really annoys me).
It was just a "Eureka!" moment for me (I solved a couple of problems by myself a long time ago, and from then on, it became easy), and I hadn't overthought it until now.
For context, I've never watched tutorials or been taught by someone; I just knew the solution to the maximum non-increasing subsequence problem and that's it.
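For reference, that problem admits the classic $$$O(n^2)$$$ DP. A minimal Python sketch (the function and variable names are my own):

```python
def longest_non_increasing_subsequence(a):
    """Length of the longest non-increasing subsequence of a."""
    n = len(a)
    # dp[i] = length of the longest non-increasing subsequence ending at index i
    dp = [1] * n  # base case: each element alone is a valid subsequence
    for i in range(n):
        for j in range(i):
            if a[j] >= a[i]:  # a[i] may extend a subsequence ending at j
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp) if n else 0

print(longest_non_increasing_subsequence([5, 3, 4, 4, 2, 1]))  # -> 5 (5, 4, 4, 2, 1)
```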
When explaining, I always try to answer these questions myself:
- Why are there $$$n$$$ states? Why do we need all of them? Why aren't $$$n - 1$$$ states sufficient? How did I come up with $$$n$$$ states in the first place?
- How did I come up with the recurrence?
- Why is it correct?
- Do we cover all cases with this dynamic programming approach?
- How did I find the base cases? Why did I choose those?
- Why are the standard base cases: $$$0$$$ / $$$1$$$ for counting problems, $$$-\infty$$$ / $$$v$$$ (some specific value $$$v$$$) for maximum problems, $$$+\infty$$$ / $$$v$$$ (some specific value $$$v$$$) for minimum problems?
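To make those base-case conventions concrete, here is a sketch using coin change as the example (my own choice of problem, not from any particular source): counting ways uses $$$dp[0] = 1$$$ (the empty selection), and minimization uses $$$+\infty$$$ for states that haven't been reached yet.

```python
import math

def count_ways(coins, target):
    # Counting DP: dp[s] = number of ways to form sum s (coin order ignored).
    dp = [0] * (target + 1)
    dp[0] = 1  # base case: exactly one way to form sum 0 -- take nothing
    for c in coins:
        for s in range(c, target + 1):
            dp[s] += dp[s - c]
    return dp[target]

def min_coins(coins, target):
    # Minimization DP: dp[s] = fewest coins needed to form sum s.
    dp = [math.inf] * (target + 1)  # +inf = "not reachable (yet)"
    dp[0] = 0  # base case: zero coins form sum 0
    for s in range(1, target + 1):
        for c in coins:
            if c <= s:
                dp[s] = min(dp[s], dp[s - c] + 1)
    return dp[target]

print(count_ways([1, 2, 5], 5))   # -> 4  (1+1+1+1+1, 1+1+1+2, 1+2+2, 5)
print(min_coins([1, 2, 5], 11))   # -> 3  (5+5+1)
```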
I can find explanations that are decent for me personally, but expressing them to a person who doesn't have this acquired intuition feels impossible.
So, my questions for you are:
- Is there an algorithmic way of clearly teaching / explaining dynamic programming solutions?
- Do you feel that greedy problems are generally harder than dynamic programming problems?
Thanks for reading my blog!! :)