Dynamic Programming: The Complete Introduction and Problem Type Guide
Master Dynamic Programming from fundamentals to advanced patterns. Learn the 5-step framework, explore the major DP problem types with classic LeetCode examples, and build intuition with worked examples.
What is Dynamic Programming?
Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them down into simpler, overlapping subproblems. Instead of solving the same subproblem multiple times, DP stores the results and reuses them.
The Software Engineering Analogy
If you’re familiar with software architecture, think of DP as intelligent caching:
| Software Concept | DP Equivalent |
|---|---|
| Cache/Memoization | DP table storing computed results |
| Cache hit | Reusing a stored subproblem solution |
| Cache miss | Computing a new subproblem |
| Cache key | State (e.g., dp[i] or dp[i][j]) |
| Cache invalidation | Not needed - subproblems don’t change |
The key insight: DP trades space for time by storing intermediate results.
Two Essential Properties
For a problem to be solvable with DP, it must have these two properties:
1. Optimal Substructure
The optimal solution to the problem can be constructed from optimal solutions of its subproblems.
Example: Shortest Path
Shortest path A → C = Shortest path A → B + Shortest path B → C
Example: Fibonacci
F(n) = F(n-1) + F(n-2)
2. Overlapping Subproblems
The same subproblems are solved multiple times during recursion.
Fibonacci recursive call tree for F(5):
```
              F(5)
            /      \
        F(4)        F(3)        ← F(3) computed twice!
       /    \      /    \
    F(3)   F(2)  F(2)   F(1)    ← F(2) computed 3 times!
    /  \
  F(2)  F(1)
```
Without DP, computing F(5) makes 15 recursive calls. With DP, we only compute 6 unique values once.
Top-Down vs Bottom-Up
There are two approaches to implementing DP:
Top-Down (Memoization)
Start from the main problem and recursively solve subproblems, caching results.
Pros:
- More intuitive - follows natural recursive thinking
- Only computes necessary subproblems
- Easier to implement for complex state transitions
Cons:
- Recursion overhead (stack space)
- Potential stack overflow for large inputs
Bottom-Up (Tabulation)
Start from the smallest subproblems and build up to the main problem.
Pros:
- No recursion overhead
- Often easier to optimize space
- Generally faster due to better cache locality
Cons:
- Requires understanding the correct iteration order
- May compute unnecessary subproblems
```
Top-Down:  F(5) → F(4) → F(3) → F(2) → F(1) → F(0)   [recursive]
Bottom-Up: F(0) → F(1) → F(2) → F(3) → F(4) → F(5)   [iterative]
```
The 5-Step DP Framework
Use this framework to solve any DP problem systematically:
Step 1: Define the State
What does dp[i] (or dp[i][j]) represent?
The state should capture all information needed to make a decision. Common patterns:
- `dp[i]` = answer for first `i` elements
- `dp[i]` = answer ending at index `i`
- `dp[i][j]` = answer for subarray `[i, j]`
- `dp[i][j]` = answer using first `i` items with capacity `j`
Step 2: Find the State Transition
How do we compute dp[i] from smaller subproblems?
This is the recurrence relation - the heart of DP. Ask yourself:
- What choices do I have at state `i`?
- How does each choice relate to previous states?
Step 3: Initialize Base Cases
What are the smallest subproblems we can solve directly?
Common base cases:
- Empty input: `dp[0] = 0` or `dp[0] = 1`
- Single element: `dp[1] = input[0]`
- Edge values: `dp[i][0] = ...`, `dp[0][j] = ...`
Step 4: Determine Iteration Order
In what order should we fill the DP table?
The rule: A state must be computed only after all states it depends on are ready.
- If `dp[i]` depends on `dp[i-1]`, iterate `i` from small to large
- If `dp[i]` depends on `dp[i+1]`, iterate `i` from large to small
- For 2D DP, consider row-by-row or diagonal order
Step 5: Extract the Result
Where is the final answer in the DP table?
Common patterns:
- Last element: `dp[n]` or `dp[n-1]`
- Maximum/minimum: `max(dp)` or `min(dp)`
- Specific target: `dp[target]`
Code Evolution: From Recursion to Optimized DP
Let’s see the progression from naive recursion to optimized DP:
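A minimal sketch of the full progression, using Fibonacci (function names are illustrative):

```python
from functools import lru_cache

# 1. Naive recursion: O(2^n) time — the same subproblems are recomputed
def fib_naive(n):
    return n if n <= 1 else fib_naive(n - 1) + fib_naive(n - 2)

# 2. Memoization (top-down): O(n) time, O(n) space for the cache + stack
@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n <= 1 else fib_memo(n - 1) + fib_memo(n - 2)

# 3. Tabulation (bottom-up): O(n) time, O(n) space, no recursion
def fib_tab(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

# 4. Space-optimized: only the last two states are ever needed — O(1) space
def fib_opt(n):
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev
```

Each version computes the same answer; only the time/space profile changes.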
Complexity Comparison
| Approach | Time | Space | Notes |
|---|---|---|---|
| Pure Recursion | O(2ⁿ) | O(n) | Exponential - TLE for n > 40 |
| Memoization | O(n) | O(n) | Each subproblem solved once |
| Tabulation | O(n) | O(n) | No recursion overhead |
| Space-Optimized | O(n) | O(1) | Only keep needed states |
Universal DP Template
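As a sketch, here are the five steps instantiated on a concrete problem — Maximum Subarray (LC 53, Kadane's Algorithm); the function name `solve` and the choice of example are illustrative:

```python
def solve(nums):
    n = len(nums)
    if n == 0:
        return 0                      # handle empty input

    # Step 1: define state — dp[i] = best subarray sum ending at index i
    dp = [0] * n

    # Step 3: base case — a subarray ending at 0 is just nums[0]
    dp[0] = nums[0]

    # Step 4: iteration order — dp[i] depends on dp[i-1], so go left to right
    for i in range(1, n):
        # Step 2: transition — extend the previous subarray or start fresh
        dp[i] = max(nums[i], dp[i - 1] + nums[i])

    # Step 5: extract result — the answer can end at any index
    return max(dp)
```

The same skeleton (state array, base case, ordered loop, transition, extraction) carries over to most 1D DP problems; only the transition changes.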
DP Problem Type Catalog
Here’s a comprehensive catalog of DP problem types you’ll encounter in interviews:
Type 1: Linear DP (1D)
Definition: State depends on previous elements in a single sequence.
State Pattern: dp[i] = optimal value considering elements 0 to i
Classic Problems:
Climbing Stairs (LC 70)
- State: `dp[i]` = number of ways to reach step `i`
- Transition: `dp[i] = dp[i-1] + dp[i-2]`
- This is essentially Fibonacci!
House Robber (LC 198)
- State: `dp[i]` = max money robbing houses `0` to `i`
- Transition: `dp[i] = max(dp[i-1], dp[i-2] + nums[i])`
- Classic "take or skip" pattern
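The take-or-skip recurrence fits in a few lines with rolling variables (function name `rob` is illustrative):

```python
def rob(nums):
    # prev2 = dp[i-2], prev1 = dp[i-1]; at each house, take it or skip it
    prev2, prev1 = 0, 0
    for x in nums:
        prev2, prev1 = prev1, max(prev1, prev2 + x)
    return prev1
```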
More Linear DP Problems:
- LC 53: Maximum Subarray (Kadane’s Algorithm)
- LC 91: Decode Ways
- LC 139: Word Break
Type 2: Grid DP (2D)
Definition: State defined on a 2D grid, typically for path problems.
State Pattern: dp[i][j] = optimal value to reach cell (i, j)
Classic Problems:
Unique Paths (LC 62)
State: dp[i][j] = number of paths to reach (i, j)
Transition: dp[i][j] = dp[i-1][j] + dp[i][j-1]
Base: dp[0][j] = dp[i][0] = 1
Minimum Path Sum (LC 64)
State: dp[i][j] = minimum sum to reach (i, j)
Transition: dp[i][j] = grid[i][j] + min(dp[i-1][j], dp[i][j-1])
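A minimal sketch of the Minimum Path Sum table fill (function name is illustrative):

```python
def min_path_sum(grid):
    m, n = len(grid), len(grid[0])
    dp = [[0] * n for _ in range(m)]
    dp[0][0] = grid[0][0]
    # Base cases: the first row and column each have only one way in
    for j in range(1, n):
        dp[0][j] = dp[0][j - 1] + grid[0][j]
    for i in range(1, m):
        dp[i][0] = dp[i - 1][0] + grid[i][0]
    # Transition: enter (i, j) from above or from the left, whichever is cheaper
    for i in range(1, m):
        for j in range(1, n):
            dp[i][j] = grid[i][j] + min(dp[i - 1][j], dp[i][j - 1])
    return dp[m - 1][n - 1]
```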
More Grid DP Problems:
- LC 63: Unique Paths II (with obstacles)
- LC 221: Maximal Square
- LC 174: Dungeon Game
Type 3: String DP
Definition: DP on one or two strings, often for matching/editing problems.
State Pattern: dp[i][j] = answer for s1[0..i] and s2[0..j]
Classic Problems:
Longest Common Subsequence (LC 1143)
State: dp[i][j] = LCS length of s1[0..i-1] and s2[0..j-1]
Transition:
if s1[i-1] == s2[j-1]: dp[i][j] = dp[i-1][j-1] + 1
else: dp[i][j] = max(dp[i-1][j], dp[i][j-1])
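The LCS recurrence above translates directly into code (function name `lcs` is illustrative); row 0 and column 0 represent empty prefixes:

```python
def lcs(s1, s2):
    m, n = len(s1), len(s2)
    # dp[i][j] = LCS length of s1[:i] and s2[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend the common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```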
Edit Distance (LC 72)
State: dp[i][j] = min operations to convert s1[0..i-1] to s2[0..j-1]
Transition:
if s1[i-1] == s2[j-1]: dp[i][j] = dp[i-1][j-1]
else: dp[i][j] = 1 + min(dp[i-1][j-1], dp[i-1][j], dp[i][j-1])
(replace) (delete) (insert)
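Sketched in code (function name `edit_distance` is illustrative):

```python
def edit_distance(s1, s2):
    m, n = len(s1), len(s2)
    # dp[i][j] = min operations to convert s1[:i] into s2[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i              # delete all i characters
    for j in range(n + 1):
        dp[0][j] = j              # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j - 1],  # replace
                                   dp[i - 1][j],      # delete
                                   dp[i][j - 1])      # insert
    return dp[m][n]
```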
More String DP Problems:
- LC 5: Longest Palindromic Substring
- LC 115: Distinct Subsequences
- LC 10: Regular Expression Matching
Type 4: Knapsack DP
Definition: Select items with constraints (weight, count) to optimize value.
State Pattern: dp[i][w] = optimal value using first i items with capacity w
Classic Problems:
Coin Change (LC 322)
State: dp[amount] = minimum coins to make amount
Transition: dp[i] = min(dp[i], dp[i - coin] + 1) for each coin
Base: dp[0] = 0
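As code, this is an unbounded-knapsack loop over amounts (function name `coin_change` is illustrative):

```python
def coin_change(coins, amount):
    INF = float("inf")
    dp = [0] + [INF] * amount           # dp[a] = min coins to make amount a
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount] if dp[amount] != INF else -1   # -1 if unreachable
```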
Partition Equal Subset Sum (LC 416)
State: dp[i][sum] = can we achieve sum using first i elements?
Transition: dp[i][s] = dp[i-1][s] || dp[i-1][s - nums[i]]
(0/1 Knapsack variant)
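In the common space-optimized form, the 2D table collapses to one boolean row; iterating sums downward ensures each element is used at most once (function name is illustrative):

```python
def can_partition(nums):
    total = sum(nums)
    if total % 2:
        return False
    target = total // 2
    # dp[s] = can some subset of the items seen so far sum to s?
    dp = [False] * (target + 1)
    dp[0] = True
    for x in nums:
        for s in range(target, x - 1, -1):   # downward: 0/1, not unbounded
            dp[s] = dp[s] or dp[s - x]
    return dp[target]
```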
More Knapsack DP Problems:
- LC 518: Coin Change II (count combinations)
- LC 494: Target Sum
- LC 474: Ones and Zeroes
Type 5: Interval DP
Definition: State defined by intervals [i, j], solve by combining smaller intervals.
State Pattern: dp[i][j] = optimal value for subarray/substring [i, j]
Classic Problems:
Longest Palindromic Subsequence (LC 516)
State: dp[i][j] = LPS length in s[i..j]
Transition:
if s[i] == s[j]: dp[i][j] = dp[i+1][j-1] + 2
else: dp[i][j] = max(dp[i+1][j], dp[i][j-1])
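The interval recurrence requires filling shorter intervals first; one common ordering is to iterate `i` from right to left (function name is illustrative):

```python
def longest_palindromic_subsequence(s):
    n = len(s)
    # dp[i][j] = LPS length within s[i..j]
    dp = [[0] * n for _ in range(n)]
    for i in range(n - 1, -1, -1):        # dp[i][j] needs dp[i+1][..]
        dp[i][i] = 1                      # single characters are palindromes
        for j in range(i + 1, n):
            if s[i] == s[j]:
                dp[i][j] = dp[i + 1][j - 1] + 2
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j - 1])
    return dp[0][n - 1]
```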
Burst Balloons (LC 312)
State: dp[i][j] = max coins bursting balloons in range (i, j)
Transition: dp[i][j] = max(dp[i][k] + dp[k][j] + nums[i]*nums[k]*nums[j])
for all k in (i, j)
More Interval DP Problems:
- LC 1039: Minimum Score Triangulation of Polygon
- LC 664: Strange Printer
Type 6: Subsequence DP
Definition: Finding optimal subsequences (not necessarily contiguous).
Classic Problems:
Longest Increasing Subsequence (LC 300)
State: dp[i] = length of LIS ending at index i
Transition: dp[i] = max(dp[j] + 1) for all j < i where nums[j] < nums[i]
Time: O(n²), can optimize to O(n log n) with binary search
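The O(n log n) version maintains a "smallest tail per length" array with binary search (a sketch; function name is illustrative):

```python
import bisect

def length_of_lis(nums):
    # tails[k] = smallest possible tail value of an increasing
    # subsequence of length k + 1
    tails = []
    for x in nums:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence so far
        else:
            tails[i] = x      # x provides a smaller tail for length i + 1
    return len(tails)
```

Note that `tails` is not itself an LIS; only its length matches the answer.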
More Subsequence DP Problems:
- LC 673: Number of Longest Increasing Subsequence
- LC 354: Russian Doll Envelopes
Type 7: State Compression DP (Bitmask DP)
Definition: Use bitmask to represent set states, typically for subset/permutation problems.
State Pattern: dp[mask] where mask is a bitmask representing selected elements
Classic Problems:
Partition to K Equal Sum Subsets (LC 698)
State: dp[mask] = can we partition elements in mask into groups?
Use bitmask to track which elements are used
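One common bitmask formulation, sketched below (function name is illustrative): `dp[mask]` stores the fill level of the current in-progress subset, since the element sum of `mask` is fixed, this value is well defined.

```python
def can_partition_k_subsets(nums, k):
    total = sum(nums)
    if total % k:
        return False
    target = total // k
    n = len(nums)
    # dp[mask] = fill of the current subset (mod target), -1 if unreachable
    dp = [-1] * (1 << n)
    dp[0] = 0
    for mask in range(1 << n):
        if dp[mask] < 0:
            continue
        for i in range(n):
            if mask & (1 << i):
                continue                      # element i already used
            if dp[mask] + nums[i] <= target:  # must not overflow the subset
                dp[mask | (1 << i)] = (dp[mask] + nums[i]) % target
    return dp[(1 << n) - 1] == 0
```

The O(2ⁿ · n) loop is the signature of bitmask DP: one pass over all subsets, one inner pass over elements.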
More Bitmask DP Problems:
- LC 1125: Smallest Sufficient Team
- LC 847: Shortest Path Visiting All Nodes
Type 8: Tree DP
Definition: DP on tree structures, typically using post-order traversal.
State Pattern: dp[node] = optimal value for subtree rooted at node
Classic Problems:
House Robber III (LC 337)
State: For each node, track [max if robbed, max if not robbed]
Transition:
rob_current = node.val + not_rob_left + not_rob_right
not_rob_current = max(left) + max(right)
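A sketch with a post-order DFS returning the `(robbed, skipped)` pair per node (the `TreeNode` class and function names are illustrative):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def rob_tree(root):
    # post-order: children are solved before their parent
    def dfs(node):
        if not node:
            return 0, 0                      # (robbed, skipped)
        rob_l, skip_l = dfs(node.left)
        rob_r, skip_r = dfs(node.right)
        robbed = node.val + skip_l + skip_r  # rob node -> must skip children
        skipped = max(rob_l, skip_l) + max(rob_r, skip_r)
        return robbed, skipped
    return max(dfs(root))
```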
More Tree DP Problems:
- LC 124: Binary Tree Maximum Path Sum
- LC 543: Diameter of Binary Tree
Type 9: Game Theory DP
Definition: Two-player games where both play optimally.
State Pattern: dp[i] = optimal score difference for current player starting from state i
Classic Problems:
Stone Game (LC 877)
State: dp[i][j] = max score difference for current player in piles[i..j]
Transition: dp[i][j] = max(piles[i] - dp[i+1][j], piles[j] - dp[i][j-1])
Predict the Winner (LC 486)
Similar to Stone Game but return whether Player 1 wins.
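A sketch of the score-difference recurrence, which answers both problems (function name is illustrative):

```python
def stone_game(piles):
    n = len(piles)
    # dp[i][j] = best score difference the current player can force on piles[i..j]
    dp = [[0] * n for _ in range(n)]
    for i in range(n):
        dp[i][i] = piles[i]                 # one pile left: take it
    for length in range(2, n + 1):          # intervals from short to long
        for i in range(n - length + 1):
            j = i + length - 1
            # take the left or right pile; opponent then plays optimally
            dp[i][j] = max(piles[i] - dp[i + 1][j],
                           piles[j] - dp[i][j - 1])
    return dp[0][n - 1] > 0                 # True if the first player wins
```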
Time & Space Complexity Patterns
| DP Type | Typical Time | Typical Space | Space Optimization |
|---|---|---|---|
| Linear 1D | O(n) | O(n) | O(1) with rolling variables |
| Grid 2D | O(mn) | O(mn) | O(n) row by row |
| String (2 strings) | O(mn) | O(mn) | O(min(m, n)) with two rows |
| Knapsack | O(nW) | O(nW) | O(W) with a 1D array |
| Interval | O(n²) to O(n³) | O(n²) | Usually cannot optimize |
| Bitmask | O(2ⁿ · n) | O(2ⁿ) | Usually cannot optimize |
When NOT to Use DP
DP isn’t always the answer. Consider alternatives when:
1. No Overlapping Subproblems
If subproblems don’t overlap, recursion or divide-and-conquer is sufficient.
Example: Binary Search, Merge Sort
Each subproblem is solved exactly once - no benefit from caching.
2. Greedy Works
If a greedy choice at each step leads to the optimal solution, DP is overkill.
Example: Activity Selection, Huffman Coding
Greedy: O(n log n), DP: O(n²) - use greedy!
3. State Space Too Large
If the DP table would be too large, consider alternatives.
Example: If state is dp[i][j][k] with n³ space and n > 1000
Consider: Approximation algorithms, heuristics, or different modeling
4. The Problem is NP-Hard
Some problems don’t have polynomial DP solutions.
Example: Traveling Salesman (except with small n using bitmask DP)
DP Problem Identification Checklist
When you see a problem, ask these questions:
- “Find the minimum/maximum…” → Likely DP
- “Count the number of ways…” → Likely DP
- “Is it possible to…” → Likely DP (boolean DP)
- “Given constraints on selections…” → Likely Knapsack DP
- “Optimal play in a game…” → Likely Game Theory DP
Signal words:
- “Maximum”, “Minimum”, “Optimal”
- “Number of ways”, “Count”
- “Longest”, “Shortest”
- “Subsequence”, “Substring”
- “Can you achieve”, “Is it possible”
Summary: Key Takeaways
- DP = Recursion + Memoization (or smart iteration)
- Two essential properties: Optimal Substructure + Overlapping Subproblems
- 5-Step Framework: define the state, find the transition, initialize base cases, determine iteration order, extract the result
- Master these core types:
  - Linear DP → House Robber
  - Grid DP → Unique Paths
  - String DP → Edit Distance
  - Knapsack DP → Coin Change
  - Interval DP → Burst Balloons
- Space optimization: often possible by keeping only the necessary previous states
- Practice identifying DP problems by looking for optimization keywords and overlapping subproblems
What’s Next?
This introduction covers the foundations. In upcoming articles, we’ll dive deep into each DP type:
- Linear DP Mastery - Advanced patterns and optimizations
- Grid DP Problems - From basics to complex variations
- String DP Deep Dive - Edit distance, regex matching, and more
- Knapsack Family - 0/1, unbounded, and multi-dimensional variants
- Interval DP Techniques - Matrix chain multiplication and beyond
- Bitmask DP - When and how to use state compression
- Tree DP - Patterns for tree-structured problems
- Game Theory DP - Minimax and optimal play strategies
Stay tuned!