Dynamic programming is an algorithmic paradigm that solves a given complex problem by breaking it into several sub-problems and storing the results of those sub-problems to avoid the computation of the same sub-problem over and over again.

It’s just a fancy name for storing subproblem results as you break the problem down. The next time you encounter the same subproblem, you reuse the stored value instead of recalculating it.

If you use recursion to break down the problem, it’s called the **Top-Down Approach (Memoization)**, and if you use an iterative approach, it’s called the **Bottom-Up Approach (Tabulation)**.

In recursion, you start with the whole problem and break it into smaller pieces. In the iterative approach, you start with the smallest subproblems (the base cases), keep storing values, and work your way up to the original problem. The iterative way is often a bit more time-efficient because it avoids recursive call overhead, though it may fill in the entire table even when only some subproblems are actually needed.

Let’s discuss both these approaches in detail:

**Top-Down Approach (Memoization Method):**

• The Top-Down Approach takes a **COMPLEX SYSTEM** and breaks it down into **“SIMPLE INDIVIDUAL COMPONENTS”** to better understand the inner layers.

• For example, let’s consider the Fibonacci series fib(n) with n = 5:

fib(5) = fib(4) + fib(3)

fib(4) = fib(3) + fib(2)

fib(3) = fib(2) + fib(1)

fib(2) = fib(1) + fib(0)

Now we know the values of fib(1) and fib(0), so substituting them into the equations above:

fib(2) = 1 + 0 = 1

fib(3) = 1 + 1 = 2

fib(4) = 2 + 1 = 3

fib(5) = 3 + 2 = 5

In this approach, we started with the whole problem. Since we did not know the answers to the intermediate states, we **recursively** broke the problem into smaller subproblems until we reached states for which we knew the answer. Then we combined the subproblem results to get the answer to the whole problem.
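The top-down procedure above can be sketched in Java with a memo table. This is a minimal sketch; the `memo` array and the `fibHelper` method are illustrative names, not from the original:

```java
import java.util.Arrays;

public class FibMemo {
    // memo[i] caches fib(i); -1 marks "not computed yet"
    private static long[] memo;

    public static long fib(int n) {
        memo = new long[n + 1];
        Arrays.fill(memo, -1);
        return fibHelper(n);
    }

    private static long fibHelper(int n) {
        if (n == 0 || n == 1) {
            return n;          // base cases we know directly
        }
        if (memo[n] != -1) {
            return memo[n];    // reuse a stored subproblem result
        }
        memo[n] = fibHelper(n - 1) + fibHelper(n - 2);
        return memo[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(5)); // prints 5, matching the hand calculation
    }
}
```

Each subproblem is now computed at most once, so the recursion runs in O(n) time instead of exponential time.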

**Bottom-Up Approach (Tabulation Method):**

A Bottom-Up Approach builds up a **“COMPLEX SYSTEM”** by piecing together **“SIMPLE INDIVIDUAL COMPONENTS.”**

For example, let’s consider the Fibonacci series fib(n) with n = 5:

We know the values of fib(0) and fib(1), so we start with them:

fib(0) = 0

fib(1) = 1

fib(2) = 1 + 0 = 1

fib(3) = 1 + 1 = 2

fib(4) = 2 + 1 = 3

fib(5) = 3 + 2 = 5

In this approach, we started with the state for which we knew the answer and then **iteratively **proceeded further to find the answer to the whole problem.
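The bottom-up procedure can be sketched in Java with a table filled from the base cases upward (a minimal sketch; the `table` array name is my own):

```java
public class FibTab {
    public static long fib(int n) {
        if (n == 0 || n == 1) {
            return n;
        }
        long[] table = new long[n + 1];
        table[0] = 0;  // the base cases we start from
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            // each entry is built from the two already-computed entries
            table[i] = table[i - 1] + table[i - 2];
        }
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(5)); // prints 5
    }
}
```

There is no recursion here at all: the loop visits each state exactly once, in the same order as the hand calculation above.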

**Characteristics of Dynamic Programming**

*Dynamic programming can only be applied to a problem if the problem has the following properties:*

**Optimal Substructure:** A problem S of size n has optimal substructure if an optimal solution to S can be computed by looking at **just** the optimal solutions of subproblems of size < n, and **not all** possible answers to those subproblems, and combining those optimal sub-solutions yields an optimal solution for S.

**Example (Shortest Path Problem):** Consider an undirected graph with vertices a, b, c, d, e and edges (a,b), (a,e), (b,c), (c,d), (d,a), and (e,b). The shortest path between a and c is a — b — c. This problem can be broken down into finding the shortest path between a and b and then the shortest path between b and c, and combining the two gives a correct solution. Note that we have two ways of reaching b from a:

• a — b (Shortest path)

• a — e — b

The Longest Path Problem does not have optimal substructure. The longest path between a and d is a — e — b — c — d, but joining the longest paths between a and c (a — e — b — c) and between c and d (c — b — e — a — d) does not give a valid longest path between a and d, because vertices would repeat.
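To make the shortest-path example concrete, here is a small sketch that computes shortest distances in the graph above by breadth-first search (the adjacency-list encoding and the mapping of a–e to indices 0–4 are my own assumptions):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class ShortestPath {
    // Vertices a, b, c, d, e mapped to indices 0..4 (an assumed encoding).
    // Edges: (a,b), (a,e), (b,c), (c,d), (d,a), (e,b)
    static final int[][] ADJ = {
        {1, 4, 3}, // a: b, e, d
        {0, 2, 4}, // b: a, c, e
        {1, 3},    // c: b, d
        {2, 0},    // d: c, a
        {0, 1}     // e: a, b
    };

    // BFS gives the number of edges on a shortest path in an
    // unweighted, undirected graph.
    public static int shortestDistance(int src, int dst) {
        int[] dist = new int[ADJ.length];
        Arrays.fill(dist, -1);   // -1 means "not reached yet"
        dist[src] = 0;
        Queue<Integer> queue = new ArrayDeque<>();
        queue.add(src);
        while (!queue.isEmpty()) {
            int u = queue.remove();
            for (int v : ADJ[u]) {
                if (dist[v] == -1) {
                    dist[v] = dist[u] + 1;
                    queue.add(v);
                }
            }
        }
        return dist[dst];
    }

    public static void main(String[] args) {
        // a to c takes 2 edges (e.g. a - b - c), composed of the optimal
        // subpaths a - b (1 edge) and b - c (1 edge).
        System.out.println(shortestDistance(0, 2)); // prints 2
    }
}
```

The distance from a to c is the distance from a to b plus the distance from b to c, which is exactly the optimal-substructure property at work.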

**Overlapping Subproblems:** A problem has **overlapping subproblems** if finding its solution involves solving the *same* subproblem multiple times.

For example, let’s look at the Fibonacci sequence (the series where each number is the sum of the two previous ones: 0, 1, 1, 2, 3, 5, 8, and so on).

If we wanted to compute the *n*th Fibonacci number, we could use this simple recursive algorithm:

```java
public static int fib(int n) {
    if (n == 0 || n == 1) {
        return n;
    }
    return fib(n - 1) + fib(n - 2);
}
```

We’d call fib(n-1) and fib(n-2) **subproblems **of fib(n).

Now let’s look at what happens when we call fib(5):

Our method ends up recursively calling fib(2) **three times**. So the problem of finding the *n*th Fibonacci number has overlapping subproblems.

```
                    fib(5)
                   /      \
              fib(4)      fib(3)
             /     \      /    \
        fib(3)  fib(2)  fib(2) fib(1)
        /    \   /   \   /   \
  fib(2) fib(1) fib(1) fib(0) fib(1) fib(0)
   /   \
fib(1) fib(0)
```
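You can verify the three fib(2) calls in the tree by instrumenting the naive recursion with a call counter (the `calls` array is my own addition for illustration):

```java
public class FibCallCount {
    public static int fib(int n, int[] calls) {
        calls[n]++;  // record every call made for this value of n
        if (n == 0 || n == 1) {
            return n;
        }
        return fib(n - 1, calls) + fib(n - 2, calls);
    }

    public static void main(String[] args) {
        int[] calls = new int[6];
        fib(5, calls);
        // fib(2) is computed three times during fib(5)
        System.out.println(calls[2]); // prints 3
    }
}
```

Memoization removes exactly this duplication: with a cache, each of those repeated nodes in the tree would be computed only once.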

**How to Solve Dynamic Programming Problems?**

There are specific steps that one must follow while solving a dynamic programming problem:

1. Recognize the DP problem

2. Identify problem variables

3. Clearly express the recurrence relation

4. Identify the base cases

5. Decide if you want to implement it iteratively or recursively

6. Add memoization

7. Determine time complexity

**Conclusion:**

Dynamic programming is often simply a better solution to a problem. We divide the problem into subproblems, solve each subproblem once, and combine the stored results into a solution for the whole problem, which lets us avoid redundant work. Dynamic programming is not often very intuitive or straightforward. Hopefully, this post served as a good starting point.

#### Vidhi Sharma

Vidhi Sharma is an Application Engineer at Hexaview Technologies. She has a strong working knowledge of Data Structures, C, C++, Python, and ASP.NET and its frameworks.