## Introduction

This article, Data Structures and Algorithms – Recursion, is the third in the online course series on Data Structures and Algorithms. It introduces and explains recursion as a methodology for algorithm design and programming. We will write some recursion-based code and analyze the complexity of the algorithms in detail.

## Recursion

Recursion is the technique of defining a method that makes a call to itself; such a call is termed a recursive call. It is a readable and still efficient way of designing algorithms when the problem at hand can be redefined in terms of a smaller version of the same problem; here we exploit the repetitive structure of the solution. A few examples are listed below:

- Calculating the factorial of a number.
- Raising a number to the power of n.
- Listing the directory structure on a file system.
- Calculating the on disk size of a directory structure.

There are many ways to classify recursion based on the structure and number of recursive calls in each invocation of the caller.

## Linear Recursion

Linear recursion is the simplest form of recursion: a method is defined so that it makes at most one recursive call each time it is invoked. This type of recursion is useful when we can view an algorithmic problem in terms of a first or last element plus a remaining set that has the same structure as the original, only smaller.

A few examples follow.

Summing the elements of an array of size N recursively: we observe that the sum of all the elements in an array is A[0] if N is equal to 1; otherwise, it is the sum of the first N-1 elements plus the last element of A. Let us write a recursive algorithm for the same:

**Algorithm:** LinearSum(A, n)

**Input:** An integer array A, an integer n >= 1, such that A has at least n elements.

**Output:** The sum of the first n integers in A

```
if n = 1 then
    return A[0]
else
    return LinearSum(A, n - 1) + A[n - 1]
```
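The pseudocode above translates directly into Java; here is a minimal sketch (the class and method names are my own, not part of the course code):

```java
public class LinearSum {
    // Recursively sums the first n elements of A.
    public static int linearSum(int[] A, int n) {
        if (n == 1) {
            return A[0];            // base case: a one-element prefix sums to A[0]
        }
        // sum of the first n-1 elements plus the last element
        return linearSum(A, n - 1) + A[n - 1];
    }
}
```

For example, `linearSum(new int[]{4, 3, 6, 2, 5}, 5)` returns 20.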

A few points are worth paying attention to while writing a recursive function/algorithm:

- It must always terminate, i.e. we must write a base case, and once it is satisfied the method exits.
- Each subsequent call must be made on a smaller input.

## Analyzing Recursive Algorithms Using Recursion Traces

A recursion trace is the footprint of an algorithm that employs recursion. Let us draw a recursion trace for the recursive algorithm that sums the elements of an array:

**A = {4, 3, 6, 2, 5}**

The trace shows that the amount of work done in each box is constant: the work of making a recursive call (a constant C1), adding a number to the sum returned from the subsequent call (another constant C2), and returning to the calling method (a constant C3).

So the total amount of work done in each box is C1 + C2 + C3, and we have n = 5 boxes, so the total work, or the running time, can be defined as f(n) = (C1 + C2 + C3) n. Hence f(n) = O(n), basically linear time.

Reversing an array recursively: we observe that reversing an array of size n can be achieved by swapping the first and last elements and recursively reversing the remaining array. Let us write the recursive algorithm for the same:

**Algorithm:** ReverseArray(A, i, j)

**Input:** An array A, and non-negative indices i and j

**Output:** The reversal of the elements in A starting at index i and ending at j

```
if i < j then
    Swap A[i] and A[j]
    ReverseArray(A, i + 1, j - 1)
// base case: when i >= j, do nothing
```
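The same algorithm in Java, as a minimal sketch (names are illustrative):

```java
public class ReverseArray {
    // Recursively reverses A between indices i and j, inclusive.
    public static void reverse(int[] A, int i, int j) {
        if (i < j) {
            int tmp = A[i];             // swap A[i] and A[j]
            A[i] = A[j];
            A[j] = tmp;
            reverse(A, i + 1, j - 1);   // recurse on the inner sub-array
        }
        // base case: when i >= j there is nothing left to swap
    }
}
```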

A point worth noticing is that n can be either odd or even: when n is odd we eventually reach the condition i = j, and when n is even we reach the condition i > j. So the call is guaranteed to terminate. The running time is again linear; we can draw the recursion trace and prove the same.

To successfully write a recursive routine, it is really important to think about the problem and redefine it so that it can be expressed through subproblems of the same nature as the original. In the array-reversal algorithm we redefined the problem to take two indices i and j, so that we can easily do the swapping and write a base case.

Computing powers of a number: we observe that raising a number to the power n breaks down recursively very easily. By definition x^{n} = x . x^{n-1}. This takes n steps and is linear, but we can definitely do better with the following definition. Let us define a power function p(x, n) which calculates x^{n}: p(x, 0) = 1; p(x, n) = p(x, n/2)^2 when n is even; and p(x, n) = x . p(x, (n-1)/2)^2 when n is odd. Because the exponent is halved at each call, this takes only O(log n) steps.
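The two approaches can be compared in Java; a minimal sketch (method names are my own):

```java
public class Power {
    // Naive definition: x^n = x * x^(n-1), takes O(n) multiplications.
    public static long slowPower(long x, int n) {
        if (n == 0) return 1;
        return x * slowPower(x, n - 1);
    }

    // Repeated squaring: the exponent is halved at each call, O(log n).
    public static long fastPower(long x, int n) {
        if (n == 0) return 1;
        long half = fastPower(x, n / 2);
        if (n % 2 == 0) {
            return half * half;       // x^n = (x^(n/2))^2 for even n
        }
        return x * half * half;       // x^n = x * (x^((n-1)/2))^2 for odd n
    }
}
```

For example, `fastPower(2, 10)` returns 1024 after about log2(10) recursive calls, whereas `slowPower` would need 10.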

**Note:** You can also check the post on the Master Method for solving recurrences, which is essential for finding the running time of certain types of recurrences.

## Tail Recursion

As we are reading about recursion, I would like to point out a very important property of some recursive algorithms, called tail recursion. Tail recursion by definition means that the last job of the recursive method is to make the recursive call; after that, it must not do anything except return.

Tail recursive algorithms are preferred over other recursive algorithms, and the reason is quite obvious: every recursive call uses the call stack to keep track of all the live calls, so each live call consumes extra memory, which increases the chance of a StackOverflowError. How does tail recursion solve this problem?

A tail recursive algorithm can be converted into a non-recursive one by iterating in place of the recursive calls. Let us convert our ReverseArray algorithm into a non-recursive one, which is possible because it is tail recursive.

This is not as simple as putting the recursive call on the last line of the method; the constraint that nothing happens after the recursive call is sometimes tough to meet. For example, the LinearSum algorithm defined above is not tail recursive, because it performs an addition after the recursive call returns.

**Algorithm:** IterativeReverseArray(A, i, j)

**Input:** An array A, and non-negative indices i and j

**Output:** The reversal of the elements in A starting at index i and ending at j

```
while i < j do
    Swap A[i] and A[j]
    i = i + 1
    j = j - 1
```
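In Java the iterative version is a straightforward loop; a minimal sketch (names are illustrative):

```java
public class IterativeReverseArray {
    // The tail recursive call is replaced by a loop that moves
    // i and j toward each other, swapping as it goes.
    public static void reverse(int[] A, int i, int j) {
        while (i < j) {
            int tmp = A[i];
            A[i] = A[j];
            A[j] = tmp;
            i = i + 1;
            j = j - 1;
        }
    }
}
```

No stack frames accumulate here, so the O(n) space overhead of the recursive version disappears.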

## Binary Recursion

When an algorithm makes two recursive calls, we say that it uses binary recursion. Binary recursion is useful when we can divide the problem at hand into two roughly equal halves. We can sum an array using binary recursion as well: divide the array into two halves, sum each half recursively, and then add the two sums.

Let us define our BinarySum algorithm to represent the same.

**Algorithm:** BinarySum(A, i, n)

**Input:** An array A, and integers i and n

**Output:** The sum of n integers in A starting at index i

```
if n = 1 then
    return A[i]
return BinarySum(A, i, ceil(n/2)) + BinarySum(A, i + ceil(n/2), floor(n/2))
```
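A Java sketch of the same idea, using ceil(n/2) for the left half so that odd-sized ranges are split correctly (names are my own):

```java
public class BinarySum {
    // Sums the n elements of A starting at index i using binary recursion.
    public static int binarySum(int[] A, int i, int n) {
        if (n == 1) {
            return A[i];                 // base case: a single element
        }
        int half = (n + 1) / 2;          // ceil(n/2) for positive n
        // sum the left half and the right half, then combine
        return binarySum(A, i, half) + binarySum(A, i + half, n - half);
    }
}
```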

Let us try and draw the recursion trace for this algorithm to analyze it as below:

The value of the parameter n is halved at each recursive call, so the maximum height of the recursion trace is 1 + log n. This also means that the maximum number of method instances active at the same time is 1 + log n, and hence the maximum space required by this algorithm is O(log n), a significant improvement over the O(n) space used by LinearSum. However, the running time remains O(n), which is linear, because we still have to visit every box, and there are exactly 2n - 1 of them.

## Exercise for Reader

By definition the i^{th} term of a Fibonacci series can be defined as below:

F_{i} = F_{i-1} + F_{i-2} for i > 1, with F_{0} = 0 and F_{1} = 1

Write the algorithm and find the running time to find the i^{th} Fibonacci number.

## Conclusion

We discussed recursion in detail and learned ways to analyze algorithms that are based on recursion. We will cover more data structures (Stacks & Queues) and their implementation in the next article.

Stay connected and stay subscribed.