# teaching machines

## CS 240: Lecture 15 – Recurrence Relations

September 27, 2021. Filed under cs3, fall-2021, lectures.

Dear students:

This semester is moving right along. We started by discussing the need to gauge the expected running time of our code in a machine-independent way. Coming up with an exact number of operations is too hard. Instead we decided to focus only on the dominant terms of an algorithm’s cost function. The result of this choice is that our algorithms lump together in a small set of performance families that are relatively easy to rank against each other.

Then we looked at three abstract data types: list, stack, and queue. These may be implemented with arrays or linked lists or through yet other means. Our implementation choices determine the complexity of the operations. In many cases we can achieve constant time performance, but it may come at the cost of another operation being more expensive.

From here on we are going to keep on learning new abstract data types and a few more data structures that we can use to implement them. But we are going to need another tool for gauging their performance.

### Counting So Far

We’ve learned a few tricks to measure an algorithm’s cost so far. For simple algorithms, we can just count by hand the number of operations that get executed. Suppose we’re counting assignments in this function:

swap(array, i, j)
  tmp = array[i]
  array[i] = array[j]
  array[j] = tmp

One. Two. Three.
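To double-check the hand count, here’s a small Java sketch (the class name and counter are just for illustration) that bumps a counter alongside each assignment in swap:

```java
public class SwapCount {
    static int assignments = 0;

    static void swap(int[] array, int i, int j) {
        int tmp = array[i]; assignments++;   // one
        array[i] = array[j]; assignments++;  // two
        array[j] = tmp; assignments++;       // three
    }

    public static void main(String[] args) {
        int[] a = {5, 9};
        swap(a, 0, 1);
        System.out.println(assignments);  // 3
    }
}
```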

When the operations are enclosed in a loop, counting is more involved. Suppose we’re counting how many times process is called in this algorithm:

for i = 0; i < n; i++
  for j = 0; j < 3; j++
    process(item[i])
    process(-item[i])

It’s called 2 times in each iteration of the inner loop. The inner loop itself executes 3 times. When the cost of each iteration is fixed, we can use multiplication to add quickly: the inner loop has a fixed cost of $2 \times 3 = 6$ calls. The outer loop runs $n$ times. Since its body has a fixed cost, we can multiply again. The total cost is $6n$.
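The multiplication shortcut is easy to verify empirically. This Java sketch (a stand-in, with process stubbed out since only the count matters) tallies the calls directly:

```java
public class LoopCount {
    // Count how many times process would be called for a given n.
    static int countCalls(int n) {
        int calls = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < 3; j++) {
                calls++;  // process(item[i])
                calls++;  // process(-item[i])
            }
        }
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(countCalls(5));  // 6 * 5 = 30
    }
}
```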

The cost of each iteration is not always the same, which frustrates simple counting. Suppose we’re counting how many times process is called in this algorithm:

for i = 0; i < n; i++
  for j = i; j < n; j++
    process(item[i])

In this algorithm, the inner loop makes $n$ calls on its first iteration, $n - 1$ on its second, $n - 2$ on its third, and so on, down to 1 call on its last. The total number of calls is $1 + 2 + \ldots + n$. This is the triangular number $\sum_{i = 1}^n i = \frac{n(n + 1)}{2}$.
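If you’re skeptical of the formula, a quick tally confirms it. This Java sketch (names are illustrative) counts the calls directly and compares against $\frac{n(n + 1)}{2}$:

```java
public class TriangularCount {
    // Tally the calls made by the triangular loop structure above.
    static int countCalls(int n) {
        int calls = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i; j < n; j++) {
                calls++;  // process(item[i])
            }
        }
        return calls;
    }

    public static void main(String[] args) {
        int n = 6;
        // The direct tally matches the triangular number formula.
        System.out.println(countCalls(n) == n * (n + 1) / 2);  // true
    }
}
```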

### Recurrence Relations

But what happens when we see an algorithm like this?

function sum(list)
  if list.empty?
    return 0
  else
    return list.head.value + sum(list.tail)

How do we count the number of additions in this code? The code is recursive, so the counting tricks we’ve been using don’t apply. Well, if the code is recursive, then we can make the cost function recursive too. In the base case, we have 0 additions. In the general case, we have 1 addition, but also however many additions we’ll get in the recursive call. We have these cases:

\begin{align}
T(0) &= 0 \\
T(n) &= 1 + T(n - 1)
\end{align}

A mathematical function that produces a sequence in which each term is defined using a previous term is called a recurrence relation. An initial condition must be provided to seed the sequence at its starting value. To analyze recursive algorithms, we first form recurrence relations.
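To see a recurrence in action, here’s a Java sketch of the sum algorithm above, with an index into an array standing in for head/tail and a counter tracking the additions (both are illustrative choices). Since $T(0) = 0$ and $T(n) = 1 + T(n - 1)$, the count should work out to exactly $n$:

```java
public class SumCount {
    static int additions = 0;

    // Sum the elements from index i onward, counting each addition,
    // mirroring the recursive sum pseudocode.
    static int sum(int[] list, int i) {
        if (i == list.length) {
            return 0;                       // base case: 0 additions
        } else {
            additions++;                    // the one addition below
            return list[i] + sum(list, i + 1);
        }
    }

    public static void main(String[] args) {
        int[] list = {4, 8, 15, 16, 23};
        System.out.println(sum(list, 0));  // 66
        System.out.println(additions);     // T(5) = 5
    }
}
```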

Let’s count the number of recursive calls we see in this code:

public static int split(int n) {
    if (n == 0) {
        return 1;
    } else {
        return split(n - 1) + split(n - 1);
    }
}


We’ll first draw a tree representing the recursive execution of split(3). From the tree, we determine the following:

\begin{align}
T(3) &= 15 \\
T(2) &= 7 \\
T(1) &= 3 \\
T(0) &= 1
\end{align}

In the base case, we have 1 call. In the general case, we have two recursive calls plus the originating call itself. But each of these calls has further recursive calls bundled into it. Though we don’t know how many calls will be in each recursive call, we have a function that will tell us the cost of a call. That leads to these cases:

\begin{align}
T(0) &= 1 \\
T(n) &= 2 \cdot T(n - 1) + 1
\end{align}
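We can check the recurrence against the tree by re-creating split with a call counter and evaluating the recurrence directly (a sketch; the counter is just for illustration):

```java
public class SplitCount {
    static int calls = 0;

    static int split(int n) {
        calls++;  // count this call, including the base case
        if (n == 0) {
            return 1;
        } else {
            return split(n - 1) + split(n - 1);
        }
    }

    // The recurrence itself: T(0) = 1, T(n) = 2 * T(n - 1) + 1.
    static int T(int n) {
        return n == 0 ? 1 : 2 * T(n - 1) + 1;
    }

    public static void main(String[] args) {
        split(3);
        System.out.println(calls);  // 15
        System.out.println(T(3));   // 15
    }
}
```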

In real algorithms, you will encounter a mix of iteration and recursion, so you will need to use all of these counting techniques when analyzing them. Consider this method:

public static int split1(int n) {
    for (int i = 0; i < n + 10; ++i) {
        // perform basic operation
    }

    if (n == 0) {
        return 1;
    } else {
        return split1(n - 1) + split1(n - 1);
    }
}


In the base case, we execute 10 basic operations. In the general case, we execute $n + 10$ basic operations, plus the basic operations that will be executed by the recursive calls. That leads us to this cost function:

\begin{align}
T(0) &= 10 \\
T(n) &= n + 10 + 2 \cdot T(n - 1)
\end{align}
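As a sanity check, this sketch counts the basic operations performed by split1 (assuming the recursive calls are to split1 itself) and compares the tally against the recurrence:

```java
public class Split1Count {
    static int basicOps = 0;

    static int split1(int n) {
        for (int i = 0; i < n + 10; ++i) {
            basicOps++;  // stand-in for the basic operation
        }
        if (n == 0) {
            return 1;
        } else {
            return split1(n - 1) + split1(n - 1);
        }
    }

    // The recurrence: T(0) = 10, T(n) = n + 10 + 2 * T(n - 1).
    static int T(int n) {
        return n == 0 ? 10 : n + 10 + 2 * T(n - 1);
    }

    public static void main(String[] args) {
        split1(4);
        // The direct tally matches the recurrence.
        System.out.println(basicOps == T(4));  // true
    }
}
```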

That’s enough of me talking. Let’s figure out what you don’t understand by completing some exercises.

### Exercise A

What recurrence relation describes the number of multiplications in this code?

public static int fun(int n) {
    if (n == 0) {
        return 5;
    } else {
        int sum = 1;
        for (int i = 0; i < 4; i++) {
            sum *= 2;
        }
        return sum * fun(n - 1);
    }
}


### Exercise B

What recurrence relation describes the number of additive operations in this code?

public static int fun1(int n) {
    if (n == 0) {
        return 20;
    } else {
        int result = 2 * n;
        return result - (fun1(n - 1) + fun1(n - 1));
    }
}


### TODO

You have some work to do before the next class: