## CS 240: Lecture 3 – Measuring Algorithms

Dear students:

We talked last week about the arsenal of tools we computer scientists have at our disposal and how each tool has its strengths and weaknesses. To be able to choose which tool to use, we need a way of judging them. Here are a few possible ways we might measure algorithms:

- Correctness. A tool passes muster if it produces the right result. But this is too simple. If a tool takes years to produce a correct answer, it’s not a good choice.
- Simplicity. A tool passes muster if it’s easy for humans to understand. Unfortunately, this is also too simple for the same reason as correctness.
- Space efficiency. A tool passes muster if it uses less memory or disk than other tools. This is an important measure used in some situations, but this class does not focus on it.
- Time efficiency. A tool passes muster if it completes a computation in less time than other tools. This is the measure that we focus on in the class.

A faster algorithm is a better algorithm. Our choice of data structure will influence an algorithm’s speed. Much of our discussion together will be devoted to measuring the time efficiency of algorithms and data structures. Let’s examine how we measure time efficiency with a case study.

### Biggest Difference

Suppose you are told by your project manager to write an algorithm to scan a set of numbers and find the biggest pairwise difference among them. Because your economic model is extremist capitalism, you turn to StackOverflow and find this solution in Ruby:

```
#!/usr/bin/env ruby

def biggest_difference(numbers)
  max_diff = 0
  for x in numbers
    for y in numbers
      diff = x - y
      if diff > max_diff
        max_diff = diff
      end
    end
  end
  max_diff
end
```

How does this look to you? Is it correct? Is it simple? Yes on both counts. It is also very space efficient: apart from the array, it allocates only a couple of extra variables. Is it time efficient? Let’s measure its execution time. We create an array of random numbers and find the biggest difference:

```
#!/usr/bin/env ruby

require 'benchmark'
require_relative './biggest_difference1.rb'

n = 100
g = Random.new
numbers = Array.new(n) { g.rand(1000000) }
time = Benchmark.measure {
  biggest_difference(numbers)
}
puts time.utime
```

The execution time seems fine. What if we bump `n` up? It’s hard to get a sense of how the execution time is changing from isolated runs. Instead, let’s make many runs with varying values of `n` and plot the execution time as a function of `n`:

```
#!/usr/bin/env ruby

require 'benchmark'
require_relative './biggest_difference1.rb'

g = Random.new
ns = 30.times.map { |i| i * 100 }
for n in ns
  numbers = Array.new(n) { g.rand(1000000) }
  time = Benchmark.measure {
    diff = biggest_difference(numbers)
  }
  print "(#{n},%.7f)," % time.utime
end
```

When we paste the output into Desmos and fit the window to the data, we see an interesting shape. As `n` increases, the execution time increases much faster. This solution is simply not going to scale to large values of `n`. Perhaps Ruby is the problem. In languages like Java and C++, the high-level code gets compiled down into the “language” of the processor, into very fast machine instructions. This language is really just patterns of electrons. In Ruby, the high-level code is interpreted, which means it never gets turned into fast machine code. Instead, it is evaluated by software, which is slower because it involves a lot of decision making at runtime.

Let’s compare this solution in Java:

```
import java.util.Random;

public class BiggestDifference {
  public static void main(String[] args) {
    int[] ns = {
      100, 200, 300, 400, 500, 600, 700, 800, 900, 1000,
      1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000,
      2100, 2200, 2300, 2400, 2500, 2600, 2700, 2800, 2900, 3000,
    };
    Stopwatch timer = new Stopwatch();
    for (int n : ns) {
      int[] numbers = randomArray(n);
      timer.start();
      biggestDifference(numbers);
      double seconds = timer.stop();
      System.out.printf("(%d,%.6f),", n, seconds);
    }
  }

  private static int[] randomArray(int n) {
    Random g = new Random();
    int[] numbers = new int[n];
    for (int i = 0; i < n; ++i) {
      numbers[i] = g.nextInt(100000);
    }
    return numbers;
  }

  public static int biggestDifference(int[] numbers) {
    int maxDiff = 0;
    for (int x : numbers) {
      for (int y : numbers) {
        int diff = x - y;
        if (diff > maxDiff) {
          maxDiff = diff;
        }
      }
    }
    return maxDiff;
  }
}
```

The numbers reveal that Java is indeed faster. However, we see the same sort of growth when we plot the execution times. As `n` increases, the execution time grows faster. If we are dealing with large values of `n`, even the Java solution will get bogged down. We have reached the point where perhaps we need to rethink the algorithm. Could we implement it in a way that is more time efficient? Yes. Instead of looking at each pair of numbers, we could just find the minimum and the maximum and compute their difference. That would require only one pass through the array instead of a pass per number. Our revised algorithm looks like this in Ruby:

```
#!/usr/bin/env ruby

def biggest_difference(numbers)
  if numbers.empty?
    return 0
  end
  min = numbers[0]
  max = numbers[0]
  for number in numbers
    if number < min
      min = number
    elsif number > max
      max = number
    end
  end
  max - min
end
```

When we plot the times now, we see that the execution time grows at a constant rate. Though the Java solution would be faster, the Ruby solution is plenty fast.
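Before trusting the faster version, it’s worth checking that the two implementations agree. Here’s a quick sanity check; the suffixed function names are mine, just to hold both versions in one file:

```ruby
#!/usr/bin/env ruby

# Brute-force version: checks every pair.
def biggest_difference_pairs(numbers)
  max_diff = 0
  for x in numbers
    for y in numbers
      diff = x - y
      max_diff = diff if diff > max_diff
    end
  end
  max_diff
end

# One-pass version: tracks the minimum and maximum.
def biggest_difference_minmax(numbers)
  return 0 if numbers.empty?
  min = numbers[0]
  max = numbers[0]
  for number in numbers
    if number < min
      min = number
    elsif number > max
      max = number
    end
  end
  max - min
end

g = Random.new(42)
numbers = Array.new(500) { g.rand(1_000_000) }
puts biggest_difference_pairs(numbers) == biggest_difference_minmax(numbers)  # expect true
```

Both should report the same answer: the biggest pairwise difference is always the maximum minus the minimum.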

### What to Measure

With both the list experiment and the biggest difference experiment, we used execution time to measure time efficiency. Execution time turns out not to be the best measurement, for several reasons:

- Measuring execution time on a single machine measures more than the tool. It also measures your hardware and compiler.
- Comparing executions on different machines is not scientific because too many variables change.
- Comparing executions even on the same machine is not scientific. Caching and load will muddy the execution time.

A better measure is the number of *steps*. A step is loosely defined as a single operation. An algorithm’s number of steps is expressed as a function of `n`, which we call $T(n)$.

Consider this algorithm:

```
total = 0;
for (int i = 0; i < numbers.length; ++i) {
  total += numbers[i];
}
```

How many steps are taken? The `+=` happens `n` times. So does the array indexing. So does `++i`. So does the comparison. We could define *T* for this algorithm in this way:

$T(n) = 4n$

However, this isn’t the code that ultimately gets executed. Rather, the code gets compiled down to a bunch of machine instructions. The array index might expand into five machine instructions. The comparison might expand into three. So, *T* might more realistically be defined this way:

$T(n) = n + 5n + n + 3n = 10n$

While this may be more accurate, the definition of *T* now depends on the machine again, which will make comparing algorithms on different machines difficult. For this reason, we define *T* more loosely in terms of some constant coefficient:

$T(n) = cn$

All algorithms whose number of steps can be expressed this way are said to have *linear* time.
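One way to see this concretely is to count iterations instead of wall-clock time. This sketch instruments the summation loop with a hypothetical step counter, counting one step per iteration; doubling `n` doubles the count:

```ruby
#!/usr/bin/env ruby

# Count one "step" per iteration of the summation algorithm.
def count_steps(numbers)
  steps = 0
  total = 0
  for number in numbers
    total += number
    steps += 1
  end
  steps
end

[100, 200, 400].each do |n|
  puts "n=#{n}: #{count_steps(Array.new(n, 1))} steps"  # expect n steps each time
end
```

Unlike execution time, this count is the same on every machine.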

How should we define *T* for this algorithm?

```
total = 0;
for (int i = 0; i < numbers.length; ++i) {
  for (int j = 0; j < numbers.length; ++j) {
    total += numbers[i];
  }
}
```
```

The outer loop executes `n` times. On each iteration, the inner loop executes `n` times. That gives us this loose definition of *T*:

$T(n) = cn^2$

All algorithms whose number of steps can be expressed this way are said to have *quadratic* time. Would you rather have a linear time algorithm or a quadratic time algorithm?
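The biggest-difference case study gives a concrete answer: the all-pairs version takes roughly $n^2$ steps while the min/max version takes roughly $n$. A rough tabulation of the gap:

```ruby
#!/usr/bin/env ruby

# Rough step counts for the one-pass (linear) and all-pairs (quadratic)
# biggest-difference algorithms at a few input sizes.
[10, 100, 1000].each do |n|
  puts "n=#{n}: one-pass ~#{n} steps, all-pairs ~#{n * n} steps"
end
```

At `n = 1000`, the quadratic version does a thousand times more work.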

### Growth Families

Every algorithm we encounter will be judged by its number of operations. In practice, we’ll find that there are just a handful of functions used to describe an algorithm’s time efficiency:

- $\log_2 n$
- $n$
- $n\log_2 n$
- $n^2$
- $n^3$
- $2^n$
- $n!$

We’ll plot these one at a time in Desmos to see how they compare.
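If you don’t have Desmos handy, a short script can tabulate the families side by side; the `factorial` helper is mine, since Ruby has no built-in:

```ruby
#!/usr/bin/env ruby

# Tabulate each growth family for a few values of n.
def factorial(n)
  (1..n).reduce(1, :*)
end

[1, 2, 4, 8, 16].each do |n|
  log = Math.log2(n)
  puts "n=%2d  log2(n)=%4.1f  n*log2(n)=%5.1f  n^2=%3d  n^3=%4d  2^n=%5d  n!=%d" %
       [n, log, n * log, n**2, n**3, 2**n, factorial(n)]
end
```

Even at `n = 16`, the factorial column has already left the others far behind.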

Finding which of these families an algorithm belongs to is a major part of theoretical computer science. Even software developers need to have some sense of them in order to judge the quality and even possibility of their solutions.

Suppose you are trying to crack a numeric password of length `n`, which means you must test every possible password. What growth family does this algorithm belong to?

- constant
- logarithmic
- linear
- polynomial ($n^c$)
- exponential ($c^n$)
- factorial

We’ll answer this question using peer instruction.
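If you want to check your intuition afterwards, count the candidates for small `n`; each added digit multiplies the count by 10:

```ruby
#!/usr/bin/env ruby

# Worst-case number of guesses for a numeric password of length n.
# Each position holds one of 10 digits.
(1..6).each do |n|
  puts "length #{n}: #{10**n} candidates"
end
```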

### TODO

- Submit Friday’s lab before 11 PM.
- Read 6.04-6.07 in OpenDSA.
- Be prepared to submit a *muddiest point* on Wednesday.
- Start the first programming assignment, which is listed under Assignments on Canvas.

See you next time.

P.S. It’s time for a haiku!

Never order food

The menu is way too big

Just say what you want