teaching machines

CS 352 Lecture 34 – Memory Hierarchy Cont’d

December 5, 2016. Filed under cs352, fall 2016, lectures.

Dear students,

Last time we introduced the memory hierarchy. This hierarchy is not unlike a league of superheroes, each with its strengths and weaknesses. But the weaknesses are minimized, as long as you rotate in the proper superhero for each villain you encounter. For the memory hierarchy, the goal of stacking together these technologies is to provide a storage system that is fast, cheap, and capacious. It’ll only be fast if most of our time is spent with registers and cache. It’ll only be cheap if we don’t have a lot of the fast/expensive technology. It’ll only be capacious if the hierarchy widens at the bottom.

Remember the quicksort algorithm? It’s held up as one of the better sorting algorithms. But its worst-case performance is O(n^2). How can we call that good? Well, the worst case isn’t important if we rarely visit it. The average case is a better measure of quicksort, and so it is with the memory hierarchy. It is designed to bring our average access times closer to the ideal times.

Let’s examine some canonical architecture-ish problems that help us see if the memory hierarchy accomplishes its goal of making average access times closer to the best case than the worst case:

Problem #1
Suppose 75% of data requests are for data that is already cached. Each of these requests takes 12 ns to complete. For data that is not already cached, the request takes 100 ns to complete. On average, how long does a request take?
Problem #2
Suppose DRAM technology costs $20 per GB and disk costs $2 per GB. You have 16 GB of RAM and 500 GB of disk storage. On average, how much is each gigabyte worth?
Problem #3
Suppose disk reads take 10 ms, memory reads take 100 ns, and cache reads take 5 ns. 100% of disk reads are hits, 99.5% of memory reads are hits, and 80% of cache reads are hits. On average, how long does a request take?

In general, we can say that the average access time for a given level of the memory hierarchy is this:

average access time = hit percentage * hit time + (1 - hit percentage) * miss time
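
For example, plugging Problem #1 into this formula gives 0.75 × 12 + 0.25 × 100 = 34 ns on average. Here’s a minimal sketch in C that does the same arithmetic; the function name and structure are my own, not code we wrote in class:

#include <stdio.h>

/* Average access time for one level of the hierarchy:
   hit percentage * hit time + (1 - hit percentage) * miss time. */
double average_access_time(double hit_rate, double hit_ns, double miss_ns) {
  return hit_rate * hit_ns + (1.0 - hit_rate) * miss_ns;
}

int main(void) {
  /* Problem #1: 75% of requests hit in the cache at 12 ns; the rest take 100 ns. */
  printf("average request time: %.1f ns\n", average_access_time(0.75, 12.0, 100.0));
  return 0;
}

For a multi-level setup like Problem #3, one reasonable reading is to apply the same formula level by level, using each level’s average access time as the miss time of the level above it.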

When you implement quicksort, you don’t just cross your fingers and hope you pick an average pivot point. One strategy is to randomly pick three pivot candidates and choose the middle one, expecting that it will split the list into two similarly sized sublists. Similarly, we can make sure that the memory hierarchy’s average metrics come out well by doing a few things (sketched in code after this list):

  1. Issue all data requests through the top of the hierarchy.
  2. If the top level cannot service the request, it delegates to the level below.
  3. Once a lower level satisfies the request, the upper level holds on to that data for a while.
  4. The lower level returns not only the specific data that was requested, but also the nearby data.
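
To make these steps concrete, here’s a toy sketch in C of a direct-mapped, block-based cache sitting on top of a slower backing store. The names, sizes, and structure are all illustrative, not an implementation we studied:

#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 64    /* bytes fetched on a miss; step 4 grabs the neighbors too */
#define NUM_LINES  256   /* how many blocks the cache can hold at once */

typedef struct {
  bool valid;
  uint32_t tag;          /* which block currently lives in this line */
  uint8_t data[BLOCK_SIZE];
} cache_line_t;

static cache_line_t cache[NUM_LINES];
static uint8_t main_memory[1 << 20];   /* toy backing store standing in for DRAM */

/* The level below returns the whole block containing the address, not just one byte. */
static void lower_level_read_block(uint32_t block_address, uint8_t *dst) {
  for (int i = 0; i < BLOCK_SIZE; ++i) {
    dst[i] = main_memory[block_address + i];   /* toy: assumes the address stays in bounds */
  }
}

/* Step 1: every request enters through the top of the hierarchy. */
uint8_t read_byte(uint32_t address) {
  uint32_t block  = address / BLOCK_SIZE;
  uint32_t offset = address % BLOCK_SIZE;
  cache_line_t *line = &cache[block % NUM_LINES];

  if (!line->valid || line->tag != block) {
    /* Step 2: this level can't service the request, so delegate downward. */
    lower_level_read_block(block * BLOCK_SIZE, line->data);   /* step 4: bring back nearby data too */
    line->valid = true;
    line->tag = block;                                        /* step 3: hold on to it for a while */
  }

  return line->data[offset];
}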

Items 3 and 4 are necessary because many of our data accesses follow two trends:

  1. Temporal locality: data that we’ve accessed recently is likely to be accessed again soon.
  2. Spatial locality: data near the data we’ve just accessed is likely to be accessed soon.

Consider this code:

int numbers[100] = {0};   // assume this array holds the values we want to add up
int sum = 0;
for (int i = 0; i < 100; ++i) {
  sum += numbers[i];
}

Where do we see temporal locality in this code? Spatial locality?

The key to achieving our goal is to ensure that most accesses hit at the top level of the hierarchy. Let’s discuss some strategies for maximizing locality:
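
One strategy that often comes up (this example and the names in it are mine, not necessarily the list from our discussion) is to traverse data in the order it is laid out in memory. C stores two-dimensional arrays row by row, so sweeping across one row at a time, with the columns in the inner loop, touches neighboring addresses and makes good use of every block the cache pulls in:

static int matrix[512][512];   // hypothetical data; C lays this out row by row
long total = 0;

// Good spatial locality: the inner loop walks consecutive addresses,
// so each block the cache fetches gets reused for many accesses in a row.
for (int row = 0; row < 512; ++row) {
  for (int col = 0; col < 512; ++col) {
    total += matrix[row][col];
  }
}

// Poor spatial locality: swapping the loops strides 512 ints between
// accesses, so nearly every access lands in a different block.
for (int col = 0; col < 512; ++col) {
  for (int row = 0; row < 512; ++row) {
    total += matrix[row][col];
  }
}

Both versions compute the same total; only the access order differs, and on real hardware that difference shows up in the running time.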

Here’s your TODO list for next time:

See you next class!

Sincerely,

P.S. It’s Haiku Monday!

Fridge: “Sorry, I’m late”
“Store had no cheese that you like”
“So Store ran to Farm”