CS 240: Lecture 13 - Queue
Stacks have a cousin: the queue. Like the stack, the queue is only an interface. It can be implemented in many different ways. Today we'll examine some of the possible implementations and look at some situations in which queues are useful.
A queue is similar to a stack; it is an abstract data type representing a growable sequence of items that are ordered by their time of arrival. The two differ in a couple of ways. First, their operations have different names. Items are pushed and popped in a stack, but enqueued and dequeued in a queue. Second, they remove items from different ends. Popping from a stack gives you the newest item. Dequeuing from a queue gives you the oldest item.
What we call lines at the bank or grocery store are often called queues in the British Commonwealth. The oldest end of a queue is called the front and the newest end is called the back.
Imagine designing a printer driver. Jobs might come in faster than the printer can process them. Your driver spools them, or stores them in a queue until the printer is free. The first job in is the first job out. Imagine designing a music player. Users queue up a song to be played after the others that have already been queued up finish. Imagine designing a download manager in the olden days. You queue up the requests and download them in the order they arrive.
Like a stack, a queue may be implemented with an array list or a linked list. Items are added at one end of the structure and removed from the other. If new items are enqueued in an array queue by prepending them to the array, then the operations have the following complexities:
|Operation|Array List|Linked List|
|---|---|---|
|enqueue|\(O(n)\)|\(O(1)\)|
|dequeue|\(O(1)\)|\(O(1)\)|
If we instead enqueue new items at the end of an array queue, then the complexities of enqueue and dequeue flip. It seems that we are going to be stuck with a linear time operation one way or another. Bummer. Linked lists might be the better data structure for queues, as they allow both enqueue and dequeue to be constant time operations. But a linked list means we won't get a performance benefit from caching, and we'll burn up memory on the links. These may be negligible costs.
Coin Flip Tree Generation
Queues are especially useful for methodically traversing a problem space that has many fronts. For example, consider the problem of generating a table of possible coin flip sequences. Suppose the flipper may flip up to \(~n~\) coins. These are some of the possible sequences:
H T HH HT TH TT HHH HHT HTH HTT THH THT TTH TTT HHHH ...
To generate this list, we keep a queue of positions in the expanding tree. Any time we dequeue a position, we both print it and throw its two follow-up flips back into the queue.
We can get better performance from an array queue if we don't force the oldest or newest item to be at index 0. Instead, the indices of the first and last elements are allowed to float. Suppose we have this implementation of enqueue:
to enqueue item
  assert an empty slot
  items[newIndex] = item
  increment newIndex with wrapping
  increment itemCount
And this implementation of dequeue:
to dequeue
  assert items
  item = items[oldIndex]
  increment oldIndex with wrapping
  decrement itemCount
  return item
We'll also assume that newIndex and oldIndex both start at 0. What will an array queue with a capacity of 4 elements look like after this code is executed?
queue = new ArrayQueue()
queue.enqueue(4)
queue.enqueue(10)
queue.enqueue(15)
queue.dequeue()
queue.enqueue(3)
queue.dequeue()
queue.dequeue()
queue.enqueue(7)
queue.enqueue(1)
This implementation with floating indices is called a circular queue. It behaves as if the array circled around in memory.
You've got a few things to do before we meet again:
See you next time!
P.S. It's time for a haiku!