teaching machines

CS 240: Lecture 9 – Linked Lists

September 12, 2021. Filed under cs3, fall-2021, lectures.

Dear students:

We are in the middle of a discussion of linear structures and the complexities of their operations. We started with an array-based list, and now we move on to a linked list. Today we’ll examine how the two approaches to lists differ.

Muddiest Points

But first, I will address a sample of your muddiest points.

Differentiating between big omega and big theta keeps tripping me up. It is not confusing anymore, but I still mix them up on occasion.
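
As a reminder, $f \in \Theta(g)$ holds exactly when both $f \in O(g)$ and $f \in \Omega(g)$ hold: big O bounds the growth from above, big omega bounds it from below, and big theta pins it down from both sides at once.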

I think that in class we addressed the muddiest point I had last week, which was the importance of amortized analysis. Especially with the examples, I now realize that although the worst case scenario is very important and might mean more to some than the best case and average case (a la Murphy’s law), it’s important to be aware of how often your worst case even occurs compared to the other cases. If you base your reasoning on the worst case alone, you fail to look at the bigger picture.
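
The usual example is a dynamic array that doubles its capacity when full: appending $n$ items costs $n$ writes plus $1 + 2 + 4 + \cdots$ copies during resizes, and that sum stays below $2n$, so the total work is under $3n$ and the amortized cost per append is $\Theta(1)$, even though a single append can cost $\Theta(n)$.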

Determining which other time complexity classes a function can be classified as, based on a single Big O or Big Omega statement. For example, if a function is Ω(n), is it also Ω(n^2) or Ω(1)? And vice versa with big O.
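
Lower bounds only weaken as you move down: $\Omega(n^2) \subseteq \Omega(n) \subseteq \Omega(1)$, so a function in $\Omega(n)$ is automatically in $\Omega(1)$ but not necessarily in $\Omega(n^2)$. Upper bounds weaken in the other direction: $O(1) \subseteq O(n) \subseteq O(n^2)$, so a function in $O(n)$ is also in $O(n^2)$ but not necessarily in $O(1)$.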

I think the muddiest point from what was covered this week was understanding exactly what was the most important point. I still find the difference between worst case/upper bound and best case/lower bound difficult at times, because they require analysis techniques that I feel I am still not good at. For instance, if we are doing a binary search, would the best case match (it being the middle of the array) be a lower bound of Omega(1)? Furthermore, would the worst case, searching through half the array (n/2), simply be an upper bound of O(n)?
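
For binary search on a sorted array, yes: the best case, where the target sits exactly in the middle, takes a constant number of steps, so it is $\Theta(1)$. The worst case never looks at half of the array, though. Each comparison halves the remaining range, so at most about $\log_2 n$ elements are ever examined, which makes the worst case $\Theta(\log n)$, a tighter statement than $O(n)$.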

I think the muddiest point is that we didn’t go over enough examples/didn’t have enough time, unfortunately. We never got to see an example where n log n was used, or even log n. A direct example or explanation showing when the time complexity would be log n would be very nice.
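
To that last point, here is a minimal sketch, not taken from OpenDSA, of a $\Theta(\log n)$ algorithm: binary search over a sorted array. The loop body runs at most about $\log_2 n$ times because each comparison discards half of the remaining range. An $n \log n$ running time typically shows up when a $\log n$ step is performed once for each of $n$ elements, as many comparison-based sorts do.

public static int binarySearch(int[] sorted, int target) {
  int lo = 0;
  int hi = sorted.length - 1;
  while (lo <= hi) {
    int mid = lo + (hi - lo) / 2;     // midpoint of the remaining range
    if (sorted[mid] == target) {
      return mid;                     // found the target
    } else if (sorted[mid] < target) {
      lo = mid + 1;                   // discard the left half
    } else {
      hi = mid - 1;                   // discard the right half
    }
  }
  return -1;                          // target is not present
}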

Tracing Linked List Code

Consider the following code for the singly-linked list class from OpenDSA:

import java.util.NoSuchElementException;

class LList implements List {
  private Link head;         // Pointer to list header
  private Link tail;         // Pointer to last element
  private Link curr;         // Access to current element
  private int listSize;      // Size of list

  // Constructors
  LList(int size) { this(); }     // Constructor -- Ignore size
  LList() { clear(); }

  // Remove all elements
  public void clear() {
    curr = tail = new Link(null); // Create trailer
    head = new Link(tail);        // Create header
    listSize = 0;
  }

  public Object remove() throws NoSuchElementException {
    if (curr == tail) // Nothing to remove
      throw new NoSuchElementException("remove() in LList has current of " + curr + " and size of "
        + listSize + " that is not a valid element");
    Object it = curr.element();             // Remember value
    curr.setElement(curr.next().element()); // Pull forward the next element
    if (curr.next() == tail) tail = curr;   // Removed last, move tail
    curr.setNext(curr.next().next());       // Point around unneeded link
    listSize--;                             // Decrement element count
    return it;                              // Return value
  }

  // ...

  class Link<E> {         // Singly linked list node class
    private E e;          // Value for this node
    private Link<E> n;    // Point to next node in list

    // Constructors
    Link(E it, Link<E> inn) { e = it; n = inn; }
    Link(Link<E> inn) { e = null; n = inn; }

    E element() { return e; }                        // Return the value
    E setElement(E it) { return e = it; }            // Set element value
    Link<E> next() { return n; }                     // Return next link
    Link<E> setNext(Link<E> inn) { return n = inn; } // Set next link
  }
}

Using this code, complete the following diagramming and code-tracing exercises:

  1. Draw a linked diagram of a freshly constructed list. Mark head, tail, and curr.
  2. Draw a linked diagram of a list containing the strings "A", "B", and "C", and which is currently positioned at "B". What happens when remove is called? Update the diagram as you trace through the method’s code. Do not erase any values or links. Just cross them out.
  3. Draw a linked diagram of a list containing the strings "A", "B", "C", and "D", and which is currently positioned at "C". What happens when list.hasPredecessor("D") is called? Update the diagram as you trace through the method’s code.
    public boolean hasPredecessor(E target) {
      Link<E> cursor = head.next();
      while (cursor != curr) {
        if (cursor.element().equals(target)) {
          return true;
        }
        cursor = cursor.next();
      }
      return false;
    }
    

Analyzing List Algorithms

Analyze each of the methods below as described. A small timing sketch for checking your answers empirically follows the list.

  1. What is the $\Theta$ running time of this method if the list is an ArrayList? A LinkedList?
    public static int sumByIndex(List<Integer> list) {
      int sum = 0;
      for (int i = 0; i < list.size(); i++) {
        sum += list.get(i);
      }
      return sum;
    }
    
  2. What is the $\Theta$ running time of this method if the list is an ArrayList? A LinkedList?
    public static int sumWithIterator(List<Integer> xs) {
      int sum = 0;
      for (int x : xs) {
        sum += x;
      }
      return sum;
    }
    
  3. What is the $\Theta$ running time of this method if toList is initially empty and both lists are of type ArrayList? LinkedList?
    public static void copy(List<Integer> fromList,
                            List<Integer> toList) {
      for (int item : fromList) {
        toList.add(item);
      }
    }
    
  4. What is the $\Theta$ running time of this method if toList is initially empty and both lists are of type ArrayList? LinkedList?
    public static void copyReverseA(List<Integer> fromList, 
                                    List<Integer> toList) {
      for (int item : fromList) {
        toList.add(0, item);
      }
    }
    
  5. What is the $\Theta$ running time of this method if toList is initially empty and both lists are of type ArrayList? LinkedList?
    public static void copyReverseB(List<Integer> fromList, 
                                    List<Integer> toList) {
      for (int i = fromList.size() - 1; i >= 0; i--) {
        int value = fromList.get(i);
        toList.add(value);
      }
    }
    
  6. What are the worst-case and average-case $\Theta$ running times of this method if toList is initially empty and both lists are of type ArrayList? LinkedList?
    public static void copyScramble(List<Integer> fromList, 
                                    List<Integer> toList) {
      Random gen = new Random();
      for (int i = 0; i < fromList.size(); i++) {
        int randomIndex = gen.nextInt(fromList.size());
        int value = fromList.remove(randomIndex);
        toList.add(value);
      }
    }
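
If you want to check your intuitions empirically, here is a rough sketch of a timing harness. It is only an illustration, not part of the exercises: the ListTimer class name and the list sizes are arbitrary, sumByIndex is the method from question 1, and System.nanoTime measurements are noisy, so treat the output as a sanity check on growth rates rather than a proof.

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListTimer {
  public static void main(String[] args) {
    for (int n = 1000; n <= 64000; n *= 2) {
      // Build an ArrayList and a LinkedList holding the same n elements.
      List<Integer> arrayList = new ArrayList<>();
      List<Integer> linkedList = new LinkedList<>();
      for (int i = 0; i < n; i++) {
        arrayList.add(i);
        linkedList.add(i);
      }

      // Time the same index-based summation on each implementation.
      long start = System.nanoTime();
      sumByIndex(arrayList);
      long arrayNanos = System.nanoTime() - start;

      start = System.nanoTime();
      sumByIndex(linkedList);
      long linkedNanos = System.nanoTime() - start;

      System.out.printf("n=%6d  ArrayList: %10d ns  LinkedList: %10d ns%n",
                        n, arrayNanos, linkedNanos);
    }
  }

  // Same code as question 1 above.
  public static int sumByIndex(List<Integer> list) {
    int sum = 0;
    for (int i = 0; i < list.size(); i++) {
      sum += list.get(i);
    }
    return sum;
  }
}

Watch how each column grows as n doubles: a linear method roughly doubles its time, while a quadratic one roughly quadruples.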
    

TODO

See you next time.

Sincerely,

P.S. It’s time for a haiku!

Notebook or binder?
I stand there for hours choosing
Just stationery