# teaching machines

## CS 430: Lecture 5 – Haskell

February 25, 2021. Filed under lectures, programming languages, spring-2021.

Dear students,

This class has hit upon several themes so far this semester. In week 1, we discussed the spectrum of programming language features. In week 2, we discussed the ideas behind a language’s syntax and how source code is lexed and parsed. In week 3, we talked about how we name data with variables. In week 5, we talked about how data is shaped into types. The next topic of discussion is functional programming, which has two distinctive qualities that separate it from the imperative programming of C++ and Java:

1. Functions can be treated as values. They can be assigned to variables. They can be passed as parameters to functions. They can be returned from functions.
2. We focus on mathematical truths and less on the manipulation of the underlying hardware. The notions of time and mutable state are gone. Stateful loops and pointers are not available.

The lens we will use to examine functional programming is Haskell. This language is probably very different from other languages you have used. You might not ever use it again after this class. But you know how sometimes it takes revolutionaries to bring about even the smallest of changes? Haskell will be this for us, and we will listen to its heretical ideas, shake our heads in disgust, and be changed forever.

### Lists

Simple lists can be expressed as literals, like [5, 7, 10, 12]. Character-list literals can be expressed as string literals, like "AA Bottom". Numeric ranges can be expressed as [-10..10] or [-10, -5..10] or [1.0, 1.1..2.0].

Lists in Haskell are implemented as linked lists. Each node contains its value and a pointer to the remainder of the list. The first item of a list can be retrieved using the head function. For example, head "AA Bottom" is 'A'. The remainder of the list can be retrieved using the tail function. For example, tail "AA Bottom" is "A Bottom". What is the type of head? The type of tail?

To make a list with an item prepended to it, we use the cons operator :. For example, we get "AAA Bottom" from 'A' : "AA Bottom". The name comes from Lisp and is short for “construct”. The developers named the node of a linked list a cons cell.
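
To see how list literals, head, tail, and cons fit together, here is a small sketch. The names numbers and shout are illustrative only, not from the lecture:

```haskell
-- A list literal is sugar for repeated applications of the cons
-- operator, ending in the empty list.
numbers :: [Int]
numbers = 5 : 7 : 10 : 12 : []   -- same as [5, 7, 10, 12]

-- Prepending with : builds a new list; the original operand is untouched.
shout :: String
shout = 'A' : "AA Bottom"        -- "AAA Bottom"
```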

### Functions

Let’s write some functions. Here’s one:

Write a function majority, which yields the minimum number of people of some population to form a majority.

How would you compute this?

In Haskell, we define functions like this:

functionName :: TypeOfFormal1 -> TypeOfFormal2 -> ... -> TypeOfReturnValue
functionName id1 id2 ... = expression


For majority, we’d expect to write something like this:

majority :: Integer -> Integer
majority n = n / 2 + 1


But this doesn’t compile. The error message isn’t entirely inscrutable, but the issue is /. Only instances of Fractional support this operation, but Integer is not an instance of Fractional and does not support /. This is why I like to declare types. If I had left out the declarations, this would have compiled just fine:

> :t majority
majority :: Fractional a => a -> a


But the inferred types wouldn’t have matched my intent. I may have needed only half a person to achieve a majority. As we all know, it’s much easier to find half a person than a full one.

To get integer division, we use a different operator:

majority :: Integer -> Integer
majority n = div n 2 + 1
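
A couple of hand-checked calls confirm the integer-division version behaves as intended. The definition is restated so this sketch stands alone:

```haskell
-- Minimum count needed to form a majority of a population of n.
majority :: Integer -> Integer
majority n = div n 2 + 1
```

For an even population of 10, six people form a majority; for an odd population of 7, four do.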


#### Slice

We’ve seen head and tail to extract either the first element of a list or everything but the first. There’s also take, which pulls out the first n elements from the list. And there’s drop, which casts out the first n elements. What if we wanted to pull out just a portion of these elements? How can we implement slice? Here’s the type signature we’ll aim for:

slice :: Int -> Int -> [a] -> [a]


To implement this, we employ take and drop. We drop the first from elements, and then we take just to - from elements of what remains:

slice from to list = take (to - from) (drop from list)


Haskell has this strange operator that lets us alter the association of expressions in a less noisy way than parentheses. It’s $. If we insert a dollar sign between parameters of a function call, everything to its right will get evaluated first, and we won’t need parentheses anymore. Our slice function looks like this using $:

slice from to list = take (to - from) $ drop from list
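
A few hand-checked calls of slice, restated so this sketch stands alone. Note the half-open convention: from is included, to is excluded:

```haskell
-- Extract elements at indices [from, to) from a list.
slice :: Int -> Int -> [a] -> [a]
slice from to list = take (to - from) (drop from list)
```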


#### Pattern Matching

Let’s write another function to solve this problem:

You have two points. How far apart are they?

A very natural thing to do is start off with something like this:

distance :: Double -> Double -> Double -> Double -> Double
distance x1 y1 x2 y2 = ...


What’s not so great about this is that all the parameters are loose. x1 and y1 really belong together. As do x2 and y2. We’d like to treat them as such. For that, we can use tuples:

distance :: (Double, Double) -> (Double, Double) -> Double


On the calling side, we can say:

distance (0, 0) (1, 1)


Then on the definition side, we apply the Pythagorean theorem to compute the distance. Tuples don’t have named fields, so we can’t reach inside them with something like point.x. To access the fields of a tuple, we use fst and snd:

distance a b = sqrt((fst b - fst a) ** 2 + (snd b - snd a) ** 2)


Ick. In any other language, we’d break this code up and introduce some local variables. We can and should do the same in Haskell. Let’s use a where clause to declare some local variables (constants):

distance a b = span
  where x1 = fst a
        y1 = snd a
        x2 = fst b
        y2 = snd b
        dx = x2 - x1
        dy = y2 - y1
        span = sqrt (dx ** 2 + dy ** 2)


This is still longer than it needs to be. To clean it up further, we will use one of Haskell’s most beautiful features: pattern matching. We can use it to automatically destructure parameters into their constituent components:

distance (x1, y1) (x2, y2) = sqrt (dx ** 2 + dy ** 2)
  where dx = x2 - x1
        dy = y2 - y1
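
The classic 3-4-5 right triangle makes a good sanity check of the pattern-matching version, restated here so the sketch stands alone:

```haskell
-- Euclidean distance between two points, destructured in the formals.
distance :: (Double, Double) -> (Double, Double) -> Double
distance (x1, y1) (x2, y2) = sqrt (dx ** 2 + dy ** 2)
  where dx = x2 - x1
        dy = y2 - y1
```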


#### Recursion

One of the consequences of Haskell not having variables that can change their value is that we can’t write loops. So how are we going to solve this task?

Turn strings into StRiNgS.

With recursion. To gradually introduce what recursion looks like in Haskell, let’s start with the simpler problem of implementing a boring old lowercasing function, which we can implement like this in a standard recursive form:

import Data.Char -- to get toLower

lowerize :: String -> String
lowerize s =
  if s == [] then
    []
  else
    toLower first : lowerize rest
  where first = head s
        rest = tail s


We could also have written this using guards, which scales better when there are many cases:

lowerize :: String -> String
lowerize s
  | s == [] = []
  | otherwise = toLower first : lowerize rest
  where first = head s
        rest = tail s


We could also have written this in a form that more easily supports pattern matching using a case statement:

lowerize :: String -> String
lowerize s =
  case s of
    [] -> []
    (first : rest) -> toLower first : lowerize rest


Which do you like best? Wait, don’t pick yet. It turns out we can define the different cases in a syntactically simpler way. We add the pattern matching directly into the formals:

lowerize [] = []
lowerize (first : rest) = toLower first : lowerize rest


I think this is beautiful. It feels like we are defining multiple versions of the function for different shapes of the parameters. Each version is self-contained and doesn’t have to worry about the others. In reality, this form is just syntactic sugar for the case version.
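
The pattern-matching form can be exercised like so, restated with its import and signature so the sketch stands alone:

```haskell
import Data.Char (toLower)

-- Lowercase every character by recursing down the list.
lowerize :: String -> String
lowerize [] = []
lowerize (first : rest) = toLower first : lowerize rest
```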

Now we’re ready to write a function for producing our odd casing:

oddcase :: String -> String
oddcase [] = []
oddcase (first : second : rest) = toUpper first : toLower second : oddcase rest
oddcase (first : rest) = toUpper first : oddcase rest


Note the order here. The third definition is more general than the second, so if we swap their order, the more general will subsume the less general. The compiler is kind enough to warn you about this situation.
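
Restating oddcase as a standalone sketch, with its import, and checking it on the motivating example:

```haskell
import Data.Char (toLower, toUpper)

-- Uppercase even-indexed characters, lowercase odd-indexed ones.
oddcase :: String -> String
oddcase [] = []
oddcase (first : second : rest) = toUpper first : toLower second : oddcase rest
oddcase (first : rest) = toUpper first : oddcase rest
```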

#### Subdivide

Let’s write this recursive function:

Write a function subdivide that accepts a list and returns a list in which the mean element between each consecutive pair has been interposed. For example subdivide [1, 2] yields [1, 1.5, 2] and subdivide [0, 50, 40] yields [0, 25, 50, 45, 40].

Since we’re doing division to get the mean, the list must be of some type that is an instance of Fractional. That leads us to the type signature:

subdivide :: Fractional a => [a] -> [a]


There are two base cases here: the empty list and the singleton list. Neither situation requires any special processing. We just give back what we are given:

subdivide [] = []
subdivide (first : []) = [first]


In general, however, we have two elements and we need to find their mean and insert it. Something like this does the trick:

subdivide (first : second : rest) = first : mean : subdivide (second : rest)
  where mean = (first + second) / 2
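
Assembling the base cases and the recursive case (with a where clause computing the mean), the whole function is:

```haskell
-- Interpose the mean between each consecutive pair of elements.
subdivide :: Fractional a => [a] -> [a]
subdivide [] = []
subdivide (first : []) = [first]
subdivide (first : second : rest) = first : mean : subdivide (second : rest)
  where mean = (first + second) / 2
```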


### Partial Function Application

So, we have seen one way of defining functions in Haskell. We write a named abstraction. We’re going to learn three other ways of building functions, each conferring certain benefits on us as developers.

We first define a word. Arity is the number of parameters a function accepts. Robert Martin says this in Clean Code:

The ideal number of arguments for a function is zero (niladic). Next comes one (monadic), followed closely by two (dyadic). Three arguments (triadic) should be avoided where possible. More than three (polyadic) requires very special justification—and then shouldn’t be used anyway.

He goes on to describe why high arity should be avoided: each parameter is a cognitive burden imposed on the client, and the combinatorial immensity of multiple parameters makes testing difficult.

What’s the arity of +? Two, right? Not in Haskell. In Haskell, you actually provide + (or any function) just one parameter. To what do you suppose that evaluates? We can answer this by examining the type signature. It’s really more general than this, but let’s suppose + worked only with Ints:

(+) :: Int -> Int -> Int


If Haskell behaved more like Java, the type signature of + would look something like this:

(+) :: (Int, Int) -> Int


We feed in the pair of operands all at once and it gives us back the sum. But Haskell doesn’t behave like Java; it allows us to partially apply a function to its parameters. If we provide only a single Int, what’s left of the signature is Int -> Int. That means the value returned by (+) 10 is a function waiting on the second parameter! The arrow syntax makes a lot more sense if you parenthesize it, observing its right associativity:

(+) :: Int -> (Int -> Int)


So, all Haskell functions truly have an arity of 1. If something has an arity of 0, it’s a variable. This behavior leads us to a new way of defining functions: we can supply a subset of the parameters to some existing function to define a convenience function that has a simpler interface.

Consider this very generic function that surrounds a String with some prefix and suffix:

surround :: String -> String -> String -> String
surround prefix suffix body = prefix ++ body ++ suffix


Suppose we want to parenthesize a String. We could say this:

surround "(" ")" "Lincoln 1863"


But if we have to do this a lot, it’d be nice to provide a convenience function:

parenthesize :: String -> String
parenthesize s = surround "(" ")" s


We can take one more step to get to idiomatic Haskell. Function parenthesize doesn’t really need to relist its formal parameters. It gets those from surround. We can, in fact, leave them off:

parenthesize :: String -> String
parenthesize = surround "(" ")"


How I think of this is that parenthesize isn’t so much a function as a synonym for its right-hand side. We know that the right-hand side is only partially applied, and that the third parameter is still missing. This practice of chopping off the formals when they can be deduced from partial application is called point-free style.

Partial application allows each function to be a little factory from which we can build other functions. Here are a few more surrounders:

quote = surround "\"" "\""
tag = surround "<" ">"
italicize = surround "*" "*"


This ability to make functions from other functions is a superpower.
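
Putting the factory idea together in one standalone sketch (type signatures added for clarity):

```haskell
-- The generic wrapper and three conveniences built from it by
-- partial application.
surround :: String -> String -> String -> String
surround prefix suffix body = prefix ++ body ++ suffix

quote :: String -> String
quote = surround "\"" "\""

parenthesize :: String -> String
parenthesize = surround "(" ")"

italicize :: String -> String
italicize = surround "*" "*"
```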

### Composition

We have seen two ways of defining functions in Haskell: as named abstractions of algorithms or as partially applied versions of other functions. Let’s look at a third way of defining functions: composition. Suppose we wanted to get the second element out of a list, the neck. We can write this with a chain of two calls:

neck :: [a] -> a
neck list = head (tail list)


The output of the first function becomes the input of the second function. We’ve seen this idea before in our math classes. When we see f(g(x)), we sometimes rewrite this as f ∘ g, read “f composed with g”. In Haskell, we write this as follows:

neck :: [a] -> a
neck list = (head . tail) list


One of the advantages of expressing this as a composition of functions is that we can rewrite it in point-free style. Notice how the left- and right-hand sides share the same suffix. Let’s leave it out:

neck :: [a] -> a
neck = head . tail

Both partial application and composition eliminate a lot of the overhead that we programmers find ourselves doing. Much of our development life is spent gluing together abstractions into some mega-abstraction, and these two techniques give us some of our life back.
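
As a slightly longer illustration of composition plus point-free style, here are two hypothetical helpers built purely by chaining Prelude functions. The names wordCount and third are illustrative, not from the lecture:

```haskell
-- Count the words in a string: split with words, then measure with length.
wordCount :: String -> Int
wordCount = length . words

-- Grab the element after the neck by composing three steps.
third :: [a] -> a
third = head . tail . tail
```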

### Lambdas

Another way to generate a function is through a lambda. A lambda is an expression of the function’s parameters and body, but it leaves off the name. This lambda adds 1 to its parameter:

\x -> x + 1


This lambda reports if its parameter is the empty list or a list with one element:

\l -> null l || length l == 1


The general grammar of lambdas is this:

\param1 param2 param3 -> definition


Pattern matching is legal in the formals list. Here’s a lambda for swapping fields in a 2-tuple:

\(a, b) -> (b, a)


I could assign these lambdas to identifiers:

add1 = \x -> x + 1
few = \l -> null l || length l == 1


But that’s not usually what we do. If we wanted a named function, we would have used that syntax. Instead, lambdas love to be anonymous. They love to be used for a moment and thrown away. They are firecrackers. We light them and toss them into the hands of our friend functions, where they explode in glory.

Those functions that are willing to take on other functions as parameters are called higher-order functions. We’ve seen at least one of them: filter. Let’s write a function that filters out all the numbers in a particular range:

innies :: Ord a => a -> a -> [a] -> [a]
innies lo hi = filter (\x -> lo <= x && x <= hi)
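
A couple of hand-checked calls of innies, restated so this sketch stands alone. It works on any ordered type, so it filters characters as readily as numbers:

```haskell
-- Keep only the elements in the closed range [lo, hi].
innies :: Ord a => a -> a -> [a] -> [a]
innies lo hi = filter (\x -> lo <= x && x <= hi)
```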


Let’s write count, which is a higher-order function that accepts a predicate and a list. It runs this algorithm:

count = 0
for each element in list
  if predicate is true of element
    count += 1
return count

We implement this in Haskell like so:

count :: (a -> Bool) -> [a] -> Int
count _ [] = 0
count predicate (first : rest)
  | predicate first = 1 + count predicate rest
  | otherwise = count predicate rest


We can use lambdas with count!

count (\(a, b) -> a < b) [(1, 2), (3, 3), (4, 5), (11, 10)]
count (\s -> length s < 5) ["I", "think", "it's", "impossible", "to", "really", "understand", "somebody", "what", "they", "want", "what", "they", "believe", "and", "not", "love", "them", "the", "way", "they", "love", "themselves"]


Lambdas are pretty great, but be careful. What’s wrong with these?

count (\x -> x > 0) [-3..3]
count (\c -> isUpper c) "You Get What You Need"
count (\x -> x `mod` 2 == 0) [-4..4]


All three of these are cases of UFW—unnecessary function wrapping. We have existing functions that do the same thing, and there’s no need to define a new one:

count (> 0) [-3..3]
count isUpper "You Get What You Need"
count even [-4..4]


### Filter

Let’s start with filter. In SQL, this is the WHERE clause. We want to reduce a collection to some subset matching some criterion. That criterion is the hole that we must leave blank, but the looping structure is the same every time:

keepers = []
for each item in list
  if item meets criterion
    append item to keepers

Does this algorithm look familiar to you? It should. It’s a lot like count. In Haskell, we can write it like so:

filter' :: (a -> Bool) -> [a] -> [a]
filter' _ [] = []
filter' predicate (first : rest)
  | predicate first = first : filter' predicate rest
  | otherwise = filter' predicate rest
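
Our hand-rolled filter' should agree with the Prelude’s filter on any input. A couple of spot checks, with the definition restated so the sketch stands alone:

```haskell
-- Keep only the elements satisfying the predicate.
filter' :: (a -> Bool) -> [a] -> [a]
filter' _ [] = []
filter' predicate (first : rest)
  | predicate first = first : filter' predicate rest
  | otherwise = filter' predicate rest
```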


### Map

Suppose we want to append ", Lord of Catan" to a bunch of names. We might write this function:

lordify [] = []
lordify (first : rest) = (first ++ ", Lord of Catan") : lordify rest


Suppose we want to round up a bunch of numbers. We might write this function:

roundUp [] = []
roundUp (first : rest) = ceiling first : roundUp rest


We write hundreds of little bits of code that perform similar tasks like this. This pattern is called map. Here’s the algorithm written abstractly in imperative pseudocode:

news = array same size as olds
for each index i in olds
  news[i] = transform olds[i]
return news

What does this look like in Haskell?

map' :: (a -> b) -> [a] -> [b]
map' _ [] = []
map' xform (first : rest) = xform first : map' xform rest
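
Like filter', our map' should agree with the Prelude’s map everywhere. A few spot checks, with the definition restated so the sketch stands alone:

```haskell
-- Apply the transform to every element, building a new list.
map' :: (a -> b) -> [a] -> [b]
map' _ [] = []
map' xform (first : rest) = xform first : map' xform rest
```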


With this pattern in our toolbelt, we can express all our mappings from one list to another with a one-liner:

lordify = map (++ ", Lord of Catan")
roundUp = map ceiling


### Conclusion

Haskell shows us the power, time savings, and expressiveness that we gain from raising functions up to first-class citizens. Most of our modern languages support something like map, filter, and fold. We can let the library do the looping.

See you next time!

Sincerely,

P.S. It’s time for a haiku!

If code is magic
Then my functions must be spells
And I’m Mickey Mouse