teaching machines

CS 430: Lecture 7 – Subprograms and Parameters

March 16, 2021. Filed under lectures, programming languages, spring-2021.

Dear students,

Programs are recursive. They are built out of smaller chunks of code that are themselves programs. We call these smaller chunks of code subprograms. These subprograms organize our code into coherent pieces, with each piece solving a small computational task and deferring to other pieces as needed. We expect that we’ll write better code if we use subprograms, because our mind will be focused on the subprogram’s task as we write it and the subprogram can be tested in isolation. Today we discuss how programming language designers have incorporated subprograms into our modern languages.

Warm-up Questions

Let’s start our discussion with a few high-level questions. You can be a perfectly competent programmer without really knowing the answers to simple questions like these.

What’s a procedure? A function? A method?
A procedure is a subprogram that returns no value. Rather, it achieves some side effect, like printing, allocating memory, or performing I/O. A function returns a value, and in some languages it may also produce side effects. A method is a subprogram defined in the context of a class.

How does a subprogram begin executing? How does it stop executing?
A subprogram is called by some other subprogram or perhaps even by itself. It stops executing when a return statement is encountered or when the subprogram body is fully executed.

Where does execution start within a subprogram? Where does it end?
In most of our modern languages, execution begins at the first statement of the subprogram’s body. Fortran had a way of establishing multiple entry points, like this:

program main
  call foo()
  call foo_without_setup()
end program main

subroutine foo()
  call setup()
entry foo_without_setup()
  call process()
end subroutine foo
But Fortran 2008 declared the entry statement obsolescent. Many of our languages allow multiple return statements, though some programmers think there should be only one exit point. If a return statement appears within a try/catch construct, the exit happens only after any finally block runs.
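The interplay between return and finally can be seen in a small sketch (a hypothetical compute method; the trace string exists only so we can observe the order of events):

```java
public class ExitPoints {
  static String trace = "";

  static int compute() {
    try {
      trace += "try;";
      return 1;              // the return value is fixed here...
    } finally {
      trace += "finally;";   // ...but finally still runs before the method exits
    }
  }
}
```

Calling compute() yields 1, yet the trace shows the finally block ran after the return statement was reached.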

When subprogram f is called, how does one know what subprogram definition to invoke?
This is complicated. The subprogram’s name is one distinguishing factor. Some languages, like Ruby and JavaScript, consider only the name. In other languages, the number, types, and ordering of parameters are also considered. Your book calls this the parameter profile. Usually, only the name and parameter profile are used to uniquely identify a subprogram and resolve overloading ambiguities. Interestingly, return values are often not included in this signature. See Java’s Class.getMethod documentation for an example.

The types of the actual parameters, which are the expressions sent to the subprogram in the call, are compared to the types of the formal parameters, which are the variables declared as holes that the caller must fill. Only subprograms whose types are compatible are eligible to match a call. When we have multiple subprograms with the same name but which can serve different types, we have ad hoc polymorphism. The subprogram has many shapes.
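A minimal Java sketch of ad hoc polymorphism (the class and method names here are made up for illustration): three definitions share the name describe, and the parameter profile selects among them at each call site.

```java
public class Describer {
  static String describe(int x)    { return "int"; }
  static String describe(double x) { return "double"; }
  static String describe(String x) { return "string"; }

  // A fourth overload differing only in return type would not compile:
  // the return type is not part of the signature.
}
```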

When the subprogram is a method, the set of eligible subprograms is constrained by the host object. If the host’s type is known at compile time, one can figure out the exact method. This is called static dispatch. If the host’s type is not known until runtime, the method is selected at runtime using dynamic dispatch. For example, Java’s PrintStream.println for objects might be implemented like this:

public void println(Object o) {
  println(o.toString());
}

We have no idea what toString method is going to be invoked. We say that println supports subtype polymorphism, as any object that is a subclass of Object can be passed in.
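Dynamic dispatch can be sketched as follows (hypothetical classes for illustration): stringify’s formal parameter has static type Object, but the toString that runs is chosen at runtime from the argument’s dynamic type.

```java
public class Dispatch {
  static class Animal {
    @Override public String toString() { return "animal"; }
  }

  static class Dog extends Animal {
    @Override public String toString() { return "dog"; }
  }

  // Like println(Object): the static type is Object, but which toString
  // runs is decided at runtime by dynamic dispatch.
  static String stringify(Object o) {
    return o.toString();
  }
}
```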


Parameters

The fallout of giving things identity is that we also increase the separation between things. So it is with subprograms. We slap a meaningful name on an algorithm, and then it becomes its own entity with its own names and its own memory. Our vehicle for bridging that gulf of separation is parameters and return values.

We express the identity of a function in its header, which is generally the top line or lines of the function definition. The header establishes the protocol, the interface of parameter and return types between the subprogram and the caller. In C and C++, the header is sometimes declared separately in a prototype. These prototypes are collected up in a header file. The definitions themselves are compiled down to machine code and distributed. So that other code that calls into the compiled code can be type checked, the header file is consulted since there’s nothing in the compiled code that communicates the interface.

Semantic Models

In theory, there are really only three types of transactions that can occur between a caller and a subprogram:

In
The caller sends a value into the subprogram.

Out
The subprogram sends a value back out to the caller.

Inout
A value flows both ways: the subprogram reads it, possibly modifies it, and sends it back.

Some languages like GLSL (OpenGL Shading Language) have syntax that allows the semantic intent of a parameter to be expressed directly using the modifiers in, out, or inout. In GLSL, we could write this function to compute both the quotient and remainder in an integer division:

void divmod(in int dividend, in int divisor, out int quotient, out int remainder) {
  quotient = dividend / divisor;
  remainder = dividend % divisor;
}

Out parameters can be used to return multiple values. They have no incoming value. Inout parameters are meant to be both read from and written to, as in this function:

void complement(inout float x) {
  x = 1.0 - x;
}

Implementation Models

GLSL is unusual in supporting the theoretical semantic models verbatim. Most of our languages take different approaches to achieve the same semantics. Let’s examine these approaches.

Pass-by-value
The subprogram gets its own copy of the actual parameter values. Because any changes to the parameters will be applied to the copied value and not the caller’s values, pass-by-value implements in parameter semantics. C, Java, JavaScript, and Ruby are all pass-by-value.

Pass-by-value is a simple scheme to implement and understand. It pervades modern languages. The copying of values can incur extra runtime costs if structs and arrays themselves are cloned. In Java, Ruby, and JavaScript, we have implicit pointers to any non-primitive, and only the pointers are copied, which tend to be 4 or 8 bytes.
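A short sketch of both cases (hypothetical helper methods): reassigning a primitive parameter leaves the caller’s variable alone, while the copied implicit pointer still reaches the caller’s object.

```java
import java.util.ArrayList;

public class ValueSemantics {
  static void bump(int n) {
    n = n + 1;          // mutates the local copy; the caller never sees this
  }

  static void grow(ArrayList<String> list) {
    list.add("x");      // the pointer was copied, but both copies refer
  }                     // to the same underlying list
}
```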

Pass-by-result
The subprogram handles an out parameter by storing the value to be returned in a local variable and copying the value over to the caller’s memory space when the subprogram finishes.

Pass-by-copy or pass-by-value-result
The subprogram handles an inout parameter by copying the passed value to a local variable, which is referenced in the body of the subprogram, and copying the value back into the caller’s memory space when the subprogram finishes.

Pass-by-reference
The caller shares its actual parameters with the subprogram. Assignments to the formal parameters will effect changes in the caller’s memory. C++ is one of the only modern languages that truly supports this passing scheme. Some folks will try to tell you Java or Ruby are pass-by-reference, perhaps claiming that a method can change an object that’s sent as a parameter, like this:

public void budge(ArrayList<String> names) {
  names.add(0, "Chris");
}

They are right that the method is changing memory beyond itself. But the parameter is not being changed. The object the parameter refers to is what’s changing. If you tried to change the parameter, that change would be local:

public void budge(ArrayList<String> names) {
  names = new ArrayList<String>();
}
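A testable sketch of that locality (a hypothetical budge; the list type is assumed): after the call, the caller’s list is untouched, because only the local copy of the pointer was rebound.

```java
import java.util.ArrayList;

public class Rebind {
  static void budge(ArrayList<String> names) {
    names = new ArrayList<>();  // rebinds only the local copy of the pointer
    names.add(0, "Chris");      // mutates the new, soon-to-be-garbage list
  }
}
```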

In C++, we can see true pass-by-reference at work in the canonical swap routine:

void swap(int &a, int &b) {
  int tmp = a;
  a = b;
  b = tmp;
}

Those assignments to a and b modify the caller’s actual parameters. That makes pass-by-reference an implementation of inout parameter semantics.

Much of this argument comes down to terminology, as reference is an overloaded term. We need a better distinction between C++ references and the implicit pointers of Java, Ruby, and JavaScript that we also call references.

Pass-by-name
The caller shares an unevaluated form of its actual parameters with the subprogram. Each time the subprogram refers to the parameter, the unevaluated form is evaluated. Not many modern languages support this scheme; Scala is a notable exception. One benefit of pass-by-name semantics is that it allows the user to define their own control structures as functions, perhaps like this:

function repeatAround(n, outerBlock, innerBlock)
  for i to n - 1
    outerBlock
    innerBlock
  outerBlock

// prints xoxoxox
repeatAround(4, print "x", print "o")

We want the second and third parameters to be evaluated each time they are referenced in the repeatAround function, not just once.

In most of our languages, parameters are evaluated eagerly, before the subprogram takes over. With pass-by-name, we delay their evaluation. We can simulate delayed evaluation by manually turning the second and third parameters into lambdas. Here’s how we might write repeatAround in JavaScript:

function repeatAround(n, outerBlock, innerBlock) {
  for (let i = 0; i < n - 1; i += 1) {
    outerBlock();
    innerBlock();
  }
  outerBlock();
}

// prints xoxoxox
repeatAround(4, () => console.log("x"), () => console.log("o"))
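In Java, the same delay can be simulated with Supplier thunks (sumTwice is a hypothetical helper): the thunk is re-evaluated at every reference, which is the essence of pass-by-name.

```java
import java.util.function.Supplier;

public class ByName {
  // Each call to get() re-evaluates the thunk, like a by-name parameter.
  static int sumTwice(Supplier<Integer> thunk) {
    return thunk.get() + thunk.get();
  }
}
```

Passing a thunk with a side effect exposes the double evaluation: a counter incremented inside the lambda yields 1 + 2 = 3, not 1 + 1 = 2.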


Let’s examine these various parameter passing schemes with a little exercise. Imagine we have this code, which supports four of the schemes mentioned above:

foo(a, b, c, d):  # 0
  a = a + 1  # 1; a is passed by value
  b = b + 1  # 2; b is passed by copy
  c = c + 1  # 3; c is passed by reference
  d = d + 1  # 4; d is passed by name

x = [1, 2, 3, 4]
y = 2
foo(x[0], x[1], y, x[y])  # 5

What is the state of memory after each of the numbered lines? Here’s my answer:

0: x=[1, 2, 3, 4]  y=2  a=1  b=2  c=&y  d=x[y]
1: x=[1, 2, 3, 4]  y=2  a=2  b=2  c=&y  d=x[y]
2: x=[1, 2, 3, 4]  y=2  a=2  b=3  c=&y  d=x[y]
3: x=[1, 2, 3, 4]  y=3  a=2  b=3  c=&y  d=x[y]
4: x=[1, 2, 3, 5]  y=3  a=2  b=3  c=&y  d=x[y]
5: x=[1, 3, 3, 5]  y=3  a=2  b=3  c=&y  d=x[y]

Changes to c automatically propagate to y since c is a reference. Changes to b propagate to x[1] once the function finishes.

Positional vs. Keyword

Many of you grew up on Java, and you therefore have an irrational fondness for it. I similarly have an irrational fondness for toast. When you see a function call in Java, you automatically understand that the first actual parameter drops into the function under the name given by the first formal parameter. The second actual parameter drops in under the name given by the second formal parameter. And so on. When actuals are mapped to formals simply by their order of appearance, we have positional parameters. But that’s not the only way.

An alternative is a scheme of keyword parameters in which each actual parameter is labeled with the formal parameter’s name. Consider this Python function for calculating the cost of a taxable good:

def tax(price, rate):
  return price * (1 + rate)

With positional parameters, we must retain the incidental knowledge of the parameter order. With keyword parameters, the ordering becomes unimportant, and the name imbues semantic meaning on our parameter expressions. Compare these three calls:

tax(10, 0.07)
tax(price = 10, rate = 0.07)
tax(rate = 0.07, price = 10)

Which do you like better? Imagine the function had 11 parameters. Does your answer change? Keyword parameters enhance readability, provided the API designers picked meaningful names. To satisfy the maximum number of opinionated programmers, some languages allow a mix of both positional and keyword parameters even within the same function call.

Besides Python, Ruby, C#, Swift, and PHP all support named parameters. Several of these languages expect : instead of =, presumably because assignment is a legal expression that may appear as an actual parameter.

Default Parameters

Occasionally we want to leave a hole in an algorithm, but we expect it usually to be filled by a particular value. We can assign a default value to the parameter. For example, if the tax rate is usually 7.5%, then we’d define tax like so:

def tax(price, rate = 0.075):
  return price * (1 + rate)

We can then call it in abbreviated form:

tax(price = 10)

Besides Python, C++, C#, Ruby, and JavaScript support default parameters, along with many others.
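Java is absent from that list: it has no default parameters. A common workaround, sketched here with the tax example, is an overload that fills in the usual value.

```java
public class Tax {
  static double tax(double price, double rate) {
    return price * (1 + rate);
  }

  // No default parameters in Java; this overload supplies the usual rate.
  static double tax(double price) {
    return tax(price, 0.075);
  }
}
```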

Variadic Functions

Suppose you are organizing a raffle. You might write the following code:

class Raffle {
  private ArrayList<String> entrants = new ArrayList<>();

  public void add(String name) {
    entrants.add(name);
  }

  public static void main(String[] args) {
    Raffle raffle = new Raffle();
    raffle.add("Nuna Toodle");
    raffle.add("Gary Fodmother");
    raffle.add("Molly Coddle");
  }
}

Making those three separate add calls is not a large burden, but it’d be convenient to have a way to add names en masse. We could write a helper function that takes in an array of names and then pass in a gangly array literal:

class Raffle {
  private ArrayList<String> entrants = new ArrayList<>();

  public void add(String name) {
    entrants.add(name);
  }

  public void add(String[] names) {
    for (String name : names) {
      add(name);
    }
  }

  public static void main(String[] args) {
    Raffle raffle = new Raffle();
    raffle.add(new String[] {"Nuna Toodle", "Gary Fodmother", "Molly Coddle"});
  }
}

But there’s something better. Java, along with JavaScript, C, C++, Ruby, PHP, and many other languages allow us to define functions that can take in an arbitrary number of parameters. Functions whose arity is not fixed are called variadic functions. Generally, the actual parameters are bundled up in a collection like an array that can be iterated through inside the function. Here’s Java’s take on variadic functions, which includes annotating the parameter type with ...:

class Raffle {
  private ArrayList<String> entrants = new ArrayList<>();

  public void add(String... names) {
    for (String name : names) {
      entrants.add(name);
    }
  }

  public static void main(String[] args) {
    Raffle raffle = new Raffle();
    raffle.add("Nuna Toodle", "Gary Fodmother", "Molly Coddle");
  }
}

Inside add, the parameter names is just an array. The variadic definition is no different than the explicit array definition above. However, we can call add without bundling up the array literal ourselves.

For those trivia nights, here are a few more terms that classify functions by arity. Functions that take 3 parameters are triadic, 2 parameters dyadic, 1 parameter monadic, and 0 parameters niladic.


Nested Functions

As you write C, C++, and Java, you always write functions and methods at the top level of their file or class. Sometimes it’s convenient to define a function inside the context of another function, especially if it’s just a helper function that doesn’t need to have a broad scope. For example:

function formatTuple(a, b) {
  function pad8(x) {
    return (x + '').padStart(8, '0');
  }
  return pad8(a) + ',' + pad8(b);
}


Closures

Interesting things can happen when we define a function that references data in its surrounding scope. Consider this JavaScript code and note how the nested enclose accesses start and end, which are parameters to the outer function:

function encloser(start, end) {
  function enclose(content) {
    return start + content + end;
  }
  return enclose;
}

const parenthesize = encloser('(', ')');
const embrace = encloser('{', '}');
const bracket = encloser('[', ']');

What are the types of parenthesize, embrace, and bracket? They are functions. Will these three functions do what we think they should? Yes. JavaScript forms what’s called a closure when we define a function. The closure is a marriage between the function and its scope. Even if the defining context goes out of scope, as it does here, the closure hangs onto the values of start and end so that the concatenation can use the right values.

Closures are an incredibly convenient tool that I simply cannot fully motivate in our brief time together. They allow you to tailor the behavior of a function without burdening the caller to add a bunch of extra parameters.
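Java’s lambdas form closures too, though only over effectively final variables. Here is a sketch of encloser in Java (the names mirror the JavaScript version):

```java
import java.util.function.Function;

public class Enclosers {
  // The returned lambda closes over start and end; they remain available
  // even after encloser itself has returned.
  static Function<String, String> encloser(String start, String end) {
    return content -> start + content + end;
  }
}
```

As in JavaScript, encloser("(", ")") yields a function that parenthesizes its argument.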

Generic Subprograms

Some languages allow a parameter’s type to also be a parameter. Type parameters can be attached to either a subprogram or a class. In C++, constructs with type parameters are called templates; in Java, generics. Using generics, we can add tuples to Java:

class Tuple<T, U> {
  private T first;
  private U second;

  public Tuple(T first, U second) {
    this.first = first;
    this.second = second;
  }
}

The types T and U are not fixed. They will be passed in at the time a tuple is instantiated.
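A self-contained sketch of instantiating such a tuple (the accessor methods here are additions for illustration):

```java
public class TupleDemo {
  static class Tuple<T, U> {
    private final T first;
    private final U second;

    Tuple(T first, U second) {
      this.first = first;
      this.second = second;
    }

    T first() { return first; }
    U second() { return second; }
  }

  // The type arguments are supplied at instantiation:
  //   Tuple<String, Integer> entry = new Tuple<>("age", 42);
}
```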


Coroutines

Some subprograms can pause themselves and cede control to some companion subprogram. Such subprograms are called coroutines. We use them when we want to suspend an algorithm until some condition is met while allowing other code to run in the meantime. Sometimes we can achieve this behavior with threads, but coroutines are generally cleaner and use fewer resources.

The Unity game engine supports coroutines. Consider the task of getting the user to place 10 spheres in the scene. We write the loop as a coroutine, waiting for a new mousedown event before adding each sphere:

public class PlayerController : MonoBehaviour {
  IEnumerator AddSpheresOnMouse() {
    for (int i = 0; i < 10; ++i) {
      yield return new WaitUntil(() => Input.GetMouseButtonDown(0));
      GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
      sphere.transform.position = new Vector3(Random.Range(-3.0f, 3.0f), Random.Range(-3.0f, 3.0f), Random.Range(-3.0f, 3.0f));
      yield return new WaitForEndOfFrame();
    }
  }

  void Start() {
    StartCoroutine(AddSpheresOnMouse());
  }
}
The yield return statements pause execution until the succeeding condition is met.


There’s a whole lot going on with subprograms, and there’s potential for a whole lot more if we’re willing to embrace the wild ideas of languages outside our comfort zone. A fully-featured language will give us more power with less effort. Seek out languages that respect your time.

See you next time!


P.S. It’s time for a haiku!

My function just broke
Told the store it won’t return
Exactly, they said

