CS 430: Lecture 11 – History
Dear students,
We’ve spent the semester lifting up the hood on programming languages to see their technical details. Today we do something different: we look at the languages through a more historical lens.
Mechanical Computers
The earliest computers used machinery like gears and springs to perform mathematical calculations. The most famous of these was the Difference Engine designed in England in the 1800s by Charles Babbage. The work got its start with Babbage trying to generate accurate tables of logarithms. Babbage did a lot of design work for the Difference Engine and its successors, but most of his ideas never left the prototype stage, despite funding from the British government.
Babbage partnered with Ada Lovelace to write papers about the machines. Lovelace envisioned future machines as being able to crunch more than just numbers:
[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine…Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.
Many of these early machines were “hardcoded” to do one thing. The program was not a separate idea, and programming was not really any different from building the computer in the first place. However, the textile industry had been using punched cards to control how a loom wove threads together.
Mechanical computers are necessarily slower than the digital computers that we use today. A rotating gear will not move as quickly as electricity. Additionally, because of their moving parts, they need more maintenance and are more susceptible to hardware failures.
An American named Herman Hollerith built a computer that used both mechanics and electricity to help calculate the 1890 census. The 1880 census had taken 8 years to tabulate. Data was encoded in punched cards. Wires were pressed against the card, and where the cards were punched, the wires would poke through and make contact with a pool of mercury. The current would trigger a solenoid, which is coiled wire that acts like a magnet. Each solenoid was wired up so that it incremented a counter when activated. The 1890 census was completed early and under budget. Hollerith’s company was one of several that were bundled together in the early 1900s and eventually renamed IBM.
Early Digital Computers
Mathematical and scientific endeavors continued to expand computing. The Atanasoff-Berry computer, for example, was built by a physicist and his graduate student at Iowa State University around 1940. The machine was designed to solve systems of up to 29 simultaneous linear equations. However, before all the kinks of the machine were worked out, war reared its ugly head. Atanasoff didn’t think his invention was very important, and he abandoned it when he left the university for a wartime job on the east coast. The machine was left in the basement of a university building and scrapped.
World War II accelerated interest in computing. The American military sponsored the ENIAC, a machine intended to help calculate tables that troops could use to aim their artillery at the enemy in various environmental conditions. These early digital machines started to move beyond fixed functionality. Programming them to do different things meant physically rearranging wires. You occasionally see black-and-white pictures from the 1940s of the team of six women responsible for this rewiring. Our modern minds would like to say that the administration was enlightened and progressive to hire six women. In truth, the “programming” of the ENIAC was considered a menial task. The design work and credit went to men.
The ENIAC was not finished in time to be used during the war. But John von Neumann used it for nuclear fusion research. And from John von Neumann came two big ideas:
- The von Neumann architecture, a model that decomposes computing machinery down into a processor, a bus, and storage. This is the model of computation that drives nearly all of our imperative languages.
- The stored program. Instead of a computer’s operation being dictated by its physical wiring, the program itself was stored inside the machine’s memory.
The second of these is where programming languages start their own history.
1940s
Konrad Zuse had been building computers in Germany during World War II. Bombs destroyed much of his lab, and he fled to the mountains with the one computer that he was able to salvage. There he pondered where to pick up his research:
It was unthinkable to continue practical work on the equipment; my small group of twelve co-workers disbanded. But it was now a satisfactory time to pursue theoretical studies. The Z4 Computer which had been rescued could barely be made to run… Thus the PK arose purely as a piece of desk-work, without regard to whether or not machines suitable for PK’s programs would be available in the foreseeable future.
Zuse called the language he invented the Plankalkül, or Plan Calculus (the “PK” of the quote above). It was meant to be a general purpose language for solving computational problems. He used it to write algorithms for sorting, checking the connectivity of a graph, and validating the syntax of logical expressions. It supported arrays, conditional statements, custom data types, and floating point numbers. The manuscript he wrote on the project began with this big goal:
The mission of the Plan Calculus is to provide a purely formal description of any computational procedure.
His manuscript even included 49 pages of chess algorithms.
Each statement in the Plan Calculus was spread across several lines. Consider for example this statement:
  Z + 1 => Z
V 3        3
K n        n
The second line contains the subscripts and the third the components. The assignment operator is clearly distinct from the mathematical equality operator. In a modern language, we might write this as follows:
Z[3].n = Z[3].n + 1
Very little of Zuse’s work had a direct impact on the programming languages being designed in America and elsewhere in Europe. Partly this was because Zuse was in Germany, and partly this was because Zuse started his designs thinking about notation rather than the limits of the machine.
1950s
The first programming languages in the United States were really readable abbreviations of machine code. Mauchly’s Short Code on the BINAC, Backus’ Speedcoding for IBM, and Hopper’s A-0, A-1, and A-2 for the UNIVAC were all examples of these systems. One of the Short Code developers described their “language” in this way:
By means of the Short Code, any mathematical equations may be evaluated by the mere expedient of writing them down. There is a simple symbological transformation of the equations into code as explained by accompanying write-up. The need for special programming has been eliminated.
These pseudocode languages defined catalogs of macros that would be expanded into the actual machine instructions. From this statement, we gather that, at that time, programming meant translating an algorithm into machine language.
Throughout the 1950s, languages evolved in places like Italy, Switzerland, Great Britain, and the United States. Comments appeared, as did subroutines. The only language of this era that is still widely known is Fortran, which was born at IBM for the IBM 704 mainframes. It was the first language to allow variables to have more than one letter in their name; Fortran allowed two letters. The development team’s biggest fear was that the machine code produced by the Fortran compiler would be too slow compared to human-written machine code. Their fear was justified. Many computer scientists resisted the idea that a machine could do as good a job as a human in converting mathematical formulas into machine code, which we see now was short-sighted and a bit ironic. Despite these concerns, Fortran was adopted quickly. It’s still in use today, perhaps more by chemists and physicists than software developers.
Fortran was an IBM product, and some folks got together to design a super-language that was like Fortran but managed by an independent and international consortium. Their proposal was Algol. This was the first language whose syntax was described using Backus-Naur form, which we still use. It introduced the block structures that pervade modern languages, using begin and end instead of curly braces. For a time, Algol served as the language used to describe algorithms in scientific writing, but it ultimately lost out to Fortran.
Grace Hopper spearheaded the development of Cobol for the Department of Defense. It favored readability over obtuse symbolism. You’ll find keywords instead of curly braces:
OPEN INPUT sales, OUTPUT report-out
INITIATE sales-report
PERFORM UNTIL 1 <> 1
    READ sales
        AT END
            EXIT PERFORM
    END-READ
    VALIDATE sales-record
    IF valid-record
        GENERATE sales-on-day
    ELSE
        GENERATE invalid-sales
    END-IF
END-PERFORM
TERMINATE sales-report
CLOSE sales, report-out
Cobol was the first language to have full support for records, which you may remember are product types whose fields are named. The language hasn’t had significant influence on other languages, not the way C or Java has, yet you’ll hear in tech news that Cobol is still in high demand. In 2020, the governor of Kansas said this as unemployment was spiking after the outbreak of Covid-19:
So many of our Departments of Labor across the country are still on the COBOL system. You know very, very old technology. Our Department of Labor had recognized that that was an issue and had initiated modernization, and, unfortunately, that’s something that takes time. This (virus) interfered and they had to cease the transition to a much more robust system. So they’re operating on really old stuff.
At MIT, researchers wanted a language that was more suited for artificial intelligence than scientific calculations. They developed LISP, which threw out many ideas of the von Neumann architecture. Data was constant, not stored in mutable cells of memory. Accordingly, loops were not expressible, as there was no notion of time or an iterator changing state. One used recursion. LISP has sparked a number of dialects and related languages, including Common Lisp, Scheme, Racket, and Clojure.
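To get a feel for that loop-free style, here is a minimal sketch of the idea, written in Python rather than LISP so that all of today’s added examples stay in one familiar notation. Nothing is mutated and there is no loop counter; the function simply calls itself on a smaller list. The name total is my own, invented for illustration.

# A sketch of the Lisp-style, loop-free approach: no mutable counter and no
# reassignment, just recursion over a smaller and smaller list.
def total(items):
    if not items:                          # base case: the empty list sums to 0
        return 0
    return items[0] + total(items[1:])     # first element plus the sum of the rest

print(total([1, 2, 3, 4]))                 # prints 10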
Also happening in this decade were the emergence of the Cold War between the United States and the Soviet Union, anti-communist paranoia, the rise of the suburbs, Brown v. Board of Education ruling against segregation, Rosa Parks maintaining her seat on a bus in Montgomery and the accompanying boycott, Elvis Presley, and Dwight D. Eisenhower.
1960s
In the 1960s, PL/I was introduced by IBM as an attempt to unite all the features of Fortran, Algol, and Cobol. Its significant new features included multitasking, a preprocessor, case statements, and full support for pointers. Lisp had exceptions that bubbled back up through the call chain, but PL/I popularized them.
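As a reminder of what that bubbling looks like, here is a tiny sketch, again in Python rather than PL/I or Lisp: an exception raised in the innermost call propagates back up through the call chain until some caller handles it. The function names are hypothetical ones of my own.

# An exception raised deep in the call chain propagates upward until handled.
def innermost():
    raise ValueError("something went wrong")

def middle():
    return innermost()     # no handler here, so the exception keeps rising

try:
    middle()
except ValueError as e:
    print("caught at the top:", e)     # prints: caught at the top: something went wrong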
For lots of reasons, most of our early languages were targeted at working adults. These were the people who had computers, money, and big important problems to solve. But universities began to acquire computers. For several decades, these computers were mainframes, big centralized computers that others could log in to from a terminal, much like you log in to the departmental server. Cheaper minicomputers also started to appear and offered similar features. In 1964, some professors at Dartmouth in New Hampshire decided that they needed a simpler language for their liberal arts students. There wasn’t a lot of software available, so students wanting to learn about computers had to log in to the mainframe and write code. To make this easier for their young learners, they developed BASIC. In contrast to the popular sentiment at the time, these professors cared more about developer time than execution time. They also gave away the compiler for free, which encouraged its adoption and pre-installation on lots of systems. A decade later, Bill Gates got his start by helping write a BASIC interpreter for the Altair home computer.
While BASIC was making code easier to read, APL was going in the opposite direction. Its inventor Kenneth Iverson wanted a notation for describing multidimensional arrays and operations that could be performed on them. His notation was used to describe the behavior of systems in scientific literature, and it eventually became a full-fledged programming language. Like other ideas that started away from the computer, it went a little rogue with its symbols. This APL program implements Conway’s Game of Life, a model of survival and decay that plays out over a grid:
life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵}
To write APL code, one needed a special keyboard.
Also happening in this decade were baby boomers coming of age, Woodstock, the moon landing, the KKK bombing a church in Birmingham, John Kennedy’s election and assassination, and Lyndon Johnson’s attempt to patch up the racial divide, which was overshadowed by America’s involvement in the Vietnam War.
1970s
Computer scientist Niklaus Wirth was part of an effort to improve Algol, but that fell through. Wirth was trying to write a textbook at the time and needed a simpler language for teaching. Thus was born Pascal, a language that embraced structured programming. The 1970s smiled upon this language as universities adopted it. Early versions of Photoshop were written in it. Apple’s initial technical documentation was all written in Pascal. My favorite features were that arrays could start at any index and function definitions could be nested. Sometimes history moves backward.
In France, two computer scientists looked at the way logic was represented in procedures in the largely American languages of the day. They wanted a system that was more about expressing facts in a declarative form. The result was Prolog.
Object-oriented programming made its first appearance around this time. Simula, which grew out of Algol in the late 1960s, was created by Norwegian computer scientists who wanted to make it easier to conduct simulations. To model the components of a simulation, Simula programmers used classes, which gathered properties and procedures under one umbrella. Smalltalk also used classes. It appeared out of Xerox’s research laboratory at the same time graphical user interfaces were born.
In the late 1970s, researchers at Bell Labs decided they needed a language that was specially designed for writing systems software for the Unix operating system, which was also invented at Bell Labs. They built C, which was inspired by an untyped language named B. C didn’t offer a lot of extra features compared to assembly beyond structured programming, but its compiler generated fast code. Forty years later, it still has a foothold.
Also happening in this decade were a conservative rebound against the tumultuous 1960s that brought Nixon into office, the Watergate scandal that took Nixon out of office, and the sparks of the Environmental Movement.
1980s
Around the same time that C was taking shape, the Department of Defense decided they needed to converge on a single language for their many projects. They ran a competition to see who could propose the best language. The resulting Ada was a super-language that combined many different ideas, but the primary theme was software safety. When I arrived at college, my first two computer science courses used Ada. Later I took a third course on real-time, embedded systems that also used Ada. I haven’t touched it since then. But if the world had played out just a bit differently, we might all be writing Ada right now.
In the mid-1980s, Bjarne Stroustrup synthesized C and the object-oriented paradigm to make C++. The extra layers that C++ added on top of C were mostly paid for at compile time. Only dynamic dispatch incurred more runtime overhead. C++ and C effectively dethroned Pascal at universities.
Also at this time, two developers who had seen Smalltalk thought that C could use some of what Smalltalk had. They started what would become Objective-C by adding a custom preprocessor to C, much as Stroustrup did with C with Classes. NeXT, the company that Steve Jobs founded after being ousted from Apple, licensed Objective-C and used it to build the core of what would eventually become the foundational libraries of macOS, once Jobs returned to Apple and Apple acquired NeXT.
The scripting language Perl arrived in 1987. Its designer Larry Wall wanted to combine a lot of ideas from various Unix utilities into one system. Perl was seen as an ideal language for processing text. It was poised for a while to be the language for serving out dynamic web pages, but PHP swooped in and took on that role.
In this decade, some lazy functional languages started to appear. By lazy, I mean that expressions weren’t evaluated until their values were absolutely needed. Many of the researchers wanted to combine their efforts. They first looked to a language named Miranda as a point of convergence, but this language was proprietary. So, they wrote Haskell.
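We can get a rough taste of laziness from Python generators; this is an analogy of my own, not Haskell itself. The generator below describes an infinite sequence, but no number is actually produced until something downstream asks for one.

import itertools

# A rough analogy for lazy evaluation: naturals() describes an infinite
# sequence, but values are computed only when they are demanded.
def naturals():
    n = 0
    while True:
        yield n
        n += 1

# Only the first five values are ever computed, even though the sequence
# is conceptually infinite.
print(list(itertools.islice(naturals(), 5)))   # prints [0, 1, 2, 3, 4]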
Also happening in this decade were Reagan serving two terms, the space shuttle Challenger exploding, environmental disasters like the Exxon-Valdez oil spill and Chernobyl, the emergence of the AIDS crisis, and my birth.
1990s
In the grand scheme of things, C++ didn’t reign for long. Java showed up in 1995 at Sun Microsystems as a language for writing software for appliances. The language was simpler and safer than C++: there were no explicit pointers, dynamic memory was garbage collected, and the entire ecosystem pivoted around the idea of classes. The appliance market didn’t work out, so Sun rebranded Java as a language for writing web applications. Developers wrote Java code and compiled it for a portable, virtual CPU called the Java Virtual Machine (JVM). Anyone with the JVM installed could run the code. This was a departure from the way C and C++ code was compiled down to object code for a specific architecture, which meant applications could not just be copied from machine to machine.
Around this same time, the web started to take off. Netscape built a lightweight scripting language marketed as a companion language to Java. While in development, it was named Mocha. But it was released under the name JavaScript, a name chosen expressly for its marketing benefit. The language has very little in common with Java. Rather, JavaScript has more in common with functional languages. It took ideas from these languages, namely first-class functions, and squeezed them into a Java-like syntax.
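First-class functions simply mean that a function is an ordinary value: it can be stored in a variable, passed as an argument, and returned as a result. Here is a small sketch of the idea, written in Python (which we turn to next) rather than JavaScript so that the added examples share one language; the names twice and add_three are hypothetical.

# First-class functions: a function is a value we can store and pass around.
def twice(f, x):
    return f(f(x))     # apply f, then apply f again to the result

add_three = lambda n: n + 3     # a function stored in a variable
print(twice(add_three, 10))     # prints 16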
Two scripting languages appeared in the 1990s: Python and Ruby. Python was written in order to replace an earlier language named ABC. Its designer Guido van Rossum says this of its origin:
I decided to try to design a simple scripting language that possessed some of ABC’s better properties, but without its problems. So I started typing.
This seems to be a pattern. Ruby was first released in 1995. Its developer Yukihiro Matsumoto says this of its origin:
I was talking with my colleague about the possibility of an object-oriented scripting language. I knew Perl (Perl4, not Perl5), but I didn’t like it really, because it had the smell of a toy language (it still has). The object-oriented language seemed very promising. I knew Python then. But I didn’t like it, because I didn’t think it was a true object-oriented language—OO features appeared to be add-on to the language. As a language maniac and OO fan for 15 years, I really wanted a genuine object-oriented, easy-to-use scripting language. I looked for but couldn’t find one. So I decided to make it.
Both these languages sport dynamic typing, brevity of expression, and multiple paradigms, including structured, functional, and object-oriented programming.
Also happening in this decade were the Oklahoma City bombing, the Columbine massacre, riots in Los Angeles, Clinton’s scandal, and some militia and cult activity like Waco and Ruby Ridge.
2000s
Microsoft felt threatened by the rise of Netscape, Java, and JavaScript, and their response was C# and the larger family of languages that is .NET. C# is an interesting hybrid of C++ and Java. The designers were in the position of Goldilocks, able to examine both extremes and merge their good ideas. There’s a strong association between .NET and the Windows operating system, but some of the tools are available elsewhere. For example, the Unity game engine on macOS and Linux uses Mono, an open source implementation of .NET.
With the rise of the web and data, many groups decided they needed new languages to better cope with multiple cores and networking. Google introduced Go, which strips out a lot of features like a type hierarchy. EPFL in Switzerland introduced Scala, which provided interoperability with Java. Mozilla introduced Rust, which prevents memory violations.
Conclusion
We have a lot of languages. They may all be Turing complete, each capable of computing anything that a Turing machine can compute. However, they are not all the same. Each was designed with different values in mind. How do we respond to the abundance of languages? Let me tell you the story of two swordsmiths of Japan, courtesy of Wikipedia:
A legend tells of a test where Muramasa challenged his master, Masamune, to see who could make a finer sword. They both worked tirelessly and eventually, when both swords were finished, they decided to test the results. The contest was for each to suspend the blades in a small creek with the cutting edge facing the current. Muramasa’s sword, the Juuchi Yosamu (“10,000 Cold Nights”) cut everything that passed its way; fish, leaves floating down the river, the very air which blew on it. Highly impressed with his pupil’s work, Masamune lowered his sword, the Yawarakai-Te (“Tender Hands”), into the current and waited patiently. Only leaves were cut. However, the fish swam right up to it, and the air hissed as it gently blew by the blade. After a while, Muramasa began to scoff at his master for his apparent lack of skill in the making of his sword. Smiling to himself, Masamune pulled up his sword, dried it, and sheathed it. All the while, Muramasa was heckling him for his sword’s inability to cut anything. A monk, who had been watching the whole ordeal, walked over and bowed low to the two sword masters. He then began to explain what he had seen. “The first of the swords was by all accounts a fine sword, however it is a blood thirsty, evil blade, as it does not discriminate as to who or what it will cut. It may just as well be cutting down butterflies as severing heads. The second was by far the finer of the two, as it does not needlessly cut that which is innocent and undeserving.”
Some of our languages are dangerous, while others are safe. Some are concerned with performance; others with developer time. Some achieve compactness of expression; others readability. Some unify features of other languages; others are decidedly different. Which sword do we pick up? Thus we come to the final haiku of the semester:
Which sword is better?
The unchecked one or the checked?
Well, I’ve got two hands
See you next time!