I want to take a few minutes to explore the next homework assignment, which was inspired by the game Human Resource Machine. But let’s do that at the end of class, rather than the beginning.
At this point in the semester, we leave our textbook behind and jump into an industry-grade architecture: ARM. To fully understand a technology, one must delve into its history, so we will take a quick peek at how ARM came to be. Our study of ARM will be conducted on Raspberry Pis. Ultimately, it’s my hope to investigate the following: reading from mouse and keyboard, interacting with hardware via memory-mapped I/O, learning how an executable turns into a RAM-resident process, and examining the runtime behavior of a program with function calls and flow control.
A computer’s architecture is how a programmer views the computer—the computer hardware itself, not abstracted away by some language or framework. Essentially, an architecture boils down to a few things: an instruction set, registers, and memory. In reality, the architecture is not the lowest layer of abstraction. For example, the x86 architecture is supported by many different processors, each implementing the instruction set in its own way.
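To make those three ingredients concrete, here is a minimal sketch in 32-bit ARM assembly (GNU assembler syntax, the flavor we will use on the Raspberry Pi). The routine name is made up for illustration; the point is that every line touches only the instruction set, the registers, or memory.

```armasm
@ Illustrative only: increments the word stored at the address in r0.
@ Instructions: ldr, add, str, bx    Registers: r0, r1, lr    Memory: [r0]

increment_word:
    ldr r1, [r0]      @ memory: load the word at the address held in r0
    add r1, r1, #1    @ registers: arithmetic happens entirely in registers
    str r1, [r0]      @ memory: store the result back to the same address
    bx  lr            @ return to the caller via the link register
```

Notice that nothing here names a particular processor. Any chip that implements the ARM architecture must run this routine identically, which is exactly the separation between architecture and implementation discussed next.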
Why would we want to divorce an architecture from a hardware implementation? Interoperability’s a good reason. A common architectural standard allows manufacturers to compete at the bottom-most layer without forcing consumers to pay a cost when switching between them. Each manufacturer can optimize its implementation differently, perhaps to maximize battery life, performance, or price.
Several mainstream architectures exist: x86, ARM, MIPS, SPARC, and PowerPC. I used to talk about x86 in our programming languages class, but I abandoned it because ARM is simpler and the dominant player in the mobile market. Interestingly, ARM itself doesn’t manufacture processors. It licenses its architecture to companies that do.
Here’s your TODO list:
See you next class!