Introduction

This month, you will “scout” one software system and judge how suitable it is for a case study. Scouting involves a somewhat cursory look at the source code and documentation (let’s say at most a couple of hours) and a short, written report which answers these questions:

Your report should be emailed to me by 11:59PM on Thursday, September 28. Please send either PDF or Markdown with Pandoc extensions.

You must choose a system and get my approval to scout it by the end of class on Monday, September 18.

How to rate a system you scout

The ideal system for study is clear, simple, performant, and well documented. It provides a lot of useful abstractions, and it even supports a popular or important programming language. Tragically, the ideal system does not yet exist. Nevertheless, keeping these ideals in mind, you will rate the system you scout as “pro” or “con,” and your rating will be either “strong” or “weak.”

Strong ratings are generally more helpful than weak ratings—use a weak rating only if you must.

Partial list of candidate systems

You may scout any system you choose, provided I approve your choice. To help you choose, I list potential systems in four categories:

For some systems, I identify what aspect of the system I think we might focus on. This potential focus represents only an educated guess. My guess may inform a scout, and the scout will provide an informed recommendation.

Lua

Lua is small, simple, and well crafted. To my knowledge, it is the most performant language in its class (scripting languages). The architects know their literature, and they are consummate engineers.

The major downside is that the system is made simple by cutting back on the ambitions of the language: no exceptions, no parallelism. But there is a carefully crafted interface for foreign calls, plus an interesting coroutine mechanism. And there is a (somewhat creaky) module that adds message-passing concurrency on top of pthreads.
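Lua’s coroutines are asymmetric: coroutine.resume passes a value into a suspended coroutine, and coroutine.yield passes one back out. A rough analogue of that exchange, sketched here with Python generators (an illustration of the idea only, not how Lua implements it):

```python
# Rough analogue of Lua's asymmetric coroutines using Python generators:
# send() plays the role of coroutine.resume, and yield plays coroutine.yield.

def producer():
    # Each yield suspends the coroutine and hands a value to the resumer;
    # when resumed, it receives the resumer's value as the result of yield.
    n = 0
    while True:
        n = yield n + 1  # resumed with a new value for n

co = producer()
first = co.send(None)   # "resume" to the first yield; yields 0 + 1 = 1
second = co.send(10)    # resume with n = 10; yields 10 + 1 = 11
```

In Lua proper the same exchange would be written with coroutine.create, coroutine.resume, and coroutine.yield.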

The designers write,

For studying Lua’s run-time system, see these modules:

 lapi.c ldebug.c ldo.c ltm.c lvm.c

The virtual machine is the obvious place to start. But if you are already familiar with Lua, the API is also good.

The garbage collector is a world in itself:

 lgc.c

For papers, see Roberto Ierusalimschy, Luiz H. de Figueiredo, and Waldemar Celes, “The Implementation of Lua 5.0,” Journal of Universal Computer Science 11(7):1159–1176, July 2005. http://www.tecgraf.puc-rio.br/~lhf/ftp/doc/jucs05.pdf

Other papers I think worth looking at include Ierusalimschy, de Figueiredo, and Celes: Lua—an Extensible Extension Language, in Software—Practice & Experience, 1996. And the Lua paper in HOPL III, which provides some good context.

If we were to study Lua, I assume we would focus on what we can steal to put into more ambitious systems.

Chez Scheme

Kent Dybvig and his colleagues have published many interesting papers on aspects of Chez Scheme, but the compiler has always been a proprietary secret. In 2016, however, Kent made Chez Scheme open source. It immediately shot to the top of interesting implementations of Scheme in particular and of dynamically typed languages more generally. A colleague who works on Racket comments,

As a production GC implementation, I think Chez Scheme’s implementation is fairly compact and readable. I also think that Kent and company got finalization more right than anyone else, and we used the ideas even in the current Racket implementation. If you go that direction, see “Guardians in a generation-based collector” (PLDI’93).

Sources would include many of Kent’s papers, including the technical report “Don’t Stop the BIBOP” and two PLDI papers with “Continuations” in the title. Kent’s ICFP’06 invited paper “The Development of Chez Scheme” makes interesting background reading.

If we were to study Chez Scheme, I assume we would focus on what makes it performant.

Standard ML of New Jersey

The run-time system built by Andrew Appel, about which we are reading, provided impressive functionality for its size, but its garbage collector was far from the state of the art. To remedy this deficiency, John Reppy built a new run-time system, which was deployed not long after Andrew’s paper was published. John is knowledgeable, experienced, and highly skilled. John’s system, which is part of SML/NJ version 110.x, has held up quite well. John writes,

It has its flaws, but it is reasonably well documented (both source code and supporting papers), is not too large, and has been ported to a lot of platforms. The main problem is that there is a fair bit of #ifdef cruft, much of which could be removed.

Concurrent ML

John Reppy’s Concurrent ML is a tour de force of language design. It’s a concurrent language built on synchronous message passing, and the key contribution is that synchronization is not limited to sending or receiving: any programmer can define new synchronous events—generalizations of message passing—which are first-class abstractions and have the same privileges as the built-in send and receive. Understanding Concurrent ML is now a prerequisite for anybody who is serious about designing a language with concurrent computation. Concurrent ML has influenced Racket, Guile, Elm, and probably other languages I don’t know about.
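To make the idea concrete, here is a toy sketch of first-class events in Python. The names sync, wrap, and choose follow Concurrent ML’s combinators, but the busy-polling implementation is my own simplification and nothing like CML’s actual synchronization protocol:

```python
# Toy sketch of CML-style first-class events (illustrative only).
import queue

class Event:
    """A first-class synchronization point: sync() blocks until it fires."""
    def __init__(self, poll):
        self._poll = poll  # returns (fired?, value)
    def sync(self):
        while True:
            ok, v = self._poll()
            if ok:
                return v

def recv_evt(q):
    """An event that fires when a message is available on queue q."""
    def poll():
        try:
            return True, q.get_nowait()
        except queue.Empty:
            return False, None
    return Event(poll)

def wrap(ev, f):
    """Post-process an event's result; the combination is itself an event."""
    def poll():
        ok, v = ev._poll()
        return (True, f(v)) if ok else (False, None)
    return Event(poll)

def choose(events):
    """Nondeterministic choice: fires when any constituent event fires."""
    def poll():
        for ev in events:
            ok, v = ev._poll()
            if ok:
                return True, v
        return False, None
    return Event(poll)

# Usage: choose between two channels, tagging each result.
a, b = queue.Queue(), queue.Queue()
b.put("hello")
ev = choose([wrap(recv_evt(a), lambda v: ("a", v)),
             wrap(recv_evt(b), lambda v: ("b", v))])
result = ev.sync()  # ("b", "hello")
```

The point is that wrap and choose take events and return events, so user-defined synchronous abstractions compose exactly like the built-in send and receive.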

Concurrent ML is implemented on top of Standard ML of New Jersey. Some implementation issues are discussed in the final chapter of John’s book, Concurrent Programming in ML.

Permissible systems

Self

Self is a pure object-oriented language which uses message sends even for such operations as access to instance variables and to local variables. It’s an extraordinarily simple design, but it’s also insanely higher-order: effectively every operation is a call to an unknown function. Implementing Self efficiently requires sophisticated compilation techniques, first developed by Craig Chambers and David Ungar. Later developments by Urs Hölzle show that with suitable run-time support, one can get even better results than with the Chambers/Ungar compiler, and that the resulting compiler is also simpler.

Self is the foundation of all the dynamic implementation techniques that have gone into implementations of languages like Java and JavaScript. A good starting point would be the paper by Chambers and Ungar from PLDI 1989, which is in the “20 Years of PLDI” collection. Craig’s dissertation is also worth a look. I’d also like to look at Urs’s dissertation, which I haven’t yet gotten to.

If we were to study Self, I assume we would focus on the technology that enables the system to “devirtualize” method calls and otherwise to execute very dynamic code quite efficiently. This technology informs younger systems such as HotSpot and V8.
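One key piece of that technology is the polymorphic inline cache, which Hölzle, Chambers, and Ungar described at ECOOP 1991. The following Python toy (an illustration of the idea only; real PICs are patched machine code at the call site) shows how a call site can remember the receiver classes it has seen:

```python
# Toy sketch of a polymorphic inline cache (PIC): each call site remembers
# the receiver classes it has seen and the methods it looked up for them,
# so repeated sends to the same classes skip the full method lookup.

class CallSite:
    def __init__(self, selector):
        self.selector = selector
        self.cache = {}      # receiver class -> looked-up method
        self.lookups = 0     # count of slow-path lookups, for demonstration

    def send(self, receiver, *args):
        cls = type(receiver)
        method = self.cache.get(cls)
        if method is None:
            self.lookups += 1                  # slow path: full lookup
            method = getattr(cls, self.selector)
            self.cache[cls] = method           # extend the cache in place
        return method(receiver, *args)

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s * self.s

class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r * self.r

site = CallSite("area")
shapes = [Square(2), Circle(1), Square(3), Circle(2)]
areas = [site.send(sh) for sh in shapes]
# Four sends, but only two slow-path lookups: one per receiver class.
```

Once the cache holds an entry for a class, subsequent sends to receivers of that class bypass lookup entirely; the recorded classes also feed type information back to the optimizing compiler.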

Dart

Dart is one of Google’s recent entries in the language-design sweepstakes. It’s relatively new, and I hear that the run-time system is still relatively simple and has not yet accumulated a lot of cruft.

V8

V8 is attractive because it is so widely deployed, but I would instead recommend studying these ideas in their original form, in Self.

V8 is the most recent recipient of SIGPLAN’s Programming Languages Software Award.

Squeak

Squeak is a Smalltalk machine implemented in Smalltalk. It is bootstrapped by translating a carefully written core into C. The Squeak machine supports a thriving community. From our point of view, it would provide an opportunity to study a run-time system written in a high-level language, or at least I would so hope. (The same idea was a key part of Jalapeño, which is now called Jikes, but Squeak predates Jalapeño by two years.1) A good starting point is the “Back to the Future” paper from OOPSLA’97, updated in 2000 by Dan Ingalls as “Back to the Future Once More.” And http://squeak.org/research/ cites many related papers.

If we were to study Squeak, I assume we would focus on what parts of the system benefit from being written in a high-level language, and what parts remain something like “C transliterated into Smalltalk.”

Oberon

The Oberon programming language is Niklaus Wirth’s final entry in his lifelong quest to develop a worthy successor to Algol (see also Pascal and Modula). But Oberon is also a system: the run-time system also serves as an operating system, and it runs on bare metal. Wirth writes,

Project Oberon is a design for a complete desktop computer system from scratch. Its simplicity and clarity enables a single person to know and implement the whole system, yet still providing enough power to make it useful and usable in a production environment.

I first encountered Oberon in 1990, when I was a summer intern at the Digital Systems Research Center. My senior colleagues, many of whom had helped invent the personal computer while at Xerox PARC, were astounded by this nice little system, which could compile and boot itself in under a minute on 1990 hardware, and which provided the whole WIMP user experience that was only then becoming routine on stock hardware. The Oberon project sprang from the keyboards of Niklaus Wirth and Jürg Gutknecht, who in 1992 wrote a book describing the system in great detail. Even better, Wirth prepared a revision of the book in 2013, which, although never published in hard covers, is available online. See projectoberon.com.

If we were to study Oberon, I assume we would focus on how it is possible that the system can self-compile and boot so quickly.

Erlang

Erlang is a relatively simple functional language inspired by Prolog—pattern matching over terms features heavily. But what’s really interesting about Erlang is its support for parallel and distributed computing, especially for monitoring remote processes and recovering from failure.

Some information can be found at http://erlang.org/faq/implementations.html.

If we were to study Erlang, I assume we would focus on parallelism, distributed computing, and recovery from failure.

Guile

Guile is an implementation of Scheme that is designed to be embedded into C programs. It aims to fill the same niche filled by Lua and by Tcl. I’ve listed it primarily because I have cause to believe that Andy Wingo is a well-informed, skilled craftsman.

Elm

Elm is a pure functional language that runs animations (and other things) in the browser. The design borrows from Concurrent ML and from Haskell. I find the work very interesting, but am not sure if there is an interesting run-time system there or if it’s all delegated to the JavaScript engine.

HotSpot

HotSpot is not recommended because if I wanted to learn about dynamic compilation and optimization, I’d rather study Self.

Jikes

Jikes is not recommended because its commitment to abstraction and modularity makes it hard to learn and hard to understand. My credentials for this judgment: I spent ten years designing and building a compiler called Quick C--. Like Jikes, it was designed to be abstract and modular. And because it was abstract and modular, it was very hard to learn.

Jikes has received SIGPLAN’s Programming Languages Software Award.

Python

I’ve read a good bit about the engineering of Python, and I’d rather learn from systems that have been well designed from the beginning, as opposed to something kludged together that kind of works.

Hugs

The Hugs system is Mark Jones’s implementation of Haskell. I love Mark’s work, and I bet that the system is really interesting. But I’m worried that the support for lazy evaluation might distort the design of the whole system (as it has in GHC), and for that reason, I’m not recommending it.

MLton

I highly respect MLton’s engineers. But one of them tells me that although MLton’s runtime isn’t embarrassing, it isn’t particularly well documented. I’m hoping we can do better.

Go

Go is an interesting language and system. But I have it on good authority that the concurrency story is not as well put together as Concurrent ML’s. And Go’s memory management, at least initially, was not something that conscientious people could recommend as near the state of the art—though I hear that it has since improved.

Impermissible systems

There are two systems we definitely will not study.

The Glasgow Haskell Compiler (GHC)

GHC is a beast—when I last took a careful look at the run-time system (in 2005), it was four times the size of Version 6 Unix. But the real reason I’ve ruled it out is that the designers say that the need to support lazy evaluation pervades and distorts the whole system. That makes it not right for our course.

Racket

Racket is another very interesting system, but one of the chief architects of the Racket implementation says that the Racket run-time system is mostly a poor example of how to do things—and that Racket is being rebuilt to run on top of Chez Scheme. So if we’re interested in Racket, we should study Chez Scheme.


  1. “Back to the Future” appeared in October 1997. The Jalapeño project launched in December 1997. Post hoc, ergo propter hoc?