Time: MW 1:30–3:30 (two hours, not a misprint)
Instructor: Norman Ramsey, Halligan Extension 006
This was the original plan, which did not survive the first week of class:
Begin with a few weeks' tutorial on compilers, but just the good parts. Not everyone will have a compiler background, so we will cover some foundational ideas related to program analysis and improvement:
The design of low-level intermediate languages
Derivation of program properties using weakest preconditions and strongest postconditions (bases for 'backward' and 'forward' analyses)
Simple code improvements ('optimizations') based on dataflow analysis
Translation of low-level intermediate languages into machine code
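As a taste of the second topic, the classic weakest-precondition rules for assignment and sequencing can be written in a few lines. This is a minimal sketch, not code from the course; the tuple-based predicate representation and all function names here are my own illustrative choices. The rules implemented are wp(x := e, Q) = Q[e/x] and wp(s1; s2, Q) = wp(s1, wp(s2, Q)).

```python
# Sketch of weakest preconditions for a tiny statement language.
# Predicates and expressions are small ASTs built from tuples, e.g.
# ('>', 'x', 0) for the predicate x > 0. (Illustrative only.)

def subst(pred, var, expr):
    """Substitute expr for every occurrence of variable var in pred."""
    if isinstance(pred, tuple):
        return tuple(subst(p, var, expr) for p in pred)
    if pred == var:
        return expr
    return pred

def wp(stmt, post):
    """Weakest precondition of stmt with respect to postcondition post."""
    kind = stmt[0]
    if kind == 'assign':               # ('assign', x, e) means x := e
        _, x, e = stmt
        return subst(post, x, e)       # wp(x := e, Q) = Q[e/x]
    if kind == 'seq':                  # ('seq', s1, s2) means s1; s2
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))    # compose right to left
    raise ValueError(f'unknown statement {kind!r}')

# wp(x := x + 1, x > 0)  =  x + 1 > 0
pre = wp(('assign', 'x', ('+', 'x', 1)), ('>', 'x', 0))
print(pre)   # ('>', ('+', 'x', 1), 0)
```

The "backward" character of the analysis is visible in the `seq` case: the postcondition flows from the last statement toward the first.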
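And as a taste of the third topic, here is constant propagation as a forward dataflow analysis over a made-up low-level IR. The IR, the lattice encoding, and every name below are illustrative assumptions of mine, not material from the course: a program is a dict from block label to (instructions, successor labels), a fact maps each variable to a known constant (absent means unknown), and the join of two facts keeps only the entries on which they agree.

```python
# Sketch of constant propagation by iterating transfer functions to a
# fixed point over a control-flow graph. (Illustrative IR, not the course's.)

def transfer(instrs, fact):
    """Push a fact forward through one block's instructions."""
    env = dict(fact)
    for ins in instrs:
        if ins[0] == 'const':          # ('const', x, n) means x := n
            _, x, n = ins
            env[x] = n
        elif ins[0] == 'add':          # ('add', x, y, z) means x := y + z
            _, x, y, z = ins
            if y in env and z in env:
                env[x] = env[y] + env[z]
            else:
                env.pop(x, None)       # result unknown
    return env

def join(f1, f2):
    """Meet of two facts: keep only the constants both paths agree on."""
    return {v: n for v, n in f1.items() if f2.get(v) == n}

def constants(program, entry):
    """Return the fact holding at the entry of each block."""
    in_facts = {label: None for label in program}   # None = not yet reached
    in_facts[entry] = {}
    changed = True
    while changed:
        changed = False
        for label, (instrs, succs) in program.items():
            if in_facts[label] is None:
                continue
            out = transfer(instrs, in_facts[label])
            for s in succs:
                new = out if in_facts[s] is None else join(in_facts[s], out)
                if new != in_facts[s]:
                    in_facts[s] = new
                    changed = True
    return in_facts

# x is 1 on both paths into 'exit', so it is known there; y is not.
program = {
    'entry': ([('const', 'x', 1), ('const', 'y', 2)], ['left', 'right']),
    'left':  ([('const', 'y', 3)],                    ['exit']),
    'right': ([],                                     ['exit']),
    'exit':  ([('add', 'z', 'x', 'x')],               []),
}
print(constants(program, 'entry')['exit'])   # {'x': 1}
```

Replacing each use of a variable that the analysis proves constant is then a simple rewrite, which is exactly the kind of dataflow-driven improvement the tutorial covers.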
We could go fairly deep into some advanced optimization techniques, which would give us the opportunity to evaluate empirically whether our new perspective genuinely makes it easier to write optimizations.
Our perspective makes it especially easy to combine optimizations, and we could investigate which kinds of optimizations can be combined most effectively.
We could explore interprocedural analyses and optimizations.
For the more theoretically inclined, we could make a closer study of the obligations of each individual analysis and optimization. What constraints are required to ensure that the entire optimizer is sound? Does our new perspective lead to a crisper or simpler statement of those constraints? Could the constraints in the literature be relaxed?
We could make a scholarly comparison with other dataflow frameworks, such as CIL, Soot, and LLVM.
We could build a small research compiler to try out ideas.
We could try out some ideas on the native-code back ends of the Glasgow Haskell Compiler, which are ripe for updating.
Not enough is known about how to test optimizations, how to test a dataflow-analysis framework, or how to generate "interesting" test programs randomly. Answers to these questions should be publishable.
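One simple starting point for that last question, offered purely as an illustration and not as a claim about what the research would look like, is differential testing in miniature: generate random programs, run a candidate optimization (here, naive constant folding over a toy expression language of my own invention), and check that the optimized program means the same thing as the original.

```python
# Sketch: random program generation plus a semantics-preservation check.
# The expression language, the 'fold' optimization, and the interpreter
# are all hypothetical stand-ins for a real compiler's IR and passes.
import random

def gen(depth, rng):
    """Generate a random expression: a literal, the variable x, or a sum."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice([('lit', rng.randint(-5, 5)), ('var', 'x')])
    return ('add', gen(depth - 1, rng), gen(depth - 1, rng))

def fold(e):
    """Constant folding: the 'optimization' under test."""
    if e[0] == 'add':
        a, b = fold(e[1]), fold(e[2])
        if a[0] == 'lit' and b[0] == 'lit':
            return ('lit', a[1] + b[1])
        return ('add', a, b)
    return e

def evaluate(e, x):
    """Reference semantics: interpret e with variable x bound to an int."""
    if e[0] == 'lit':
        return e[1]
    if e[0] == 'var':
        return x
    return evaluate(e[1], x) + evaluate(e[2], x)

rng = random.Random(0)
for _ in range(1000):
    e = gen(4, rng)
    for x in (-1, 0, 7):
        assert evaluate(e, x) == evaluate(fold(e), x), (e, x)
print('constant folding preserved meaning on 1000 random programs')
```

The hard, publishable part is precisely what this sketch dodges: making the generated programs "interesting" enough to exercise the analyses and rewrites an optimizer actually performs.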