Some Reflections on Compiler Correctness

Robert Dockins, Princeton University

Compilers are some of the most fundamental tools of computer science; they translate the programs we write into a form that can be executed by computers. However, they suffer the same problem as any other software artifact: they may have bugs. Thus, compilers have naturally attracted the attention of the program verification community for decades. Recently, the rise in popularity of mechanical theorem proving has prompted a resurgence of interest in this topic.

However, a question of central importance has yet to be adequately answered: what do we mean (indeed, what ought we to mean) when we discuss the correctness of compilers? Looking through the literature on compiler verification, we see certain recurring themes but, surprisingly, no comprehensive theory of compiler correctness has emerged. This makes it difficult to understand and compare different compiler correctness proofs, and it makes it difficult to know exactly which program properties the compiler preserves.

In this talk, I will examine some of the reasons why no unified theory has emerged and suggest some specific solutions. In particular, I will present an approach based on program refinements that smoothly incorporates undefined and unspecified behavior, which have been stumbling blocks for previous theories. I will develop these notions of refinement from an operational view (based on bisimulations) and from a deductive view (based on temporal logics), and I will argue that these refinements are the "proper" abstractions for characterizing undefined and unspecified behavior.
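
To make the central idea concrete, one standard way to formalize program refinement is as containment of observable behaviors (a sketch only; the notation Beh and the relation below are illustrative, not necessarily the definitions used in the talk). Writing Beh(P) for the set of observable behaviors of a program P, compiled code C refines source code S when

    C refines S   iff   Beh(C) ⊆ Beh(S)

Under this reading, undefined behavior fits naturally: on an input where S has undefined behavior, Beh(S) contains every possible behavior, so any compiled code trivially refines S on that input. Unspecified behavior corresponds to Beh(S) containing several, but not all, alternatives, with the compiler free to commit to any one of them.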