
The IR given in Appel's book (Compiling with Continuations) is certainly well explored and battle-tested in production compilers. I have been able to find several works based on or inspired by it, either by Appel himself, such as *Shrinking Lambda Expressions in Linear Time*, or by others, such as Kennedy's *Compiling with Continuations, Continued*. However, I haven't been able to find any proof that CPS translations into it are actually correct. By correct I mean computational adequacy: a term $e$ halts under the source language's semantics if and only if its translation $[\![e]\!]$ halts under the IR's semantics.

Is there a proof of that for any CPS translation into the IR? I'm particularly interested in references for Plotkin's CBN and CBV translations, but any translation would do.
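To fix notation, here is a minimal sketch of Plotkin's call-by-value translation on plain lambda terms. The tuple-based AST and the fresh-name scheme are my own illustrative choices; note that this targets the raw lambda calculus, not Appel's restricted IR.

```python
# Plotkin's CBV CPS translation (sketch):
#   [[x]]     = \k. k x
#   [[\x.e]]  = \k. k (\x. [[e]])
#   [[e1 e2]] = \k. [[e1]] (\f. [[e2]] (\v. f v k))
# Terms are tuples: ("var", x), ("lam", x, body), ("app", e1, e2).
import itertools

_fresh = itertools.count()

def fresh(base):
    # Assumes source variables never collide with generated names.
    return f"{base}{next(_fresh)}"

def cps(t):
    k = fresh("k")
    tag = t[0]
    if tag == "var":
        _, x = t
        return ("lam", k, ("app", ("var", k), ("var", x)))
    if tag == "lam":
        _, x, e = t
        return ("lam", k, ("app", ("var", k), ("lam", x, cps(e))))
    if tag == "app":
        _, e1, e2 = t
        f, v = fresh("f"), fresh("v")
        return ("lam", k,
                ("app", cps(e1),
                 ("lam", f,
                  ("app", cps(e2),
                   ("lam", v,
                    ("app", ("app", ("var", f), ("var", v)),
                     ("var", k)))))))
```

Adequacy for this translation would then say: a source term halts under CBV evaluation iff its image under `cps`, applied to an initial continuation, halts under the IR's semantics.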

Edit: To clarify, the point is that this IR requires arguments to always be variables, and is thus not closed under $\beta$-reduction. Appel's book gives a denotational semantics for it by interpreting it into ML, and Kennedy's paper gives a big-step machine semantics for it; Appel's paper on shrinking reductions suggests using function inlining as a semantics, and Thielecke's PhD thesis gives an equational theory for it. All of these notions agree with one another.
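For concreteness, here is a small example of my own showing why the restriction is not preserved by $\beta$-reduction: substituting an abstraction for a variable can put a non-variable into argument position.

```latex
% The redex is in the IR (every argument is a variable), but the
% contractum has the abstraction (\lambda y.\,e) in argument position:
(\lambda x.\; f\,x\,k)\,(\lambda y.\, e)
  \;\to_\beta\;
f\,(\lambda y.\, e)\,k
```

So reduction in the usual sense takes terms out of the IR, which is why one needs a semantics given by interpretation, an abstract machine, or a restricted (shrinking) notion of inlining instead.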

paulotorrens
