Performance Analysis and Verification
Performance debugging is an enormous time sink in program development. Since
the efficiency of high-level abstract languages can be hard to assess by analytic
means, it is necessary to resort to profiling tools that treat the efficiency problem
"empirically," as if the program were an object found in nature, rather than
an artifact of known origin with known properties. Improving this situation
would be greatly beneficial in practice, and is likely to be a source of many
interesting research ideas. One may view mechanical complexity verification as
the ultimate profiling tool: one that does not even require the execution of the
program to obtain useful information! More likely, however, a combination of
static verification and dynamic measurement techniques will prove helpful.
Developing these ideas seems to be a prime opportunity for interaction between
the algorithms and languages communities.
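To illustrate the gap, consider a small sketch in OCaml (the choice of language and all names here are illustrative, not drawn from the text): a list reversal whose quadratic cost is invisible in the source text and is typically discovered only by empirical measurement, exactly the profiling-style workflow described above.

    (* Quadratic reversal: the cost of (@) is hidden behind the list
       abstraction, so the O(n^2) behavior is easy to miss by inspection. *)
    let rec slow_rev = function
      | [] -> []
      | x :: xs -> slow_rev xs @ [x]  (* (@) is O(n), so slow_rev is O(n^2) *)

    (* The accumulator version is O(n). *)
    let fast_rev xs =
      let rec go acc = function
        | [] -> acc
        | y :: ys -> go (y :: acc) ys
      in
      go [] xs

    (* Empirical comparison, in the style of a profiler: measure the cost
       rather than derive it. *)
    let time f x =
      let t0 = Sys.time () in
      ignore (f x);
      Sys.time () -. t0

    let () =
      let xs = List.init 10_000 (fun i -> i) in
      Printf.printf "slow_rev: %.3fs\nfast_rev: %.3fs\n"
        (time slow_rev xs) (time fast_rev xs)

A mechanical complexity verifier would instead establish the O(n^2) versus O(n) bounds statically, without running the program at all.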
The concept of a cost semantics presents opportunities for mechanized proof
that builds on recent developments in compiler correctness. Appel and Leroy [Appel
et al., 2014] have amassed an impressive body of work on proving that the
code output by a compiler (for a simple C-like language) is behaviorally
equivalent to the source code. Given a cost semantics, the natural next step is to
show not only that the emitted code is functionally correct, but also that
it meets the expected complexity bounds as stated in the cost semantics. Doing
so would rule out extremely subtle bugs, such as space leaks, which arise when
compiler errors cause the emitted code to retain live data that truly ought
to be considered dead. (See, for example, Spoonhower [2009] for a real-world
scenario of this kind.)
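To fix intuitions about what a cost semantics provides, here is a minimal sketch in OCaml, not drawn from any of the cited work: an evaluator for a toy arithmetic language that returns a value together with an abstract step count. A compiler-correctness theorem extended with cost would then assert that the emitted code's resource usage is bounded in terms of exactly such a count.

    (* A toy expression language. *)
    type exp =
      | Num of int
      | Add of exp * exp
      | Mul of exp * exp

    (* eval : exp -> int * int
       Returns (value, cost), charging one abstract step per arithmetic
       operation; pairing each result with its cost is the essence of a
       cost semantics. *)
    let rec eval = function
      | Num n -> (n, 0)
      | Add (e1, e2) ->
          let v1, c1 = eval e1 in
          let v2, c2 = eval e2 in
          (v1 + v2, c1 + c2 + 1)
      | Mul (e1, e2) ->
          let v1, c1 = eval e1 in
          let v2, c2 = eval e2 in
          (v1 * v2, c1 + c2 + 1)

    let () =
      let v, c = eval (Add (Num 1, Mul (Num 2, Num 3))) in
      Printf.printf "value = %d, cost = %d steps\n" v c

A space-safety result would similarly bound, say, live heap size in terms of a cost assigned by the semantics, making a compiler-introduced leak a provable violation rather than a runtime surprise.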