Abstract
An important feature of functional programs is that they are parallel
by default. Implementing an efficient parallel functional language,
however, is a major challenge, in part because the high rate of
allocation and freeing associated with functional programs requires
an efficient and scalable memory manager.
In this paper, we present a technique for parallel memory
management for strict functional languages with nested parallelism.
At the highest level of abstraction, the approach consists of a
technique to organize memory as a hierarchy of heaps, and an
algorithm for performing automatic memory reclamation by taking
advantage of a disentanglement property of parallel functional
programs. More specifically, the idea is to assign each parallel task
its own heap in memory and to organize the heaps in a tree
that mirrors the hierarchy of tasks.
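The heap hierarchy and the access discipline it induces can be sketched as follows (a minimal Python illustration, not the paper's implementation; the names `Heap` and `may_access` are ours):

```python
# Sketch: each task owns a heap; heaps form a tree mirroring the task
# tree. Under disentanglement, a task may reach objects in its own heap
# or an ancestor's heap, but never in the heap of a task that may run
# in parallel with it (e.g., a sibling).

class Heap:
    def __init__(self, parent=None):
        self.parent = parent      # enclosing task's heap; None at the root
        self.children = []        # heaps of forked subtasks
        if parent is not None:
            parent.children.append(self)

    def ancestors(self):
        """This heap together with all heaps on the path to the root."""
        h, path = self, []
        while h is not None:
            path.append(h)
            h = h.parent
        return path

def may_access(task_heap, object_heap):
    """Disentanglement: a read by the task owning task_heap into
    object_heap is safe only if object_heap is task_heap itself
    or one of its ancestors."""
    return object_heap in task_heap.ancestors()

# A root task forks two parallel subtasks, each with its own heap.
root = Heap()
left, right = Heap(root), Heap(root)
assert may_access(left, root)       # data in an ancestor heap is visible
assert not may_access(left, right)  # data of a parallel sibling is not
```

Because safe reachability is confined to a heap and its ancestors, any subtree of heaps can be collected without synchronizing with tasks outside that subtree.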
We present a nested-parallel calculus that specifies hierarchical
heaps and prove in this calculus a disentanglement property, which
prohibits a task from accessing objects allocated by another task
that might execute in parallel. Leveraging the disentanglement
property, we present a garbage collection technique that can operate
on any subtree of the memory hierarchy while other tasks
(and other collections) proceed in parallel. We prove the safety
of this collector by formalizing it in the context of our parallel
calculus. In addition, we describe how the proposed techniques can
be implemented on modern shared-memory machines and present
a prototype implementation as an extension to MLton, a high-performance
compiler for the Standard ML language. Finally, we
evaluate the performance of this implementation on a number of
parallel benchmarks.
Categories and Subject Descriptors D.3.4 [Programming Languages]:
Processors—Memory management (garbage collection);
D.1.3 [Programming Techniques]: Concurrent Programming—Parallel
Programming; D.4.1 [Operating Systems]: Process Management—Scheduling
General Terms Languages, Performance
Keywords Languages, Memory Management, Parallelism, Scheduling