This is important because multi-job computations are very common. The primitives provided by big data processing systems (e.g., Hadoop MapReduce) limit the amount of work that can be expressed in a single job. As a result, users must either divide their algorithms into multiple jobs [15], [16] or rely on higher-level languages (e.g., Hive [23], Pig [19]), which are themselves typically compiled into sequences of jobs.