This delay is proportional to the amount of information, useful for
the remaining execution of Ti, that is evicted during the preemption.
The preemption delay therefore varies during the execution of the job.
Let us illustrate this with a simple example. Suppose that a task
starts its execution by loading a large amount of data from memory.
The task then processes all these data within a short period of time
and finally performs a lengthy computation using only a small subset
of the data.
In this case, the maximum preemption delay at the beginning of the
task's execution is high, since in the worst-case scenario all the
loaded data might be evicted during a preemption, forcing the task to
reload them upon returning from the preemption. Once the data have
been processed, however, the maximum preemption delay drops
drastically, since a preemption occurring during the lengthy
computation can only force the task to reload the few data items that
it needs when resuming its execution.
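The following C sketch makes the example concrete. It is only an
illustration of the access pattern described above, not code from any
particular system; the function name, array sizes, and loop bounds are
assumptions chosen for exposition.

    /* Hypothetical task body illustrating how the worst-case
     * preemption delay tracks the amount of cached data that is
     * still useful to the remaining execution. */
    #include <stddef.h>
    #include <stdint.h>

    #define N_LARGE (1u << 20)  /* large data set loaded at start      */
    #define N_SMALL 64u         /* small subset used by the long phase */

    static uint32_t data[N_LARGE];

    uint64_t task_body(void)
    {
        uint64_t acc = 0;

        /* Phase 1: load and quickly process a large amount of data.
         * While this phase runs, nearly all of `data` is useful for
         * the remaining execution, so a preemption here may evict up
         * to N_LARGE cached elements: the worst-case preemption
         * delay is high. */
        for (size_t i = 0; i < N_LARGE; i++)
            acc += data[i];

        /* Phase 2: a lengthy computation over a small subset. Only
         * the first N_SMALL elements remain useful, so a preemption
         * here forces at most N_SMALL reloads: the worst-case
         * preemption delay drops drastically. */
        for (int round = 0; round < 100000; round++)
            for (size_t i = 0; i < N_SMALL; i++)
                acc = acc * 31 + data[i];

        return acc;
    }

The comments mark the two phases of the example: the bound on the
preemption delay at any point is determined by how much of the cached
working set is still needed after that point, not by the total amount
of data the task has touched.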