Under fairly weak regularity conditions, Gugerli has shown in
[20] that the value function v can be calculated iteratively as
follows. First, choose the computational horizon N such that after
N jumps, the running time t has reached T_f for almost all
trajectories. Let v_N = g be the reward function, and iterate an
operator L backwards, see Eq. (8). The function v_0 thus obtained is
equal to the value function v.
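The backward recursion can be sketched as follows. This is a minimal illustration only: the function names and the toy operator are assumptions, since the actual operator L of Eq. (8) depends on the dynamics of the underlying process.

```python
def backward_value_iteration(g, apply_L, N):
    """Compute v_0 by iterating an operator L backwards from v_N = g.

    g       -- terminal value v_N (here a plain number for illustration)
    apply_L -- callable implementing one application of the operator L
    N       -- computational horizon (number of backward steps)
    """
    v = g                  # v_N = g
    for _ in range(N):     # v_n = L(v_{n+1}) for n = N-1, ..., 0
        v = apply_L(v)
    return v               # v_0, equal to the value function v


# Toy example: a contracting affine operator L(v) = 0.5*v + 1,
# starting from v_N = 0 with horizon N = 3.
v0 = backward_value_iteration(0.0, lambda v: 0.5 * v + 1.0, 3)
print(v0)  # → 1.75
```

In practice v would be a function of the state (e.g. values on a discretization grid) rather than a scalar, and apply_L would evaluate the expectation and maximization defining L.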