An illustration of this is shown in the following image. Here, the generic model was trained using the MUCT dataset, while the person-specific model was learned from data generated with the annotation tool described earlier in this chapter. The results clearly show that the person-specific model offers substantially better tracking, capturing complex expressions and head-pose changes, whereas the generic model struggles even with some of the simpler expressions: