Our method works in two stages: offline personalized wrinkled
blendshape construction and online 3D facial performance capture.
In the offline stage, the user-specific expressions are recorded
as blendshapes, and the wrinkles on them are generated through
example-based geometric detail synthesis. During the online stage,
given an RGB-D video captured by a Kinect camera, the 3D facial
animation with detailed wrinkles is reconstructed in each frame
as a linear combination of the wrinkled blendshape models.
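To make the per-frame reconstruction concrete, the sketch below (Python/NumPy) forms a mesh as the neutral shape plus a weighted sum of blendshape deltas. All names (`neutral`, `blendshapes`, `weights`, `reconstruct_frame`) are hypothetical placeholders, not the paper's implementation; the weight fitting against the RGB-D input is omitted.

```python
import numpy as np

def reconstruct_frame(neutral, blendshapes, weights):
    """Combine wrinkled blendshapes linearly into a per-frame mesh.

    neutral:      (V, 3) array of neutral-pose vertex positions
    blendshapes:  (K, V, 3) array of wrinkled expression blendshapes
    weights:      (K,) per-frame expression coefficients, assumed
                  already fitted to the RGB-D input for this frame
    """
    deltas = blendshapes - neutral  # per-blendshape offsets from the neutral face
    # Weighted sum of deltas added back onto the neutral shape.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy usage with random data standing in for the captured blendshapes.
V, K = 5000, 24  # illustrative vertex and blendshape counts
neutral = np.random.rand(V, 3)
blendshapes = neutral + 0.01 * np.random.randn(K, V, 3)
weights = np.clip(np.random.rand(K), 0.0, 1.0)
frame_mesh = reconstruct_frame(neutral, blendshapes, weights)  # (V, 3)
```

Because the wrinkle details are baked into the blendshapes offline, this online step stays a cheap linear blend, which is what makes real-time reconstruction feasible.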