In this paper, we propose a non-parametric method to
synthesize detailed wrinkle geometries and create personalized
blendshape models with a single low-cost Microsoft Kinect RGBD
camera, which are subsequently used to track RGBD facial
performance videos to create 3D facial animations with detailed
wrinkles (see Fig. 1). We utilize a texture synthesis approach to
synthesize wrinkles on 3D facial expression models for various
people. The distinctive feature of this method is that it is lightweight, since
we only use one high-quality 3D facial model with calibrated
texture as the source for texture synthesis and a single RGBD
camera. The key observation is that, although facial wrinkles
look different from one person to another, the variation
in their local geometry is much smaller. This implies that it is possible