Animating 3D faces to achieve compelling realism is a challenging
task in the entertainment industry. Previously proposed face transfer approaches generally require a high-quality animated source
face in order to transfer its motion to new 3D faces. In this work,
we present a semi-automatic technique to directly animate popular 3D blendshape face models by mapping facial motion capture
data spaces to 3D blendshape face spaces. After sparse markers on
the face of a human subject are captured by motion capture systems while a video camera is simultaneously used to record his/her
front face, then we carefully select a few motion capture frames
and accompanying video frames asreference mocap-video pairs.
Users manually tune blendshape weights to perceptually match the
animated blendshape face models with reference facial images (the
reference mocap-video pairs) in order to create reference mocap-weight pairs. Finally, the Radial Basis Function (RBF) regression
technique is used to map any new facial motion capture frame to
blendshape weights based on the reference mocap-weight pairs.
Our results demonstrate that this technique efficiently animates
blendshape face models while offering generality and flexibility.
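To make the pipeline concrete, the sketch below (a simplification, not the authors' code) fits an RBF regressor on hypothetical reference mocap-weight pairs using SciPy's RBFInterpolator, maps new mocap frames to blendshape weights, and drives a hypothetical blendshape rig with the standard weighted combination of targets; all file names, array shapes, and the Gaussian kernel width are assumptions.

```python
# A minimal sketch, not the authors' implementation: it assumes hypothetical
# NumPy files holding the reference mocap-weight pairs and a blendshape rig.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Reference mocap-weight pairs (assumed shapes):
#   ref_mocap:   (P, 3*M) -- P reference frames, M markers, flattened (x, y, z)
#   ref_weights: (P, K)   -- manually tuned weights for K blendshape targets
ref_mocap = np.load("ref_mocap.npy")        # hypothetical file names
ref_weights = np.load("ref_weights.npy")

# Fit one RBF regressor mapping a mocap frame to all K blendshape weights.
# The Gaussian kernel width (epsilon) is a tunable assumption, not from the paper.
rbf = RBFInterpolator(ref_mocap, ref_weights, kernel="gaussian", epsilon=1.0)

# Map new facial mocap frames to blendshape weights; clamping to [0, 1] is a
# common safeguard, not something the abstract specifies.
new_mocap = np.load("new_mocap.npy")        # (F, 3*M) frames to animate
weights = np.clip(rbf(new_mocap), 0.0, 1.0)  # (F, K)

# Drive the rig with the standard blendshape model:
#   face(t) = neutral + sum_k w_k(t) * (target_k - neutral)
neutral = np.load("neutral_face.npy")        # (V, 3) neutral mesh vertices
targets = np.load("blend_targets.npy")       # (K, V, 3) blendshape targets
animated = neutral + np.einsum("fk,kvd->fvd", weights, targets - neutral)  # (F, V, 3)
```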