The framework
involves two generative models: the Kinematics Gait Generative Model (KGGM) and the Visual Gait Generative Model
(VGGM). Together, these dual generative models can interpolate and synthesize new gaits both visually and kinematically, which allows
us to infer the kinematics of a new gait from its appearance. Learning these models requires a large amount
of fully synchronized, high-quality kinematic and visual motion data from multiple persons.
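To make the interpolation idea concrete, the following is a minimal sketch of synthesizing new gaits by interpolating latent codes in a shared latent space and decoding them. The linear decoder, variable names, and dimensions are illustrative assumptions, not the actual KGGM/VGGM architecture.

```python
import numpy as np

# Hypothetical sketch: a toy linear "decoder" stands in for a generative
# model (kinematic or visual) that maps a latent gait code to a motion
# signal. Shapes and names are assumptions for illustration only.

rng = np.random.default_rng(0)
latent_dim, output_dim = 8, 32

# Toy decoder weights mapping a latent gait code to a motion vector.
W = rng.standard_normal((output_dim, latent_dim))

def decode(z):
    """Map a latent gait code z to a synthesized motion vector."""
    return W @ z

# Latent codes of two observed gaits (e.g., walking and running).
z_walk = rng.standard_normal(latent_dim)
z_run = rng.standard_normal(latent_dim)

# Linear interpolation yields latent codes for intermediate, unseen gaits.
alphas = np.linspace(0.0, 1.0, 5)
new_gaits = [decode((1 - a) * z_walk + a * z_run) for a in alphas]
```

With a shared (or aligned) latent space across the two models, the same interpolated code can be decoded both kinematically and visually, which is what lets appearance constrain kinematics.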