In Table 3, we compare the average error and properties of the four approaches. Method-I has attractive features but cannot produce satisfactory visual or quantitative results because of the incompatibility among resources. Method-II requires intensive manual labor and alters the original motion defined by the mocap data. Method-III produces natural animation sequences, but its data-conversion process demands professional knowledge and programming, which makes it difficult to apply to other data sets that suffer from the skeleton-incompatibility issue. The proposed pipeline combines the advantages of two commercial software packages and can be extended to a wide range of mocap data and shape models. More importantly, our method achieves the smallest error; that is, the ground-truth mocap data are best reflected in the synthesized motion sequence, with the smallest differences. This kinematic accuracy is essential in vision-based research, both for training-data generation and for testing-data evaluation. We also show more animation sequences from various motions generated by our pipeline in Fig. 9. A few sophisticated motions are demonstrated, including dancing, kicking
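
As a minimal sketch of the kind of kinematic-accuracy measure discussed above (the exact error definition used in Table 3 is not reproduced here, and the function name and array shapes are illustrative assumptions), the average per-joint position error between the ground-truth mocap joints and the synthesized motion could be computed as:

```python
import numpy as np

def mean_per_joint_error(gt, pred):
    """Average Euclidean distance between corresponding joints.

    gt, pred: arrays of shape (frames, joints, 3) holding the 3-D joint
    positions of the ground-truth mocap and the synthesized motion.
    """
    assert gt.shape == pred.shape
    # Per-frame, per-joint Euclidean distance, averaged over all of them.
    return float(np.linalg.norm(gt - pred, axis=-1).mean())

# Toy check: a uniform 1 cm offset on every joint yields a 1 cm error.
gt = np.zeros((100, 24, 3))
pred = gt + np.array([0.01, 0.0, 0.0])
print(mean_per_joint_error(gt, pred))
```

A smaller value indicates that the synthesized sequence follows the ground-truth motion more closely, which is the sense in which the proposed pipeline "reaches the smallest error."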