This paper presented a realistic and expressive computer facial animation system built by automated learning from Vicon Nexus facial motion capture data. Our approach, using 103 markers, produced more expressive facial animation than an alternative approach using only 50 markers. Facial motion data for different emotions, collected with Vicon Nexus, were processed with two dimensionality reduction techniques: Principal Component Analysis (PCA) and the EM algorithm for PCA (EMPCA). EMPCA with 30 dimensions and 10 iterations best preserved the original data among the techniques compared. Reducing the dimensionality of the original data yielded a space saving of 97.6 percent with a negligible error of only 1-2 percent. The data for the different emotions were mapped to a 3D animated face using Autodesk MotionBuilder 2009, producing reasonable results. Because the approach presented in this paper uses data captured from a real speaker, the mapped facial animation is more natural and lifelike.
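The EMPCA step described above can be sketched as follows. This is a minimal illustration of the EM algorithm for PCA (in the style of Roweis), not the authors' implementation: the latent dimensionality (30) and iteration count (10) follow the figures quoted above, while the data matrix here is synthetic, standing in for centered marker trajectories (e.g., 103 markers x 3 coordinates per frame).

```python
import numpy as np

def em_pca(Y, k=30, iters=10, seed=0):
    """EM algorithm for PCA, reducing D-dimensional data to k dimensions.

    Y: (D, N) data matrix, assumed centered (row means subtracted).
    Returns C (D, k), the loading matrix, and X (k, N), the latent coordinates.
    """
    rng = np.random.default_rng(seed)
    D, N = Y.shape
    C = rng.standard_normal((D, k))  # random initial loadings
    for _ in range(iters):
        # E-step: infer latent coordinates given the current loadings
        X = np.linalg.solve(C.T @ C, C.T @ Y)
        # M-step: re-estimate the loadings given the latent coordinates
        C = Y @ X.T @ np.linalg.inv(X @ X.T)
    return C, X

# Synthetic stand-in data: 309 observed dims (103 markers x 3 axes), 500 frames,
# generated from a hypothetical 30-dimensional latent signal plus small noise.
rng = np.random.default_rng(1)
Y = rng.standard_normal((309, 30)) @ rng.standard_normal((30, 500))
Y += 0.01 * rng.standard_normal(Y.shape)
Y -= Y.mean(axis=1, keepdims=True)  # center each dimension

C, X = em_pca(Y, k=30, iters=10)
err = np.linalg.norm(Y - C @ X) / np.linalg.norm(Y)
print(f"relative reconstruction error: {err:.4f}")
```

Storing only the 30-dimensional latent coordinates per frame (plus one shared loading matrix) rather than all observed coordinates is what produces the large space saving reported above, at the cost of a small reconstruction error.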