The Expectation-Maximization (EM) algorithm for Principal
Component Analysis allows a few eigenvectors and eigenvalues
to be extracted from large amounts of high-dimensional data.
Missing information is accommodated naturally, and the
algorithm is efficient in both space
and time. Principal component analysis can
be viewed as a limiting case of a particular class of linear-
Gaussian models [3]. The covariance structure of an observed
p-dimensional variable can be captured using fewer
than the p(p + 1)/2 free parameters that are required
by a full covariance matrix. Linear-Gaussian models assume
that the observed variable was produced as a linear transformation
of a k-dimensional latent variable x plus additive Gaussian
noise [3]. Denoting the transformation by the p × k matrix
C and the p-dimensional noise by v (with covariance matrix
R), the generative model can be expressed as
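The EM iterations that result from this generative model in the zero-noise limit [3] alternate between inferring the latent coordinates given the current loading matrix C (E-step) and re-estimating C from those coordinates (M-step). A minimal NumPy sketch is given below; the function name and variable names are illustrative, and Y is assumed to be a mean-centered p × n data matrix:

```python
import numpy as np

def em_pca(Y, k, n_iter=200, seed=0):
    """EM for PCA in the zero-noise limit of the linear-Gaussian model.

    Y : (p, n) mean-centered data matrix, one observation per column.
    k : number of principal components to extract (k < p).
    Returns a (p, k) orthonormal basis spanning the principal subspace.
    """
    p, n = Y.shape
    rng = np.random.default_rng(seed)
    C = rng.standard_normal((p, k))  # random initial loading matrix

    for _ in range(n_iter):
        # E-step: latent coordinates X = (C^T C)^{-1} C^T Y
        X = np.linalg.solve(C.T @ C, C.T @ Y)        # (k, n)
        # M-step: new loadings C = Y X^T (X X^T)^{-1}
        C = Y @ X.T @ np.linalg.inv(X @ X.T)         # (p, k)

    # Orthonormalize the columns of C to obtain a subspace basis.
    Q, _ = np.linalg.qr(C)
    return Q
```

Note the space and time efficiency claimed above: each iteration costs O(pnk) and only the p × k matrix C and the k × n latents are stored, never the full p × p covariance matrix.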