The previous discussion only estimates the first non-Gaussian
vector. One way to compute the other higher order vectors
is to follow what stochastic gradient ascent (SGA) does:
start with a set of orthonormalized vectors, update them using
the suggested iteration step, and recover the orthogonality
using Gram-Schmidt orthonormalization (GSO). For real-time
online computation, however, the time-consuming GSO must be
avoided. Further, the non-Gaussian vectors should be orthogonal
to each other in order to ensure their independence. So,
it helps to generate “observations” only in a complementary
space for the computation of the higher order eigenvectors.
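For concreteness, the SGA-style procedure described above can be sketched as follows. This is only an illustrative sketch: `iteration_step` is a hypothetical placeholder for the iteration step suggested earlier, and the vectors are stored as the columns of a matrix `W`.

```python
import numpy as np

def gram_schmidt(W):
    """Re-orthonormalize the columns of W (classical GSO)."""
    Q = np.zeros_like(W)
    for k in range(W.shape[1]):
        v = W[:, k].copy()
        for j in range(k):                       # remove components along earlier vectors
            v -= (Q[:, j] @ v) * Q[:, j]
        Q[:, k] = v / np.linalg.norm(v)          # normalize the residual
    return Q

def sga_style_update(W, u, iteration_step):
    """Update every column of W with the (placeholder) iteration step on sample u,
    then restore orthonormality with GSO, the costly step to be avoided."""
    for k in range(W.shape[1]):
        W[:, k] = iteration_step(W[:, k], u)     # hypothetical update rule
    return gram_schmidt(W)
```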
For example, to compute the second order non-Gaussian
vector, the projection of the data onto the estimated first-order
eigenvector v1(n) is first subtracted from the data, as shown in
\[
u_2(n) = u_1(n) - \left[ u_1^{\top}(n)\,\frac{v_1(n)}{\|v_1(n)\|} \right] \frac{v_1(n)}{\|v_1(n)\|},
\]
where u1(n) = u(n). The obtained residual, u2(n), which is
in the complementary space of v1(n), serves as the input data
to the iteration step. In this way, the orthogonality is always
enforced when convergence is reached, although not exactly
so at early stages. This, in effect, makes better use of the
available samples and avoids the time-consuming GSO.
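A minimal sketch of this residual computation is given below, under the assumption that `u1` holds the current sample u(n) and `v1` the current estimate of v1(n) (not necessarily unit norm); the names and the `iteration_step` callable are illustrative only.

```python
import numpy as np

def residual_in_complement(u1, v1):
    """Subtract from u1 its projection onto v1, giving a residual u2 that lies
    in the complementary space of v1."""
    v1_hat = v1 / np.linalg.norm(v1)             # unit vector along the current estimate
    return u1 - (u1 @ v1_hat) * v1_hat           # remove the component along v1

# Usage: feed the residual, not the raw sample, to the iteration step
# u2 = residual_in_complement(u, v1)
# v2 = iteration_step(v2, u2)                    # hypothetical update rule
```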
After convergence, the non-Gaussian vectors will also be
orthogonal to each other, since they are estimated in complementary
spaces. As a result, all the estimated vectors wk will be
mutually orthogonal.
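Putting the pieces together, one possible sketch of the sequential scheme is given below; `iteration_step` again stands in for the update rule of the preceding section, and the initialization is arbitrary. The k-th vector only ever sees samples deflated against all lower-order estimates, which is what enforces the orthogonality claimed above.

```python
import numpy as np

def estimate_vectors(samples, num_vectors, dim, iteration_step):
    """Estimate num_vectors directions sequentially, feeding each higher-order
    estimate only the residual left in the complementary space of the lower ones."""
    rng = np.random.default_rng(0)
    V = rng.standard_normal((dim, num_vectors))     # arbitrary initial estimates
    for u in samples:                               # one online pass over the data
        u_k = np.asarray(u, dtype=float).copy()
        for k in range(num_vectors):
            V[:, k] = iteration_step(V[:, k], u_k)  # hypothetical update rule
            v_hat = V[:, k] / np.linalg.norm(V[:, k])
            u_k = u_k - (u_k @ v_hat) * v_hat       # deflate for the next vector
    return V

# After convergence, V.T @ V is expected to be close to diagonal,
# i.e. the estimated vectors are approximately mutually orthogonal.
```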