Automatic analysis of brain imaging data is an important topic in
both neuroscience and brain computer interface (BCI) technology.
In many cases, the task is to find the spatiotemporal neural signature
of a task, by performing classification on cortical activations
evoked by different stimuli [1, 2]. Common brain imaging techniques
include electroencephalography (EEG) and magnetoencephalography
(MEG). In particular, MEG measures the magnetic fields produced
by electrical activity in the brain via extremely sensitive sensors
distributed across the scalp. These measurements are high-dimensional
spatiotemporal data. For instance, in our experiments,
we use a recumbent Elekta MEG scanner with 306 sensors to record
brain activity for 1100 milliseconds. Furthermore, the measurements
are degraded by various types of noise (e.g., sensor noise,
ambient magnetic field noise, etc.) and the overall noise is difficult
to model (potentially non-Gaussian). The high dimensionality and
noise limit both the speed and accuracy of signal analysis, which
may result in unreliable signature modeling for classification. The
high dimensionality of these signals also increases the complexity of
the classifier. The combination of a complex classifier and few available
data samples (due to time, cost, or study limitations) can easily
lead to an overfitted model. Thus, for a reliable study of brain imaging
data, there is a need for a robust dimensionality reduction method
that ensures inclusion of task-related information in the transformation
process.
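The scale of the problem can be made concrete with a minimal sketch. The trial counts below are hypothetical, and ordinary PCA is used here only as a generic stand-in reducer, not as the task-aware method this work calls for:

```python
import numpy as np

# Illustrative dimensions: 306 MEG sensors and 1100 time samples per trial
# (assuming a 1 kHz sampling rate), with a small, hypothetical trial count
# typical of studies limited by time or cost.
rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 40, 306, 1100
X = rng.standard_normal((n_trials, n_sensors, n_times))

# Flattening each trial yields a 336,600-dimensional feature vector --
# vastly more features than trials, which is what invites overfitting.
X_flat = X.reshape(n_trials, -1)

# Generic PCA via SVD: with n_trials samples, at most n_trials - 1
# components carry variance, so any classifier effectively sees a
# drastically reduced space.
Xc = X_flat - X_flat.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
Z = Xc @ Vt[:k].T  # trials projected onto the top-k principal axes

print(X_flat.shape)  # (40, 336600)
print(Z.shape)       # (40, 10)
```

Because unsupervised projections such as this one ignore the class labels, they may discard task-related components, which motivates the label-aware dimensionality reduction discussed above.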