APPLICATIONS
The fact that a compressible signal can be captured efficiently
using a number of incoherent measurements that is
proportional to its information level S ≪ n has implications
that are far-reaching and concern a number of possible
applications:
■ Data compression. In some situations, the sparse basis
Ψ may be unknown at the encoder or impractical to implement
for data compression. As we discussed in “Random Sensing,”
however, a randomly designed Φ can be considered a universal
encoding strategy, as it need not be designed with regard to
the structure of Ψ. (The knowledge and ability to implement
Ψ are required only for the decoding or recovery of f.) This
universality may be particularly helpful for distributed source
coding in multi-signal settings such as sensor networks [27];
a small numerical illustration appears after this list.
We refer the reader to articles by Haupt et al. and Goyal et al.
elsewhere in this issue for related discussions.
■ Channel coding. As explained in [15], CS principles (sparsity,
randomness, and convex optimization) can be turned
around and applied to design fast error-correcting codes over
the reals to protect from errors during transmission.
■ Inverse problems. In still other situations, the only way to
acquire f may be to use a measurement system Φ of a certain
modality. However, assuming a sparse basis Ψ exists for
f that is also incoherent with Φ, efficient sensing will
be possible. One such application involves MR angiography
[1] and other types of MR setups [28], where Φ records a
subset of the Fourier transform, and the desired image f is
sparse in the time or wavelet domains. Elsewhere in this
issue, Lustig et al. discuss this application in more depth.
■ Data acquisition. Finally, in some important situations the
full collection of n discrete-time samples of an analog signal
may be difficult to obtain (and possibly difficult to subsequently
compress). Here, it could be helpful to design physical
sampling devices that directly record discrete, low-rate
incoherent measurements of the incident analog signal.
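To make the universality claim in the first item concrete, here is a minimal Python/NumPy sketch; the sizes, the DCT choice for Ψ, and the use of orthogonal matching pursuit as a greedy stand-in decoder for the ℓ1 programs discussed earlier are illustrative assumptions of ours, not details from the text. The point is simply that Φ is drawn with no knowledge of Ψ.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m, S = 512, 120, 10          # illustrative sizes, not from the text

# Signal f = Psi x, sparse (S nonzeros) in an orthonormal DCT basis Psi.
Psi = idct(np.eye(n), axis=0, norm="ortho")
x = np.zeros(n)
x[rng.choice(n, S, replace=False)] = rng.standard_normal(S)
f = Psi @ x

# Random Gaussian Phi: designed with no knowledge of Psi ("universal").
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ f                     # m incoherent measurements

# Decode with orthogonal matching pursuit on A = Phi Psi (a greedy
# stand-in for the l1-minimization decoders discussed earlier).
A = Phi @ Psi
residual, support = y.copy(), []
for _ in range(S):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
print(np.linalg.norm(Psi @ x_hat - f) / np.linalg.norm(f))  # ~0
```

The same Gaussian Φ would serve equally well for a signal sparse in, say, a wavelet basis; only the decoder needs to know Ψ.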
The last of these applications suggests that mathematical
and computational methods could have an enormous impact
in areas where conventional hardware design has significant
limitations. For example, conventional imaging devices that
use CCD or CMOS technology are limited essentially to the
visible spectrum. However, a CS camera that collects incoherent
measurements using a digital micromirror array (and
requires just one photosensitive element instead of millions)
could significantly expand these
capabilities. (See [29] and an article
by Duarte et al. in this issue.)
Along these same lines, part
of our research has focused on
advancing devices for “analog-to-information” (A/I) conversion
of high-bandwidth signals (see
also the article by Healy et al. in
this issue). Our goal is to help
alleviate the pressure on conventional ADC technology,
which is currently limited to sample rates on the order of 1
GHz. As an alternative, we have proposed two specific architectures
for A/I in which a discrete, low-rate sequence of
incoherent measurements can be acquired from a high-bandwidth
analog signal. To a high degree of approximation,
each measurement yk can be interpreted as the inner
product ⟨f, φk⟩ of the incident analog signal f against an
analog measurement waveform φk. As in the discrete CS
framework, our preliminary results suggest that analog signals
obeying a sparse or compressible model (in some analog
dictionary Ψ) can be captured efficiently using these
devices at a rate proportional to their information level
instead of their Nyquist rate. Of course, there are challenges
one must address when applying the discrete CS methodology
to the recovery of sparse analog signals. A thorough
treatment of these issues is beyond the scope of this
short article; as a first cut, one might simply accept the
idea that in many cases, discretizing/sampling the sparse
dictionary allows for suitable recovery. Our two architectures
are as follows:
1) Nonuniform Sampler (NUS). Our first architecture simply
digitizes the signal at randomly or pseudo-randomly sampled
time points. That is, yk = f(tk) = ⟨f, δtk⟩. In effect, these
time points are obtained by jittering nominal (low-rate)
sample points located on a regular lattice. Due to the incoherence
between spikes and sines, this architecture can be
used to sample signals having sparse frequency spectra far
below their Nyquist rate. There are of course tremendous
benefits associated with a reduced sampling rate, as this
provides added circuit settling
time and has the effect of reducing
the noise level.
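The following NumPy sketch (with illustrative sizes of our own, not hardware parameters from the text) spells out the NUS measurement model: jittered low-rate sample times yield a random row subset of the inverse DFT, which is why frequency-sparse signals can be sampled far below their Nyquist rate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1024, 64                    # illustrative sizes, not from the text

# Frequency-sparse test signal on an n-point grid standing in for the
# analog input.
X = np.zeros(n, dtype=complex)
tones = rng.choice(n, 5, replace=False)
X[tones] = rng.standard_normal(5) + 1j * rng.standard_normal(5)
f = np.fft.ifft(X)

# Nominal low-rate lattice (one sample per cell of n//m points), jittered
# by a random offset within each cell.
cell = n // m
t = np.arange(m) * cell + rng.integers(0, cell, size=m)

# NUS measurements: yk = f(tk) = <f, delta_tk>.
y = f[t]

# The induced sensing matrix is a random row subset of the inverse DFT,
# i.e., the spike samples are incoherent with the sinusoidal basis.
F = np.fft.ifft(np.eye(n), axis=0)   # columns: inverse-DFT basis vectors
A = F[t, :]                          # m x n partial inverse-Fourier system
assert np.allclose(A @ X, y)
```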
2) Random Modulation Preintegration
(RMPI). Our second
architecture is applicable to a
wider variety of sparsity domains,
most notably those signals having
a sparse signature in the
time-frequency plane. Whereas it
may not be possible to digitize an analog signal at a very
high rate, it may be quite possible to change its polarity
at a high rate. The idea of the RMPI architecture [see
Figure 4(a)] is then to multiply the signal by a pseudo-random
sequence of ±1s, integrate the product over time windows,
and digitize the integral at the end of each time
interval. This is a parallel architecture: several
of these random multiplier-integrator pairs run simultaneously
using distinct sign sequences. In effect, the RMPI
architecture correlates the signal with a bank of sequences
of ±1, one of the random CS measurement processes
known to be universal, and therefore the RMPI measurements
will be incoherent with any fixed time-frequency dictionary
such as the Gabor dictionary described below.
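A minimal NumPy sketch of the RMPI measurement model follows; the number of channels, the window length, and the test pulse are illustrative assumptions of ours, not hardware parameters from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096           # fine-grid samples standing in for the analog input
n_channels = 8     # parallel multiplier-integrator pairs (illustrative)
window = 512       # fine-grid samples integrated per ADC output

# A modulated pulse as a test input.
t = np.arange(n)
f = np.cos(2 * np.pi * 0.121 * t) * np.exp(-0.5 * ((t - 2000) / 300.0) ** 2)

def rmpi_channel(f, signs, window):
    """Multiply by a +/-1 chipping sequence, integrate over each time
    window, and read out ('digitize') the integral at the window's end."""
    return (f * signs).reshape(-1, window).sum(axis=1)

# Each channel runs in parallel with a distinct pseudo-random sign sequence.
signs = rng.choice([-1.0, 1.0], size=(n_channels, n))
y = np.array([rmpi_channel(f, s, window) for s in signs])
print(y.shape)     # (n_channels, n // window): a low-rate measurement stream
```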
For each of the above architectures, we have confirmed
numerically (and in some cases physically) that the system is
robust to circuit nonidealities such as thermal noise, clock timing
errors, interference, and amplifier nonlinearities.
The application of A/I architectures to realistic acquisition
scenarios will require continued development of CS algorithms
and theory. To highlight some promising recent directions, we
conclude with a final discrete example. We take f to be a one-dimensional
signal of length n = 512 that contains two modulated
pulses [see the blue curve in Figure 4(b)]. From this signal, we collect m = 30 measurements using an m × n measurement
matrix populated with i.i.d. Bernoulli ±1 entries. This is an
unreasonably small amount of data corresponding to an undersampling
factor of over 17. For reconstruction we consider a
Gabor dictionary that consists of a variety of sine waves time
limited by Gaussian windows, with different locations and scales.
Overall the dictionary is approximately 43× overcomplete and
does not contain the two pulses that comprise f. The red curve
in Figure 4(b) shows the result of minimizing ‖x‖ℓ1 such that
y = ΦΨx. The reconstruction shows pronounced artifacts, and
we see ‖f − f̃‖ℓ2/‖f‖ℓ2 ≈ 0.67. However, we can virtually
eliminate these artifacts by making two changes to the ℓ1 recovery
program. First, we instead minimize ‖Ψ∗f̃‖ℓ1 subject to
y = Φf̃. (This variation has no effect when Ψ is an orthobasis.)
Second, after obtaining an estimate f̃, we reweight the ℓ1 norm
and repeat the reconstruction, with a lower penalty applied to
those coefficients we anticipate to be large. Figure 4(c) shows the
result after four iterations of reweighting; we see
‖f − f̃‖ℓ2/‖f‖ℓ2 ≈ 0.022. We refer the reader to [30] for more
information on these directions. The point here is that even
though the amount of data is ridiculously small, one has nevertheless
captured most of the information contained in the signal.
This, in a nutshell, is why CS holds such great promise.
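For readers who want to experiment with the reweighting recipe above, here is a small sketch of the two modifications, assuming the cvxpy package for solving the ℓ1 programs. The sizes, the pulse parameters, and the Gabor-like dictionary are illustrative stand-ins of our own (far smaller than the 43× overcomplete dictionary behind Figure 4), so the error figures quoted above should not be expected to reproduce exactly.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, m = 128, 40                  # illustrative sizes, not the article's
t = np.arange(n)

# Small Gabor-like dictionary: Gaussian-windowed sinusoids over a grid of
# locations, scales, and frequencies (columns normalized).
atoms = []
for center in range(0, n, 8):
    for scale in (4.0, 8.0, 16.0):
        for freq in np.linspace(0.05, 0.45, 6):
            g = np.exp(-0.5 * ((t - center) / scale) ** 2)
            atoms.append(g * np.cos(2 * np.pi * freq * t))
            atoms.append(g * np.sin(2 * np.pi * freq * t))
Psi = np.array(atoms).T
Psi /= np.linalg.norm(Psi, axis=0)

# Two modulated pulses and Bernoulli +/-1 measurements y = Phi f.
f = (np.exp(-0.5 * ((t - 40) / 6.0) ** 2) * np.cos(2 * np.pi * 0.20 * t)
     + np.exp(-0.5 * ((t - 90) / 5.0) ** 2) * np.cos(2 * np.pi * 0.35 * t))
Phi = rng.choice([-1.0, 1.0], size=(m, n))
y = Phi @ f

# Reweighted analysis-form l1: minimize ||W Psi^T f~||_1 s.t. Phi f~ = y,
# lowering the penalty on coefficients expected to be large (see [30]).
w = np.ones(Psi.shape[1])
for _ in range(4):
    f_tilde = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, Psi.T @ f_tilde))),
               [Phi @ f_tilde == y]).solve()
    w = 1.0 / (np.abs(Psi.T @ f_tilde.value) + 1e-2)
print(np.linalg.norm(f - f_tilde.value) / np.linalg.norm(f))
```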