RANDOM SENSING
Returning to the RIP, we would like to find sensing matrices
with the property that column vectors taken from arbitrary
subsets are nearly orthogonal. The larger these
subsets, the better.
This is where randomness re-enters the picture. Consider the following sensing matrices: i) form A by sampling n column vectors uniformly at random on the unit sphere of R^m; ii) form A by sampling i.i.d. entries from the normal distribution with mean 0 and variance 1/m; iii) form A by sampling a random projection P as in “Incoherent Sampling” and normalize: A = √(n/m) P; and iv) form A by sampling i.i.d. entries from a symmetric Bernoulli distribution (P(A_{i,j} = ±1/√m) = 1/2) or any other sub-Gaussian distribution.
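As a concrete illustration, constructions i)–iv) can be sketched in a few lines of NumPy; the sizes m = 64 and n = 256 and the random seed are arbitrary choices made only for this example.

import numpy as np

rng = np.random.default_rng(0)
m, n = 64, 256  # illustrative sizes, not prescribed by the text

# i) columns sampled uniformly at random on the unit sphere of R^m:
#    draw Gaussian columns and normalize each to unit length.
A1 = rng.standard_normal((m, n))
A1 /= np.linalg.norm(A1, axis=0, keepdims=True)

# ii) i.i.d. N(0, 1/m) entries.
A2 = rng.standard_normal((m, n)) / np.sqrt(m)

# iii) a random projection P with orthonormal rows spanning a uniformly
#      random m-dimensional subspace, rescaled so that A = sqrt(n/m) * P.
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))  # n x m, orthonormal columns
A3 = np.sqrt(n / m) * Q.T

# iv) i.i.d. symmetric Bernoulli entries, P(A_ij = +-1/sqrt(m)) = 1/2.
A4 = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)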
With overwhelming probability, all these matrices obey
the RIP (i.e. the condition of our theorem) provided that
m ≥ C · S log(n/S), (13)
where C is some constant depending on each instance. The
claims for i)–iii) use fairly standard results in probability theory;
arguments for iv) are more subtle; see [23] and the work of Pajor
and his coworkers, e.g., [24]. In all these examples, the probability
of sampling a matrix not obeying the RIP when (13) holds is
exponentially small in m. Interestingly, there are no measurement
matrices and no reconstruction algorithm whatsoever
which can give the conclusions of Theorem 2 with substantially
fewer samples than the right-hand side of (13) [2], [3]. In that sense, using randomized matrices together with ℓ1 minimization is a near-optimal sensing strategy.
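This near-optimality can also be probed numerically. The sketch below, which assumes a Gaussian sensing matrix as in ii), an illustrative constant C, and SciPy's linear-programming solver, recovers an S-sparse signal by ℓ1 minimization (basis pursuit) from on the order of S log(n/S) measurements.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, S = 256, 8                          # illustrative signal length and sparsity
m = int(3 * S * np.log(n / S))         # m of order C * S * log(n/S); C = 3 is a guess

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix, as in ii)
x = np.zeros(n)
support = rng.choice(n, size=S, replace=False)
x[support] = rng.standard_normal(S)
y = A @ x                              # m measurements of the S-sparse signal

# Basis pursuit: minimize ||x||_1 subject to Ax = y, written as a linear
# program by splitting x = u - v with u, v >= 0 and minimizing sum(u) + sum(v).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("max reconstruction error:", np.max(np.abs(x_hat - x)))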
One can also establish the RIP for pairs of orthobases as in
“Incoherence and the Sensing of Sparse Signals.” With
A = RΦΨ, where Φ and Ψ are the two orthobases and R extracts m coordinates uniformly at random, it is sufficient to have

m ≥ C · S (log n)^4, (14)
for the property to hold with large probability; see [25] and [2].
If one wants a probability of failure no larger than O(n^−β) for
some β > 0, then the best known exponent in (14) is five
instead of four (it is believed that (14) holds with just log n).
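As an illustration of this setting, the sketch below forms A = RΦΨ for one convenient pair of orthobases, taking Φ to be the unitary DFT and Ψ the canonical (spike) basis; this choice, the sizes, and the seed are assumptions made only for the example, and in practice one would apply an FFT rather than build the dense matrix.

import numpy as np

rng = np.random.default_rng(2)
n, m = 256, 64                              # illustrative sizes

Phi = np.fft.fft(np.eye(n)) / np.sqrt(n)    # sensing basis: unitary DFT (example choice)
Psi = np.eye(n)                             # sparsity basis: canonical/spike basis

rows = rng.choice(n, size=m, replace=False) # R: extract m coordinates uniformly at random
A = (Phi @ Psi)[rows, :]                    # A = R Phi Psi, an m x n partial Fourier matrix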