# Sampling Sparsity

### From Wikimization


## Revision as of 22:30, 22 November 2010

## History of Finite Rate of Innovation Sampling, or Sparse Sampling

In the late 1990s, we became interested in whether signals that are far
from being well modeled as bandlimited could nonetheless be sampled.
The motivation was image processing, where the *sinc* model and its associated
*pixels* are not a good fit, since edges are clearly not bandlimited.
However, the curve followed by an edge might be smooth (e.g. piecewise linear).
An archetypal example is a woodcut picture, which is black and white (or 0 and 1),
with edges that are very sharp, yet whose curve, drawn by hand, is smooth.

We started looking at one-dimensional signals and obtained some initial results in 1999 [1], though not very satisfying ones from a mathematical point of view. Pina Marziliano worked on this topic for her PhD thesis. The breakthrough came when Thierry Blu made the connection to spectral estimation methods and the annihilating filter, also known in error control coding. This led to a complete report in 2001 [2], published in part as conference papers [3,4], and to a patent application [5], all in 2001.

This all went into a journal paper [6] (published in 2002; it received the IEEE Signal Processing Society best paper award in 2008), which shows how to sample continuous-time parametric classes of signals that are very far from bandlimited, such as streams of Dirac pulses, using various kernels such as the sinc and the Gaussian. This was a noiseless sampling theorem, equivalent to Shannon sampling, but where position information was key. Pina Marziliano's thesis [7], available online, gives a complete account of all this.

The noisy case was investigated initially by Irena Maravic [8] and then in [9], which also gives a nice overview. The extension to general sampling kernels satisfying the Strang-Fix conditions is given in [10].

Now, what are the differences and similarities with compressed sensing (CS)?

**(1)** FRI sampling is a continuous-time to discrete-time sampling result,
whereas CS is a map from R^N to R^M. Thus FRI is in the *classic* sampling
tradition of Shannon and the like. To be more precise,
FRI maps the real line (or a real interval) to a set of samples indexed by the integers,
and proves one-to-one maps for certain parametric classes (like sets of Diracs).
CS maps a finite vector to a smaller vector, and shows invertibility
for certain measurement matrices and sparsities.
FRI can be done in a discrete set-up, and then it is a version of compressed
sensing with particular matrices (convolution matrices, Fourier matrices, because
the sampling is uniform and shift-invariant). But FRI on a real interval is like CS with an infinite matrix.
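To make the discrete set-up concrete, here is a minimal NumPy sketch of the point above: measuring a sparse vector with the 2K+1 lowest-frequency rows of a DFT matrix, i.e. CS with a particular deterministic Fourier matrix rather than a random one. All sizes and values are hypothetical choices for the illustration.

```python
import numpy as np

# Toy illustration: discrete FRI sampling viewed as compressed sensing
# with a structured measurement matrix (a partial DFT), not a random one.
N, K = 64, 3                        # signal length and sparsity (hypothetical)
rng = np.random.default_rng(0)
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

# Measurement: the 2K+1 lowest-frequency DFT coefficients of x.
# Phi is a deterministic (2K+1) x N Fourier matrix.
m = np.arange(-K, K + 1)
Phi = np.exp(-2j * np.pi * np.outer(m, np.arange(N)) / N)
y = Phi @ x                         # 7 measurements of a 64-dimensional vector
```

The rows of `Phi` coincide with rows of the standard DFT matrix (indices taken modulo N), which is what makes the uniform, shift-invariant acquisition structured in the sense described above.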

**(2)** FRI uses somewhat classic reconstruction techniques, similar to
those of spectral estimation methods (it is a dual problem).
It can thus leverage that experience, reusing the algorithms and the performance bounds (much is already known, e.g. Cramér-Rao bounds).
CS uses a range of different algorithms, but most proofs rely on random
matrices, which are not necessarily realistic.
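As a sketch of those classic reconstruction techniques, the following NumPy snippet recovers K Diracs from 2K+1 Fourier coefficients with the annihilating-filter method mentioned earlier. The locations, amplitudes, and variable names are hypothetical choices for the demo, and the noiseless case is assumed throughout.

```python
import numpy as np

# Annihilating-filter demo: recover K Diracs on [0, 1) from 2K+1
# Fourier coefficients (noiseless case; all parameters hypothetical).
K = 2
t_true = np.array([0.23, 0.71])     # Dirac locations
a_true = np.array([1.0, 0.5])       # Dirac amplitudes

# Fourier coefficients X[m] = sum_k a_k exp(-2j*pi*m*t_k), m = -K..K
m = np.arange(-K, K + 1)
X = (a_true[None, :] * np.exp(-2j * np.pi * m[:, None] * t_true[None, :])).sum(axis=1)

# Annihilating filter h (length K+1): sum_i h[i] X[m-i] = 0 for all m.
# Build K linear equations from consecutive coefficients, fixing h[0] = 1.
A = np.array([[X[j + K - i] for i in range(K + 1)] for j in range(K)])
h = np.concatenate(([1.0], np.linalg.solve(A[:, 1:], -A[:, 0])))

# Locations: the roots of H(z) are u_k = exp(-2j*pi*t_k).
u = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1.0))

# Amplitudes: solve the Vandermonde system X[m] = sum_k a_k u_k^m.
V = np.exp(-2j * np.pi * m[:, None] * t_est[None, :])
a_est, *_ = np.linalg.lstsq(V, X, rcond=None)
```

The two-step structure (root-finding for positions, then a linear system for amplitudes) is exactly the pattern of spectral estimation methods such as Prony's, which is why their algorithms and Cramér-Rao analyses carry over.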

**(3)** CS is more general in the choice of kernels. However, many practical scenarios
encountered in implementations come down to shift-invariant systems
and uniform sampling, since analog front ends are the limiting factor,
and these use analog filters that are time-invariant.

References: all of these papers are available online at http://lcavwww.epfl.ch/~vetterli/publications/index.html

[1] M. Vetterli, “Sampling of piecewise polynomial signals,” Swiss Fed. Inst. Technol., Lausanne, Switzerland, LCAV, 1999.

[2] M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” Swiss Fed. Inst. Technol., Lausanne, Switzerland, DSC-017, 2001.

[3] M. Vetterli, P. Marziliano, and T. Blu, “Sampling discrete-time piecewise bandlimited signals,” in Proc. Sampling Theory and Applications Workshop, Orlando, FL, May 2001, pp. 97–102.

[4] M. Vetterli, P. Marziliano, and T. Blu, “A sampling theorem for periodic piecewise polynomial signals,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Salt Lake City, UT, May 2001.

[5] M. Vetterli, P. Marziliano, and T. Blu, Sampling method, reconstruction method, and device for sampling and/or reconstructing signals, Patent application, filed March 26 2001.

[6] M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Transactions on Signal Processing, Vol. 50, Nr. 6, pp. 1417–1428, 2002.

[7] P. Marziliano, “Sampling Innovation,” PhD thesis, EPFL, Mar. 2001. http://infoscience.epfl.ch/record/32845

[8] I. Maravic and M. Vetterli, “Sampling and reconstruction of signals with finite rate of innovation in the presence of noise,” IEEE Transactions on Signal Processing, Vol. 53, Nr. 8, pp. 2788–2805, 2005.

[9] T. Blu, P.L. Dragotti, M. Vetterli, P. Marziliano, and L. Coulot, “Sparse sampling of signal innovations,” IEEE Signal Processing Magazine, Vol. 25, Nr. 2, pp. 31–40, 2008.

[10] P.L. Dragotti, M. Vetterli, and T. Blu, “Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix,” IEEE Transactions on Signal Processing, Vol. 55, Nr. 5, Part 1, pp. 1741–1757, 2007.