Sampling Sparsity

From Wikimization

Current revision (20:35, 24 November 2010)

History of Finite Rate of Innovation (FRI) Sampling, or Sparse Sampling

Martin Vetterli

In the late 1990s, we became interested in whether signals that are far from being well modeled as bandlimited could nonetheless be sampled. The motivation was image processing, where the sinc model and its associated pixels are not a good model, since edges are clearly not bandlimited. However, the curve followed by an edge might be smooth (e.g. piecewise linear). An archetypal example is a woodcut picture, which is black and white (or 0 and 1), with edges that are very sharp, yet the curve, drawn by hand, is smooth.

We started by looking at one-dimensional signals, and got some initial results in 1999 [1], but not very satisfying ones from a mathematical point of view. Pina Marziliano worked on this topic for her PhD thesis. The breakthrough came when Thierry Blu made the connection to spectral estimation methods and to the annihilating filter, also known from error control coding. This led to a complete report in 2001 [2], published in part as conference papers [3] [4], and to a patent application [5], all in 2001.

This all went into a journal paper [6] (published in 2002; it received the IEEE Signal Processing Society 2006 Best Paper Award), which basically shows how to sample continuous-time parametric classes of signals that are very far from being bandlimited, like sequences of Dirac pulses, with various sampling kernels such as the sinc and the Gaussian. This was a noiseless sampling theorem, equivalent to Shannon sampling, but where position information was key. Pina Marziliano's thesis [7], available on-line, gives a complete version of all this.
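
The recovery mechanism behind these results can be illustrated in a few lines. The following is a minimal numpy sketch of annihilating-filter (Prony-type) recovery for the simplest FRI case, a noiseless periodic stream of K Diracs known from 2K+1 Fourier coefficients; the function and variable names are illustrative, not taken from the papers.

```python
import numpy as np

# A tau-periodic stream of K Diracs at locations t_k with amplitudes a_k
# has Fourier coefficients X[m] = sum_k a_k * exp(-2j*pi*m*t_k/tau).
# A filter whose z-transform has zeros at u_k = exp(-2j*pi*t_k/tau)
# annihilates this sequence; its roots reveal the locations.

def recover_diracs(X, K, tau):
    """Recover K Dirac locations/amplitudes from 2K+1 contiguous
    Fourier coefficients X[m], m = -K..K (noiseless sketch)."""
    M = len(X)
    # Toeplitz system T h = 0 whose null vector h is the annihilating
    # filter of length K+1 (convolution h * X vanishes).
    T = np.array([[X[i + K - j] for j in range(K + 1)]
                  for i in range(M - K)])
    _, _, Vh = np.linalg.svd(T)
    h = Vh[-1].conj()                       # null-space vector = filter taps
    u = np.roots(h)                         # zeros u_k = exp(-2j*pi*t_k/tau)
    t = np.mod(-np.angle(u) * tau / (2 * np.pi), tau)   # locations
    # Amplitudes from a Vandermonde least-squares fit to the coefficients.
    m = np.arange(-(M // 2), M // 2 + 1)
    V = np.exp(-2j * np.pi * np.outer(m, t) / tau)
    a = np.linalg.lstsq(V, X, rcond=None)[0].real
    order = np.argsort(t)
    return t[order], a[order]

# Two Diracs at t = 0.2 and 0.7 with amplitudes 1 and 2, period tau = 1.
tau, t_true, a_true = 1.0, np.array([0.2, 0.7]), np.array([1.0, 2.0])
m = np.arange(-2, 3)                        # 2K+1 = 5 Fourier coefficients
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true) / tau)).sum(axis=1)
t_hat, a_hat = recover_diracs(X, K=2, tau=tau)
print(t_hat, a_hat)                         # close to [0.2, 0.7] and [1.0, 2.0]
```

Note the key point of the theorem: 5 measurements suffice for 2 Diracs (4 free parameters), even though the signal is nowhere near bandlimited.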

The noisy case was investigated initially by Irena Maravic [8] and then in [9], which also gives a nice overview. The extension to general sampling kernels satisfying the Strang-Fix condition is given in [10].

Now, what are the differences and similarities with compressed sensing (CS)?

(1) FRI sampling is a continuous-time to discrete-time sampling result, whereas CS is a map from R^N to R^M. Thus, FRI is in the classic sampling tradition of Shannon and the like. To be more precise, FRI maps the real line (or a real interval) to a set of samples indexed by the integers, and proves one-to-one maps for certain parametric classes (like sets of Diracs). CS maps a finite vector to a smaller vector, and shows invertibility for certain measurement matrices and sparsities. FRI can be done in a discrete set-up, and then it is a version of compressed sensing with particular matrices (convolution matrices, Fourier matrices, because the sampling is uniform and shift-invariant). But FRI on a real interval is like CS with an infinite matrix.
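
The discrete-FRI-as-CS claim can be checked numerically. The sketch below (illustrative sizes and names, periodic set-up assumed) shows that "ideal lowpass filter, then sample uniformly" applied to a sparse vector is exactly multiplication by one fixed structured matrix, built here from the periodic Dirichlet kernel, i.e. a circular convolution matrix with uniformly decimated rows.

```python
import numpy as np

N, L, step = 16, 3, 2            # signal length, lowpass cutoff |k| <= L, decimation
x = np.zeros(N)
x[[2, 9, 13]] = [1.0, -2.0, 0.5]  # sparse "discrete Dirac" vector

# Route 1: keep the 2L+1 lowest DFT bins (ideal lowpass), sample uniformly.
X = np.fft.fft(x)
keep = np.minimum(np.arange(N), N - np.arange(N)) <= L
y_filter_sample = np.fft.ifft(X * keep)[::step]

# Route 2: one matrix multiply.  The lowpass impulse response d is the
# periodic Dirichlet kernel; A holds its circular shifts at the sample
# instants, so A is a decimated convolution (equivalently partial-Fourier
# composition) matrix -- the "particular matrix" of the text.
k = np.concatenate([np.arange(0, L + 1), np.arange(N - L, N)])   # kept bins
d = np.exp(2j * np.pi * np.outer(np.arange(N), k) / N).sum(axis=1) / N
A = np.array([[d[(step * i - n) % N] for n in range(N)]
              for i in range(N // step)])
y_matrix = A @ x

print(np.allclose(y_filter_sample, y_matrix))   # the two routes agree: True
```

So the uniform, shift-invariant acquisition fixes the measurement matrix completely; in CS, by contrast, the matrix is typically drawn at random.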

(2) FRI uses somewhat classic reconstruction techniques, similar to what spectral estimation methods use (it is a dual problem). It can thus leverage that experience, using the known algorithms and performance bounds (e.g. much is known, such as the Cramer-Rao bound). CS uses a range of different algorithms, but most proofs rely on random matrices which are not necessarily realistic.

(3) CS is more general in the choice of kernels. However, many practical scenarios encountered in implementations come down to shift-invariant systems and uniform sampling, since analog front ends are the limiting factor, and these use analog filters that are time-invariant.

References

All these papers are online: http://lcavwww.epfl.ch/~vetterli/publications

[1] M. Vetterli, Sampling of Piecewise Polynomial Signals, Swiss Fed. Inst. Technol., Lausanne, Switzerland, LCAV, 1999.

[2] M. Vetterli, P. Marziliano, and T. Blu, Sampling Signals with Finite Rate of Innovation, Swiss Fed. Inst. Technol., Lausanne, Switzerland, DSC-017, 2001.

[3] M. Vetterli, P. Marziliano, and T. Blu, Sampling Discrete-Time Piecewise Bandlimited Signals, in Proc. Sampling Theory and Applications Workshop, Orlando, FL, May 2001, pp. 97-102. http://infoscience.epfl.ch/record/34051/files/VetterliMB01b.pdf

[4] M. Vetterli, P. Marziliano, and T. Blu, A Sampling Theorem for Periodic Piecewise Polynomial Signals, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Salt Lake City, UT, May 2001. http://infoscience.epfl.ch/record/34050/files/VetterliMB01a.pdf

[5] M. Vetterli, P. Marziliano, and T. Blu, Sampling Method, Reconstruction Method, and Device for Sampling and/or Reconstructing Signals, patent application, filed March 26, 2001.

[6] M. Vetterli, P. Marziliano and T. Blu, Sampling Signals with Finite Rate of Innovation, IEEE Transactions on Signal Processing, Vol. 50, Nr. 6, pp. 1417-1428, 2002. http://bigwww.epfl.ch/publications/vetterli0201.pdf

[7] Pina Marziliano, Sampling Innovations, PhD Thesis, EPFL, Mar. 2001. http://biblion.epfl.ch/EPFL/theses/2001/2369/EPFL_TH2369.pdf

[8] I. Maravic and M. Vetterli, Sampling and Reconstruction of Signals with Finite Rate of Innovation in the Presence of Noise, IEEE Transactions on Signal Processing, Vol. 53, Nr. 8, pp. 2788-2805, 2005. http://lcavwww.epfl.ch/~vetterli/SP-8-2005.pdf

[9] T. Blu, P.L. Dragotti, M. Vetterli, P. Marziliano and L. Coulot, Sparse Sampling of Signal Innovations, IEEE Signal Processing Magazine, Vol. 25, Nr. 2, pp. 31-40, 2008. http://bigwww.epfl.ch/publications/blu0801.pdf

[10] P.L. Dragotti, M. Vetterli and T. Blu, Sampling Moments and Reconstructing Signals of Finite Rate of Innovation: Shannon Meets Strang-Fix, IEEE Transactions on Signal Processing, Vol. 55, Nr. 5, Part 1, pp. 1741-1757, 2007. http://bigwww.epfl.ch/publications/dragotti0701.pdf
