**Applications of the Orthogonal Matching Pursuit/Nonlinear Least Squares Algorithm to Compressive Sensing Recovery**

**1. Introduction**


Compressive sensing (CS) has been widely investigated as a method to reduce the sampling rate needed to obtain accurate measurements of sparse signals (Donoho, 2006; Candes & Tao, 2006; Baraniuk, 2007; Candes & Wakin, 2008; Loris, 2008; Candes et al., 2011; Duarte & Baraniuk, 2011). CS depends on mixing a sparse input signal (or image) down in dimension, digitizing the reduced-dimension signal, and recovering the input signal through optimization algorithms. Two classes of recovery algorithms have been used extensively. The first class is based on finding the sparse target vector with the minimum ell-1 norm that satisfies the measurement constraint: that is, when the vector is transformed back to the input signal domain and multiplied by the mixing matrix, it reproduces the reduced-dimension measurement. In the presence of noise, recovery proceeds by minimizing the ell-1 norm plus a term proportional to the ell-2 norm of the measurement residual (Candes & Wakin, 2008; Loris, 2008). The second class is based on "greedy" algorithms such as orthogonal matching pursuit (Tropp & Gilbert, 2007), which iteratively find and remove elements of a discrete dictionary that are maximally correlated with the measurement.

There is, however, a difficulty in applying these algorithms to CS recovery for a signal that consists of a few sinusoids of arbitrary frequency (Duarte & Baraniuk, 2010). The standard discrete Fourier transform (DFT), which one expects to sparsify a time series for the input signal, yields a sparse result only if the duration of the time series is an integer number of periods of each of the sinusoids. If there are *N* time steps in the time window, there are just *N* frequencies that are sparse under the DFT; we will refer to these frequencies as being on the frequency grid for the DFT, just as the time samples are on the time grid. To recover signals that consist of frequencies off the grid, there are several alternative approaches: 1) decreasing the grid spacing so that more signal frequencies are on the grid by using an overcomplete dictionary, 2) windowing or apodization to improve sparsity by reducing the size of the sidelobes in the DFT of a time series for a frequency off the grid, and 3) scanning the DFT off integer values to find the frequency (Shaw & Valley, 2010). However, none of these approaches is really practical for obtaining high-precision values of the frequency and amplitude of arbitrary sinusoids. As shown below in Section 6, calculations with time windows of more than 10,000 time samples become prohibitively slow; windowing distorts the signal and in many cases does not improve sparsity enough for CS recovery algorithms to work; and scanning the DFT off integer values requires performing the CS recovery algorithm over and over again with an unknown sparse transform and becomes prohibitively expensive when the number of sinusoids in the signal exceeds one.

Here we present a new approach to recovering sparse signals to arbitrary accuracy when the parameters of the signal do not lie on a grid and the sparsifying transform is unknown. Our approach is based on orthogonal matching pursuit (OMP), which has been applied to recovering CS signals by many authors (Donoho et al., 2006; Tropp & Gilbert, 2007; Liu & Temlyakov, 2010; Huang & Zhu, 2011). The major difference between our work and previous work is that we add a nonlinear least squares (NLS) step to each OMP iteration. In the first iteration of conventional OMP applied to finding sinusoids, one finds the frequency that maximizes the correlation between the measurement matrix evaluated on an overcomplete dictionary and the CS measurement, solves a linear least squares problem to find the best estimate of the amplitude of the sinusoid at this frequency, and subtracts this sinusoid multiplied by the measurement matrix from the CS measurement. In the second iteration, one finds the frequency that maximizes the correlation between the measurement matrix and the residual measurement, solves a least squares problem for both frequencies to get new estimates of both amplitudes, and subtracts the sum of the two sinusoids multiplied by the measurement matrix from the previous residual. This process is described in detail in "Algorithm 3 (OMP for Signal Recovery)" in the paper by Tropp and Gilbert (2007) and in our Table 1 in Section 3. Our approach proceeds in the same way as conventional OMP, but we substitute a **Nonlinear Least Squares** step for the linear least squares step. In the NLS step, we use a minimizer to find better values for the frequencies without reference to a discrete grid. Because the amplitudes are extremely sensitive to the accuracy of the frequencies, this leads to much better values for the amplitudes and thus to a much more accurate expansion of the input signal. Just as in conventional OMP, we continue our algorithm until a system-level threshold in the residual is reached or until a known number of sinusoids is extracted. A related procedure for matching pursuit, though not yet applied to compressive sensing or orthogonal matching pursuit, is described by Jacques & De Vleeschouwer (2008); what we refer to as the NLS step appears in their Section V, eq. (P.2).

Our approach to CS recovery differs from most methods presented to date in that we assume our signal (or image) is sparse in some model, as opposed to sparse under some transform. Of course, for every sparse model there is some sparsifying transform, but it may be easier in some problems to find the model than the transform. Models inevitably involve parameters, and in most cases of practical interest, these parameters do not lie on a discrete grid, or lie on a grid that is too large for efficient discrete processing techniques (see the discussion in Section 1 of Jacques & De Vleeschouwer, 2008). For instance, to recover the frequency of a sinusoid between 0 and 1 to a precision of 10⁻¹⁶ would require 10¹⁶ grid points. While we first developed our method to find the frequency and amplitude of sinusoids, like OMP it is readily adaptable to signals that are the superposition of a wide range of other models.

In Section 2, we present background material on the OMP, NLS and CS methods on which our method is based. In Section 3, we develop the model-based OMP/NLS formulation. Sections 4 and 5 contain the application to signals that consist of a sum of a small number of sinusoids. Section 6 compares the performance of our algorithm to conventional OMP using a linear least squares step and to penalized ell-1 norm methods.

**2. Background**

Our method and results rely heavily on work in three well-known areas: orthogonal matching pursuit, nonlinear least squares and compressive sensing.

**2.1 Compressive sensing**

In compressive sensing (Donoho, 2006; Candes & Tao, 2006; Baraniuk, 2007), a sparse vector *s* of dimension *N* can be recovered from a measured vector *y* of dimension *M* (*M* << *N*) after transformation by a sensing matrix Θ as shown in eq. (1)

*y* = Θ*s* + *n* (1)

where *n* is a noise vector. Often, Θ is factored into two matrices, Θ = ΦΨ, where Φ is a "random" mixing matrix and Ψ is a Hermitian matrix with columns that form a basis in which the input vector is sparse. A canonical example is the case in which the input is a time series with samples taken from a single sinusoid with an integer number of periods in the time window. These data are not sparse but are transformed into a sparse vector by the discrete Fourier transform (DFT). Note that although Θ is not square and hence not invertible, Ψ is both square and invertible. Work in compressive sensing has shown (Donoho, 2006; Candes & Tao, 2006; Baraniuk, 2007) that under quite general conditions, all *N* components of *s* may be recovered from the much smaller number of measurements in *y*. With no noise (*n* = 0), recovery proceeds by minimizing the ell-1 norm of a test vector *s*' (the ell-1 norm of *s*' is given by the sum of the absolute values of the elements of *s*') subject to the constraint *y* = Θ*s*'. In the presence of noise, recovery proceeds by minimizing a linear combination of the ell-1 norm of the target vector and the ell-2 norm of the residual vector given by *y* − Θ*s*

*s*'(λ) = argmin*s*(λ||*s*||1 + ||*y* − Θ*s*||2) (2)

where the parameter λ is chosen such that the signal is optimally recovered (Baraniuk, 2007; Loris, 2008).

**2.2 Orthogonal Matching Pursuit method**

Orthogonal matching pursuit (OMP) is an alternative method that can be used to find the target vector *s* from the measurement vector *y*. Matching pursuit has a rich history in signal processing long before CS and has appeared under many names (Mallat & Zhang, 1993; Pati et al., 1993; Davis et al., 1997). With the advent of CS, many variants of OMP have been applied to recovery, including methods called MOMP, ROMP, CoSaMP, etc. (Needell & Tropp, 2008; Needell & Vershynin, 2009; Huang & Zhu, 2011), but with one exception (Jacques & De Vleeschouwer, 2008), discussed below, all of these methods recover frequencies (or other parameters) from discrete grids.

The basic idea of all matching pursuit algorithms is to minimize a cost function to obtain the frequencies of the sinusoids present in the signal. First, take the frequency corresponding to the smallest value of the cost function and calculate the linear least squares estimate for the complex amplitude at this frequency. Second, mix this sinusoid with the known CS mixing matrix Φ and remove this mixed approximation to the first sinusoid from the CS measurement vector [*y* in eq. (1)]. This process is repeated until a known number of sinusoids is found or a system-defined threshold is reached. For frequencies not on the DFT
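The on-grid/off-grid distinction discussed above is easy to verify numerically. The short NumPy sketch below (our illustration, not from the chapter; the window length and frequencies are arbitrary choices) compares the DFT of a sinusoid with an integer number of periods in the window against one whose frequency falls between DFT bins.

```python
import numpy as np

N = 256                       # number of time samples in the window
t = np.arange(N)

# On the frequency grid: exactly 10 periods in the window.
on_grid = np.exp(2j * np.pi * 10 * t / N)
# Off the grid: 10.5 periods, a frequency between two DFT bins.
off_grid = np.exp(2j * np.pi * 10.5 * t / N)

S_on = np.fft.fft(on_grid) / N
S_off = np.fft.fft(off_grid) / N

# Count bins carrying non-negligible energy in each spectrum.
n_on = int(np.sum(np.abs(S_on) > 1e-8))
n_off = int(np.sum(np.abs(S_off) > 1e-8))
print(n_on, n_off)            # 1 versus N: the off-grid spectrum leaks everywhere
```

The on-grid signal is 1-sparse under the DFT, while the off-grid signal leaks sidelobes into all *N* bins, which is why grid-based recovery needs windowing, an overcomplete dictionary, or an off-grid refinement step before it can succeed.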

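As a rough illustration of the grid-then-refine idea behind an OMP step followed by an NLS step, the sketch below (ours, under assumed conditions: a random Gaussian mixing matrix, a single noiseless off-grid sinusoid at an arbitrary frequency of 10.37 cycles per window) performs a coarse search over on-grid frequencies and then refines the frequency continuously with `scipy.optimize.minimize_scalar`.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
N, M = 256, 128               # signal length and number of CS measurements
t = np.arange(N)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # assumed random mixing matrix

f_true = 10.37                # off-grid frequency, in cycles per window
y = Phi @ np.exp(2j * np.pi * f_true * t / N)    # noiseless CS measurement

def cost(f):
    """NLS cost: fit the best complex amplitude for frequency f by linear
    least squares, then return the norm of the remaining residual."""
    c = Phi @ np.exp(2j * np.pi * f * t / N)
    a = np.vdot(c, y) / np.vdot(c, c)            # closed-form LS amplitude
    return np.linalg.norm(y - a * c)

# OMP-style step: coarse search restricted to the integer frequency grid.
f_grid = min(range(N // 2), key=cost)

# NLS step: refine the frequency continuously around the grid estimate.
res = minimize_scalar(cost, bounds=(f_grid - 1, f_grid + 1), method="bounded")
print(f_grid, res.x)          # grid estimate vs. refined off-grid estimate
```

On the grid, the frequency error is at best half a bin; the continuous refinement can drive the residual essentially to zero here because the single-sinusoid model matches the measurement exactly. In the full OMP/NLS algorithm this minimization is performed jointly over all frequencies found so far at each iteration.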