Wavelet Toolbox

De-Noising

This section discusses the problem of signal recovery from noisy data. The problem is easy to understand by looking at the following simple example, where a slow sine wave is corrupted by white noise.

Figure 6-27: A Simple De-Noising Example

The Basic One-Dimensional Model

The underlying model for the noisy signal is basically of the following form:

    s(n) = f(n) + sigma*e(n)

where time n is equally spaced.

In the simplest model, we suppose that e(n) is a Gaussian white noise N(0,1) and that the noise level sigma is equal to 1.

The de-noising objective is to suppress the noise part of the signal s and to recover f.

The method is efficient for families of functions f that have only a few nonzero wavelet coefficients. These functions have a sparse wavelet representation. For example, a function that is smooth almost everywhere, with only a few abrupt changes, has such a property.

From a statistical viewpoint, the model is a regression model over time, and the method can be viewed as a nonparametric estimation of the function f using an orthogonal basis.

De-Noising Procedure Principles

The general de-noising procedure involves three steps. The basic version of the procedure follows the steps described below.

  1. Decompose. Choose a wavelet and a level N. Compute the wavelet decomposition of the signal s at level N.

  2. Threshold detail coefficients. For each level from 1 to N, select a threshold and apply soft thresholding to the detail coefficients.

  3. Reconstruct. Compute the wavelet reconstruction using the original approximation coefficients of level N and the modified detail coefficients of levels from 1 to N.
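The three steps above can also be sketched outside MATLAB. The following Python fragment is an illustration, not toolbox code: it assumes unit-variance noise and substitutes a hand-written single-level Haar transform for wavedec/waverec so that the example is self-contained.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(s, level=3):
    """Step 1: decompose; step 2: soft-threshold details; step 3: reconstruct."""
    a, details = s.astype(float), []
    for _ in range(level):                           # step 1: decompose
        a, d = haar_dwt(a)
        details.append(d)
    thr = np.sqrt(2 * np.log(s.size))                # fixed form threshold, sigma = 1
    for _ in range(level):                           # steps 2-3: threshold and rebuild
        d = details.pop()
        d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)   # soft thresholding
        a = haar_idwt(a, d)
    return a

rng = np.random.default_rng(0)
f = np.repeat([0.0, 4.0, -2.0, 1.0], 256)   # piecewise-constant test signal
s = f + rng.standard_normal(f.size)         # unit-variance white noise
sd = denoise(s)
print(np.mean((s - f) ** 2), np.mean((sd - f) ** 2))   # error drops after de-noising
```

Because the test signal is piecewise constant, almost all of its Haar detail coefficients vanish, which is exactly the sparse situation in which the procedure performs well.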

Two points must be addressed: how to choose the threshold, and how to perform the thresholding.

Soft or Hard Thresholding?

Thresholding can be done using the function

    yt = wthresh(y,sorh,thr)

which returns the soft or hard thresholding of input y, depending on the sorh option. Hard thresholding is the simplest method. Soft thresholding has nice mathematical properties, and the corresponding theoretical results are available (for instance, see [Don95] in References).

Let us give a simple example.

Figure 6-28: Hard and Soft Thresholding of the Signal s = x

Comment: Let t denote the threshold. The hard thresholded signal is x if |x| > t, and is 0 if |x| ≤ t. The soft thresholded signal is sign(x)(|x| - t) if |x| > t, and is 0 if |x| ≤ t.

Hard thresholding can be described as the usual process of setting to zero the elements whose absolute values are lower than the threshold. Soft thresholding is an extension of hard thresholding, first setting to zero the elements whose absolute values are lower than the threshold, and then shrinking the nonzero coefficients towards 0 (see Figure 6-28 above).

As can be seen in the comment of Figure 6-28, the hard procedure creates discontinuities at x = ±t, while the soft procedure does not.
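The two rules in the comment of Figure 6-28 translate directly into code. The Python sketch below is an illustration of those formulas, not the toolbox wthresh function:

```python
import numpy as np

def hard_threshold(x, t):
    """Keep x where |x| > t, set it to 0 elsewhere."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) > t, x, 0.0)

def soft_threshold(x, t):
    """Set to 0 where |x| <= t, shrink the remaining values toward 0 by t."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.linspace(-1, 1, 5)          # [-1, -0.5, 0, 0.5, 1]
print(hard_threshold(x, 0.4))      # [-1, -0.5, 0, 0.5, 1]
print(soft_threshold(x, 0.4))      # [-0.6, -0.1, 0, 0.1, 0.6]
```

Note how hard thresholding jumps from 0 to just above t at x = t, while soft thresholding leaves the output continuous there.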

Threshold Selection Rules

According to the basic noise model, four threshold selection rules are implemented in the M-file thselect. Each rule corresponds to a tptr option in the command

    thr = thselect(y,tptr)

which returns the threshold value.

  Option        Threshold Selection Rule
  ------        ------------------------
  'rigrsure'    Selection using the principle of Stein's Unbiased Risk Estimate (SURE)
  'sqtwolog'    Fixed form threshold equal to sqrt(2*log(length(s)))
  'heursure'    Selection using a mixture of the first two options
  'minimaxi'    Selection using the minimax principle

It is interesting to see how thselect behaves when y is a Gaussian white noise N(0,1) signal.

Because y is standard Gaussian white noise, we expect each method to kill roughly all the coefficients and return the result f(x) = 0. For the Stein's Unbiased Risk Estimate and minimax thresholds, roughly 3% of the coefficients are saved. For the other selection rules, all the coefficients are set to 0.
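The fixed form rule is easy to reproduce by hand. The sketch below (an illustration assuming unit-variance white noise, as in the basic model) shows that the sqrt(2*log(n)) threshold wipes out almost every pure-noise coefficient:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(1000)        # N(0,1) white noise "coefficients"
thr = np.sqrt(2 * np.log(y.size))    # 'sqtwolog' fixed form threshold, about 3.72
surviving = np.count_nonzero(np.abs(y) > thr)
print(thr, surviving)                # almost no coefficient survives
```

For n = 1000 the probability that a single N(0,1) coefficient exceeds the threshold is about 2e-4, so on average a fraction of a coefficient survives per realization.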

We know that the detail coefficients vector is the superposition of the coefficients of f and the coefficients of e, and that the decomposition of e leads to detail coefficients that are standard Gaussian white noise.

So the minimax and SURE threshold selection rules are more conservative, and are more convenient when small details of the function f lie near the noise range. The other two rules remove the noise more efficiently. The 'heursure' option is a compromise. In this example, the fixed form threshold wins.

Recalling step 2 of the de-noising procedure, the function thselect performs the threshold selection, and then each level is thresholded. This second step can be done using wthcoef, which directly handles the wavelet decomposition structure of the original signal s.

Dealing with Unscaled Noise and Nonwhite Noise

Usually in practice the basic model cannot be used directly. We examine here the options available to deal with model deviations in the main de-noising function wden.

The simplest use of wden is

    sd = wden(s,tptr,sorh,scal,n,wav)

which returns the de-noised version sd of the original signal s, obtained using the tptr threshold selection rule. The parameter sorh specifies soft or hard thresholding of the detail coefficients of the decomposition of s at level n by the wavelet wav. The remaining parameter scal specifies the threshold rescaling method.

  Option    Corresponding Model
  ------    -------------------
  'one'     Basic model
  'sln'     Basic model with unscaled noise
  'mln'     Basic model with nonwhite noise

For a more general procedure, the wdencmp function performs wavelet coefficient thresholding for both de-noising and compression purposes, while directly handling one-dimensional and two-dimensional data. It allows you to define your own thresholding strategy, for example using

    xd = wdencmp('gbl',x,wav,n,thr,sorh,keepapp)

where x is the signal to be de-noised, the parameters wav, n, thr, and sorh have the same meaning as above, and keepapp set to 1 keeps the approximation coefficients untouched.

De-Noising in Action

We begin with examples of one-dimensional de-noising methods, with the first example credited to Donoho and Johnstone. The first test function can be generated using wnoise.

Figure 6-29: Blocks Signal De-Noising

Since only a small number of large coefficients characterize the original signal, the method performs very well (see Figure 6-29 above). If you want to see more about how the thresholding works, use the GUI (see De-Noising Signals).

As a second example, let us try the method on the highly perturbed part of the electrical signal studied above.

According to this previous analysis, let us use the db3 wavelet and decompose at level 3.

To deal with the composite noise nature, let us try a level-dependent noise size estimation.
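One standard level-dependent estimate (due to Donoho and Johnstone, and the idea behind the toolbox's 'mln' option) takes, at each level, the median absolute detail coefficient divided by 0.6745. The Python sketch below is an illustration of that estimator, not toolbox code:

```python
import numpy as np

def sigma_mad(d):
    """Robust noise level estimate from detail coefficients:
    for Gaussian noise, median(|d|) is approximately 0.6745 * sigma."""
    return np.median(np.abs(d)) / 0.6745

rng = np.random.default_rng(2)
d1 = 0.5 * rng.standard_normal(4096)   # level-1 details, true sigma = 0.5
d2 = 2.0 * rng.standard_normal(2048)   # level-2 details, true sigma = 2.0
print(sigma_mad(d1), sigma_mad(d2))    # level-dependent estimates near 0.5 and 2.0
```

The median makes the estimate insensitive to the few large coefficients carrying the signal, which is why it is applied to the detail coefficients rather than to the signal itself.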

Figure 6-30: Electrical Signal De-Noising

The result is quite good in spite of the time heterogeneity of the noise before and after the beginning of the sensor failure around time 2450.

Extension to Image De-Noising

The de-noising method described for the one-dimensional case also applies to images, and works well on geometrical images. A direct translation of the one-dimensional model is

    s(i,j) = f(i,j) + sigma*e(i,j)

where e is a white Gaussian noise with unit variance.

The two-dimensional de-noising procedure has the same three steps and uses two-dimensional wavelet tools instead of one-dimensional ones. For the threshold selection, prod(size(s)) is used instead of length(s) if the fixed form threshold is used.
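The substitution of prod(size(s)) for length(s) is a one-line change. A small illustrative sketch:

```python
import numpy as np

def fixed_form_threshold(shape):
    """sqrt(2*log(N)) with N the number of samples:
    length(s) in 1-D, prod(size(s)) in 2-D."""
    n = int(np.prod(shape))
    return np.sqrt(2 * np.log(n))

print(fixed_form_threshold((1000,)))      # 1-D signal of length 1000
print(fixed_form_threshold((256, 256)))   # 256-by-256 image: N = 65536
```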

Note that except for the "automatic" one-dimensional de-noising case, de-noising and compression are performed using wdencmp. As an example, wdencmp can be used to de-noise a real image.

The result shown below is acceptable.

Figure 6-31: Image De-Noising

One-Dimensional Variance Adaptive Thresholding of Wavelet Coefficients

Local thresholding of wavelet coefficients, for one- or two-dimensional data, is a capability available from many of the graphical interface tools throughout the MATLAB Wavelet Toolbox (see Using Wavelets).

The idea is to define, level by level, time-dependent thresholds, and thus increase the capability of the de-noising strategies to handle nonstationary variance noise models.

More precisely, the model assumes (as previously) that the observation is equal to the interesting signal superimposed on a noise (see De-Noising).

But the noise variance can vary with time. There are several different variance values on several time intervals. The values as well as the intervals are unknown.

Let us focus on the problem of estimating the change points or, equivalently, the intervals. The algorithm used is based on an original work of Marc Lavielle on the detection of change points using dynamic programming (see [Lav99] in References).

Let us generate a signal from a fixed-design regression model with two noise variance change points located at positions 200 and 600.
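Such a model is easy to simulate. In the Python sketch below (an illustration; the sample size, noise levels, and regression function are this example's assumptions, not toolbox defaults), a sine observed in noise has a noise standard deviation that switches at samples 200 and 600:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
t = np.arange(n) / n
f = np.sin(4 * np.pi * t)            # fixed-design regression function
sigma = np.ones(n)                   # piecewise-constant noise level
sigma[200:600] = 3.0                 # first variance change point at 200
sigma[600:] = 0.5                    # second variance change point at 600
x = f + sigma * rng.standard_normal(n)

# the three noise regimes have clearly different empirical variances
print(x[:200].var(), x[200:600].var(), x[600:].var())
```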

The aim of this example is to recover the two change points from the signal x.
In addition, this example illustrates how the GUI tools (see Using Wavelets) locate the change points for interval dependent thresholding.

Step 1. Recover a noisy signal by suppressing an approximation.

The reconstructed detail at level 1 recovered at this stage is almost signal free. It captures the main features of the noise from a change-point detection viewpoint, provided that the interesting part of the signal has a sparse wavelet representation.

Step 2. To remove almost all the signal, replace the 2% biggest values by the mean.
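Step 2 amounts to clipping the largest-magnitude samples. A numpy transcription of the idea (an illustration, not the toolbox code) could be:

```python
import numpy as np

def replace_biggest(x, fraction=0.02):
    """Replace the `fraction` largest |x| values by the mean of x,
    so that almost no signal structure survives in the result."""
    x = np.asarray(x, dtype=float).copy()
    k = max(1, int(round(fraction * x.size)))
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k biggest magnitudes
    x[idx] = x.mean()
    return x

x = np.array([0.1, -0.2, 5.0, 0.3, -4.0])
out = replace_biggest(x, fraction=0.4)   # the two spikes are replaced by 0.24
print(out)
```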

Step 3. Use the wvarchg function to estimate the change points.

Two change points and three intervals are proposed. Since the three interval variances of the noise are very different, the optimization program easily detects the correct structure.

The estimated change points are close to the true change points: 200 and 600.
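To give the flavor of what wvarchg computes (the real function uses Marc Lavielle's dynamic programming; the brute-force search below is only a toy illustration of the same penalized-variance idea), assume piecewise-constant noise and minimize the Gaussian log-likelihood cost, the sum over segments of length times log(variance):

```python
import numpy as np

def two_change_points(y, margin=30):
    """Exhaustive search for the two variance change points minimizing
    sum over segments of n_k * log(var_k) (Gaussian maximum likelihood)."""
    y = np.asarray(y, dtype=float)
    n = y.size
    c1 = np.concatenate(([0.0], np.cumsum(y)))       # prefix sums of y
    c2 = np.concatenate(([0.0], np.cumsum(y * y)))   # prefix sums of y**2

    def seg_cost(a, b):                              # cost of segment y[a:b]
        m = b - a
        mean = (c1[b] - c1[a]) / m
        var = (c2[b] - c2[a]) / m - mean ** 2
        return m * np.log(var)

    best, best_cp = np.inf, None
    for t1 in range(margin, n - 2 * margin):
        left = seg_cost(0, t1)
        for t2 in range(t1 + margin, n - margin):
            cost = left + seg_cost(t1, t2) + seg_cost(t2, n)
            if cost < best:
                best, best_cp = cost, (t1, t2)
    return best_cp

rng = np.random.default_rng(4)
sigma = np.concatenate([np.full(200, 1.0), np.full(400, 4.0), np.full(200, 2.0)])
y = sigma * rng.standard_normal(800)   # pure noise with 3 variance regimes
cp = two_change_points(y)
print(cp)                              # close to the true change points (200, 600)
```

Dynamic programming reduces this quadratic search to something usable for an unknown number of change points, which is exactly what makes [Lav99] practical.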

Step 4. (Optional) Replace the estimated change points.

For 2 ≤ i ≤ 6, t_est(i,1:i-1) contains the i-1 instants of the variance change points. Since kopt is the proposed number of change points, you can obtain the estimated change points by computing

    cp_est = t_est(kopt+1,1:kopt)

More About De-Noising

The de-noising methods based on wavelet decomposition were mainly initiated by Donoho and Johnstone in the United States, and by Kerkyacharian and Picard in France. Meyer considers this topic one of the most significant applications of wavelets (cf. [Mey93], page 173). This chapter and the corresponding M-files follow the work of the above-mentioned researchers. More details can be found in Donoho's references in the References section and in the section More About the Thresholding Strategies.



© 1994-2005 The MathWorks, Inc.