This section discusses the problem of signal recovery from noisy data. The problem is easy to understand by looking at the following simple example, where a slow sine wave is corrupted by white noise.
Figure 6-27: A Simple De-Noising Example
The Basic One-Dimensional Model
The underlying model for the noisy signal is basically of the following form:

s(n) = f(n) + σe(n)

where time n is equally spaced.

In the simplest model, we suppose that e(n) is a Gaussian white noise N(0,1) and that the noise level σ is equal to 1.
The de-noising objective is to suppress the noise part of the signal s and to recover f.
The method is efficient for families of functions f that have only a few nonzero wavelet coefficients. These functions have a sparse wavelet representation. For example, a function that is smooth almost everywhere, with only a few abrupt changes, has such a property.
From a statistical viewpoint, the model is a regression model over time, and the method can be viewed as a nonparametric estimation of the function f using an orthogonal basis.
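To illustrate this sparsity, here is a minimal sketch (the choice of the wnoise test signal, the sym8 wavelet, the 5 decomposition levels, and the 10% cutoff are assumptions made only for this illustration) that counts how few wavelet coefficients of a piecewise-constant signal are large:

% Hypothetical sketch: a piecewise-constant 'blocks' signal has a sparse
% wavelet representation; only a few coefficients are large.
f = wnoise(1,10);                        % noise-free 'blocks' test signal on 2^10 points
[c,l] = wavedec(f,5,'sym8');             % 5-level wavelet decomposition
ratio = sum(abs(c) > 0.1*max(abs(c)))/length(c)   % fraction of "large" coefficients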
De-Noising Procedure Principles
The general de-noising procedure involves three steps. The basic version of the procedure follows the steps described below.

1. Decompose. Choose a wavelet and a level N. Compute the wavelet decomposition of the signal s at level N.

2. Threshold detail coefficients. For each level from 1 to N, select a threshold and apply thresholding to the detail coefficients.

3. Reconstruct. Compute the wavelet reconstruction using the original approximation coefficients of level N and the modified detail coefficients of levels from 1 to N.
Two points must be addressed: how to choose the threshold, and how to perform the thresholding.
Thresholding can be done using the function

yt = wthresh(y,sorh,thr)

which returns the soft or hard thresholding of the input y, depending on the sorh option. Hard thresholding is the simplest method. Soft thresholding has nice mathematical properties, and the corresponding theoretical results are available (for instance, see [Don95] in References).
Figure 6-28: Hard and Soft Thresholding of the Signal s = x
Comment: Let t denote the threshold. The hard threshold signal is x if |x| > t, and is 0 if |x| ≤ t. The soft threshold signal is sign(x)(|x| - t) if |x| > t and is 0 if |x| ≤ t.

Hard thresholding can be described as the usual process of setting to zero the elements whose absolute values are lower than the threshold. Soft thresholding is an extension of hard thresholding, first setting to zero the elements whose absolute values are lower than the threshold, and then shrinking the nonzero coefficients toward 0 (see Figure 6-28 above).
As can be seen in the comment of Figure 6-28, the hard procedure creates discontinuities at x = ±t, while the soft procedure does not.
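As a minimal illustration of the two rules, the following sketch reproduces curves similar to Figure 6-28; the signal s = x on [-1,1] and the threshold value 0.4 are assumptions chosen only for this sketch:

% Hypothetical sketch: hard vs. soft thresholding of the signal s = x.
x   = linspace(-1,1,201);            % test signal s = x
thr = 0.4;                           % assumed threshold value
yh  = wthresh(x,'h',thr);            % hard: keep values above threshold, zero the rest
ys  = wthresh(x,'s',thr);            % soft: zero below threshold, shrink the rest toward 0
plot(x,yh,'--',x,ys,'-'), legend('hard','soft')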
Threshold Selection Rules
According to the basic noise model, four threshold selection rules are implemented in the M-file thselect. Each rule corresponds to a tptr option in the command

thr = thselect(y,tptr)

which returns the threshold value.
tptr = 'rigrsure' uses, for the soft threshold estimator, a threshold selection rule based on Stein's Unbiased Estimate of Risk (quadratic loss function). You get an estimate of the risk for a particular threshold value t. Minimizing the risk in t gives a selection of the threshold value.
tptr = 'sqtwolog' uses a fixed form threshold yielding minimax performance multiplied by a small factor proportional to log(length(s)).
tptr = 'heursure' is a mixture of the two previous options. If the signal-to-noise ratio is very small, the SURE estimate is very noisy, so when such a situation is detected, the fixed form threshold is used instead.
tptr = 'minimaxi' uses a fixed threshold chosen to yield minimax performance for mean square error against an ideal procedure. The minimax principle is used in statistics to design estimators. Since the de-noised signal can be viewed as an estimator of the unknown regression function, the minimax estimator is the option that realizes the minimum, over a given set of functions, of the maximum mean square error.
It is instructive to see how thselect works when y is a Gaussian white noise N(0,1) signal.
y = randn(1,1000);

thr = thselect(y,'rigrsure')
thr =
    2.0735

thr = thselect(y,'sqtwolog')
thr =
    3.7169

thr = thselect(y,'heursure')
thr =
    3.7169

thr = thselect(y,'minimaxi')
thr =
    2.2163
Because y is a standard Gaussian white noise, we expect that each method removes roughly all the coefficients and returns the result f(x) = 0. For the Stein's Unbiased Risk Estimate and minimax thresholds, roughly 3% of the coefficients are saved. With the other selection rules, all the coefficients are set to 0.
We know that the detail coefficients vector is the superposition of the coefficients of f and the coefficients of e, and that the decomposition of e leads to detail coefficients, which are standard Gaussian white noises.
So minimax and SURE threshold selection rules are more conservative and would be more convenient when small details of the function f lie near the noise range. The two other rules remove the noise more efficiently. The option 'heursure' is a compromise. In this example, the fixed form threshold wins.
Recalling step 2 of the de-noising procedure, the function thselect performs the threshold selection, and then each level is thresholded. This second step can be done using wthcoef, which directly handles the wavelet decomposition structure of the original signal s.
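A minimal sketch of the three steps under these conventions follows; the 'sym8' wavelet, the decomposition level 3, the 'rigrsure' rule, and soft thresholding are assumptions made only for this sketch, and wthcoef with the 't' option is used here to threshold the detail coefficients level by level:

% Hypothetical sketch of the three-step de-noising procedure on a signal s.
lev   = 3;
[c,l] = wavedec(s,lev,'sym8');            % step 1: decompose at level 3 with sym8
for k = 1:lev
    d   = detcoef(c,l,k);                 % detail coefficients at level k
    thr = thselect(d,'rigrsure');         % step 2a: select a threshold for this level
    c   = wthcoef('t',c,l,k,thr,'s');     % step 2b: soft-threshold that level
end
sd = waverec(c,l,'sym8');                 % step 3: reconstruct the de-noised signal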
Dealing with Unscaled Noise and Nonwhite Noise
Usually, in practice, the basic model cannot be used directly. We examine here the options available in the main de-noising function wden to deal with deviations from the model:

sd = wden(s,tptr,sorh,scal,n,wav)

which returns the de-noised version sd of the original signal s obtained using the tptr threshold selection rule. The other parameters needed are sorh, scal, n, and wav. The parameter sorh specifies the thresholding of the detail coefficients of the decomposition of s at level n by the wavelet called wav. The remaining parameter, scal, specifies the threshold rescaling method.
Option    Corresponding Model
'one'     Basic model
'sln'     Basic model with unscaled noise
'mln'     Basic model with nonwhite noise
scal = 'one' corresponds to the basic model.

Option scal = 'sln' handles threshold rescaling using a single estimation of the noise level based on the first-level coefficients. This estimation is implemented in the M-file wnoisest, which directly handles the wavelet decomposition structure of the original signal s.

Option scal = 'mln' handles threshold rescaling using a level-dependent estimation of the noise level.
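A minimal sketch of these options follows; the noisy signal s, the db3 wavelet, and the decomposition level 3 are assumptions made only for this illustration:

% Hypothetical sketch of the three rescaling options of wden on a noisy signal s.
lev    = 3;
sd_one = wden(s,'sqtwolog','s','one',lev,'db3');   % basic model, noise level 1
sd_sln = wden(s,'sqtwolog','s','sln',lev,'db3');   % single noise estimate from level 1
sd_mln = wden(s,'sqtwolog','s','mln',lev,'db3');   % level-dependent noise estimates
% The level-by-level noise estimates used by 'mln' can be inspected with wnoisest:
[c,l]  = wavedec(s,lev,'db3');
stdc   = wnoisest(c,l,1:lev)                        % estimated noise std per detail level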
For a more general procedure, the wdencmp function performs wavelet coefficient thresholding for both de-noising and compression purposes, while directly handling one-dimensional and two-dimensional data. It allows you to define your own thresholding strategy by selecting, in

xd = wdencmp(opt,x,wav,n,thr,sorh,keepapp)

opt = 'gbl' and thr a positive real number for a uniform (global) threshold.

opt = 'lvd' and thr a vector for level-dependent thresholds.

keepapp = 1 to keep the approximation coefficients, as previously, and keepapp = 0 to allow thresholding of the approximation coefficients.

x is the signal to be de-noised, and wav, n, and sorh are the same as above.
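For instance, a minimal sketch of the level-dependent option follows; the noisy signal x, the db3 wavelet, the level 3, and the threshold values are assumptions made only for illustration, and with opt = 'lvd' the approximation coefficients are kept, so no keepapp argument is passed here:

% Hypothetical sketch: level-dependent de-noising with wdencmp.
n   = 3;
thr = [3.0 2.5 2.0];                     % assumed thresholds for detail levels 1 to 3
xd  = wdencmp('lvd',x,'db3',n,thr,'s');  % soft, level-dependent thresholding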
De-Noising in Action
We begin with examples of one-dimensional de-noising methods, with the first example credited to Donoho and Johnstone. You can use the following M-file to get the first test function, using wnoise.
% Set signal-to-noise ratio and set the random seed.
sqrt_snr = 4;
init = 2055615866;

% Generate original signal xref and a noisy version x adding
% a standard Gaussian white noise.
[xref,x] = wnoise(1,11,sqrt_snr,init);

% De-noise the noisy signal using soft heuristic SURE thresholding
% on detail coefficients obtained from the decomposition of x
% at level 3 by the sym8 wavelet. The basic model option 'one' is
% used, since the added noise is standard Gaussian.
xd = wden(x,'heursure','s','one',3,'sym8');
Figure 6-29: Blocks Signal De-Noising
Since only a small number of large coefficients characterize the original signal, the method performs very well (see Figure 6-29 above). If you want to see more about how the thresholding works, use the GUI (see De-Noising Signals).
As a second example, let us try the method on the highly perturbed part of the electrical signal studied above.
According to this previous analysis, let us use the db3 wavelet and decompose at level 3. To deal with the composite nature of the noise, let us try a level-dependent noise size estimation.
% Load electrical signal and select part of it.
load leleccum;
indx = 2000:3450;
x = leleccum(indx);

% Find first value in order to avoid edge effects.
deb = x(1);

% De-noise signal using soft fixed form thresholding
% and unknown noise option.
xd = wden(x-deb,'sqtwolog','s','mln',3,'db3')+deb;
Figure 6-30: Electrical Signal De-Noising
The result is quite good despite the heterogeneity over time of the noise, before and after the start of the sensor failure around time 2450.
Extension to Image De-Noising
The de-noising method described for the one-dimensional case applies also to images, and works well on geometrical images. A direct translation of the one-dimensional model is

s(i,j) = f(i,j) + σe(i,j)

where e is a white Gaussian noise with unit variance.
The two-dimensional de-noising procedure has the same three steps and uses two-dimensional wavelet tools instead of one-dimensional ones. For the threshold selection, prod(size(s)) is used instead of length(s) if the fixed form threshold is used.
Note that, except for the "automatic" one-dimensional de-noising case, de-noising and compression are performed using wdencmp. As an example, you can use the following M-file illustrating the de-noising of a real image.
% Load original image.
load woman

% Generate noisy image.
init = 2055615866;
randn('seed',init);
x = X + 15*randn(size(X));

% Find default values. In this case fixed form threshold
% is used with estimation of level noise, thresholding
% mode is soft and the approximation coefficients are
% kept.
[thr,sorh,keepapp] = ddencmp('den','wv',x);

% thr is equal to estimated_sigma*sqrt(log(prod(size(X))))
thr

thr =
  107.6428

% De-noise image using global thresholding option.
xd = wdencmp('gbl',x,'sym4',2,thr,sorh,keepapp);

% Plots.
colormap(pink(255)), sm = size(map,1);
subplot(221), image(wcodemat(X,sm)), title('Original Image')
subplot(222), image(wcodemat(x,sm)), title('Noisy Image')
subplot(223), image(wcodemat(xd,sm)), title('De-Noised Image')
The result shown below is acceptable.
One-Dimensional Variance Adaptive Thresholding of Wavelet Coefficients
Local thresholding of wavelet coefficients, for one- or two-dimensional data, is a capability available in many of the graphical interface tools throughout the Wavelet Toolbox (see Using Wavelets). The idea is to define, level by level, time-dependent thresholds, and so to increase the capability of the de-noising strategies to handle nonstationary variance noise models.
More precisely, the model assumes (as previously) that the observation is equal to the interesting signal superimposed on noise (see De-Noising).

But the noise variance can vary with time. There are several different variance values on several time intervals. The values, as well as the intervals, are unknown.

Let us focus on the problem of estimating the change points or, equivalently, the intervals. The algorithm used is based on original work by Marc Lavielle on the detection of change points using dynamic programming (see [Lav99] in References).
Let us generate a signal from a fixed-design regression model with two noise variance change points located at positions 200 and 600.
% Generate blocks test signal.
x = wnoise(1,10);

% Generate noisy blocks with change points.
init = 2055615866;
randn('seed',init);
bb = randn(1,length(x));
cp1 = 200; cp2 = 600;
x = x + [bb(1:cp1),bb(cp1+1:cp2)/3,bb(cp2+1:end)];
The aim of this example is to recover the two change points from the signal x. In addition, this example illustrates how the GUI tools (see Using Wavelets) locate the change points for interval-dependent thresholding.
Step 1. Recover a noisy signal by suppressing an approximation.
% Perform a single-level wavelet decomposition
% of the signal using db3.
wname = 'db3'; lev = 1;
[c,l] = wavedec(x,lev,wname);

% Reconstruct detail at level 1.
det = wrcoef('d',c,l,wname,1);
The reconstructed detail at level 1, recovered at this stage, is almost signal free. It captures the main features of the noise from a change-point detection viewpoint, provided that the interesting part of the signal has a sparse wavelet representation. To remove almost all the signal, we replace the biggest values by the mean.
Step 2. To remove almost all the signal, replace the 2% largest values by the mean.
x = sort(abs(det));
v2p100 = x(fix(length(x)*0.98));
ind = find(abs(det)>v2p100);
det(ind) = mean(det);
Step 3. Use the wvarchg function to estimate the change points, as in the sketch below.
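A minimal sketch of this step, assuming the detail signal det from Step 2, at most 5 change points, and the default minimum delay between change points (these parameter choices are assumptions made only for illustration):

% Hypothetical sketch: estimate the variance change points of det.
[cp_est,kopt,t_est] = wvarchg(det,5);
cp_est     % estimated change-point positions
kopt       % proposed number of change points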
Two change points and three intervals are proposed. Since the three interval variances of the noise are very different, the optimization program easily detects the correct structure.
The estimated change points are close to the true change points: 200 and 600.
Step 4. (Optional) Replace the estimated change points.
For 2 ≤ i ≤ 6, t_est(i,1:i-1) contains the i-1 instants of the variance change points, and since kopt is the proposed number of change points, the estimated change points are given by t_est(kopt+1,1:kopt). You can replace the estimated change points by computing t_est(knew+1,1:knew) for another number of change points knew, as in the sketch below.
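For example (knew is an assumed alternative number of change points, chosen only for this sketch):

% Hypothetical sketch: read the change points for another proposed number knew.
knew   = 1;                            % assumed alternative number of change points (1 to 5)
cp_new = t_est(knew+1,1:knew)          % corresponding change-point instants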
More About De-Noising
The de-noising methods based on wavelet decomposition were mainly initiated by Donoho and Johnstone in the USA, and Kerkyacharian and Picard in France. Meyer considers this topic to be one of the most significant applications of wavelets (cf. [Mey93] page 173). This chapter and the corresponding M-files follow the work of the above-mentioned researchers. More details can be found in Donoho's references in the section References and in the section More About the Thresholding Strategies.