Preprocessing and Postprocessing
Neural network training can be made more efficient if certain preprocessing steps are performed on the network inputs and targets. In this section, we describe several preprocessing routines that you can use.
Min and Max (premnmx, postmnmx, tramnmx)
Before training, it is often useful to scale the inputs and targets so that they always fall within a specified range. The function premnmx can be used to scale inputs and targets so that they fall in the range [-1,1]. The following code illustrates the use of this function.
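A minimal sketch of this step is shown below. It assumes a feedforward network net has already been created (for example with newff); that network object is not part of this section.

   [pn,minp,maxp,tn,mint,maxt] = premnmx(p,t);   % scale inputs and targets into [-1,1]
   net = train(net,pn,tn);                       % train on the normalized data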
The original network inputs and targets are given in the matrices p and t. The normalized inputs and targets, pn and tn, that are returned will all fall in the interval [-1,1]. The vectors minp and maxp contain the minimum and maximum values of the original inputs, and the vectors mint and maxt contain the minimum and maximum values of the original targets. After the network has been trained, these vectors should be used to transform any future inputs that are applied to the network. They effectively become a part of the network, just like the network weights and biases.
If premnmx is used to scale both the inputs and targets, then the output of the network will be trained to produce outputs in the range [-1,1]. If you want to convert these outputs back into the same units that were used for the original targets, then you should use the routine postmnmx. In the following code, we simulate the network that was trained in the previous code, and then convert the network output back into the original units.
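Continuing the sketch above, this step might look as follows; net, pn, mint, and maxt are the variables from the previous listing.

   an = sim(net,pn);             % simulate the network with the normalized inputs
   a = postmnmx(an,mint,maxt);   % convert the outputs back to the original target units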
The network output an will correspond to the normalized targets tn. The un-normalized network output a is in the same units as the original targets t.
If premnmx is used to preprocess the training set data, then whenever the trained network is used with new inputs, they should be preprocessed with the minimums and maximums that were computed for the training set. This can be accomplished with the routine tramnmx. In the following code, we apply a new set of inputs to the network we have already trained.
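A sketch of this step follows; the matrix pnew is an illustrative name for the new inputs and is not defined in this section.

   pnewn = tramnmx(pnew,minp,maxp);     % scale new inputs with the training-set ranges
   anewn = sim(net,pnewn);              % simulate the trained network
   anew = postmnmx(anewn,mint,maxt);    % return the outputs to the original units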