Exact Design (newrbe)
Radial basis networks can be designed with the function newrbe. This function can produce a network with zero error on training vectors. It is called in the following way.
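    net = newrbe(P,T,SPREAD)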
The function newrbe takes matrices of input vectors P and target vectors T, and a spread constant SPREAD for the radial basis layer, and returns a network with weights and biases such that the outputs are exactly T when the inputs are P.
The function newrbe creates as many radbas neurons as there are input vectors in P, and sets the first-layer weights to P'. Thus, we have a layer of radbas neurons in which each neuron acts as a detector for a different input vector. If there are Q input vectors, then there will be Q neurons.
Each bias in the first layer is set to 0.8326/SPREAD. This gives radial basis functions that cross 0.5 at input distances of +/- SPREAD: radbas(n) = exp(-n^2) equals 0.5 at n = 0.8326, so with a bias of 0.8326/SPREAD the weighted input reaches that value exactly when the distance between the input vector and the weight vector is SPREAD. This determines the width of the area in the input space to which each neuron responds. If SPREAD is 4, then each radbas neuron will respond with 0.5 or more to any input vector within a vector distance of 4 from its weight vector. As we shall see, SPREAD should be large enough that neurons respond strongly to overlapping regions of the input space.
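For example, the following lines build such a network and inspect its first layer (a minimal sketch; net.IW and net.b are the toolbox's standard weight and bias cell arrays):

    P = [1 2 3; 4 5 6];     % three two-element input vectors, so Q = 3
    T = [0 1 0];            % one target for each input vector
    net = newrbe(P,T,4);    % SPREAD = 4
    net.IW{1,1}             % first-layer weights: equal to P'
    net.b{1}                % first-layer biases: each equal to 0.8326/4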
The second-layer weights IW{2,1} and biases b{2} are found by simulating the first-layer outputs A{1}, and then solving the following linear expression.
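    [IW{2,1} b{2}] * [A{1}; ones(1,Q)] = T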
We know the inputs to the second layer (A{1}) and the target (T), and the layer is linear. We can use the following code to calculate the weights and biases of the second layer to minimize the sum-squared error.
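    Wb = T/[A{1}; ones(1,Q)]

(The matrix right-division X = B/A solves X*A = B in the minimum sum-squared-error sense, so no explicit optimization is needed.)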
Here Wb contains both weights and biases, with the biases in the last column. The sum-squared error will always be 0, as explained below.
We have a problem with C constraints (input/target pairs) and each neuron has C+1 variables (the C weights from the C radbas neurons, and a bias). A linear problem with C constraints and more than C variables has an infinite number of zero-error solutions!
Thus, newrbe creates a network with zero error on training vectors. The only condition we have to meet is to make sure that SPREAD is large enough that the active input regions of the radbas neurons overlap enough for several radbas neurons to have fairly large outputs at any given moment. This makes the network function smoother and results in better generalization for new input vectors occurring between the input vectors used in the design. (However, SPREAD should not be so large that each neuron is effectively responding in the same large area of the input space.)
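The zero-error property is easy to confirm by simulating the network on its design inputs; a minimal sketch, assuming the net, P, and T from the example above:

    Y = sim(net,P);    % network outputs for the design inputs
    sse(T-Y)           % sum-squared error: zero up to numerical precision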
The drawback to newrbe is that it produces a network with as many hidden neurons as there are input vectors. For this reason, newrbe does not return an acceptable solution when many input vectors are needed to properly define a network, as is typically the case.