More Efficient Design (newrb)
The function newrb iteratively creates a radial basis network one neuron at a time. Neurons are added to the network until the sum-squared error falls beneath an error goal or a maximum number of neurons has been reached. The call for this function is

    net = newrb(P,T,GOAL,SPREAD)

The function newrb takes matrices of input and target vectors, P and T, and design parameters GOAL and SPREAD, and returns the desired network.
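For example, the following lines design a radial basis network for a simple curve-fitting problem and simulate it with sim. The data and parameter values here are illustrative assumptions, not taken from the text above.

    P = -3:.2:3;                      % example inputs (assumed)
    T = sin(P);                       % example targets (assumed)
    net = newrb(P,T,0.02,1);          % GOAL = 0.02, SPREAD = 1
    Y = sim(net,P);                   % simulate the designed network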
The design method of newrb is similar to that of newrbe. The difference is that newrb creates neurons one at a time. At each iteration, the input vector that results in lowering the network error the most is used to create a radbas neuron. The error of the new network is checked, and if it is low enough, newrb is finished. Otherwise the next neuron is added. This procedure is repeated until the error goal is met or the maximum number of neurons is reached.
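The loop below is a minimal sketch of this greedy procedure, not the toolbox source. It assumes Gaussian radbas neurons and a linear output layer re-solved by least squares at each candidate step, and it uses the toolbox function dist for Euclidean distances; the data, goal, and spread values are illustrative.

    P = -3:.5:3;  T = sin(P);          % example data (assumed)
    goal = 0.01;  spread = 1;
    b = sqrt(-log(.5))/spread;         % radbas bias: output is 0.5 at distance SPREAD
    C = zeros(size(P,1),0);            % centers chosen so far (columns)
    remaining = 1:size(P,2);           % input vectors not yet used as centers
    sse = Inf;
    while sse > goal && ~isempty(remaining)
      best = Inf;
      for i = remaining                % try each unused input vector as a center
        A  = exp(-(b*dist([C P(:,i)]',P)).^2);    % hidden outputs for this candidate
        Wb = T/[A; ones(1,size(P,2))];            % least-squares linear layer (weights, bias)
        e  = sum(sum((T - Wb*[A; ones(1,size(P,2))]).^2));
        if e < best, best = e; bestI = i; end
      end
      C = [C P(:,bestI)];              % keep the center that lowered the error most
      remaining(remaining == bestI) = [];
      sse = best;
    end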
As with newrbe, it is important that the spread parameter be large enough that the radbas neurons respond to overlapping regions of the input space, but not so large that all the neurons respond in essentially the same manner.
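As a rough illustration (the data and spread values below are assumed), designing the same problem with three different spreads shows this tradeoff:

    P = -3:.1:3;  T = sin(P);       % example data (assumed)
    net1 = newrb(P,T,0.02,0.01);    % too small: neurons barely overlap, spiky fit
    net2 = newrb(P,T,0.02,1);       % neurons respond to overlapping regions
    net3 = newrb(P,T,0.02,100);     % too large: all neurons respond nearly alike
    plot(P,T,'+',P,sim(net2,P))     % compare targets with the well-spread design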
Why not always use a radial basis network instead of a standard feedforward network? Radial basis networks, even when designed efficiently with newrbe, tend to have many times more neurons than a comparable feedforward network with tansig or logsig neurons in the hidden layer.

This is because sigmoid neurons can have outputs over a large region of the input space, while radbas neurons respond only to relatively small regions of the input space. The result is that the larger the input space (in terms of the number of inputs and the ranges over which those inputs vary), the more radbas neurons are required.
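For instance (a hypothetical side-by-side using this toolbox's newff and train; the network sizes and training parameters are assumptions), a feedforward network may match a radial basis design with far fewer hidden neurons:

    P = -3:.1:3;  T = sin(P);                  % example data (assumed)
    net_rb = newrb(P,T,0.02,1);                % fast design, but many radbas neurons
    net_ff = newff(minmax(P),[4 1],{'tansig','purelin'});
    net_ff.trainParam.epochs = 200;            % iterative training: slower,
    net_ff = train(net_ff,P,T);                %   but only 4 tansig neurons here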
On the other hand, designing a radial basis network often takes much less time than training a sigmoid/linear network, and can sometimes result in fewer neurons being used, as can be seen in the next demonstration.