newrb (Neural Network Toolbox)
Syntax
net = newrb
[net,tr] = newrb(P,T,goal,spread,MN,DF)
Description
Radial basis networks can be used to approximate functions. newrb
adds neurons to the hidden layer of a radial basis network until it meets the specified mean squared error goal.
net = newrb creates a new network with a dialog box.

newrb(P,T,goal,spread,MN,DF) takes two of these arguments,
P -- R x Q matrix of Q input vectors
T -- S x Q matrix of Q target class vectors
goal -- Mean squared error goal, default = 0.0
spread -- Spread of radial basis functions, default = 1.0
MN -- Maximum number of neurons, default is Q
DF -- Number of neurons to add between displays, default = 25
and returns a new radial basis network.
The larger spread is, the smoother the function approximation will be. Too large a spread means a lot of neurons will be required to fit a fast-changing function. Too small a spread means many neurons will be required to fit a smooth function, and the network may not generalize well. Call newrb with different spreads to find the best value for a given problem.
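One way to do that is to design a network at several spread values and compare errors, as in this minimal sketch (the spread values are illustrative, and P and T are assumed to be already defined):

    for spread = [0.5 1.0 2.0 4.0]
        net = newrb(P,T,0.0,spread);   % design a network with this spread
        perf = mse(T - sim(net,P))     % mean squared error on the training data
    end

Checking the error on data not used in the design gives a better indication of generalization.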
Examples
Here we design a radial basis network given inputs P and targets T.
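A minimal example with simple one-dimensional data (the values are illustrative):

    P = [1 2 3];
    T = [2.0 4.1 5.9];
    net = newrb(P,T);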
Here the network is simulated for a new input.
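Continuing the example above:

    P = 1.5;
    Y = sim(net,P)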
Algorithm
newrb creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net input with netprod. The second layer has purelin neurons, and calculates its weighted input with dotprod and its net input with netsum. Both layers have biases.
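Written out with basic operations, simulating such a network looks roughly like the following sketch (W1 and b1 are the assumed first-layer weights and biases, LW2 and b2 the assumed second-layer weights and biases, and Q the number of input vectors):

    D  = dist(W1,P);                   % Euclidean distances, S1-by-Q
    a1 = radbas(D .* (b1*ones(1,Q)));  % net input is distances .* biases (netprod)
    a2 = LW2*a1 + b2*ones(1,Q);        % second layer: dotprod plus netsum (purelin)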
Initially the radbas layer has no neurons. The following steps are repeated until the network's mean squared error falls below goal.

1. The network is simulated.
2. The input vector with the greatest error is found.
3. A radbas neuron is added with weights equal to that vector.
4. The purelin layer weights are redesigned to minimize error.
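The loop below is a conceptual sketch of this procedure, not the toolbox source; it assumes P (R x Q), T (S x Q), goal, spread, and MN are already defined, and uses the bias sqrt(-log(.5))/spread so each radbas neuron outputs 0.5 at a distance of spread from its center:

    Q = size(P,2);
    b = sqrt(-log(.5))/spread;          % first-layer bias
    C = zeros(size(P,1),0);             % centers = input vectors chosen so far
    while true
        if isempty(C)
            A = zeros(0,Q);             % no hidden neurons yet
        else
            A = radbas(b*dist(C',P));   % hidden-layer outputs, K-by-Q
        end
        Aug = [A; ones(1,Q)];           % extra row for the output bias
        W = Aug'\T';                    % least-squares design of the linear layer
        E = T - W'*Aug;                 % errors on the training set
        if mse(E) <= goal || size(C,2) >= MN
            break
        end
        [maxerr,i] = max(sum(E.^2,1));  % input vector with the greatest error
        C = [C, P(:,i)];                % add a radbas neuron at that vector
    end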
See Also
newpnn | newrbe