Neural Network Toolbox
newrb

Design a radial basis network

Syntax

net = newrb

[net,tr] = newrb(P,T,goal,spread,MN,DF)

Description

Radial basis networks can be used to approximate functions. newrb adds neurons to the hidden layer of a radial basis network until it meets the specified mean squared error goal.

net = newrb creates a new network with a dialog box.

newrb(P,T,goal,spread,MN,DF) takes two to six of these arguments,

  P       R-by-Q matrix of Q input vectors
  T       S-by-Q matrix of Q target vectors
  goal    Mean squared error goal (default = 0.0)
  spread  Spread of radial basis functions (default = 1.0)
  MN      Maximum number of neurons (default is Q)
  DF      Number of neurons to add between displays (default = 25)

and returns a new radial basis network.

The larger spread is, the smoother the function approximation will be. Too large a spread means many neurons will be required to fit a fast-changing function. Too small a spread means many neurons will be required to fit a smooth function, and the network may not generalize well. Call newrb with different spreads to find the best value for a given problem.
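The role of spread can be seen directly from the radial basis transfer function. newrb sets the hidden-layer bias to 0.8326/spread (0.8326 ≈ sqrt(-ln 0.5)), so a neuron's output falls to 0.5 when an input is spread away from the neuron's weight vector. A minimal Python sketch of this response (an illustration, not the Toolbox code; the function names are my own):

```python
import math

def radbas(n):
    """Radial basis transfer function: exp(-n^2)."""
    return math.exp(-n * n)

def response(distance, spread):
    """Output of one radial basis neuron for an input at `distance`
    from its weight vector. The bias 0.8326/spread makes the output
    0.5 when distance == spread."""
    b = 0.8326 / spread
    return radbas(b * distance)

# A larger spread gives a wider (smoother) response around the center:
for spread in (0.5, 2.0):
    print(spread, [round(response(d, spread), 3) for d in (0.0, 0.5, 2.0)])
```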

Examples

Here we design a radial basis network given inputs P and targets T.

    P = [1 2 3];
    T = [2.0 4.1 5.9];
    net = newrb(P,T);

Here the network is simulated for a new input.

    P = 1.5;
    Y = sim(net,P)

Algorithm

newrb creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net input with netprod. The second layer has purelin neurons, and calculates its weighted input with dotprod and its net input with netsum. Both layers have biases.
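The layer computations just described can be sketched in Python with NumPy. The names W1, b1, W2, b2 are my own; this is an illustration of the forward pass, not the Toolbox implementation:

```python
import numpy as np

def radbas(n):
    return np.exp(-n**2)          # radial basis transfer function

def sim_rb(W1, b1, W2, b2, p):
    """Simulate a two-layer radial basis network for one input column p.

    Layer 1: weighted input = Euclidean distance between p and each
    row of W1 (dist); net input = elementwise product with the bias
    (netprod); transfer = radbas.
    Layer 2: weighted input = W2 @ a1 (dotprod); net input adds the
    bias (netsum); transfer = purelin (identity)."""
    d = np.linalg.norm(W1 - p.T, axis=1)   # dist: one distance per hidden neuron
    a1 = radbas(d * b1)                    # netprod, then radbas
    return W2 @ a1 + b2                    # dotprod, netsum, purelin

# One hidden neuron centered at [1, 2] with spread 1 (bias 0.8326):
W1 = np.array([[1.0, 2.0]])
b1 = np.array([0.8326])
W2 = np.array([[3.0]])
b2 = np.array([0.5])
print(sim_rb(W1, b1, W2, b2, np.array([[1.0], [2.0]])))  # 3.5 at the center
```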

Initially the radbas layer has no neurons. The following steps are repeated until the network's mean squared error falls below goal.

  1. The network is simulated.
  2. The input vector with the greatest error is found.
  3. A radbas neuron is added with weights equal to that vector.
  4. The purelin layer weights are redesigned to minimize error.
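The four steps above can be sketched as a simplified Python loop. This is an illustration under stated assumptions, not the Toolbox code: the names design_rb, centers, W2, b2 are my own, step 4 is done here by ordinary least squares, and details such as adding DF neurons between progress displays are omitted:

```python
import numpy as np

def radbas(n):
    return np.exp(-n**2)

def design_rb(P, T, goal=0.0, spread=1.0, max_neurons=None):
    """Greedy radial basis design: add one hidden neuron at a time,
    centered on the worst-fit input vector, until the mean squared
    error falls below `goal` or `max_neurons` is reached."""
    Q = P.shape[1]                      # number of input vectors
    max_neurons = max_neurons or Q
    b1 = 0.8326 / spread                # bias giving 0.5 response at `spread`
    centers = np.empty((0, P.shape[0]))
    W2 = b2 = None
    Y = np.zeros_like(T)
    for _ in range(max_neurons):
        # Step 1: simulate; step 2: find the worst-fit input vector.
        err = ((T - Y) ** 2).sum(axis=0)
        if err.mean() <= goal:
            break
        # Step 3: add a radbas neuron with weights equal to that vector.
        centers = np.vstack([centers, P[:, err.argmax()]])
        # Hidden-layer outputs for all Q inputs.
        D = np.linalg.norm(centers[:, :, None] - P[None], axis=1)
        A1 = radbas(b1 * D)
        # Step 4: redesign the linear layer (weights and bias) by
        # least squares to minimize the remaining error.
        X = np.vstack([A1, np.ones(Q)])
        Wb = np.linalg.lstsq(X.T, T.T, rcond=None)[0].T
        W2, b2 = Wb[:, :-1], Wb[:, -1:]
        Y = W2 @ A1 + b2
    return centers, b1, W2, b2
```

Because each new neuron is centered exactly on the currently worst-fit input, the error at that input drops sharply after the linear layer is re-solved, which is what drives the loop toward the goal.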

See Also

sim, newrbe, newgrnn, newpnn



© 1994-2005 The MathWorks, Inc.