Neural Network Toolbox
Initialization Functions
You can create three kinds of initialization functions: network, layer, and weight/bias initialization.
Network Initialization Functions
The most general kind of initialization function is the network initialization function, which sets all the weights and biases of a network to values suitable as a starting point for training or adaptation.
Once defined, you can assign your network initialization function to a network. Your network initialization function is then used whenever you initialize your network.
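For instance, assuming a custom function with the hypothetical name yourinitfcn is on the MATLAB path, the assignment might look like this sketch:

```matlab
% Assign a custom network initialization function (hypothetical name).
net.initFcn = 'yourinitfcn';

% The custom function is then called whenever the network is initialized.
net = init(net);
```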
To be valid, a network initialization function must take and return a network.
Your function can set the network's weight and bias values in any way you want. However, you should be careful not to alter any other properties, or to set the weight matrices and bias vectors to the wrong sizes. For performance reasons, init turns off the normal type checking for network properties before calling your initialization function. So if you set a weight matrix to the wrong size, it won't immediately generate an error, but it could cause problems later when you try to simulate or train the network.
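As a rough sketch, a network initialization function might loop over the network's layers and assign small random values to every connected weight and bias. The function name and the choice of random range are assumptions, and input delays are ignored for simplicity:

```matlab
function net = yourinitfcn(net)
% YOURINITFCN Hypothetical network initialization function.
% Sets every connected bias vector and weight matrix to small random
% values of the correct size (input delays are ignored for simplicity).
for i = 1:net.numLayers
  if net.biasConnect(i)
    net.b{i} = rand(net.layers{i}.size,1)*0.2 - 0.1;
  end
  for j = 1:net.numInputs
    if net.inputConnect(i,j)
      net.IW{i,j} = rand(net.layers{i}.size,net.inputs{j}.size)*0.2 - 0.1;
    end
  end
  for j = 1:net.numLayers
    if net.layerConnect(i,j)
      net.LW{i,j} = rand(net.layers{i}.size,net.layers{j}.size)*0.2 - 0.1;
    end
  end
end
```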
You can examine the implementation of the toolbox function initlay if you are interested in creating your own network initialization function.
Layer Initialization Functions
The layer initialization function sets all the weights and biases of a layer to values suitable as a starting point for training or adaptation.
Once defined, you can assign your layer initialization function to a layer of a network. For example, you can assign the layer initialization function yourlif to the second layer of a network.
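Assuming a custom function with the hypothetical name yourlif is on the MATLAB path, the assignment could look like this:

```matlab
% Hypothetical example: use yourlif to initialize the second layer.
net.layers{2}.initFcn = 'yourlif';
```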
Layer initialization functions are called to initialize a layer only if the network initialization function (net.initFcn) is set to the toolbox function initlay. If this is the case, then your function is used to initialize the layer whenever you initialize your network with init.
To be valid, a layer initialization function must take a network and a layer index i, and return the network after initializing the ith layer.
Your function can then set the ith layer's weight and bias values in any way you see fit. However, you should be careful not to alter any other properties, or to set the weight matrices and bias vectors to the wrong size.
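A minimal sketch of such a function, assuming the hypothetical name yourlif, might set the ith layer's bias and incoming weights to zeros of the correct size (input delays are again ignored for simplicity):

```matlab
function net = yourlif(net,i)
% YOURLIF Hypothetical layer initialization function.
% Sets the bias and all incoming weights of layer i to zeros of the
% correct size, leaving every other layer untouched.
if net.biasConnect(i)
  net.b{i} = zeros(net.layers{i}.size,1);
end
for j = 1:net.numInputs
  if net.inputConnect(i,j)
    net.IW{i,j} = zeros(net.layers{i}.size,net.inputs{j}.size);
  end
end
for j = 1:net.numLayers
  if net.layerConnect(i,j)
    net.LW{i,j} = zeros(net.layers{i}.size,net.layers{j}.size);
  end
end
```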
If you are interested in creating your own layer initialization function, you can examine the implementations of the toolbox functions initwb and initnw.
Weight and Bias Initialization Functions
The weight and bias initialization function sets a weight matrix or bias vector to values suitable as a starting point for training or adaptation.
Once defined, you can assign your initialization function to any weight and bias in a network. For example, you can assign the weight and bias initialization function yourwbif to the second layer's bias, and to the weight coming from the first input to the second layer.
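Assuming a custom function with the hypothetical name yourwbif is on the MATLAB path, those assignments could look like this:

```matlab
% Hypothetical example: assign yourwbif to the second layer's bias
% and to the weight coming from the first input to the second layer.
net.biases{2}.initFcn = 'yourwbif';
net.inputWeights{2,1}.initFcn = 'yourwbif';
```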
Weight and bias initialization functions are called only if the network initialization function (net.initFcn) is set to the toolbox function initlay, and the layer's initialization function (net.layers{i}.initFcn) is set to the toolbox function initwb. If this is the case, then your function is used to initialize the weights and biases it is assigned to whenever you initialize your network with init.
To be valid, a weight and bias initialization function must take the number of neurons in a layer, S, and a two-column matrix PR of R rows defining the minimum and maximum values of the R inputs, and return a new weight matrix W, where
S is the number of neurons in the layer.
PR is an R-by-2 matrix defining the minimum and maximum values of the R inputs.
W is a new S-by-R weight matrix.
Your function also needs to return a new S-by-1 bias vector when it is called with only the number of neurons S.
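Putting the two calling conventions together, a weight and bias initialization function might look like the following sketch (the name yourwbif and the choice of random range are assumptions):

```matlab
function w = yourwbif(s,pr)
% YOURWBIF Hypothetical weight and bias initialization function.
% W = YOURWBIF(S,PR) returns an S-by-R weight matrix of small random
%   values, where PR is an R-by-2 matrix of input minima and maxima.
% B = YOURWBIF(S) returns an S-by-1 bias vector of small random values.
if nargin == 1
  w = rand(s,1)*0.2 - 0.1;          % bias vector
else
  r = size(pr,1);                   % number of inputs
  w = rand(s,r)*0.2 - 0.1;          % weight matrix
end
```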
To see how an example custom weight and bias initialization function works, type help mywbif. To see how mywbif was implemented, type type mywbif. You can use mywbif as a template to create your own weight and bias initialization function.
© 1994-2005 The MathWorks, Inc.