trapca
Principal component transformation
Syntax

[Ptrans] = trapca(P,transMat)

Description

trapca preprocesses the network input training set by applying the principal component transformation that was previously computed by prepca. This function must be used whenever a network has been trained on data normalized by prepca; all subsequent inputs to the network need to be transformed using the same normalization.

[Ptrans] = trapca(P,transMat) takes these inputs,

P - matrix of input (column) vectors, already normalized to zero mean (for example, by prestd or trastd)
transMat - transformation matrix returned by prepca

and returns

Ptrans - the transformed data set
Examples
Here is the code to perform a principal component analysis and retain only those components that contribute more than two percent to the variance in the data set. prestd is called first to create zero mean data, which is needed for prepca.
p = [-1.5 -0.58 0.21 -0.96 -0.79; -2.2 -0.87 0.31 -1.4 -1.2];
t = [-0.08 3.4 -0.82 0.69 3.1];
[pn,meanp,stdp,tn,meant,stdt] = prestd(p,t);
[ptrans,transMat] = prepca(pn,0.02);
net = newff(minmax(ptrans),[5 1],{'tansig','purelin'},'trainlm');
net = train(net,ptrans,tn);
If we then receive new inputs to apply to the trained network, we use trastd and trapca to transform them first. The transformed inputs can then be used to simulate the previously trained network. The network output must also be unnormalized using poststd.
p2 = [1.5 -0.8; 0.05 -0.3];
[p2n] = trastd(p2,meanp,stdp);
[p2trans] = trapca(p2n,transMat);
an = sim(net,p2trans);
[a] = poststd(an,meant,stdt);
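As a quick consistency check (an illustrative sketch, not part of the original example, using the pn, ptrans, and transMat variables created above), applying trapca to the normalized training data should reproduce the principal component data returned by prepca:

ptrans2 = trapca(pn,transMat);       % re-apply the stored transformation
max(max(abs(ptrans2 - ptrans)))      % should be numerically zero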
Algorithm
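In essence, trapca multiplies the (normalized) input vectors by the transformation matrix computed by prepca; a one-line sketch of the operation, using the argument names from the syntax above:

Ptrans = transMat*P;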
See Also

prestd, premnmx, prepca, trastd, tramnmx