
MATLAB: BP Neural Networks

 

Two key factors in BP network design:

1. Accuracy

2. Training time, which depends on:

1) the number of training epochs

2) the computation time per epoch
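In MATLAB both factors are controlled through the training parameters of a created network; a minimal sketch (the specific values here are arbitrary assumptions):

net.trainParam.goal=1e-5;     % performance (MSE) goal: a lower goal means higher accuracy but longer training
net.trainParam.epochs=1000;   % maximum number of epochs: bounds the total training time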

 

The XOR problem cannot be solved by a linear neural network; a nonlinear network is required, and a BP network solves it easily.

 

>> P=[0 0 1 1;0 1 0 1]

P =

     0     0     1     1
     0     1     0     1

>> T=[0 1 1 0]

T =

     0     1     1     0

 

>> net=newff(minmax(P),[5 1],{'tansig','purelin'},'trainlm')

>> net=train(net,P,T)

>> a=sim(net,P)

a =

    0.0000    1.0000    1.0000    0.0000


In R2010b the usage is as follows:

>> P=[0 0 1 1;0 1 0 1]

P =

     0     0     1     1
     0     1     0     1

>> T=[0 1 1 0]

T =

     0     1     1     0

>> net1=newff(P,T,5)

>> net1.divideFcn=''

>> net1=train(net1,P,T)

>> a=net1(P)

a =

   -0.0000    1.0000    1.0000   -0.0000

>>

(Setting net1.divideFcn to '' disables the default dividerand data division; with only four samples, randomly holding some out for validation/testing would make training stop early and fail to learn XOR.)


 help newff

 newff Create a feed-forward backpropagation network.
 
   Obsoleted in R2010b NNET 7.0.  Last used in R2010a NNET 6.0.4.
   The recommended function is feedforwardnet.
 
   Syntax
 
     net = newff(P,T,S)
     net = newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF)
 
   Description
 
     newff(P,T,S) takes,
       P  - RxQ1 matrix of Q1 representative R-element input vectors.
       T  - SNxQ2 matrix of Q2 representative SN-element target vectors.
       Si  - Sizes of N-1 hidden layers, S1 to S(N-1), default = [].
             (Output layer size SN is determined from T.)
     and returns an N layer feed-forward backprop network.
 
     newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF) takes optional inputs,
       TFi - Transfer function of ith layer. Default is 'tansig' for
             hidden layers, and 'purelin' for output layer.
       BTF - Backprop network training function, default = 'trainlm'.
       BLF - Backprop weight/bias learning function, default = 'learngdm'.
       PF  - Performance function, default = 'mse'.
       IPF - Row cell array of input processing functions.
             Default is {'fixunknowns','remconstantrows','mapminmax'}.
       OPF - Row cell array of output processing functions.
             Default is {'remconstantrows','mapminmax'}.
       DDF - Data division function, default = 'dividerand';
     and returns an N layer feed-forward backprop network.
 
     The transfer functions TF{i} can be any differentiable transfer
     function such as TANSIG, LOGSIG, or PURELIN.
 
     The training function BTF can be any of the backprop training
     functions such as TRAINLM, TRAINBFG, TRAINRP, TRAINGD, etc.
 
     *WARNING*: TRAINLM is the default training function because it
     is very fast, but it requires a lot of memory to run.  If you get
     an "out-of-memory" error when training try doing one of these:
 
     (1) Slow TRAINLM training, but reduce memory requirements, by
         setting NET.efficiency.memoryReduction to 2 or more. (See HELP TRAINLM.)
     (2) Use TRAINBFG, which is slower but more memory efficient than TRAINLM.
     (3) Use TRAINRP which is slower but more memory efficient than TRAINBFG.
 
     The learning function BLF can be either of the backpropagation
     learning functions such as LEARNGD, or LEARNGDM.
 
     The performance function can be any of the differentiable performance
     functions such as MSE or MSEREG.
 
   Examples
 
     [inputs,targets] = simplefitdata;
     net = newff(inputs,targets,20);
     net = train(net,inputs,targets);
     outputs = net(inputs);
     errors = outputs - targets;
     perf = perform(net,outputs,targets)
 
   Algorithm
 
     Feed-forward networks consist of Nl layers using the DOTPROD
     weight function, NETSUM net input function, and the specified
     transfer functions.
 
     The first layer has weights coming from the input.  Each subsequent
     layer has a weight coming from the previous layer.  All layers
     have biases.  The last layer is the network output.
 
     Each layer's weights and biases are initialized with INITNW.
 
     Adaption is done with TRAINS which updates weights with the
     specified learning function. Training is done with the specified
     training function. Performance is measured according to the specified
     performance function.
 
   See also newcf, newelm, sim, init, adapt, train, trains

 

We can also use feedforwardnet as a replacement for newff.

 

>> net2=feedforwardnet(2)

>> net2.divideFcn=''

>> net2=train(net2,P,T)

>> a=net2(P)

a =

   -0.0000    1.0000    1.0000   -0.0000


>> help feedforwardnet
 feedforwardnet Feedforward neural network.
 
   Two (or more) layer feedforward networks can implement any finite
   input-output function arbitrarily well given enough hidden neurons.
 
   feedforwardnet(hiddenSizes,trainFcn) takes a 1xN vector of N hidden
   layer sizes, and a backpropagation training function, and returns
   a feed-forward neural network with N+1 layers.
 
   Input, output and output layers sizes are set to 0.  These sizes will
   automatically be configured to match particular data by train. Or the
   user can manually configure inputs and outputs with configure.
 
   Defaults are used if feedforwardnet is called with fewer arguments.
   The default arguments are (10,'trainlm').
 
   Here a feed-forward network is used to solve a simple fitting problem:
 
     [x,t] = simplefit_dataset;
     net = feedforwardnet(10);
     net = train(net,x,t);
     view(net)
     y = net(x);
     perf = perform(net,t,y)
 
   See also fitnet, patternnet, cascadeforwardnet.

    Reference page in Help browser
       doc feedforwardnet

 

 

  1. Feed-forward network creation functions: newcf, newff, newfftd
  2. Transfer (activation) functions: logsig, dlogsig, tansig, dtansig, purelin, dpurelin
  3. Learning functions: learngd, learngdm
  4. Performance functions: mse, msereg

BP Network Creation Functions

Before introducing these functions, here are the variables that appear in the BP network creation functions and their meanings:

  • PR: an Rx2 matrix formed from the minimum and maximum of each input element;
  • $$S_i$$: the size of layer i, for N layers in total;
  • $$TF_i$$: the transfer function of layer $$i$$, default 'tansig';
  • BTF: the network training function, default 'trainlm';
  • BLF: the weight/bias learning function, default 'learngdm';
  • PF: the network performance function, default 'mse'.
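In practice PR is rarely written out by hand; the minmax function computes it directly from the input data. A minimal sketch using the XOR inputs P from above:

PR=minmax(P)   % for P=[0 0 1 1;0 1 0 1] this returns [0 1;0 1], one row of [min max] per input element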

Function newcf

This function creates a cascade-forward BP network. Calling syntax:

net=newcf
net=newcf(PR, [S1, S2...SN], {TF1 TF2 ... TFN}, BTF, BLF, PF)

Here net=newcf (with no arguments) opens a dialog box for creating the network interactively.
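For example, a cascade-forward counterpart of the XOR network built earlier could be created like this (a minimal sketch; the layer sizes are the same assumptions as before):

net=newcf(minmax(P),[5 1],{'tansig','purelin'},'trainlm');   % each layer also receives weights directly from the input
net=train(net,P,T);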

Function newff

This function creates a BP network. Calling syntax:

net=newff
net=newff(PR, [S1, S2...SN], {TF1 TF2 ... TFN}, BTF, BLF, PF)

Here net=newff (with no arguments) opens a dialog box for creating the network interactively.

Function newfftd

This function creates a feed-forward network with input delays. Calling syntax:

net=newfftd
net=newfftd(PR, ID, [S1, S2...SN], {TF1 TF2 ... TFN}, BTF, BLF, PF)

where ID is the input delay vector; net=newfftd (with no arguments) opens a dialog box for creating the network interactively.
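A minimal sketch (the delay vector [0 1] is an assumption, meaning the first layer sees both the current and the previous input sample):

net=newfftd(minmax(P),[0 1],[5 1],{'tansig','purelin'},'trainlm');   % input-delayed feed-forward network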

Neuron Transfer Functions

The transfer (activation) function is a key component of a BP neural network and must be continuously differentiable. BP networks commonly use the log-sigmoid, tan-sigmoid, and linear functions.

Function logsig

logsig is the log-sigmoid transfer function. Syntax:

A=logsig(N)
info=logsig(code)

where N is a matrix of Q S-dimensional input column vectors; the return value A lies in the interval (0,1); info returns different information depending on the value of code (see help logsig).
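A quick numerical check (a minimal sketch; the values follow from $$logsig(n)=\frac{1}{1+e^{-n}}$$):

N=[-2 0 2];
A=logsig(N)   % approximately [0.1192 0.5000 0.8808]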

Function dlogsig

dlogsig is the derivative of logsig. Usage:

dA_dN=dlogsig(N,A)

where N is the SxQ network input; A is the SxQ network output; the return value dA_dN is the derivative of the output with respect to the input. The function computes $$d=a(1-a)$$. Example: suppose one layer of a BP network has 3 neurons with log-sigmoid transfer functions. Computing the derivatives with dlogsig in MATLAB:

N=[1, 6, 2]'        % 3x1 vector of net inputs, one per neuron
A=logsig(N)         % the corresponding layer outputs, each in (0,1)
da_dn=dlogsig(N,A)  % the derivatives, computed as A.*(1-A)

Function tansig

tansig is the hyperbolic tangent sigmoid transfer function. Usage: A=tansig(N) and info=tansig(code), where N is a matrix of Q S-dimensional input column vectors; the return value A lies in the interval (-1, 1); info returns different information depending on code (see the help files). The function computes: $$a=\frac{2}{1+e^{-2n}}-1$$
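A quick numerical check (a minimal sketch; tansig is mathematically identical to tanh):

N=[-1 0 1];
A=tansig(N)   % approximately [-0.7616 0 0.7616], the same as tanh(N)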

Function dtansig

dtansig is the derivative of tansig. Calling syntax:

dA_dN=dtansig(N,A)

where the parameters have the same meanings as for dlogsig. The function computes $$d=1-a^2$$.

Function purelin

purelin is the linear transfer function. Usage: A=purelin(N) and info=purelin(code), where N is a matrix of Q S-dimensional input column vectors; the return value A=N; info again returns different information depending on the value of code. The function computes $$purelin(n)=n$$.

Function dpurelin

dpurelin is the derivative of purelin. Calling syntax: dA_dN=dpurelin(N,A), where the parameters have the same meanings as for dlogsig.

BP Network Learning Functions

Function learngd: the gradient-descent weight/bias learning function. It computes the weight or bias change from the neuron's input and error together with the weight/bias learning rate. Function learngdm: the gradient-descent-with-momentum learning function. It computes the weight or bias change from the neuron's input and error, the learning rate, and the momentum constant. See the help files for detailed usage.
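For illustration, a minimal sketch of selecting learngdm as the BLF when creating the XOR network with the old newff syntax (the layer sizes and parameter values are assumptions):

net=newff(minmax(P),[5 1],{'tansig','purelin'},'traingdm','learngdm','mse');
net.trainParam.lr=0.05;   % learning rate used by gradient-descent training
net.trainParam.mc=0.9;    % momentum constant used by traingdm
net=train(net,P,T);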

BP Network Training Functions

Function trainbfg: the BFGS quasi-Newton BP training function. Besides BP networks, it can train any network whose transfer functions are differentiable with respect to the weights and inputs. Function traingd: the gradient-descent BP training function. Function traingdm: the gradient-descent-with-momentum BP training function. There are many other similar training functions, which will not be listed one by one here; see the help files.
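Switching training functions only requires changing net.trainFcn before calling train; a minimal sketch with the XOR data from above:

net=newff(P,T,5);
net.divideFcn='';
net.trainFcn='trainbfg';   % BFGS quasi-Newton instead of the default trainlm
net=train(net,P,T);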

Performance Functions

Functions mse and msereg: mse is the mean-squared-error performance function; msereg adds to mse a term for the mean squared value of the network's weights and biases, which encourages smaller weights and biases and thus forces the network's response to be smoother.
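A minimal sketch of using msereg (the performance ratio value is an assumption; it weights the error term against the weight term):

net=newff(minmax(P),[5 1],{'tansig','purelin'},'trainlm','learngdm','msereg');
net.performParam.ratio=0.5;   % perf = 0.5*mse + 0.5*msw; a smaller ratio favors smaller weights
net=train(net,P,T);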
