
MATLAB Neural Networks: Solving XOR with a Custom Multilayer Perceptron (2)

 

Continuing with the definition of the network's neurons:

 

net.inputs{i}.range


This property defines the range of each element of the ith network input.

It can be set to any Ri x 2 matrix, where Ri is the number of elements in the input (net.inputs{i}.size), and each element in column 1 is less than the element next to it in column 2.

Each jth row defines the minimum and maximum values of the jth input element, in that order:
net.inputs{i}.range(j,:)

 

Uses.   Some initialization functions use input ranges to find appropriate initial values for input weight matrices.

Side Effects.   Whenever the number of rows in this property is altered, the input size, processedSize, and processedRange change to remain consistent. The sizes of any weights coming from this input and the dimensions of the weight matrices also change.

 

>> net.inputs{1}.range=[0 1;0 1]

net =

    Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 2
       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {2x1 cell} of layers
           outputs: {1x2 cell} containing 1 output
            biases: {2x1 cell} containing 2 biases
      inputWeights: {2x1 cell} containing 1 input weight
      layerWeights: {2x2 cell} containing 1 layer weight

    functions:

          adaptFcn: (none)
         divideFcn: (none)
       gradientFcn: (none)
           initFcn: (none)
        performFcn: (none)
          plotFcns: {}
          trainFcn: (none)

    parameters:

        adaptParam: (none)
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: (none)

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    other:

              name: ''
          userdata: (user information)

>>
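As a quick check of the side effects described above, the input size should now track the number of rows of the range matrix. A minimal sketch (the expected values are inferred from the 2x2 range just assigned):

net.inputs{1}.size    % expected: 2, one element per row of the range matrix
net.inputs{1}.range   % expected: [0 1; 0 1]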

======

 

 

net.layers{i}.size


This property defines the number of neurons in the ith layer. It can be set to 0 or a positive integer.

Side Effects.   Whenever this property is altered, the sizes of any input weights going to the layer (net.inputWeights{i,:}.size), any layer weights going to the layer (net.layerWeights{i,:}.size) or coming from the layer (net.layerWeights{:,i}.size), and the layer's bias (net.biases{i}.size), change.

The dimensions of the corresponding weight matrices (net.IW{i,:}, net.LW{i,:}, net.LW{:,i}), and biases (net.b{i}) also change.

Changing this property also changes the size of the layer's output (net.outputs{i}.size) and target (net.targets{i}.size) if they exist.

Finally, when this property is altered, the dimensions of the layer's neurons (net.layers{i}.dimensions) are set to the same value. (This results in a one-dimensional arrangement of neurons. If another arrangement is required, set the dimensions property directly instead of using size.)

======= 

>> net.layers{1}.size=2

net =

    Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 2
       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {2x1 cell} of layers
           outputs: {1x2 cell} containing 1 output
            biases: {2x1 cell} containing 2 biases
      inputWeights: {2x1 cell} containing 1 input weight
      layerWeights: {2x2 cell} containing 1 layer weight

    functions:

          adaptFcn: (none)
         divideFcn: (none)
       gradientFcn: (none)
           initFcn: (none)
        performFcn: (none)
          plotFcns: {}
          trainFcn: (none)

    parameters:

        adaptParam: (none)
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: (none)

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    other:

              name: ''
          userdata: (user information)

>>
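Following the side effects listed above, resizing layer 1 to two neurons should also resize the weights and bias attached to it. A brief sketch of what one might verify (the expected dimensions are assumptions based on the 2-element input and 2-neuron layer):

size(net.IW{1,1})   % expected: 2x2, i.e. 2 neurons by 2 input elements
size(net.b{1})      % expected: 2x1, one bias per neuron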

 

 

=====

net.layers{i}.initFcn


This property defines which of the layer initialization functions is used to initialize the ith layer. It takes effect only when the network initialization function (net.initFcn) is set to initlay, in which case the function indicated by this property initializes the layer's weights and biases.

For a list of functions, type
help nninit
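Note that, per the description above, a layer-level initFcn only takes effect once the network-level function is set to initlay. A minimal sketch of the complete pattern (setting net.initFcn and calling init are additions not shown in the transcript below):

net.initFcn = 'initlay';           % delegate initialization to the layers
net.layers{1}.initFcn = 'initnw';  % Nguyen-Widrow initialization for layer 1
net = init(net);                   % recompute all weights and biases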

=====

 

>> net.layers{1}.initFcn='initnw'

net =

    Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 2
       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {2x1 cell} of layers
           outputs: {1x2 cell} containing 1 output
            biases: {2x1 cell} containing 2 biases
      inputWeights: {2x1 cell} containing 1 input weight
      layerWeights: {2x2 cell} containing 1 layer weight

    functions:

          adaptFcn: (none)
         divideFcn: (none)
       gradientFcn: (none)
           initFcn: (none)
        performFcn: (none)
          plotFcns: {}
          trainFcn: (none)

    parameters:

        adaptParam: (none)
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: (none)

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    other:

              name: ''
          userdata: (user information)

>>

 

>> net.layers{2}.size=1

>> net.layers{2}.initFcn='initnw'

>> net.layers{2}.transferFcn='hardlim'

net =

    Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 2
       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {2x1 cell} of layers
           outputs: {1x2 cell} containing 1 output
            biases: {2x1 cell} containing 2 biases
      inputWeights: {2x1 cell} containing 1 input weight
      layerWeights: {2x2 cell} containing 1 layer weight

    functions:

          adaptFcn: (none)
         divideFcn: (none)
       gradientFcn: (none)
           initFcn: (none)
        performFcn: (none)
          plotFcns: {}
          trainFcn: (none)

    parameters:

        adaptParam: (none)
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: (none)

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    other:

              name: ''
          userdata: (user information)

>>

======

 

 

net.layers{i}.transferFcn


This property defines which of the transfer functions is used to calculate the ith layer's output, given the layer's net input, during simulation and training.

For a list of functions, type
help nntransfer
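For example, hardlim (used for layer 2 above) returns 1 wherever the net input is greater than or equal to 0, and 0 otherwise. A small sketch with illustrative inputs:

n = [-0.5 0 0.7];   % sample net inputs
a = hardlim(n)      % returns [0 1 1]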

======

 

net.adaptFcn


This property defines the function to be used when the network adapts. It can be set to the name of any network adapt function. The network adapt function is used to perform adaptation whenever adapt is called.
[net,Y,E,Pf,Af] = adapt(NET,P,T,Pi,Ai)

 

For a list of functions, type
help nntrain

Side Effects.   Whenever this property is altered, the network's adaptation parameters (net.adaptParam) are set to contain the parameters and default values of the new function.
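A minimal sketch of how adapt might then be called on the XOR patterns (the .passes value is illustrative; the transcript below confirms that 'trains' exposes this parameter):

P = [0 0 1 1; 0 1 0 1];      % XOR inputs, one pattern per column
T = [0 1 1 0];               % XOR targets
net.adaptParam.passes = 10;  % passes through the whole data set
[net,Y,E] = adapt(net,P,T);  % incremental adaptation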

===

>> net.adaptFcn='trains'

net =

    Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 2
       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

        numOutputs: 1  (read-only)
    numInputDelays: 0  (read-only)
    numLayerDelays: 0  (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {2x1 cell} of layers
           outputs: {1x2 cell} containing 1 output
            biases: {2x1 cell} containing 2 biases
      inputWeights: {2x1 cell} containing 1 input weight
      layerWeights: {2x2 cell} containing 1 layer weight

    functions:

          adaptFcn: 'trains'
         divideFcn: (none)
       gradientFcn: (none)
           initFcn: (none)
        performFcn: (none)
          plotFcns: {}
          trainFcn: (none)

    parameters:

        adaptParam: .passes
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: (none)

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    other:

              name: ''
          userdata: (user information)

>>

==========

 

 

net.performFcn


This property defines the function used to measure the network's performance. You can set it to the name of any of the performance functions. The performance function is used to calculate network performance during training whenever train is called.
[net,tr] = train(NET,P,T,Pi,Ai)

 

For a list of functions, type
help nnperformance

 

Side Effects.   Whenever this property is altered, the network's performance parameters (net.performParam) are set to contain the parameters and default values of the new function.
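For instance, with 'mse' the reported performance is just the mean squared error of the network errors. A sketch, assuming Y holds simulated outputs for targets T:

E = T - Y;       % errors between targets and outputs
perf = mse(E)    % the quantity train reports as performance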

==========

>> net.performFcn='mse'

======

 

 

======

>> net.trainFcn='trainlm'
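With the training function chosen, the remaining steps might look like the sketch below. One caveat: gradient-based methods such as trainlm assume differentiable transfer functions, so the hardlim output layer set earlier may need to be swapped (e.g., for logsig) before training converges; the epoch count is illustrative.

P = [0 0 1 1; 0 1 0 1];       % XOR inputs
T = [0 1 1 0];                % XOR targets
net.trainParam.epochs = 100;  % illustrative stopping criterion
net = init(net);              % initialize weights and biases
net = train(net,P,T);         % Levenberg-Marquardt training
Y = sim(net,P)                % should approximate [0 1 1 0]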
