always errs on the side of caution. Since determining the correct attachment point of
prepositional phrases, relative clauses, and adverbial modifiers almost always requires
extrasyntactic information, Fidditch pursues the very conservative strategy of always
leaving such constituents unattached, even if only one attachment point is syntacti-
cally possible. However, Fidditch does indicate its best guess concerning a fragment's
attachment site by the fragment's depth of embedding. Moreover, it attaches preposi-
tional phrases beginning with of if the preposition immediately follows a noun; thus,
tale of… and boatload of… are parsed as single constituents, while first of… is not.
Since Fidditch lacks a large verb lexicon, it cannot decide whether some constituents
serve as adjuncts or arguments and hence leaves subordinate clauses such as
infinitives as separate fragments.

322 Mitchell P. Marcus et al.
Building a Large Annotated Corpus of English

Note further that Fidditch creates adjective phrases only
when it determines that more than one lexical item belongs in the ADJP. Finally, as
is well known, the scope of conjunctions and other coordinate structures can only
be determined given the richest forms of contextual information; here again, Fidditch
simply turns out a string of tree fragments around any conjunction. Because all de-
cisions within Fidditch are made locally, all commas (which often signal conjunction)
must break the input into separate chunks.
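Fidditch's of-attachment behavior described above can be sketched over a stream of (word, POS tag) pairs. This is an illustrative reconstruction, not Fidditch's actual implementation; the tag names follow the usual Penn convention that noun tags begin with NN.

```python
def attaches_of_pp(tokens, i):
    """Fidditch-style heuristic: attach a PP headed by 'of' only when
    the preposition immediately follows a noun (NN, NNS, ...)."""
    word, _ = tokens[i]
    return (word.lower() == "of"
            and i > 0
            and tokens[i - 1][1].startswith("NN"))

# 'tale of ...' -- preposition follows a noun, so the PP attaches
tagged = [("the", "DT"), ("tale", "NN"), ("of", "IN"), ("warriors", "NNS")]
print(attaches_of_pp(tagged, 2))   # True

# 'first of ...' -- 'first' is not tagged as a noun here, so it does not
tagged2 = [("the", "DT"), ("first", "JJ"), ("of", "IN"), ("their", "PRP$")]
print(attaches_of_pp(tagged2, 2))  # False
```

This mirrors why tale of… and boatload of… come out as single constituents while first of… is left as a separate fragment.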
The original design of the Treebank called for a level of syntactic analysis compa-
rable to the skeletal analysis used by the Lancaster Treebank, but a limited experiment
was performed early in the project to investigate the feasibility of providing greater
levels of structural detail. While the results were somewhat unclear, there was ev-
idence that annotators could maintain a much faster rate of hand correction if the
parser output was simplified in various ways, reducing the visual complexity of the
tree representations and eliminating a range of minor decisions. The key results of this
experiment were as follows:
Annotators take substantially longer to learn the bracketing task than the
POS tagging task, with substantial increases in speed occurring even
after two months of training.
Annotators can correct the full structure provided by Fidditch at an
average speed of approximately 375 words per hour after three weeks
and 475 words per hour after six weeks.
Reducing the output from the full structure shown in Figure 3 to a more
skeletal representation similar to that used by the Lancaster UCREL
Treebank Project increases annotator productivity by approximately
100-200 words per hour.
It proved to be very difficult for annotators to distinguish between a
verb's arguments and adjuncts in all cases. Allowing annotators to
ignore this distinction when it is unclear (attaching constituents high)
increases productivity by approximately 150-200 words per hour.
Informal examination of later annotation showed that forced distinctions
cannot be made consistently.
As a result of this experiment, the originally proposed skeletal representation was
adopted, without a forced distinction between arguments and adjuncts. Even after
extended training, performance varies markedly by annotator, with speeds on the task
of correcting skeletal structure without requiring a distinction between arguments and
adjuncts ranging from approximately 750 words per hour to well over 1,000 words
per hour after three or four months' experience. The fastest annotators work in bursts
of well over 1,500 words per hour alternating with brief rests. At an average rate
of 750 words per hour, a team of five part-time annotators annotating three hours a
day should maintain an output of about 2.5 million words a year of "treebanked"
sentences, with each sentence corrected once.
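The throughput claim above can be checked with simple arithmetic. The paper gives the hourly rate, team size, and hours per day; the figure of roughly 220 working days per year is an assumption supplied here to make the total come out.

```python
# Back-of-the-envelope check of the stated annotation rate.
words_per_hour = 750          # average rate per annotator (from the text)
annotators = 5                # part-time team size (from the text)
hours_per_day = 3             # hours annotated per day (from the text)
working_days_per_year = 220   # ASSUMED -- not stated in the paper

words_per_year = (words_per_hour * annotators
                  * hours_per_day * working_days_per_year)
print(f"{words_per_year:,} words/year")  # 2,475,000 -- about 2.5 million
```

At 11,250 words per day, the quoted 2.5 million words a year corresponds to roughly 220 annotation days.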
It is worth noting that experienced annotators can proofread previously corrected
material at very high speeds. A parsed subcorpus of over 1 million words was recently
proofread at an average speed of approximately 4,000 words per annotator per hour.
At this rate of productivity, annotators are able to find and correct gross errors in
parsing, but do not have time to check, for example, whether they agree with all
prepositional phrase attachments.
323 Computational Linguistics
Volume 19, Number 2
((S
(NP (ADJP Battle-tested industrial)
managers)
(?here)
(?always)
(VP buck))
(?(PP up
(NP nervous newcomers)))
(?(PP with
(NP the tale
(PP of
(NP the
(ADJP first))))))
(?(PP of
(NP their countrymen)))
(?(S (NP*)
to
(VP visit
(NP Mexico))))
(?,)
(?(NP a boatload
(PP of
(NP warriors))
(VP blown
(?ashore)
(NP 375 years))))
(?ago)
(?.))
Figure 4
Sample bracketed text after simplification, before correction.
The process that creates the skeletal representations to be corrected by the anno-
tators simplifies and flattens the structures shown in Figure 3 by removing POS tags,
nonbranching lexical nodes, and certain phrasal nodes, notably NBAR. The output of
the first automated stage of the bracketing task is shown in Figure 4.
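The simplification step just described can be sketched over trees represented as nested lists (label first, then children). The exact node inventory Fidditch's output filter drops is an assumption here; only NBAR and nonbranching lexical nodes are handled, as the text mentions.

```python
def flatten(tree, drop_labels=frozenset({"NBAR"})):
    """Simplify a tree by splicing out selected phrasal nodes (e.g. NBAR)
    and nonbranching lexical nodes, as in the Figure 3 -> Figure 4 step.
    Returns a list of resulting subtrees (usually one)."""
    if isinstance(tree, str):            # a bare token
        return [tree]
    label, children = tree[0], tree[1:]
    new_children = []
    for child in children:
        new_children.extend(flatten(child, drop_labels))
    # Drop this bracket if its label is on the drop list, or if it
    # nonbranchingly dominates a single lexical item.
    if label in drop_labels or (len(new_children) == 1
                                and isinstance(new_children[0], str)):
        return new_children
    return [[label] + new_children]

# (NP (NBAR (ADJP nervous) newcomers))  ->  (NP nervous newcomers)
tree = ["NP", ["NBAR", ["ADJP", "nervous"], "newcomers"]]
print(flatten(tree))  # [['NP', 'nervous', 'newcomers']]
```

The single-word ADJP is spliced out as a nonbranching lexical node, matching the flat (NP nervous newcomers) seen in Figure 4.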
Annotators correct this simplified structure using a mouse-based interface. Their
primary job is to "glue" fragments together, but they must also correct incorrect parses
and delete some structure. Single mouse clicks perform the following tasks, among
others. The interface correctly reindents the structure whenever necessary.
Attach constituents labeled ?. This is done by pressing down the
appropriate mouse button on or immediately after the ?, moving the
mouse onto or immediately after the label of the intended parent and
releasing the mouse. Attaching constituents automatically deletes their ?
label.
Promote a constituent up one level of structure, making it a sibling of its
current parent.
Delete a pair of constituent brackets.
((S
(NP Battle-tested industrial managers
here)
always
(VP buck
up
(NP nervous newcomers)
(PP with
(NP the tale
(PP of
(NP (NP the
(ADJP first
(PP of
(NP their countrymen)))
(S (NP*)
to
(VP visit
(NP Mexico))))
(NP (NP a boatload
(PP of
(NP (NP warriors)
(VP-1 blown
ashore
(ADVP (NP 375 years)
ago)))))
(VP-1 *pseudo-attach*))))))))
.)
Figure 5
Sample bracketed text after correction.
Create a pair of brackets around a constituent. This is done by typing a
constituent tag and then sweeping out the intended constituent with the
mouse. The tag is checked to assure that it is a legal label.
Change the label of a constituent. The new tag is checked to assure that
it is legal.
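The editing operations listed above can be sketched as manipulations of nested-list trees. The representation and function names are illustrative only; the Treebank's mouse-based tool is not described at the code level in the paper.

```python
def attach(fragments, i, parent):
    """Glue fragment i onto parent as its last child, stripping the
    leading '?' from its label (the attach operation)."""
    node = fragments.pop(i)
    node[0] = node[0].lstrip("?")
    parent.append(node)

def promote(parent, grandparent, child):
    """Promote child one level, making it a sibling of its parent."""
    parent.remove(child)
    grandparent.insert(grandparent.index(parent) + 1, child)

def delete_brackets(parent, node):
    """Delete a pair of constituent brackets, splicing the node's
    children into its parent."""
    i = parent.index(node)
    parent[i:i + 1] = node[1:]   # node[0] is the discarded label

# Attach the unattached fragment (?NP a boatload) under the VP:
fragments = [["?NP", "a", "boatload"]]
vp = ["VP", "buck"]
attach(fragments, 0, vp)
print(vp)  # ['VP', 'buck', ['NP', 'a', 'boatload']]
```

Deletion splicing is what makes removing a spurious bracket a single quick gesture, a point the ADJP discussion below relies on.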
The bracketed text after correction is shown in Figure 5. The fragments are now
connected together into one rooted tree structure. The result is a skeletal analysis in
that much syntactic detail is left unannotated. Most prominently, all internal structure
of the NP up through the head and including any single-word post-head modifiers is
left unannotated.
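Bracketed structures like those in Figures 4 and 5 are just parenthesized label/token sequences, and can be read back into nested lists with a small recursive parser. This is a minimal sketch assuming balanced parentheses and whitespace-separated tokens; it does not handle ? markers or the full Treebank format.

```python
import re

def parse_brackets(text):
    """Parse Treebank-style bracketed text into nested lists,
    e.g. '(VP buck (NP newcomers))' -> ['VP', 'buck', ['NP', 'newcomers']]."""
    tokens = re.findall(r"\(|\)|[^\s()]+", text)

    def read(pos):
        items = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                node, pos = read(pos + 1)
                items.append(node)
            else:
                items.append(tokens[pos])
                pos += 1
        return items, pos + 1  # skip the closing ')'

    assert tokens and tokens[0] == "("
    tree, _ = read(1)
    return tree

print(parse_brackets("(VP buck up (NP nervous newcomers))"))
# ['VP', 'buck', 'up', ['NP', 'nervous', 'newcomers']]
```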
As noted above in connection with POS tagging, a major goal of the Treebank
project is to allow annotators to indicate only structure of which they are certain. The
Treebank provides two notational devices to ensure this goal: the X constituent label
and so-called "pseudo-attachment." The X constituent label is used if an annotator
is sure that a sequence of words is a major constituent but is unsure of its syntactic
category; in such cases, the annotator simply brackets the sequence and labels it X. The
second notational device, pseudo-attachment, has two primary uses. On the one hand,
it is used to annotate what Kay has called permanent predictable ambiguities, allowing an
annotator to indicate that a structure is globally ambiguous even given the surrounding
context (annotators always assign structure to a sentence on the basis of its context). An
example of this use of pseudo-attachment is shown in Figure 5, where the participial
phrase blown ashore 375 years ago modifies either warriors or boatload, but there is no way
of settling the question: both attachments mean exactly the same thing. In the case
at hand, the pseudo-attachment notation indicates that the annotator of the sentence
thought that VP-1 is most likely a modifier of warriors, but that it is also possible that
it is a modifier of boatload. A second use of pseudo-attachment is to allow annotators
to represent the "underlying" position of extraposed elements; in addition to being
attached in its superficial position in the tree, the extraposed constituent is pseudo-
attached within the constituent to which it is semantically related. Note that except
for the device of pseudo-attachment, the skeletal analysis of the Treebank is entirely
restricted to simple context-free trees.
The reader may have noticed that the ADJP brackets in Figure 4 have vanished in
Figure 5. For the sake of the overall efficiency of the annotation task, we leave all ADJP
brackets in the simplified structure, with the annotators expected to remove many
of them during annotation. The reason for this is somewhat complex, but provides
a good example of the considerations that come into play in designing the details
of annotation methods. The first relevant fact is that Fidditch only outputs ADJP
brackets within NPs for adjective phrases containing more than one lexical item. To
be consistent, the final structure must contain ADJP nodes for all adjective phrases
within NPs or for none; we have chosen to delete all such nodes within NPs under
normal circumstances. (This does not affect the use of the ADJP tag for predicative
adjective phrases outside of NPs.) In a seemingly unrelated guideline, all coordinate
structures are annotated in the Treebank; such coordinate structures are represented
by Chomsky-adjunction when the two conjoined constituents bear the same label.
This means that if an NP contains coordinated adjective phrases, then an ADJP tag
will be used to tag that coordination, even though simple ADJPs within NPs will not
bear an ADJP tag. Experience has shown that annotators can delete pairs of brackets
extremely quickly using the mouse-based tools, whereas creating brackets is a much
slower operation. Because the coordination of adjectives is quite common, it is more
efficient to leave in ADJP labels, and delete them if they are not part of a coordinate
structure, than to reintroduce them if necessary.
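The guideline can be sketched as a filter: an ADJP bracket inside an NP survives only if it dominates a coordination. The coordination test used here, scanning for a conjunction token among the ADJP's immediate children, is an assumption for illustration, as is the conjunction list.

```python
CONJUNCTIONS = {"and", "or", "but"}  # assumed stand-in for CC tokens

def is_coordinate(node):
    """Keep an ADJP bracket only when it covers a coordination."""
    return any(isinstance(c, str) and c.lower() in CONJUNCTIONS
               for c in node[1:])

def prune_adjp(tree):
    """Delete non-coordinate ADJP brackets directly inside an NP,
    splicing their children into the parent NP."""
    if isinstance(tree, str):
        return [tree]
    label, out = tree[0], []
    for child in tree[1:]:
        for sub in prune_adjp(child):
            if (not isinstance(sub, str) and sub[0] == "ADJP"
                    and label == "NP" and not is_coordinate(sub)):
                out.extend(sub[1:])   # drop the bracket, keep the words
            else:
                out.append(sub)
    return [[label] + out]

# Simple ADJP inside an NP: bracket deleted.
print(prune_adjp(["NP", ["ADJP", "battle-tested", "industrial"], "managers"]))
# Coordinated ADJP: bracket kept.
print(prune_adjp(["NP", ["ADJP", "quick", "and", "dirty"], "fix"]))
```

Since deleting brackets is fast and creating them is slow, running the parser output through annotators rather than through a filter like this lets the common coordinate cases keep their labels for free.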
5. Progress to Date
5.1 Composition and Size of Corpus
Table 4 shows the output of the Penn Treebank project at the end of its first phase. All
the materials listed in Table 4 are available on CD-ROM to members of the Linguistic
Data Consortium. About 3 million words of POS-tagged material and a small sampling
of skeletally parsed text are available as part of the first Association for Computational
Linguistics / Data Collection Initiative CD-ROM, and a somewhat larger
subset of materials is available on cartridge tape directly from the Penn Treebank
Project. For information, contact the first author of this paper or send e-mail to
treebank@unagi.cis.upenn.edu.