Java theory and practice: Garbage collection and performance
In the early days of Java technology, allocating objects got a pretty bad rap. There were lots of articles (including some by this author) advising developers to avoid creating temporary objects unnecessarily because allocation (and the corresponding garbage-collection overhead) was expensive. While this used to be good advice (in situations where performance was significant), it is no longer generally applicable to all but the most performance-critical situations.
The 1.0 and 1.1 JDKs used a mark-sweep collector, which did compaction on some -- but not all -- collections, meaning that the heap might be fragmented after a garbage collection. Accordingly, memory allocation costs in the 1.0 and 1.1 JVMs were comparable to those in C or C++, where the allocator uses heuristics such as "first-fit" or "best-fit" to manage the free heap space. Deallocation costs were also high, since the mark-sweep collector had to sweep the entire heap at every collection. No wonder we were advised to go easy on the allocator.
In HotSpot JVMs (Sun JDK 1.2 and later), things got a lot better -- the Sun JDKs moved to a generational collector. Because a copying collector is used for the young generation, the free space in the heap is always contiguous so that allocation of a new object from the heap can be done through a simple pointer addition, as shown in Listing 1. This makes object allocation in Java applications significantly cheaper than it is in C, a possibility that many developers at first have difficulty imagining. Similarly, because copying collectors do not visit dead objects, a heap with a large number of temporary objects, which is a common situation in Java applications, costs very little to collect; simply trace and copy the live objects to a survivor space and reclaim the entire heap in one fell swoop. No free lists, no block coalescing, no compacting -- just wipe the heap clean and start over. So both allocation and deallocation costs per object went way down in JDK 1.2.
Listing 1. Fast allocation in a contiguous heap

void *malloc(int n) {
    synchronized (heapLock) {
        if (heapTop - heapStart < n)
            doGarbageCollection();
        void *wasStart = heapStart;
        heapStart += n;
        return wasStart;
    }
}
Performance advice often has a short shelf life; while it was once true that allocation was expensive, it is now no longer the case. In fact, it is downright cheap, and with a few very compute-intensive exceptions, performance considerations are generally no longer a good reason to avoid allocation. Sun estimates allocation costs at approximately ten machine instructions. That's pretty much free -- certainly no reason to complicate the structure of your program or incur additional maintenance risks for the sake of eliminating a few object creations.
Of course, allocation is only half the story -- most objects that are allocated are eventually garbage collected, which also has costs. But there's good news there, too. The vast majority of objects in most Java applications become garbage before the next collection. The cost of a minor garbage collection is proportional to the number of live objects in the young generation, not the number of objects allocated since the last collection. Because so few young generation objects survive to the next collection, the amortized cost of collection per allocation is fairly small (and can be made even smaller by simply increasing the heap size, subject to the availability of enough memory).
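To make the "allocation is cheap" point concrete, here is a minimal sketch (the class name, loop count, and timing code are ours, not from the article) that churns through millions of short-lived objects of the kind the text describes. On any modern generational JVM, almost all of these objects die in the young generation and cost the collector essentially nothing:

```java
// A rough micro-benchmark sketch (not a rigorous benchmark): allocates
// millions of short-lived objects to illustrate that modern JVMs handle
// heavy temporary allocation cheaply.
public class AllocDemo {
    static final class Pt {
        final int x, y;
        Pt(int x, int y) { this.x = x; this.y = y; }
    }

    static long sumOfTemporaries(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            Pt p = new Pt(i, i + 1);  // becomes garbage immediately after use
            sum += p.x + p.y;
        }
        return sum;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long sum = sumOfTemporaries(10_000_000);
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("sum=" + sum + " in " + elapsedMs + " ms");
    }
}
```

Note that a loop like this says nothing precise about per-allocation cost (the JIT may even eliminate the allocation entirely, as discussed next); it simply shows that allocation-heavy code is not something to fear by default.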
The JIT compiler can perform additional optimizations that can reduce the cost of object allocation to zero. Consider the code in Listing 2, where the getPosition() method creates a temporary object to hold the coordinates of a point, and the calling method uses the Point object briefly and then discards it. The JIT will likely inline the call to getPosition() and, using a technique called escape analysis, can recognize that no reference to the Point object leaves the doSomething() method. Knowing this, the JIT can then allocate the object on the stack instead of the heap or, even better, optimize the allocation away completely and simply hoist the fields of the Point into registers. While the current Sun JVMs do not yet perform this optimization, future JVMs probably will. The fact that allocation can get even cheaper in the future, with no changes to your code, is just one more reason not to compromise the correctness or maintainability of your program for the sake of avoiding a few extra allocations.
Listing 2. Escape analysis can eliminate many temporary allocations entirely

void doSomething() {
    Point p = someObject.getPosition();
    System.out.println("Object is at (" + p.x + ", " + p.y + ")");
}

...

Point getPosition() {
    return new Point(myX, myY);
}
Isn't the allocator a scalability bottleneck?
Listing 1 shows that while allocation itself is fast, access to the heap structure must be synchronized across threads. So doesn't that make the allocator a scalability hazard? There are several clever tricks JVMs use to reduce this cost significantly. IBM JVMs use a technique called thread-local heaps, by which each thread requests a small block of memory (on the order of 1K) from the allocator, and small object allocations are satisfied out of that block. If the program requests a larger block than can be satisfied using the small thread-local heap, then the global allocator is used to either satisfy the request directly or to allocate a new thread-local heap. By this technique, a large percentage of allocations can be satisfied without contending for the shared heap lock. (Sun JVMs use a similar technique, instead using the term "Local Allocation Blocks.")
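The thread-local-heap idea described above can be sketched in plain Java (this models the technique, it is not JVM internals; the class names, the 1K block size, and the use of a simple counter as the "global heap pointer" are our assumptions). Each thread bumps a private pointer for small requests and touches the shared state only when its block runs out:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative model of thread-local allocation blocks: each thread carves
// a private block out of a shared counter, then satisfies small "allocations"
// from that block with an uncontended pointer bump.
public class TlabSketch {
    static final int BLOCK = 1024;                        // ~1K per-thread block
    static final AtomicLong globalTop = new AtomicLong(0); // shared "heap" pointer

    static final class LocalBlock {
        long cur, end;
        long allocate(int n) {
            if (end - cur < n) {                          // local block exhausted:
                cur = globalTop.getAndAdd(BLOCK);         // one contended op refills it
                end = cur + BLOCK;
            }
            long addr = cur;
            cur += n;                                     // uncontended bump allocation
            return addr;
        }
    }

    static final ThreadLocal<LocalBlock> tlab =
        ThreadLocal.withInitial(LocalBlock::new);

    // Requests larger than BLOCK would go to the global allocator in a
    // real JVM; this sketch omits that path.
    public static long alloc(int n) { return tlab.get().allocate(n); }

    public static void main(String[] args) {
        System.out.println(alloc(16) + " " + alloc(16));
    }
}
```

Most calls to alloc() touch only thread-confined fields; the shared AtomicLong is hit once per 1K of allocation, which is the essence of why thread-local heaps remove the allocator as a scalability bottleneck.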
Finalizers are not your friend
Objects with finalizers (those that have a non-trivial finalize() method) have significant overhead compared to objects without finalizers, and should be used sparingly. Finalizeable objects are both slower to allocate and slower to collect. At allocation time, the JVM must register any finalizeable objects with the garbage collector, and (at least in the HotSpot JVM implementation) finalizeable objects must follow a slower allocation path than most other objects. Similarly, finalizeable objects are slower to collect, too: it takes at least two garbage collection cycles (in the best case) before a finalizeable object can be reclaimed, and the garbage collector has to do extra work to invoke the finalizer. The result is more time spent allocating and collecting objects and more pressure on the garbage collector, because the memory used by unreachable finalizeable objects is retained longer. Combine that with the fact that finalizers are not guaranteed to run in any predictable timeframe, or even at all, and you can see that there are relatively few situations for which finalization is the right tool to use.
If you must use finalizers, there are a few guidelines you can follow that will help contain the damage. Limit the number of finalizeable objects, which will minimize the number of objects that have to incur the allocation and collection costs of finalization. Organize your classes so that finalizeable objects hold no other data, which will minimize the amount of memory tied up in finalizeable objects after they become unreachable, as there can be a long delay before they are actually reclaimed. In particular, beware when extending finalizeable classes from standard libraries.
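The "finalizeable objects hold no other data" guideline above can be sketched as follows (the class names, the hypothetical native handle, and the buffer size are ours, for illustration only): the finalizer lives on a tiny inner object that holds nothing but the scarce resource, so the bulky data in the outer object can be reclaimed promptly, in an ordinary collection, as soon as the wrapper becomes unreachable:

```java
// A sketch of the guideline: keep the finalizeable part tiny. Only the
// small NativeHandle is finalizeable; the bulky buffer lives in the outer
// (non-finalizeable) class and is not retained waiting for a finalizer.
public class ResourceWrapper {
    // Small finalizeable object holding ONLY the scarce resource.
    static final class NativeHandle {
        private final long handle;               // hypothetical OS handle
        NativeHandle(long h) { handle = h; }

        static void release(long h) { /* hypothetical native release */ }

        @Override protected void finalize() throws Throwable {
            try { release(handle); } finally { super.finalize(); }
        }
    }

    private final NativeHandle handle;                    // tiny, finalizeable
    private final byte[] buffer = new byte[64 * 1024];    // bulky, NOT finalizeable

    public ResourceWrapper(long osHandle) { handle = new NativeHandle(osHandle); }
    public byte[] buffer() { return buffer; }
}
```

With this split, only a few dozen bytes per object pay the two-cycle finalization tax; the 64K buffer is ordinary garbage.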
Helping the garbage collector . . . not
Because allocation and garbage collection at one time imposed significant performance costs on Java programs, many clever tricks were developed to reduce these costs, such as object pooling and nulling. Unfortunately, in many cases these techniques can do more harm than good to your program's performance.
Object pooling is a straightforward concept -- maintain a pool of frequently used objects and grab one from the pool instead of creating a new one whenever needed. The theory is that pooling spreads out the allocation costs over many more uses. When the object creation cost is high, such as with threads, or when the pooled object represents a limited and costly resource, such as with database connections, this makes sense. However, the number of situations where these conditions apply is fairly small.
In addition, object pooling has some serious downsides. Because the object pool is generally shared across all threads, allocation from the object pool can be a synchronization bottleneck. Pooling also forces you to manage deallocation explicitly, which reintroduces the risks of dangling pointers. Also, the pool size must be properly tuned to get the desired performance result. If it is too small, it will not prevent allocation; and if it is too large, resources that could get reclaimed will instead sit idle in the pool. By tying up memory that could be reclaimed, the use of object pools places additional pressure on the garbage collector. Writing an effective pool implementation is not simple.
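A minimal pool (our own sketch, not from the article) makes the downsides listed above concrete: every acquire and release funnels through one shared lock, and nothing stops a caller from using an object after returning it to the pool:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal object pool sketch. Note the two hazards discussed in the text:
// both methods are synchronized (a serialization point for all threads),
// and release() trusts the caller not to keep using the object afterward
// (the "dangling pointer" risk, reintroduced in a garbage-collected language).
public class ObjectPool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public ObjectPool(Supplier<T> factory) { this.factory = factory; }

    public synchronized T acquire() {
        T t = free.poll();
        return (t != null) ? t : factory.get();   // fall back to real allocation
    }

    public synchronized void release(T t) {
        free.push(t);                             // caller must stop using t here!
    }
}
```

Idle pooled objects also stay strongly reachable, so the memory they hold is invisible to the collector until the pool itself is discarded -- exactly the "tying up memory that could be reclaimed" cost described above.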
In his "Performance Myths Exposed" talk at JavaOne 2003, Dr. Cliff Click offered concrete benchmarking data showing that object pooling is a performance loss for all but the most heavyweight objects on modern JVMs. Add in the serialization of allocation and the dangling-pointer risks, and it's clear that pooling should be avoided in all but the most extreme cases.
Explicit nulling is simply the practice of setting reference variables to null when you are finished with them. The idea behind nulling is that it assists the garbage collector by making objects unreachable earlier. Or at least that's the theory.
There is one case where the use of explicit nulling is not only helpful, but virtually required, and that is where a reference to an object is scoped more broadly than it is used or considered valid by the program's specification. This includes cases such as using a static or instance field to store a reference to a temporary buffer, rather than a local variable, or using an array to store references that may remain reachable by the runtime but not by the implied semantics of the program. Consider the class in Listing 3, which is an implementation of a simple bounded stack backed by an array. When pop() is called, without the explicit nulling in the example, the class could cause a memory leak (more properly called "unintentional object retention," or sometimes called "object loitering") because the reference stored in stack[top+1] is no longer reachable by the program, but still considered reachable by the garbage collector.
Listing 3. Avoiding object loitering in a stack implementation
public class SimpleBoundedStack {
    private static final int MAXLEN = 100;
    private Object stack[] = new Object[MAXLEN];
    private int top = -1;

    public void push(Object p) {
        stack[++top] = p;
    }

    public Object pop() {
        Object p = stack[top];
        stack[top--] = null;  // explicit null
        return p;
    }
}
In the September 1997 "Java Developer Connection Tech Tips" column (see Resources), Sun warned of this risk and explained how explicit nulling was needed in cases like the pop() example above. Unfortunately, programmers often take this advice too far, using explicit nulling in the hope of helping the garbage collector. But in most cases, it doesn't help the garbage collector at all, and in some cases, it can actually hurt your program's performance.
Consider the code in Listing 4, which combines several really bad ideas. The listing is a linked list implementation that uses a finalizer to walk the list and null out all the forward links. We've already discussed why finalizers are bad. This case is even worse, because now the class is doing extra work, ostensibly to help the garbage collector, that will not actually help -- and might even hurt. Walking the list takes CPU cycles and has the effect of visiting all those dead objects and pulling them into the cache -- work that the garbage collector might be able to avoid entirely, because copying collectors do not visit dead objects at all. Nulling the references doesn't help a tracing garbage collector in any case; if the head of the list is unreachable, the rest of the list won't be traced.
Listing 4. Nulling out list links in a finalizer -- don't do this

public class LinkedList {
    private static class ListElement {
        private ListElement nextElement;
        private Object value;
    }

    private ListElement head;
    ...

    public void finalize() {
        try {
            ListElement p = head;
            while (p != null) {
                p.value = null;
                ListElement q = p.nextElement;
                p.nextElement = null;
                p = q;
            }
            head = null;
        } finally {
            super.finalize();
        }
    }
}
Explicit nulling should be saved for cases where your program is subverting normal scoping rules for performance reasons, such as the stack example in Listing 3 (a more correct -- but poorly performing -- implementation would be to reallocate and copy the stack array each time it is changed).
A third category where developers often mistakenly think they are helping the garbage collector is the use of System.gc(), which triggers a garbage collection (actually, it merely suggests that this might be a good time for a garbage collection). Unfortunately, System.gc() triggers a full collection, which includes tracing all live objects in the heap and sweeping and compacting the old generation. This can be a lot of work. In general, it is better to let the system decide when it needs to collect the heap, and whether or not to do a full collection. Most of the time, a minor collection will do the job. Worse, calls to System.gc() are often deeply buried where developers may be unaware of their presence, and where they might get triggered far more often than necessary. If you are concerned that your application might have hidden calls to System.gc() buried in libraries, you can invoke the JVM with the -XX:+DisableExplicitGC option to prevent calls to System.gc() from triggering a garbage collection.
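For reference, the flag is passed on the JVM command line like any other -XX option (the jar name here is hypothetical, for illustration only):

```shell
# Run the same application, but have the JVM turn explicit System.gc()
# calls -- including ones hidden in libraries -- into no-ops.
java -XX:+DisableExplicitGC -jar myapp.jar
```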
No installment of Java theory and practice would be complete without some sort of plug for immutability. Making objects immutable eliminates entire classes of programming errors. One of the most common reasons given for not making a class immutable is the belief that doing so would compromise performance. While this is true sometimes, it is often not -- and sometimes the use of immutable objects has significant, and perhaps surprising, performance advantages.
Many objects function as containers for references to other objects. When the referenced object needs to change, we have two choices: update the reference (as we would in a mutable container class) or re-create the container to hold a new reference (as we would in an immutable container class). Listing 5 shows two ways to implement a simple holder class. Assuming the containing object is small, which is often the case (such as a Map.Entry element in a Map or a linked list element), allocating a new immutable object has some hidden performance advantages that come from the way generational garbage collectors work, having to do with the relative age of objects.
Listing 5. Mutable and immutable object holders
public class MutableHolder {
    private Object value;
    public Object getValue() { return value; }
    public void setValue(Object o) { value = o; }
}

public class ImmutableHolder {
    private final Object value;
    public ImmutableHolder(Object o) { value = o; }
    public Object getValue() { return value; }
}
In most cases, when a holder object is updated to reference a different object, the new referent is a young object. If we update a MutableHolder by calling setValue(), we have created a situation where an older object references a younger one. On the other hand, by creating a new ImmutableHolder object instead, a younger object is referencing an older one. The latter situation, where most objects point to older objects, is much more gentle on a generational garbage collector. If a MutableHolder that lives in the old generation is mutated, all the objects on the card that contains the MutableHolder must be scanned for old-to-young references at the next minor collection. The use of mutable references for long-lived container objects increases the work done to track old-to-young references at collection time. (See last month's article and this month's Resources, which explain the card-marking algorithm used to implement the write barrier in the generational collector used by current Sun JVMs.)
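The two update patterns can be seen side by side in a small runnable sketch (the holder classes from Listing 5 are repeated here, nested in a demo class of our own devising, so the example is self-contained):

```java
// Pairs the two Listing 5 holders with the update patterns discussed above.
public class HolderDemo {
    static class MutableHolder {
        private Object value;
        public Object getValue() { return value; }
        public void setValue(Object o) { value = o; }
    }

    static class ImmutableHolder {
        private final Object value;
        public ImmutableHolder(Object o) { value = o; }
        public Object getValue() { return value; }
    }

    public static void main(String[] args) {
        MutableHolder m = new MutableHolder();
        m.setValue("first");
        m.setValue("second");   // if m lives in the old generation, each write
                                // fires the write barrier and dirties m's card

        ImmutableHolder h = new ImmutableHolder("first");
        h = new ImmutableHolder("second"); // a fresh young holder instead;
                                           // no old object is written to
        System.out.println(m.getValue() + " " + h.getValue());
    }
}
```

The behavior is identical from the program's point of view; the difference is which direction the resulting references point, and therefore how much card-scanning work the next minor collection inherits.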
When good performance advice goes bad
A cover story in the July 2003 Java Developer's Journal illustrates how easy it is for good performance advice to become bad performance advice by simply failing to adequately identify the conditions under which the advice should be applied or the problem it was intended to solve. While the article contains some useful analysis, it will likely do more harm than good (and, unfortunately, far too much performance-oriented advice falls into this same trap).
The article opens with a set of requirements from a real-time environment, where unpredictable garbage collection pauses are unacceptable and there are strict operational requirements on how long a pause can be tolerated. The authors then recommend nulling references, object pooling, and scheduling explicit garbage collection to meet the performance goals. So far, so good -- they had a problem and they figured out what they had to do to solve it (although they appear to have failed to identify the costs of these practices or to explore some less intrusive alternatives, such as concurrent collection). Unfortunately, the article's title ("Avoid Bothersome Garbage Collection Pauses") and presentation suggest that this advice would be useful for a wide range of applications -- perhaps all Java applications. This is terrible, dangerous performance advice!
For most applications, explicit nulling, object pooling, and explicit garbage collection will harm the throughput of your application, not improve it -- not to mention the intrusiveness of these techniques on your program design. In certain situations, it may be acceptable to trade throughput for predictability -- such as real-time or embedded applications. But for many Java applications, including most server-side applications, you probably would rather have the throughput.
The moral of the story is that performance advice is highly situational (and has a short shelf life). Performance advice is by definition reactive -- it is designed to address a particular problem that occurred in a particular set of circumstances. If the underlying circumstances change, or they are simply not applicable to your situation, the advice may not be applicable, either. Before you muck up your program's design to improve its performance, first make sure you have a performance problem and that following the advice will solve that problem.
Garbage collection has come a long way in the last several years. Modern JVMs offer fast allocation and do their job fairly well on their own, with shorter garbage collection pauses than in previous JVMs. Tricks such as object pooling or explicit nulling, which were once considered sensible techniques for improving performance, are no longer necessary or helpful (and may even be harmful) as the cost of allocation and garbage collection has been reduced considerably.
Resources

- The previous two installments of Java theory and practice, "A brief history of garbage collection" and "Garbage collection in the 1.4.1 JVM," cover some of the basics of garbage collection in Java virtual machines.
- Garbage Collection: Algorithms for Automatic Dynamic Memory Management (John Wiley & Sons, 1997) is a comprehensive survey of garbage collection algorithms, with an extensive bibliography. The author, Richard Jones, maintains an updated bibliography of nearly 2000 papers on garbage collection on his Garbage Collection Page.
- The Garbage Collection mailing list maintains a GC FAQ.
- The IBM 1.4 SDK for the Java platform uses a mark-sweep-compact collector, which supports incremental compaction to reduce pause times.
- The three-part series "Sensible sanitation" by Sam Borman (developerWorks, August 2002) describes the garbage collection strategy employed by the IBM 1.2 and 1.3 SDKs for the Java platform.
- This article from the IBM Systems Journal describes some of the lessons learned building the IBM 1.1.x JDKs, including the details of mark-sweep and mark-sweep-compact garbage collection.
- The example in Listing 3 was raised by Sun in a 1997 Tech Tip.
- The paper "Removing GC Synchronisation" is a nice survey of potential scalability bottlenecks in garbage collection implementations.
- In the paper "A fast write barrier for generational garbage collectors," Urs Hölzle covers both the classical card-marking algorithm and an improvement that can reduce the cost of marking significantly by slightly increasing the cost of scanning dirty cards at collection time.
- Find hundreds more Java technology resources on the developerWorks Java technology zone.
- Browse for books on these and other technical topics.