Garbage Collection in Java


Heap Overview

 

This is the first in a series of posts about Garbage Collection (GC). I hope to cover a bit of theory and all the major collectors in the hotspot virtual machine [1] over the course of the series. This post just explains what garbage collection is and the elements common to different collectors.

Why should I care?

Your Java virtual machine manages memory for you - which is highly convenient - but it might not be optimally tuned by default. By understanding some of the theory behind garbage collection you can more easily tune your collector. A common concern is collector efficiency, that is to say how much time your program spends executing program code rather than collecting garbage. Another common concern is how long the application pauses for.

There's also a lot of hearsay and folklore out there about garbage collection and so understanding the algorithms in a bit more detail really helps avoid falling into common pitfalls and traps. Besides - for anyone interested in how computer science principles are applied and used, JVM internals are a great thing to look at.

What does stop-the-world mean?

Your program (or mutator in GC-Speak) will be allocating objects as it runs. At some point your heap needs to be collected and all of the collectors in hotspot pause your application. The term 'stop-the-world' is used to mean that all of the mutator's threads are paused.

It's possible to implement a garbage collector that doesn't need to pause. Azul have implemented an effectively pauseless collector in their Zing virtual machine. I won't be covering how it works, but there's a really interesting whitepaper if you want to know more.

The Young/Weak Generational Hypothesis

Simply stated: most allocated objects die young [2]. This concept was demonstrated by empirically analysing the memory allocation and liveness patterns of a large number of programs during the 1980s. What researchers found was that not only do most objects die young, but once they live past a certain age they tend to live for a long time. The graph below is taken from a Sun/Oracle study looking at the lifespan of objects as a histogram.

[Image: object lifetime histogram from the Sun/Oracle study]

How is the heap organised?

The young generational hypothesis has given rise to the idea of generational garbage collection, in which the heap is split up into several regions and the placement of objects within each region corresponds to their age. One element common to all of these garbage collectors (other than G1 [3]) is the way the heap is organised into different spaces.

[Image: heap layout: Eden, the two survivor spaces and tenured]

When objects are initially allocated, if they fit, they are stored in the Eden space. If an object survives a collection then it ends up in a survivor space. If it survives a few collections (your tenuring threshold) then the object ends up in the tenured space. The specifics of the algorithms for collecting these spaces differ by collector, so I'll cover them separately in future blog posts.
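The aging rule above can be sketched in a few lines. This is a minimal illustration, not HotSpot's implementation; the threshold value and all names are invented (HotSpot's real threshold is tunable via -XX:MaxTenuringThreshold).

```java
// Sketch of object aging: which space an object sits in after surviving
// a given number of young collections. Threshold and names are illustrative.
public class TenuringSketch {
    static final int TENURING_THRESHOLD = 6; // hypothetical value

    enum Space { EDEN, SURVIVOR, TENURED }

    // The space an object lives in after surviving `collections` young GCs.
    static Space spaceAfter(int collections) {
        if (collections == 0) return Space.EDEN;        // freshly allocated
        if (collections < TENURING_THRESHOLD) return Space.SURVIVOR;
        return Space.TENURED;                           // promoted for good
    }

    public static void main(String[] args) {
        System.out.println(spaceAfter(0)); // EDEN
        System.out.println(spaceAfter(3)); // SURVIVOR
        System.out.println(spaceAfter(9)); // TENURED
    }
}
```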

This split is beneficial because it allows you to use different algorithms on different spaces. Some GC algorithms are more efficient if most of your objects are dead and some are more efficient if most of your objects are alive. Due to the generational hypothesis usually when it comes time to collect most objects in Eden and survivor spaces are dead, and most objects in tenured are alive.

There is also the permgen - or permanent generation. This is a special generation that holds objects related to the Java language itself. For example, information about loaded classes is held here. Historically, Strings that were interned or were constants were also held here. The permanent generation is being removed in favour of Metaspace.

Multiple Collectors

The hotspot virtual machine actually has a variety of different Garbage Collectors. Each has a different set of performance characteristics and is more (or less) suited for different tasks. The key Garbage Collectors that I'll be looking at are:

  • Parallel Scavenge (PS): the default collector in recently released JVMs. This stops the world in order to collect, but collects in parallel (i.e. using multiple threads).
  • Concurrent Mark Sweep (CMS): this collector has several phases, some of which stop the world, but runs concurrently with the program for several of its phases as well.
  • Incremental Concurrent Mark Sweep (iCMS): a variant of CMS designed for lower pauses. It sometimes achieves this!
  • Garbage First (G1): a newish collector that's recently become more stable and is in slowly increasing usage.

Conclusions

I've given a few introductory thoughts about garbage collection; in the next post I'll cover the Parallel Scavenge collector, which is currently the default collector. I'd also like to mention that my employer has a GC log analyser which we think is pretty useful.

  1. "hotspot" is the name given to the codebase behind OpenJDK and the official Oracle JVM. As of Java 7, OpenJDK is the reference implementation for Java SE.
  2. Technically what I described above is the 'weak generational hypothesis', which has empirical validation. There's also a strong variant, which can be stated as: the mean lifetime of a heap allocated object is equal to the mean amount of reachable storage. This is actually mathematically provable by taking Little's Law and setting λ to 1. Simple proof!
  3. I'll cover the way the heap is organised within G1 in a G1-specific blog post.

Parallel GC

 

Parallel Scavenge

Today we cover how Parallel GC works. Specifically this is the combination of running the Parallel Scavenge collector over Eden and the Parallel Mark and Sweep collector over the tenured generation. You can get this option by passing in -XX:+UseParallelOldGC, though it's the default on certain machine types.

You may want to read my first blog post on garbage collection if you haven't already, since it gives a general overview.

Eden and Survivor Spaces

In the parallel scavenge collector, Eden and the survivor spaces are collected using an approach known as hemispheric GC. Objects are initially allocated in Eden; once Eden is close to full [1], a GC of the Eden space is triggered. This identifies live objects and copies them to the active survivor space [2]. It then treats the whole Eden space as a free, contiguous block of memory which it can allocate into again.

In this case the allocation process ends up being like cutting a piece of cheddar. Each chunk gets sliced off contiguously, and the slice next to it is the next to be 'eaten'. This has the upside that allocation merely requires pointer addition.

[Image: a slab of cheddar, ready to be allocated]

In order to identify live objects, a search of the object graph is undertaken. The search starts from a set of 'root' objects which are guaranteed to be live; for example, every thread is a root object. The search then finds objects which are pointed to by the root set, and expands outwards until it has found all live objects. There's a really nice pictorial representation of this, courtesy of Michael Triana.
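The trace described above is essentially a graph search. Here's a minimal sketch using a breadth-first search over a toy object graph; the object IDs, the adjacency-list representation and the method names are all invented for illustration.

```java
import java.util.*;

// Toy live-object trace: start from the root set and mark everything
// transitively reachable. Whatever is never reached is garbage.
public class MarkSketch {
    static Set<Integer> markLive(Map<Integer, List<Integer>> refs, Set<Integer> roots) {
        Set<Integer> live = new HashSet<>(roots);
        Deque<Integer> pending = new ArrayDeque<>(roots);
        while (!pending.isEmpty()) {
            int obj = pending.poll();
            for (int target : refs.getOrDefault(obj, List.of())) {
                if (live.add(target)) pending.add(target); // newly reached: keep expanding
            }
        }
        return live;
    }

    static int demoLiveCount() {
        // 1 -> 2 -> 3; object 4 points at 3 but nothing points at 4, so 4 is dead.
        Map<Integer, List<Integer>> refs =
            Map.of(1, List.of(2), 2, List.of(3), 4, List.of(3));
        return markLive(refs, Set.of(1)).size();
    }

    public static void main(String[] args) {
        System.out.println(demoLiveCount()); // 3: objects 1, 2 and 3 are live
    }
}
```

Note that liveness is defined by reachability from the roots, not by incoming references: object 4 references a live object but is still garbage.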


Parallel, in the context of Parallel Scavenge, means the collection is done by multiple threads running at the same time. This shouldn't be confused with concurrent GC, where the collector runs at the same time as, or interleaved with, the program. Parallel collection improves overall GC throughput by making better use of modern multicore CPUs. The parallelism is achieved by giving each thread a set of the roots to mark and a segment of the table of objects.

There are two survivor spaces, but only one of them is active at any point in time. They are collected in the same way as Eden. The idea is that objects get copied into the active survivor space when they are promoted from Eden. Then, when it's time to evacuate the space, they are copied into the inactive survivor space. Once the active survivor space is completely evacuated, the inactive space becomes active and the active space becomes inactive. This is achieved by flipping the pointer to the beginning of the survivor space, and means that all the dead objects in the survivor space can be freed at the cost of assigning a single pointer.
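The flip can be sketched with two buffers and a swap. This is only an illustration of the idea, assuming invented sizes and names; real survivor spaces hold arbitrary objects, not ints.

```java
// Hemispheric "flip": copy live objects from the active space into the
// inactive one, then swap the two pointers. Everything left behind is freed
// implicitly, at the cost of a single pointer assignment.
public class SurvivorFlip {
    int[] fromSpace = new int[8];
    int[] toSpace = new int[8];
    int top = 0; // bump pointer into the active (from) space

    void allocate(int obj) { fromSpace[top++] = obj; }

    void evacuate(java.util.function.IntPredicate isLive) {
        int newTop = 0;
        for (int i = 0; i < top; i++) {
            if (isLive.test(fromSpace[i])) toSpace[newTop++] = fromSpace[i];
        }
        int[] tmp = fromSpace; fromSpace = toSpace; toSpace = tmp; // the flip
        top = newTop; // everything above top is implicitly free
    }

    static int demoSurvivors() {
        SurvivorFlip s = new SurvivorFlip();
        s.allocate(1); s.allocate(2); s.allocate(3);
        s.evacuate(obj -> obj != 2); // pretend object 2 died
        return s.top;
    }

    public static void main(String[] args) {
        System.out.println(demoSurvivors()); // 2 live objects remain
    }
}
```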

Young Gen design and time tradeoffs

Since this involves only copying live objects and changing pointers, the time taken to collect Eden and the survivor spaces is proportional to the number of live objects. This is quite important: due to the generational hypothesis we know that most objects die young, so there's consequently no GC cost to freeing the memory associated with them.

The design of the survivor spaces is motivated by the idea that collecting objects when they are young is cheaper than doing a collection of the tenured space. Having objects continue to be collected in a hemispheric fashion for a few GC runs is helpful to the overall throughput.

Finally, the fact that Eden is organised into a single contiguous space makes object allocation cheap. A C program might rely on 'malloc' to allocate a block of memory, which involves traversing a list of free spaces in memory trying to find something that's big enough. When you use an arena allocator and allocate consecutively, all you need to do is check there is enough free space and then increment a pointer by the size of the object.
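Bump-pointer allocation as described above fits in a few lines. A minimal sketch with invented sizes; a real collector would trigger a young GC instead of returning a failure code.

```java
// Bump-pointer (arena) allocation: a bounds check plus a pointer increment.
public class BumpAllocator {
    private final int capacity;
    private int top = 0;

    BumpAllocator(int capacity) { this.capacity = capacity; }

    // Returns the offset of the new object, or -1 if the space is "full"
    // (which is where a real collector would trigger a collection).
    int allocate(int size) {
        if (top + size > capacity) return -1;
        int offset = top;
        top += size;  // the entire cost of allocation
        return offset;
    }

    static int demoThirdOffset() {
        BumpAllocator eden = new BumpAllocator(100);
        eden.allocate(16);
        eden.allocate(24);
        return eden.allocate(8); // lands right after the first two slices
    }

    public static void main(String[] args) {
        System.out.println(demoThirdOffset()); // 40
    }
}
```

Contrast this with a freelist allocator, which has to walk a list of holes looking for a fit on every allocation.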

Parallel Mark and Sweep

Objects that have survived a certain number of collections make their way into the tenured space. The number of times they need to survive is referred to as the 'tenuring threshold'. Tenured collections work somewhat differently to Eden, using an algorithm called mark and sweep. Each object has a mark bit associated with it. The marks are initially all set to false, and as objects are reached during the graph search they're set to true.

The graph search that identifies live objects is similar to the search described for the young generation. The difference is that instead of copying live objects, it simply marks them. After this it can go through the object table and free any object that isn't live. This process is done in parallel by several threads, each searching a region of the heap.
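The sweep half of the algorithm can be sketched as a pass over the mark bits. This is illustrative only - the array-of-booleans "object table" and all names are invented.

```java
import java.util.*;

// Sweep over a toy object table: anything whose mark bit stayed false is
// dead and goes back on the freelist; live objects get their bit cleared
// ready for the next marking cycle.
public class MarkSweepSketch {
    static List<Integer> sweep(boolean[] markBits) {
        List<Integer> freed = new ArrayList<>();
        for (int obj = 0; obj < markBits.length; obj++) {
            if (!markBits[obj]) freed.add(obj); // dead: return memory to the freelist
            else markBits[obj] = false;         // live: reset for the next cycle
        }
        return freed;
    }

    static int demoFreedCount() {
        boolean[] marks = {true, false, true, false, false}; // 1, 3 and 4 are dead
        return sweep(marks).size();
    }

    public static void main(String[] args) {
        System.out.println(demoFreedCount()); // 3 objects freed
    }
}
```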

Unfortunately this process of deleting objects that aren't live leaves the tenured space looking like Swiss Cheese. You get some used memory where objects live, and gaps in between where objects used to live. This kind of fragmentation isn't helpful for application performance because it makes it impossible to allocate objects that are bigger than the size of the holes.

[Image: cheese after mark and sweep]

In order to reduce the Swiss cheese problem, the parallel mark/sweep compacts the heap, trying to make live objects contiguously allocated at the start of the tenured space. After deletion it searches areas of the tenured space to identify which have low occupancy and which have high occupancy. The live objects from lower occupancy regions are moved down towards regions that have higher occupancy, which are naturally at the lower end of memory from the previous compacting phase. The moving of objects in this phase is actually performed by the thread allocated to the destination region, rather than the source region.
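The effect of compaction can be sketched as a simple slide-down pass. This is a single-threaded illustration of the outcome, not the parallel region-based scheme described above; the zero-means-hole encoding is invented.

```java
// Sliding compaction over a toy tenured space: live objects move down to the
// start, leaving one contiguous block of free space at the end.
public class CompactSketch {
    // heap[i] == 0 represents a hole left by a swept object.
    static int compact(int[] heap) {
        int dest = 0;
        for (int i = 0; i < heap.length; i++) {
            if (heap[i] != 0) heap[dest++] = heap[i]; // slide the live object down
        }
        for (int i = dest; i < heap.length; i++) heap[i] = 0; // contiguous free tail
        return dest; // index where free space now begins
    }

    static int demoFreeStart() {
        int[] tenured = {7, 0, 9, 0, 0, 4}; // Swiss-cheese layout after a sweep
        return compact(tenured);
    }

    public static void main(String[] args) {
        System.out.println(demoFreeStart()); // 3: free space starts after 3 live objects
    }
}
```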

[Image: low occupancy cheese]

Summary

  • Parallel Scavenge splits the heap up into 4 spaces: Eden, two survivor spaces and tenured.
  • Parallel Scavenge uses a parallel, copying collector to collect Eden and the survivor spaces.
  • A different algorithm is used for the tenured space. This marks all live objects, deletes the dead objects and then compacts the space.
  • Parallel Scavenge has good throughput, but it pauses the whole program when it runs.

In part three I'll look at how the CMS, or Concurrent-Mark-Sweep, collector works. Hopefully this post will be easier for those with dairy allergies to read.

  1. Technically there is an 'occupancy threshold' for each heap space, which defines how full the space is allowed to get before collection occurs.
  2. This copying algorithm is based on Cheney's algorithm.
 

Concurrent Mark Sweep

This follows on from my previous two garbage collection blog posts:

  1. Overview of GC in Hotspot.
  2. Parallel Garbage Collectors.

Concurrent Mark Sweep

The parallel garbage collectors in hotspot are designed to minimise the overall amount of time that the application spends undertaking garbage collection; maximising useful work is termed throughput. This isn't an appropriate tradeoff for all applications - some require individual pauses to be short as well, which is known as a latency requirement.

The Concurrent Mark Sweep (CMS) collector is designed to be a lower latency collector than the parallel collectors. The key part of this design is trying to do part of the garbage collection at the same time as the application is running. This means that when the collector needs to pause the application's execution it doesn't need to pause for as long.

At this point you're probably thinking 'don't parallel and concurrent mean something fairly similar?' Well, in the context of GC, parallel means "uses multiple threads to perform GC at the same time" and concurrent means "the GC runs at the same time as the application is running".

Young Generational Collection

The young gen collector in CMS is called ParNew, and it actually uses the same basic algorithm as the Parallel Scavenge collector that I described previously.

This is still a different collector from Parallel Scavenge in terms of the hotspot codebase, though, because it needs to interleave its execution with the rest of CMS, and it also implements a different internal API. Parallel Scavenge makes assumptions about which tenured collectors it works with - specifically ParOld and SerialOld. Bear in mind this also means that the young generational collector is stop-the-world.

Tenured Collection

As with the ParOld collector, the CMS tenured collector uses a mark and sweep algorithm, in which live objects are marked and then dead objects are deleted. 'Deleted' is really a strange term when it comes to memory management. The collector isn't actually deleting objects in the sense of blanking memory; it's merely returning the memory associated with an object to the space that the memory system can allocate from - the freelist. Even though it's termed a concurrent mark and sweep collector, not all phases run concurrently with the application's execution: two of them stop the world and four run concurrently.

How is GC triggered?

In ParOld, garbage collection is triggered when you run out of space in the tenured heap. This approach works because ParOld simply pauses the application to collect. In order for the application to continue operating during a tenured collection, the CMS collector needs to start collecting while there is still enough working space left in tenured.

So CMS starts based upon how full tenured is - the idea is that the amount of free space left is your window of opportunity to run GC. This is known as the initiating occupancy fraction and is described in terms of how full the heap is, so a fraction of 0.7 gives you a window of 30% of your heap in which to run the CMS GC before you run out of heap.
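The arithmetic is simple enough to spell out. A minimal sketch of the trigger check; the method and class names are invented and this ignores the adaptive triggering that the real collector also does.

```java
// The initiating-occupancy check: with a fraction of 0.7 on a 10 GB tenured
// space, collection starts once 7 GB is used, leaving a 3 GB window.
public class OccupancySketch {
    static boolean shouldStartCms(long usedBytes, long capacityBytes, double initiatingFraction) {
        return (double) usedBytes / capacityBytes >= initiatingFraction;
    }

    public static void main(String[] args) {
        long gb = 1L << 30;
        System.out.println(shouldStartCms(6 * gb, 10 * gb, 0.7)); // false: below the threshold
        System.out.println(shouldStartCms(8 * gb, 10 * gb, 0.7)); // true: the window has begun
    }
}
```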

Phases

Once the GC is triggered, the CMS algorithm consists of a series of phases run in sequence.

  1. Initial Mark - Pauses all application threads and marks all objects directly reachable from root objects as live. This phase stops the world.
  2. Concurrent Mark - Application threads are restarted. All live objects are transitively marked as reachable by following references from the objects marked in the initial mark.
  3. Concurrent Preclean - This phase looks at objects which have been updated or promoted during the concurrent mark or new objects that have been allocated during the concurrent mark. It updates the mark bit to denote whether these objects are live or dead. This phase may be run repeatedly until there is a specified occupancy ratio in Eden.
  4. Remark - Since some objects may have been updated during the preclean phase, it's still necessary to stop the world in order to process the residual objects. This phase does a retrace from the roots. It also processes reference objects, such as soft and weak references. This phase stops the world.
  5. Concurrent Sweep - This looks through the Ordinary Object Pointer (OOP) Table, which references all objects in the heap, and finds the dead objects. It then re-adds the memory allocated to those objects to its freelist. This is the list of spaces from which an object can be allocated.
  6. Concurrent Reset - Reset all internal data structures in order to be able to run CMS again in future.
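The phase structure above can be summarised as data: two of the six phases stop the world and four run concurrently. A minimal sketch - the enum and its names are invented, not a hotspot API.

```java
// The six CMS phases with their stop-the-world flags, as listed above.
public class CmsPhases {
    enum Phase {
        INITIAL_MARK(true), CONCURRENT_MARK(false), CONCURRENT_PRECLEAN(false),
        REMARK(true), CONCURRENT_SWEEP(false), CONCURRENT_RESET(false);

        final boolean stopsTheWorld;
        Phase(boolean stopsTheWorld) { this.stopsTheWorld = stopsTheWorld; }
    }

    static long stopTheWorldCount() {
        return java.util.Arrays.stream(Phase.values())
                               .filter(p -> p.stopsTheWorld)
                               .count();
    }

    public static void main(String[] args) {
        System.out.println(stopTheWorldCount()); // 2: initial mark and remark
    }
}
```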

Theoretically the objects marked during the preclean phase would get looked at during the next phase - remark - but the remark phase is stop-the-world, so the preclean phase exists to try to reduce remark pauses by doing part of the remark work concurrently. When CMS was originally added to hotspot the preclean phase didn't exist at all. It was added in Java 1.5 in order to address scenarios where a young generation scavenging collection causes a pause and is immediately followed by a remark, which also causes a pause; the two combine to make a more painful pause. This is why remarks are triggered by an occupancy threshold in Eden - the goal is to schedule the remark phase halfway between young gen pauses.

The remark phase pauses the application whilst the preclean doesn't, which means that having precleans reduces the amount of time spent paused in GC.

Concurrent Mode Failures

Sometimes CMS is unable to meet the needs of the application and a stop-the-world Full GC needs to be run. This is called a concurrent mode failure, and usually results in a long pause. A concurrent mode failure happens when there isn't enough space in tenured to promote an object. There are two causes for this:

  • An object is promoted that is too large to fit into any contiguous space in memory.
  • There isn't enough space in tenured to account for the rate of live objects being promoted.

This might happen because the concurrent collection is unable to free space fast enough given the object promotion rates or because the continued use of the CMS collector has resulted in a fragmented heap and there's no individual space large enough to promote an object into. In order to properly 'defrag' the tenured heap space a full GC is required.

Permgen

CMS doesn't collect permgen spaces by default, and requires the -XX:+CMSClassUnloadingEnabled flag in order to do so. If, whilst using CMS, you run out of permgen space without this flag switched on, a full GC will be triggered. Furthermore, permgen can hold references into the normal heap via things like classloaders, which means that until you collect permgen you may be leaking memory in the regular heap. In Java 7, String constants from class files are allocated in the regular heap instead of permgen, which reduces permgen consumption but also adds to the set of object references coming into the regular heap from permgen.

Floating Garbage

At the end of a CMS collection it's possible for some dead objects to not have been deleted - this is called floating garbage. It happens when objects become unreachable after the initial mark. The concurrent preclean and remark phases ensure that all live objects are marked by looking at objects which have been created, mutated or promoted. If an object has become unreachable between the initial mark and the remark phase, then finding it would require a complete retrace of the entire object graph. This is obviously very expensive, and the remark phase must be kept short since it's a pausing phase.

This isn't necessarily a problem for users of CMS since the next run of the CMS collector will clean up this garbage.

Summary

Concurrent Mark and Sweep reduces the pause times observed in the parallel collector by performing some of the GC work at the same time as the application runs. It doesn't entirely remove the pauses, since part of its algorithm needs to pause the application in order to execute.


 

G1: Garbage First

The G1 collector is the latest collector to be implemented in the hotspot JVM. It's been a supported collector ever since Java 7 update 4. It's also been publicly stated by the Oracle GC team that their hope for low pause GC is a fully realised G1. This post follows on from my previous garbage collection blog posts:

  1. Overview of GC in Hotspot.
  2. Parallel Garbage Collectors.
  3. Concurrent Mark Sweep.

The Problem: Large heaps mean Large Pause Times

The Concurrent Mark and Sweep (CMS) collector is the currently recommended low pause collector, but unfortunately its pause times scale with the amount of live objects in its tenured region. This means that whilst it's relatively easy to get short GC pauses with smaller heaps, once you start using heaps in the 10s or 100s of gigabytes the pause times start to ramp up.

CMS also doesn't "defrag" its heap, so at some point you'll get a concurrent mode failure (CMF), triggering a full GC. Once you get into this full GC scenario you can expect a pause in the timeframe of roughly 1 second per gigabyte of live objects. With CMS your 100GB heap can be a 1.5 minute GC pause ticking time bomb waiting to happen...


Good GC tuning can address this problem, but sometimes it just pushes the problem down the road. A concurrent mode failure, and therefore a full GC, is inevitable on a long enough timeline unless you're in the tiny niche of people who deliberately avoid filling their tenured space.

G1 Heap Layout

The G1 Collector tries to separate the pause time of an individual collection from the overall size of the heap by splitting up the heap into different regions. Each region is of a fixed size, between 1MB and 32MB, and the JVM aims to create about 2000 regions in total.


You may recall from previous articles that the other collectors split the heap up into Eden, Survivor Space and Tenured memory pools. G1 retains the same categories of pools but instead of these being contiguous blocks of memory, each region is logically categorised into one of these pools.

There is also another type of region - the humongous region. These are designed to store objects which are bigger in size than most objects - for example a very long array. Any object which is bigger than 50% of the size of a region is stored in a humongous region. They work by taking multiple normal regions which are contiguously located in memory and treating them as a single logical region.
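The two sizing rules above - a region size aiming for roughly 2000 regions, clamped to a power of two between 1MB and 32MB, and the "bigger than half a region" humongous test - can be sketched as arithmetic. This only mimics the shape of G1's heuristic; the exact target count (2048 here) and the method names are my own invention.

```java
// Illustrative G1-style sizing arithmetic: pick a power-of-two region size
// between 1 MB and 32 MB targeting ~2048 regions, and classify humongous
// objects as anything bigger than half a region.
public class G1RegionSketch {
    static final long MB = 1L << 20;

    static long regionSize(long heapBytes) {
        long size = MB;
        // Keep doubling while the next size up would still give >= 2048 regions.
        while (size < 32 * MB && heapBytes / (size * 2) >= 2048) size *= 2;
        return size;
    }

    static boolean isHumongous(long objectBytes, long regionBytes) {
        return objectBytes > regionBytes / 2;
    }

    public static void main(String[] args) {
        long region = regionSize(8192 * MB);             // an 8 GB heap
        System.out.println(region / MB);                 // 4 MB regions, ~2048 of them
        System.out.println(isHumongous(3 * MB, region)); // true: bigger than half a region
    }
}
```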

Remembered Sets

Of course there's little point in splitting the heap into regions if you still have to scan the entire heap to figure out which objects are live. The first step in avoiding this is breaking regions down into 512 byte segments called cards. Each card has a 1 byte entry in the card marking table.

Each region has an associated remembered set or RSet - which is the set of cards that have been written to. A card is in the remembered set if an object from another region stored within the card points to an object within this region.

Whenever the mutator writes to an object reference, a write barrier is used to update the remembered set. Under the hood the remembered set is split up into different collections so that different threads can operate without contention, but conceptually all collections are part of the same remembered set.
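The card arithmetic above is cheap by design: with 512-byte cards, a card index is just an address shifted right by 9, and the write barrier dirties a single byte. A minimal sketch with invented addresses and names, standing in for the real barrier:

```java
// Card-table arithmetic: 512-byte cards, one byte per card, and a write
// barrier that dirties the card covering the written address.
public class CardTableSketch {
    static final int CARD_SHIFT = 9; // 2^9 = 512-byte cards

    static int cardIndex(long address) {
        return (int) (address >>> CARD_SHIFT);
    }

    // The write barrier: record that a reference was written within this card.
    static void dirtyCard(byte[] cardTable, long address) {
        cardTable[cardIndex(address)] = 1;
    }

    static int demoDirtyIndex() {
        byte[] cardTable = new byte[1024];
        dirtyCard(cardTable, 1500); // a field write at toy "address" 1500
        return cardIndex(1500);     // 1500 / 512 = card 2
    }

    public static void main(String[] args) {
        System.out.println(demoDirtyIndex()); // 2
    }
}
```

At collection time, only the dirty cards in a region's remembered set need scanning for incoming references, rather than the whole heap.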

Concurrent Marking

In order to identify which heap objects are live G1 performs a mostly concurrent mark of live objects.

  • Marking Phase - The goal of the marking phase is to figure out which objects within the heap are live. In order to store which objects are live, G1 uses a marking bitmap, which stores a single bit for every 64 bits on the heap. All objects are traced from their roots, marking areas with live objects in the marking bitmap. This is mostly concurrent, but there is an initial marking pause, similar to CMS, where the application is paused and the first level of children from the root objects are traced. After this completes the mutator threads restart. G1 needs to keep an up to date understanding of what is live in the heap, since the heap isn't cleaned up in the same pause as the marking phase.
  • Remarking Phase - The goal of the remarking phase is to bring the information from the marking phase about live objects up to date. The first thing to decide is when to remark: it's triggered by a percentage of the heap being full. This is calculated by taking information from the marking phase and the number of allocations since then, which tells G1 whether it's over the required percentage. G1 uses the aforementioned write barrier to take note of changes to the heap and store them in a series of change buffers. The objects in the change buffers are marked in the marking bitmap concurrently. When the fill percentage is reached, the mutator threads are paused again and the change buffers are processed, marking the objects in them as live.
  • Cleanup Phase - At this point G1 knows which objects are live. Since G1 focusses on regions which have the most free space available, its next step is to work out the free space in a given region by counting the live objects. This is calculated from the marking bitmap, and regions are sorted according to which are most likely to be beneficial to collect. Regions which are to be collected are stored in what's known as a collection set or CSet.

Evacuation

Similar to the approach taken by the hemispheric young generation in the Parallel GC and CMS collectors, dead objects aren't individually collected. Instead, live objects are evacuated from a region and the entire region is then considered free.

G1 is intelligent about how it reclaims space - it doesn't try to evacuate all living objects in a given cycle. It targets the regions which are likely to reclaim as much space as possible and only evacuates those. It works out its target regions by calculating the proportion of live objects within each region and picking the regions with the lowest proportion of live objects.

Objects are evacuated into free regions from multiple other regions, which means that G1 compacts the data when performing GC. The evacuation is performed in parallel by multiple threads. The traditional Parallel GC does this, but CMS doesn't.

Similar to CMS and Parallel GC there is a concept of tenuring, that is to say young objects become 'old' if they survive enough collections; this number is called the tenuring threshold. If a young generational region survives the tenuring threshold and retains enough live objects to avoid being evacuated, then the region is promoted - first to be a survivor region and eventually a tenured region - and it is never evacuated.

Evacuation Failure

Unfortunately, G1 can still encounter a scenario similar to a concurrent mode failure in which it falls back to a stop-the-world full GC. This is called an evacuation failure, and it happens when there aren't any free regions: no free regions means nowhere to evacuate objects to.

Theoretically, evacuation failures are less likely to happen in G1 than concurrent mode failures are in CMS. This is because G1 compacts its regions on the fly, rather than waiting for a failure to force compaction.

Conclusions

Despite the compaction and efforts at low pauses G1 isn't a guaranteed win and any attempt to adopt it should be accompanied by objective and measurable performance targets and GC Log analysis. The methodology required is out of the scope of this blog post, but hopefully I will cover it in a future post.

Algorithmically there are overheads that G1 encounters that other hotspot collectors don't, notably the cost of maintaining remembered sets. Parallel GC is still the recommended throughput collector, and in many circumstances CMS copes better than G1.

It's too early to tell whether G1 will be a big win over the CMS collector, but in some situations it's already providing benefits for developers who use it. Over time we'll see whether the performance limitations of G1 are really G1 limits, or whether the development team just needs more engineering effort to solve the problems that are there.

Thanks to John Oliver, Tim Monks and Martijn Verburg for reviewing drafts of this and previous GC articles.
