Java applications run on the JVM, but what do you know about JVM technology? This article, the first in a series, is an overview of how a classic Java virtual machine works, including the pros and cons of Java's write-once, run-anywhere engine, garbage collection basics, and a sampling of common GC algorithms and compiler optimizations. Later articles will turn to JVM performance optimization, including newer JVM designs that support the performance and scalability of today's highly concurrent Java applications.
If you are a programmer, then you have undoubtedly experienced that special feeling when a light goes on in your thought process, when those neurons finally make a connection, and your previous thought pattern opens up to a new perspective. I personally love that feeling of learning something new. I've had those moments many times in my work with Java virtual machine (JVM) technologies, particularly relating to garbage collection and JVM performance optimization. In this new JavaWorld series I hope to share some of that illumination with you. Hopefully you'll be as excited to learn about JVM performance as I am to write about it!
This series is written for any Java developer interested in learning more about the underlying layers of the JVM and what a JVM really does. At a high level, I will discuss garbage collection and the never-ending quest to free memory safely and quickly without impacting running applications. You'll learn about the key components of a JVM: garbage collection and GC algorithms, compiler flavors, and some common optimizations. I will also discuss why Java benchmarking is so difficult and offer tips to consider when measuring performance. Finally, I'll touch on some of the newer innovations in JVM and GC technology, including highlights from Azul's Zing JVM, IBM JVM, and Oracle's Garbage First (G1) garbage collector.
I hope you'll walk away from this series with a greater understanding of the factors that limit Java scalability today, as well as how those limitations force us to architect our Java deployments in a non-optimal way. Hopefully, you'll experience some aha! moments and be inspired to do something good for Java: stop accepting the limitations and work for change! If you're not already an open source contributor, perhaps this series will encourage you in that direction.
JVM performance optimization: Read the series
- Part 1: Overview
- Part 2: Compilers
- Part 3: Garbage collection
- Part 4: Concurrently compacting GC
- Part 5: Scalability
JVM performance and the 'one for all' challenge
I have news for people who are stuck with the idea that the Java platform is inherently slow. The belief that the JVM is to blame for poor Java performance is decades old -- it started when Java was first being used for enterprise applications, and it's outdated! It is true that if you compare the results of running simple static and deterministic tasks on different development platforms, you will most likely see better execution using machine-optimized code over using any virtualized environment, including a JVM. But Java performance has taken major leaps forward over the past 10 years. Market demand and growth in the Java industry have resulted in a handful of garbage-collection algorithms and new compilation innovations, and plenty of heuristics and optimizations have emerged as JVM technology has progressed. I'll introduce some of them later in this series.
The beauty of JVM technology is also its biggest challenge: nothing can be assumed with a "write once, run anywhere" application. Rather than optimizing for one use case, one application, and one specific user load, the JVM constantly tracks what is going on in a Java application and dynamically optimizes accordingly. This dynamic runtime leads to a dynamic problem set. Developers working on the JVM can't rely on static compilation and predictable allocation rates when designing innovations, at least not if we want performance in production environments!
A career in JVM performance
Early in my career I realized that garbage collection is hard to "solve," and I've been fascinated with JVMs and middleware technology ever since. My passion for JVMs started when I worked on the JRockit team, coding a novel approach to a self-learning, self-tuning garbage collection algorithm (see Resources). That project, which turned into an experimental feature of JRockit and laid the groundwork for the Deterministic Garbage Collection algorithm, started my journey through JVM technology. I've worked for BEA Systems, partnered with Intel and Sun, and was briefly employed by Oracle following its acquisition of BEA Systems. I later joined the team at Azul Systems to manage the Zing JVM, and today I work for Cloudera.
Machine-optimized code might deliver better performance, but it comes at the cost of inflexibility, which is not a workable trade-off for enterprise applications with dynamic loads and rapid feature changes. Most enterprises are willing to sacrifice the narrowly perfect performance of machine-optimized code for the benefits of Java:
- Ease of coding and feature development (meaning, faster time to market)
- Access to knowledgeable programmers
- Rapid development using Java APIs and standard libraries
- Portability -- no need to rewrite a Java application for every new platform
From Java code to bytecode
As a Java programmer, you are probably familiar with coding, compiling, and executing Java applications. For the sake of example, let's assume that you have a program, MyApp.java, and you want to run it. To execute this program you need to first compile it with javac, the JDK's built-in static Java language-to-bytecode compiler. Based on the Java code, javac generates the corresponding executable bytecode and saves it into a same-named class file: MyApp.class. After compiling the Java code into bytecode, you are ready to run your application by launching the executable class file with the java command from your command line or startup script, with or without startup options. The class is loaded into the runtime (meaning the running Java virtual machine) and your program starts executing.
That's what happens on the surface of an everyday application execution scenario, but now let's explore what really happens when you call that java command. What is this thing called a Java virtual machine? Most developers have interacted with a JVM through the continuous process of tuning -- that is, selecting startup options and assigning values to them to make your Java program run faster, while deftly avoiding the infamous JVM "out of memory" error. But have you ever wondered why we need a JVM to run Java applications in the first place?
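As a small, concrete example of such tuning, the standard -Xms and -Xmx startup options set the initial and maximum heap size (the class name carries over from the hypothetical MyApp example above):

```
java -Xms256m -Xmx1024m MyApp
```

If the application's live data outgrows that 1,024MB ceiling, you will meet the "out of memory" error just mentioned.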
What is a Java virtual machine?
Simply speaking, a JVM is the software module that executes Java application bytecode and translates the bytecode into hardware- and operating system-specific instructions. By doing so, the JVM enables Java programs to be executed in different environments from where they were first written, without requiring any changes to the original application code. Java's portability is key to its popularity as an enterprise application language: developers don't have to rewrite application code for every platform because the JVM handles the translation and platform optimization.
A JVM also takes care of dynamic resource management for running Java applications. This means it handles allocating and de-allocating memory, maintaining a consistent thread model on each platform, and organizing the executable instructions in a way that is suited for the CPU architecture where the application is executed. The JVM frees the programmer from keeping track of references between objects and knowing how long they should be kept in the system. It also frees us from having to decide exactly when to issue explicit instructions to free up memory -- an acknowledged pain point of non-dynamic programming languages like C.
You could think about the JVM as a specialized operating system for Java; its job is to manage the runtime environment for Java applications. A JVM basically is a virtual execution environment acting as a machine for bytecode instructions, while assigning execution tasks and performing memory operations through interaction with underlying layers.
JVM components overview
There's a lot more to write about JVM internals and performance optimization. As foundation for upcoming articles in this series, I'll conclude with an overview of JVM components. This brief tour will be especially helpful for developers new to the JVM, and should prime your appetite for more in-depth discussions later in the series.
From one language to another -- about Java compilers
A compiler takes one language as input and produces an executable language as output. A Java compiler has two main tasks:
- Enable the Java language to be more portable, not tied to any specific platform when first written
- Ensure that the outcome is efficient execution code for the intended target execution platform
Compilers are either static or dynamic. An example of a static compiler is javac. It takes Java code as input and translates it into bytecode -- a language that is executable by the Java virtual machine. Static compilers interpret the input code once, and the output executable is in the form that will be used when the program executes. Because the input is static, you will always see the same outcome. Only when you make changes to your original source and recompile will you see a different result.
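You can inspect that static output yourself: the JDK ships a disassembler, javap, that prints the bytecode inside a class file. Using the hypothetical MyApp class from earlier:

```
javap -c MyApp
```

Run it today or after a hundred recompiles of unchanged source and the listing stays identical; the output changes only when you edit the source and rerun javac.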
Dynamic compilers, such as Just-In-Time (JIT) compilers, perform the translation from one language to another dynamically, meaning they do it as the code is executed. A JIT compiler lets you collect or create runtime profiling data (by means of inserting performance counters) and make compiler decisions on the fly, using the environment data at hand. Dynamic compilation makes it possible to better sequence instructions in the compiled-to language, replace a set of instructions with more efficient sets, or even eliminate redundant operations. Over time you can collect more code-profiling data and make additional and better compilation decisions; altogether this is usually referred to as code optimization and recompilation.
Dynamic compilation gives you the advantage of being able to adapt to dynamic changes in behavior or application load over time that drive the need for new optimizations. This is why dynamic compilers are very well suited to Java runtimes. The catch is that dynamic compilers can require extra data structures, thread resources, and CPU cycles for profiling and optimization. For more advanced optimizations you'll need even more resources. In most environments, however, the overhead is very small for the execution performance improvement gained -- five or 10 times better performance than what you would get from pure interpretation (meaning, executing the bytecode as-is, without modification).
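If you want to watch dynamic compilation happen, HotSpot's -XX:+PrintCompilation flag logs each method as the JIT compiles it. The class below is a hypothetical experiment, not anything from the JDK; run it with that flag and the hot square method should eventually show up in the compilation log.

```java
// HotLoop.java -- a hypothetical experiment for observing JIT compilation.
// Run with: java -XX:+PrintCompilation HotLoop
// After enough invocations, the JVM should decide square() is hot and
// compile it from bytecode to optimized machine code at runtime.
public class HotLoop {
    static long square(long x) {
        return x * x;  // trivial on purpose; it's the call frequency that matters
    }

    public static void main(String[] args) {
        long sum = 0;
        for (long i = 0; i < 10_000_000L; i++) {
            sum += square(i);  // millions of calls make this method "hot"
        }
        // Print the result so the whole loop can't be eliminated as dead code.
        System.out.println(sum);
    }
}
```

This is exactly the adaptive behavior described above: the same bytecode starts out interpreted and is recompiled into faster forms as the runtime learns more about how it is used.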
Allocation leads to garbage collection
Allocation is done on a per-thread basis in each "Java process dedicated memory address space," also known as the Java heap, or heap for short. Single-threaded allocation is common in the client-side application world of Java. Single-threaded allocation quickly becomes suboptimal on the enterprise application and workload-serving side, however, because it doesn't take advantage of the parallelism in modern multicore environments.
Parallel application design also forces the JVM to ensure that multiple threads do not allocate the same address space at the same time. You could control this by putting a lock on the entire allocation space. But this technique (a so-called heap lock) comes at a cost, as holding or queuing threads can cause a performance hit to resource utilization and application performance. A plus side of multicore systems is that they've created a demand for various new approaches to resource allocation in order to prevent the bottleneck of single-threaded, serialized allocation.
A common approach is to divide the heap into several partitions, where each partition is of a "decent size" for the application -- obviously something that would need tuning, as allocation rate and object sizes vary significantly for different applications, as well as by number of threads. A Thread Local Allocation Buffer (TLAB), or sometimes Thread Local Area (TLA), is a dedicated partition that a thread allocates freely within, without having to claim a full heap lock. Once the area is full, the thread is assigned a new area until the heap runs out of areas to dedicate. When there's not enough space left to allocate, the heap is "full," meaning the empty space on the heap is not large enough for the object that needs to be allocated. When the heap is full, garbage collection kicks in.
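To illustrate why thread-local buffers are cheap, here is a minimal sketch -- hypothetical names, not real JVM code -- of the bump-pointer allocation a thread performs inside its own buffer. Note that the fast path is just advancing an index; no heap lock is taken:

```java
// A sketch of bump-pointer allocation inside a thread-local buffer.
// All names are hypothetical; real TLAB code lives inside the JVM itself.
class ThreadLocalAllocationBuffer {
    private final int size;  // buffer capacity in bytes
    private int top;         // next free offset within the buffer

    ThreadLocalAllocationBuffer(int size) {
        this.size = size;
    }

    /** Returns the start offset for the new object, or -1 if the buffer is full. */
    int allocate(int objectSize) {
        if (top + objectSize > size) {
            return -1;  // exhausted: the thread must request a fresh buffer
        }
        int objectStart = top;
        top += objectSize;  // "bump the pointer" -- the entire fast path
        return objectStart;
    }
}
```

Because each thread owns its buffer exclusively, this path needs no synchronization at all; the heap lock is only touched when a thread asks for a new buffer.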
Fragmentation
A catch with the use of TLABs is the risk of inducing memory inefficiency by fragmenting the heap. If an application happens to allocate object sizes that do not add up to or fully allocate a TLAB size, there is a risk that a tiny empty space too small to host a new object will be left. This leftover space is referred to as a "fragment." If the application also happens to keep references to objects that are allocated next to these leftover spaces, then the space could remain unused for a long time.
Fragmentation is what happens when fragments are scattered over the heap -- wasting heap space with tiny pieces of unused memory. Configuring the "wrong" TLAB size for your application's allocation behavior (with regard to object sizes, the mix of object sizes, and the reference-holding rate) is one path to an increasingly fragmented heap. As the application continues to run, these wasted fragments come to make up an increasing portion of the free memory on the heap. Fragmentation causes decreased performance as the system is unable to allocate enough space for new application threads and objects. The garbage collector subsequently works harder to prevent out-of-memory exceptions.
TLAB waste can be worked around. One way to temporarily or completely avoid fragmentation is to tune TLAB size on a per-application basis. This approach typically requires re-tuning as soon as your application's allocation behavior changes. It is also possible to use sophisticated JVM algorithms and other approaches to organize heap partitions for more efficient memory allocation. For instance, a JVM could implement free-lists, which are linked lists of free memory chunks of specific sizes. A consecutive free chunk of memory is linked into a list of other chunks of a similar size, creating a handful of lists, each with its own size range. In some cases using free-lists leads to a better-fit memory allocation approach. Threads allocating an object of a certain size can allocate it from a chunk close to that size, generating potentially less fragmentation than if you relied on fixed-size TLABs alone.
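As a rough illustration of the free-list idea -- a sketch, not any particular JVM's implementation -- the code below bins freed chunks by size class and serves each allocation from the closest-fitting bin. The size classes and all names are assumptions made for the example:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A sketch of size-segregated free-lists. Chunk "addresses" are plain ints
// here; a real allocator would manage actual memory ranges.
class FreeListAllocator {
    private static final int[] SIZE_CLASSES = {16, 32, 64, 128, 256};
    private final Deque<Integer>[] lists;  // one list of chunk addresses per size class

    @SuppressWarnings("unchecked")
    FreeListAllocator() {
        lists = new Deque[SIZE_CLASSES.length];
        for (int i = 0; i < lists.length; i++) {
            lists[i] = new ArrayDeque<>();
        }
    }

    /** Returns a free chunk that fits the request, or -1 if none is available. */
    int allocate(int requestedSize) {
        for (int i = classFor(requestedSize); i < lists.length; i++) {
            if (!lists[i].isEmpty()) {
                return lists[i].pop();  // closest-fitting bin with a free chunk
            }
        }
        return -1;  // nothing fits: fall back to fresh allocation or trigger GC
    }

    void free(int address, int chunkSize) {
        lists[classFor(chunkSize)].push(address);  // return the chunk to its bin
    }

    private static int classFor(int size) {
        for (int i = 0; i < SIZE_CLASSES.length; i++) {
            if (size <= SIZE_CLASSES[i]) {
                return i;
            }
        }
        throw new IllegalArgumentException("size too large for any class: " + size);
    }
}
```

Because a 20-byte request is served from the 32-byte bin rather than from an arbitrary remainder, the leftover space per allocation stays small, which is precisely the fragmentation benefit described above.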
GC trivia
Some early garbage collectors had multiple old generations, but it emerged that more than two old generation spaces resulted in more overhead than value.
Another way to optimize allocation versus fragmentation is to create a so-called young generation, which is a dedicated heap area (i.e., an address space) where you allocate all new objects. The rest of the heap becomes the so-called old generation. The old generation is left for the allocation of longer-lived objects, meaning objects that have survived garbage collection or large objects that are assumed to live for a very long time. In order to better understand this approach to allocation we need to talk a bit about garbage collection.
Garbage collection and application performance
Garbage collection is the operation performed by the JVM's garbage collector to free up occupied heap memory that is no longer referenced. When a garbage collection is first triggered, all objects that are still referenced are kept, and the space occupied by previously referenced objects is freed or reclaimed. When all reclaimable memory has been collected, the space is up for grabs and ready to be allocated again by new objects.
A garbage collector should never reclaim a referenced object; doing so would break the JVM specification. An exception to this rule is a soft or weak reference (if defined as such), which could be collected if the garbage collector were approaching a state of running out of memory. I strongly recommend that you try to avoid weak references as much as possible, however, because ambiguity in the Java specification has led to misinterpretation and error in their use. And besides, Java is designed for dynamic memory management, so you shouldn't have to think about where and when memory should be released.
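The weak-reference behavior is easy to observe first-hand. Below is a short demonstration using java.lang.ref.WeakReference; note that System.gc() is only a hint to the JVM, so whether the second print actually shows null is implementation-dependent:

```java
import java.lang.ref.WeakReference;

// Demonstrates that a weakly referenced object may be reclaimed once no
// strong reference to it remains.
public class WeakRefDemo {
    public static void main(String[] args) {
        Object strongRef = new Object();
        WeakReference<Object> weakRef = new WeakReference<>(strongRef);

        System.out.println(weakRef.get());  // non-null: strongRef keeps the object alive

        strongRef = null;  // drop the only strong reference
        System.gc();       // request a collection -- a hint, not a guarantee

        System.out.println(weakRef.get());  // typically null once the object is collected
    }
}
```

The ambiguity warned about above lives in exactly this gap: the programmer cannot control when, or even whether, the collector clears the reference.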
The challenge for a garbage collector is to reclaim memory without impacting running applications more than necessary. If you don't garbage-collect enough, your application will run out of memory; if you collect too frequently you'll lose throughput and response time, which will negatively impact running applications.
GC algorithms
There are many different garbage collection algorithms. Later in this series we will take a deep dive into a few. At the highest level, the two main approaches to garbage collection are reference counting and tracing collectors.
- Reference counting collectors keep track of how many references an object has pointing to it. Once the count for an object becomes zero, the memory can immediately be reclaimed, which is one of the advantages of this approach. The difficulties with a reference-counting approach are circular structures and keeping all the reference counts up to date.
- Tracing collectors mark each object that is still referenced, iteratively following and marking all objects referenced by already-marked objects. Once all still-referenced objects are marked "live," all non-marked space can be reclaimed. This approach handles circular structures, but in most cases the collector has to wait until marking is complete before it can reclaim the unreferenced memory. (A minimal sketch of the mark phase follows this list.)
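Here is that mark-phase sketch: a small, hypothetical object-graph model, not JVM internals. It shows why tracing handles circular structures naturally -- each object is marked at most once, so a cycle can't cause endless re-visiting:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A sketch of a tracing collector's mark phase over a hypothetical heap model.
class MarkPhase {
    static class HeapObject {
        final List<HeapObject> references;  // outgoing references to other objects
        HeapObject(List<HeapObject> references) { this.references = references; }
    }

    /** Returns the set of live objects; anything not in it is reclaimable. */
    static Set<HeapObject> mark(List<HeapObject> roots) {
        Set<HeapObject> marked = new HashSet<>();
        Deque<HeapObject> worklist = new ArrayDeque<>(roots);
        while (!worklist.isEmpty()) {
            HeapObject obj = worklist.pop();
            if (marked.add(obj)) {                // first visit: mark as live
                worklist.addAll(obj.references);  // trace its outgoing references
            }
        }
        return marked;
    }
}
```

The add() check is the key detail: an already-marked object is never traced again, which is what defeats circular reference chains.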
There are different ways to implement the above approaches. The better-known algorithms are marking or copying algorithms and parallel or concurrent algorithms. I'll talk about these in detail later in the series.
Generational garbage collection means dedicating separate address spaces on the heap for new objects and older ones. By "older objects" I mean objects that have survived a number of garbage collections. Having a young generation for new allocations and an old generation for surviving objects reduces fragmentation by quickly reclaiming memory occupied by short-lived objects, and by moving long-lived objects closer together as they are promoted to the old generation address space. All of this reduces the risk of fragments between long-lived objects and protects the heap from fragmentation. A positive side effect of having a young generation is also that it delays the need for the more costly collection of the old generation, as you are constantly reusing the same space for short-lived objects. (Old-space collection is more costly because the long-lived objects that live there contain more references to be traversed.)
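A minimal sketch of that promotion mechanic follows. Everything here is hypothetical -- real collectors determine reachability by tracing rather than reading a boolean field, and a production JVM's promotion logic is far more involved:

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of generational promotion: new objects start in the young
// generation; objects that survive enough young collections move to the old.
class GenerationalHeapSketch {
    static final int PROMOTION_AGE = 3;  // hypothetical survival threshold

    static class Obj {
        int age = 0;              // number of young collections survived
        boolean reachable = true; // stand-in for the result of tracing
    }

    final List<Obj> youngGen = new ArrayList<>();
    final List<Obj> oldGen = new ArrayList<>();

    void allocate(Obj obj) {
        youngGen.add(obj);  // all new objects are allocated young
    }

    /** One young-generation collection: drop the dead, age or promote survivors. */
    void youngCollection() {
        List<Obj> survivors = new ArrayList<>();
        for (Obj obj : youngGen) {
            if (!obj.reachable) {
                continue;                    // dead: its space is reclaimed cheaply
            }
            if (++obj.age >= PROMOTION_AGE) {
                oldGen.add(obj);             // long-lived: promote to the old generation
            } else {
                survivors.add(obj);          // still young: keep for the next round
            }
        }
        youngGen.clear();
        youngGen.addAll(survivors);          // young space is reused for new allocation
    }
}
```

Since most objects die young, each youngCollection() typically empties most of the young space, which is why the old generation fills -- and needs its costlier collection -- only slowly.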
A final algorithm improvement worth mentioning is compaction, which is a way to manage heap fragmentation. Compaction basically means moving objects together to free up larger consecutive chunks of memory. If you are familiar with disk fragmentation and the tools for handling it then you will find that compaction is similar, but works on Java heap memory. I'll discuss compaction in more detail later in this series.
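As a tiny illustration -- the heap here is modeled as an int array where 0 marks a free cell, a deliberate simplification -- the sketch below slides live cells toward one end so that free space becomes one consecutive chunk. A real collector must also update every reference to each moved object, which is a large part of compaction's cost:

```java
// A sketch of sliding compaction over a toy heap model.
class CompactionSketch {
    /** Slides non-zero (live) cells left; returns the index where free space begins. */
    static int compact(int[] heap) {
        int nextFree = 0;
        for (int i = 0; i < heap.length; i++) {
            if (heap[i] != 0) {
                heap[nextFree++] = heap[i];  // move the live cell down
            }
        }
        for (int i = nextFree; i < heap.length; i++) {
            heap[i] = 0;  // everything above the live data is now one free region
        }
        return nextFree;  // allocation can resume from a single contiguous chunk
    }
}
```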
In conclusion: Reflection points and highlights
A JVM enables portability ("write once, run anywhere") and dynamic memory management, both key features of the Java platform and reasons for its popularity and productivity.
In this first article in the JVM performance optimization series I've explained how a compiler translates bytecode to target-platform instruction languages and helps optimize the execution of your Java program dynamically. There are different compilers for different application needs.
I've also briefly discussed memory allocation and garbage collection, and how both relate to Java application performance. Basically, the higher the allocation rate of a Java application, the faster your heap fills up and the more frequently garbage collection is triggered. The challenge of garbage collection is to reclaim enough memory for your application needs without impacting running applications more than necessary, but to do so before the application runs out of memory. In future articles we'll explore the details of both traditional and more novel approaches to garbage collection for JVM performance optimization.