Java NIO: What Is the Difference Between a Direct Buffer and a Heap Buffer?
Personal Summary
Direct Buffer vs. Heap Buffer
1. Downside: creating and releasing a Direct Buffer costs more than a Heap Buffer.
2. Difference: a Direct Buffer is not allocated on the heap and is not managed directly by the GC (the Java object that wraps it is GC-managed, though; only after the GC reclaims that Java object does the operating system release the memory the Direct Buffer occupied). It behaves much like a "buffer in kernel". A Heap Buffer, by contrast, is allocated on the heap; you can simply think of it as a wrapper around a byte[] array, and a look at the JDK source confirms that this is indeed how it is implemented.
3. Upside: when we write a Direct Buffer to a Channel, it is as if the contents of the "kernel buffer" were written to the Channel directly, which is clearly faster because it saves a data copy (ordinary read/write calls all pass through a "kernel buffer" that sits between the I/O device and application space). When we write a Heap Buffer to a Channel, the underlying implementation first builds a temporary Direct Buffer, copies the Heap Buffer's contents into it, and then writes that Direct Buffer out. If we call write repeatedly with a Heap Buffer, the implementation can reuse the temporary Direct Buffer, so performance does not suffer from constantly creating and destroying Direct Buffers.
In short, there are three points to remember (a minimal allocation sketch follows this list):
(1) Ordinary read/write always passes through a "kernel buffer" that sits between the I/O device and application space.
(2) A Direct Buffer acts like a cache layered on that "kernel buffer" and is not managed directly by the GC, whereas a Heap Buffer is merely a wrapper around a byte[] array. Writing a Direct Buffer to a Channel is therefore faster than writing a Heap Buffer to a Channel.
(3) Creating and destroying a Direct Buffer is expensive, so use Direct Buffers where they can be reused as much as possible.
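To illustrate the two allocation paths, here is a minimal, self-contained sketch (the file name and buffer sizes are arbitrary, chosen only for the example): ByteBuffer.allocate produces a heap buffer backed by a byte[], while ByteBuffer.allocateDirect produces an off-heap buffer; either can be handed to a channel.

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class BufferAllocationDemo {
    public static void main(String[] args) throws Exception {
        // Heap buffer: backed by a byte[] on the JVM heap.
        ByteBuffer heap = ByteBuffer.allocate(4096);
        System.out.println("heap.isDirect()   = " + heap.isDirect());   // false
        System.out.println("heap.hasArray()   = " + heap.hasArray());   // true: exposes its byte[]

        // Direct buffer: memory allocated outside the JVM heap.
        ByteBuffer direct = ByteBuffer.allocateDirect(4096);
        System.out.println("direct.isDirect() = " + direct.isDirect()); // true
        System.out.println("direct.hasArray() = " + direct.hasArray()); // false on typical JVMs

        // Either kind can be written to a channel; with a heap buffer the JDK may
        // first copy the bytes into a temporary direct buffer before the write.
        try (FileChannel ch = FileChannel.open(Paths.get("demo.txt"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            direct.put("hello direct buffer".getBytes(StandardCharsets.UTF_8));
            direct.flip();               // switch from filling the buffer to draining it
            ch.write(direct);
        }
    }
}
```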
Reference:
http://stackoverflow.com/questions/5670862/bytebuffer-allocate-vs-bytebuffer-allocatedirect
Operating systems perform I/O operations on memory areas. These memory areas, as far as the operating system is concerned, are contiguous sequences of bytes. It's no surprise then that only byte buffers are eligible to participate in I/O operations. Also recall that the operating system will directly access the address space of the process, in this case the JVM process, to transfer the data. This means that memory areas that are targets of I/O operations must be contiguous sequences of bytes. In the JVM, an array of bytes may not be stored contiguously in memory, or the Garbage Collector could move it at any time. Arrays are objects in Java, and the way data is stored inside that object could vary from one JVM implementation to another.
For this reason, the notion of a direct buffer was introduced.
Direct buffers are intended for interaction with channels and native I/O routines. They make a best effort to store the byte elements in a memory area that a channel can use for direct, or raw, access by using native code to tell the operating system to drain or fill the memory area directly.
Direct byte buffers are usually the best choice for I/O operations. By design, they support the most efficient I/O mechanism available to the JVM. Nondirect byte buffers can be passed to channels, but doing so may incur a performance penalty. It's usually not possible for a nondirect buffer to be the target of a native I/O operation. If you pass a nondirect ByteBuffer object to a channel for write, the channel may implicitly do the following on each call:
1. Create a temporary direct ByteBuffer object.
2. Copy the content of the nondirect buffer to the temporary buffer.
3. Perform the low-level I/O operation using the temporary buffer.
4. The temporary buffer object goes out of scope and is eventually garbage collected.
This can potentially result in buffer copying and object churn on every I/O, which are exactly the sorts of things we'd like to avoid. However, depending on the implementation, things may not be this bad. The runtime will likely cache and reuse direct buffers or perform other clever tricks to boost throughput.
If you're simply creating a buffer for one-time use, the difference is not significant.
If we construct a ByteBuffer that is used only once and never reused, there is no significant difference between a Direct Buffer and a Heap Buffer. There are two situations where a Direct Buffer can improve performance (both are sketched in the code after this list):
1. Large files: even if the Direct Buffer is used only once, copying a very large Heap Buffer is expensive, so a Direct Buffer helps. This is why, when serving a large file download, besides the sendfile mechanism a server can also use memory mapping: map the file into memory as a MappedByteBuffer (which is a kind of Direct Buffer) and write that MappedByteBuffer directly to the SocketChannel. This avoids a data copy and improves performance.
2. Repeatedly used data, such as fixed HTTP error responses like 404, which are identical for every request and response. We can pre-store this fixed content in a Direct Buffer (partially modifying the contents of the Direct Buffer is fine too; what matters is that the Direct Buffer can be reused), and writing that Direct Buffer directly to the SocketChannel is then faster than writing a Heap Buffer.
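To make the two cases concrete, here is a hedged sketch (class and method names are invented for illustration; error handling is omitted): case 1 maps a large file with FileChannel.map and writes the resulting MappedByteBuffer to a SocketChannel; case 2 pre-encodes a fixed 404 response into a direct buffer once and reuses it via duplicate().

```java
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DirectBufferUseCases {

    // Case 1: large file -> map it into memory (MappedByteBuffer is a direct buffer)
    // and write the mapping straight to the socket, avoiding a heap copy.
    static void sendLargeFile(SocketChannel socket, String path) throws Exception {
        try (FileChannel file = FileChannel.open(Paths.get(path), StandardOpenOption.READ)) {
            MappedByteBuffer mapped = file.map(FileChannel.MapMode.READ_ONLY, 0, file.size());
            while (mapped.hasRemaining()) {
                socket.write(mapped);
            }
        }
    }

    // Case 2: fixed, repeated data (a canned 404 response) encoded once into a
    // direct buffer; each request only needs duplicate() + write.
    private static final ByteBuffer NOT_FOUND;
    static {
        byte[] bytes = "HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n"
                .getBytes(StandardCharsets.US_ASCII);
        NOT_FOUND = ByteBuffer.allocateDirect(bytes.length);
        NOT_FOUND.put(bytes).flip();
    }

    static void send404(SocketChannel socket) throws Exception {
        // duplicate() shares the same off-heap memory but has its own position/limit,
        // so the cached buffer can be reused across requests.
        ByteBuffer view = NOT_FOUND.duplicate();
        while (view.hasRemaining()) {
            socket.write(view);
        }
    }
}
```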
On the other hand, if you will be using the buffer repeatedly in a high-performance scenario, you're better off allocating direct buffers and reusing them.
Direct buffers are optimal for I/O, but they may be more expensive to create than nondirect byte buffers.
The memory used by direct buffers is allocated by calling through to native, operating system-specific code, bypassing the standard JVM heap. Setting up and tearing down direct buffers could be significantly more expensive than heap-resident buffers, depending on the host operating system and JVM implementation. The memory-storage areas of direct buffers are not subject to garbage collection because they are outside the standard JVM heap.
The performance tradeoffs of using direct versus nondirect buffers can vary widely by JVM, operating system, and code design. By allocating memory outside the heap, you may subject your application to additional forces of which the JVM is unaware. When bringing additional moving parts into play, make sure that you're achieving the desired effect. I recommend the old software maxim: first make it work, then make it fast. Don't worry too much about optimization up front; concentrate first on correctness. The JVM implementation may be able to perform buffer caching or other optimizations that will give you the performance you need without a lot of unnecessary effort on your part.
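As a rough way to observe the allocation-cost difference described above, the sketch below times repeated allocations of each kind. It is not a proper benchmark (no warm-up, the JIT may interfere, and numbers vary widely by JVM and OS); it only illustrates where the extra cost of allocateDirect comes from.

```java
import java.nio.ByteBuffer;

public class AllocationCostSketch {
    public static void main(String[] args) {
        final int rounds = 50_000;
        final int size = 1024;

        long t0 = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            ByteBuffer.allocate(size);        // heap buffer: a byte[] plus a small wrapper object
        }
        long heapNanos = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            ByteBuffer.allocateDirect(size);  // direct buffer: native allocation outside the heap
        }
        long directNanos = System.nanoTime() - t1;

        System.out.printf("heap:   %d ms%n", heapNanos / 1_000_000);
        System.out.printf("direct: %d ms%n", directNanos / 1_000_000);
    }
}
```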