This is an excellent blog post by Netty founder Trustin Lee, published on the Twitter engineering blog; I am reposting it directly.
The following text is from Twitter:
At Twitter, Netty (@netty_project) is used in core places requiring networking functionality.
For example:
- Finagle is our protocol-agnostic RPC system whose transport layer is built on top of Netty; it is used to implement most services internally, like Search
- TFE (Twitter Front End) is our proprietary spoon-feeding reverse proxy which serves most of our public-facing HTTP and SPDY traffic using Netty
- Cloudhopper sends billions of SMS messages every month to hundreds of mobile carriers all around the world using Netty
For those who aren’t aware, Netty is an open source Java NIO framework that makes it easier to create high-performing protocol servers. The previous version, Netty 3, used Java objects to represent I/O events. This was simple, but could generate a lot of garbage, especially at our scale. In the new Netty 4 release, changes were made so that instead of short-lived event objects, methods on long-lived channel objects are used to handle I/O events. There is also a specialized buffer allocator that uses pools.
We take the performance, usability, and sustainability of the Netty project seriously, and we have been working closely with the Netty community to improve it in all aspects. In particular, we will discuss our usage of Netty 3 and will aim to show why migrating to Netty 4 has made us more efficient.
Reducing GC pressure and memory bandwidth consumption
One problem was Netty 3’s reliance on the JVM’s memory management for buffer allocations. Netty 3 creates a new heap buffer whenever a new message is received or a user sends a message to a remote peer. This means a ‘new byte[capacity]’ for each new buffer. These buffers caused GC pressure and consumed memory bandwidth: allocating a new byte array consumes memory bandwidth because the JVM must fill the array with zeros for safety. Yet the zero-filled array is very likely to be overwritten with the actual data right away, consuming the same amount of memory bandwidth again. We could have cut memory-bandwidth consumption by 50% if the Java Virtual Machine (JVM) provided a way to create a new byte array that is not necessarily filled with zeros, but there’s no such way at this moment.
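The double pass over memory can be illustrated in plain Java (this is an illustration of the JVM behavior described above, not Netty code): a fresh array is guaranteed to be all zeros, and writing the actual payload then touches the same bytes a second time.

```java
// Illustration of the zero-fill cost: every 'new byte[n]' is zeroed by
// the JVM before use, so filling it with real data is a second pass
// over the same memory.
class ZeroFill {
    // The Java Language Specification guarantees a fresh array is all zeros.
    static byte[] freshBuffer(int capacity) {
        return new byte[capacity];
    }

    static boolean isAllZero(byte[] a) {
        for (byte b : a) {
            if (b != 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] buf = freshBuffer(1024);
        System.out.println(isAllZero(buf)); // prints true: first pass over the memory
        java.util.Arrays.fill(buf, (byte) 0x7f); // payload write: second pass
        System.out.println(isAllZero(buf)); // prints false
    }
}
```

A pooled allocator avoids the first pass by handing out previously used arrays without re-zeroing them.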
To address this issue, we made the following changes for Netty 4.
Removal of event objects
Instead of creating event objects, Netty 4 defines different methods for different event types. In Netty 3, the ChannelHandler has a single method that handles all event objects:
    class Before implements ChannelUpstreamHandler {
        public void handleUpstream(ChannelHandlerContext ctx, ChannelEvent e) {
            if (e instanceof MessageEvent) { ... }
            else if (e instanceof ChannelStateEvent) { ... }
            ...
        }
    }
Netty 4 has as many handler methods as the number of event types:
    class After implements ChannelInboundHandler {
        public void channelActive(ChannelHandlerContext ctx) { ... }
        public void channelInactive(ChannelHandlerContext ctx) { ... }
        public void channelRead(ChannelHandlerContext ctx, Object msg) { ... }
        public void userEventTriggered(ChannelHandlerContext ctx, Object evt) { ... }
        ...
    }
Note that a handler now has a method called ‘userEventTriggered’, so it does not lose the ability to handle custom event objects.
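A simplified model of this handler shape (hypothetical interfaces, not the real Netty API) shows how typed methods for common I/O events coexist with a catch-all for custom events:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the Netty 4 handler style: typed callbacks for
// ordinary I/O, plus userEventTriggered for application-defined events.
// These interfaces are illustrative, not Netty's.
class UserEvents {
    interface InboundHandler {
        void channelRead(byte[] msg);            // payload passed directly, no event object
        void userEventTriggered(Object evt);     // custom events still possible
    }

    // Hypothetical application-defined event, e.g. an idle-timeout marker.
    static class IdleEvent {
        final long idleMillis;
        IdleEvent(long idleMillis) { this.idleMillis = idleMillis; }
    }

    static class LoggingHandler implements InboundHandler {
        final List<String> log = new ArrayList<>();
        public void channelRead(byte[] msg) { log.add("read:" + msg.length); }
        public void userEventTriggered(Object evt) {
            if (evt instanceof IdleEvent) {
                log.add("idle:" + ((IdleEvent) evt).idleMillis);
            }
        }
    }

    public static void main(String[] args) {
        LoggingHandler h = new LoggingHandler();
        h.channelRead(new byte[256]);
        h.userEventTriggered(new IdleEvent(5000));
        System.out.println(h.log); // prints [read:256, idle:5000]
    }
}
```

The instanceof branching does not disappear entirely, but it is now confined to the rare custom events instead of being on the hot path of every read.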
Buffer pooling
Netty 4 also introduced a new interface, ‘ByteBufAllocator’, and provides a buffer-pool implementation through it: a pure Java variant of jemalloc, which implements buddy memory allocation and slab allocation.
Now that Netty has its own memory allocator for buffers, it doesn’t waste memory bandwidth by filling buffers with zeros. However, this approach opens another can of worms: reference counting. Because we cannot rely on the GC to return unused buffers to the pool, we have to be very careful about leaks. Even a single handler that forgets to release a buffer can make a server’s memory usage grow without bound.
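Why a forgotten release is a leak can be seen in a toy sketch (illustrative plain Java with hypothetical names, not Netty’s ByteBuf/ByteBufAllocator and not its jemalloc-style arenas): a pooled buffer only returns to the free list when its reference count drops to zero.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy reference-counted buffer pool (hypothetical sketch, not Netty's
// API). For simplicity all buffers are assumed to share one capacity.
class ToyPool {
    final Deque<byte[]> free = new ArrayDeque<>();
    int outstanding; // buffers currently held by handlers

    class Buf {
        final byte[] data;
        int refCnt = 1;
        Buf(byte[] data) { this.data = data; }
        void retain() { refCnt++; }
        void release() {
            if (--refCnt == 0) {
                free.push(data); // back to the pool; note: NOT re-zeroed
                outstanding--;
            }
        }
    }

    Buf allocate(int capacity) {
        outstanding++;
        byte[] data = free.isEmpty() ? new byte[capacity] : free.pop();
        return new Buf(data);
    }

    public static void main(String[] args) {
        ToyPool pool = new ToyPool();
        Buf a = pool.allocate(256);
        a.release();                 // array goes back to the free list
        Buf b = pool.allocate(256);  // reuses a's array, no zero-fill
        // If we forget b.release(), 'outstanding' never drops: a leak.
        System.out.println(pool.outstanding); // prints 1
    }
}
```

Unlike garbage-collected buffers, nothing ever reclaims a pooled buffer whose owner loses track of it, which is why leak detection tooling matters (see “Moving forward” below).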
Was it worthwhile to make such big changes?
Because of the changes mentioned above, Netty 4 has no backward compatibility with Netty 3. This means our projects built on top of Netty 3, as well as other community projects, have to spend a non-trivial amount of time on migration. Is it worth doing that?
We compared two echo protocol servers built on top of Netty 3 and Netty 4 respectively. (Echo is simple enough that any garbage created is Netty’s fault, not the protocol’s.) I let them serve the same distributed echo-protocol clients, with 16,384 concurrent connections sending 256-byte random payloads repetitively, nearly saturating gigabit Ethernet.
According to our test result, Netty 4 had:
- 5 times less frequent GC pauses: 45.5 vs. 9.2 times/min
- 5 times less garbage production: 207.11 vs 41.81 MiB/s
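To make the echo workload concrete, here is a minimal single-connection round trip in plain Java sockets (a sketch only; the benchmark above used the Netty-based servers and 16,384 concurrent connections):

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal single-connection echo over loopback: a sketch of the
// benchmark workload, not the Netty servers that were measured.
class EchoDemo {
    static byte[] roundTrip(byte[] payload) {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Thread echoer = new Thread(() -> {
                try (Socket s = server.accept()) {
                    // Copy bytes back to the sender until EOF.
                    s.getInputStream().transferTo(s.getOutputStream());
                } catch (IOException ignored) {}
            });
            echoer.start();
            byte[] reply = new byte[payload.length];
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                client.getOutputStream().write(payload);
                client.shutdownOutput(); // signal EOF so the echoer finishes
                new DataInputStream(client.getInputStream()).readFully(reply);
            }
            echoer.join();
            return reply;
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] payload = new byte[256]; // same payload size as the benchmark
        new java.util.Random(42).nextBytes(payload);
        System.out.println(java.util.Arrays.equals(payload, roundTrip(payload))); // prints true
    }
}
```

Because the server does nothing but copy bytes, any allocation churn in a real implementation comes from the framework’s buffer handling, which is exactly what the GC numbers above isolate.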
I also wanted to make sure our buffer pool is fast enough. Here’s a graph where the X and Y axis denote the size of each allocation and the time taken to allocate a single buffer respectively:
As you can see, the buffer pool gets much faster than the JVM allocator as the size of the buffer increases. The difference is even more noticeable for direct buffers. However, it could not beat the JVM for small heap buffers, so we have something to work on here.
Moving forward
Although some parts of our services have already migrated from Netty 3 to 4 successfully, we are performing the migration gradually. We discovered some barriers that slow our adoption and that we hope to address in the near future:
- Buffer leaks: Netty has a simple leak reporting facility but it does not provide information detailed enough to fix the leak easily.
- Simpler core: Netty is a community-driven project with many stakeholders, and it would benefit from a simpler core set of code. Non-core features currently living in the core tend to cause collateral changes there, which makes the core less stable. We want to make sure only the real core features remain in the core and that other features stay out of it.
We are also thinking of adding more cool features, such as:
- HTTP/2 implementation
- HTTP and SOCKS proxy support for client side
- Asynchronous DNS resolution (see pull request)
- Native extensions for Linux that work directly with epoll via JNI
- Prioritization of connections with strict response-time constraints
Getting Involved
What’s interesting about Netty is that it is used by many different people and companies worldwide, mostly not from Twitter. It is an independent and very healthy open source project with many contributors. If you are interested in building ‘the future of network programming’, why don’t you visit the project web site, follow @netty_project, jump right into the source code at GitHub or even consider joining the flock to help us improve Netty?
Acknowledgements
The Netty project was founded by Trustin Lee (@trustin), who joined the flock in 2011 to help build Netty 4. We would also like to thank Jeff Pinner (@jpinner) from the TFE team, who contributed many of the great ideas mentioned in this article and became a guinea pig for Netty 4 without hesitation. Furthermore, Norman Maurer (@normanmaurer), one of the core Netty committers, made an enormous effort to help us turn those ideas into actually shippable code as part of the Netty project. There are also countless individuals who gladly tried many unstable releases, keeping up with all the breaking changes we had to make; in particular we would like to thank: Berk Demir (@bd), Charles Yang (@cmyang), Evan Meagher (@evanm), Larry Hosken (@lahosken), Sonja Keserovic (@thesonjake), and Stu Hood (@stuhood).