Asynchronous support: Background concepts
Web 2.0 technologies drastically change the traffic profile between Web clients (such as browsers) and Web servers. Asynchronous support introduced in Servlet 3.0 is designed to respond to this new challenge. In order to understand the importance of asynchronous processing, let's first consider the evolution of HTTP communications.
HTTP 1.0 to HTTP 1.1
A major improvement in the HTTP 1.1 standard is persistent connections. In HTTP 1.0, a connection between a Web client and server is closed after a single request/response cycle. In HTTP 1.1, a connection is kept alive and reused for multiple requests. Persistent connections reduce communication lag perceptibly, because the client doesn't need to renegotiate the TCP connection after each request.
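As a rough illustration (simplified, hand-written request lines rather than a capture from a real server), the difference shows up in how the connection is handled:

```
# HTTP/1.0: the server closes the TCP connection after each response,
# unless the client explicitly asks for "Connection: keep-alive"
GET /index.html HTTP/1.0

# HTTP/1.1: the connection is persistent by default and is reused for
# subsequent requests until either side sends "Connection: close"
GET /index.html HTTP/1.1
Host: example.com
```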
Thread per connection
Figuring out how to make Web servers more scalable is an ongoing challenge for vendors. Thread per HTTP connection, which is based on HTTP 1.1's persistent connections, is a common solution vendors have adopted. Under this strategy, each HTTP connection between client and server is associated with one thread on the server side. Threads are allocated from a server-managed thread pool. Once a connection is closed, the dedicated thread is recycled back to the pool and is ready to serve other tasks. Depending on the hardware configuration, this approach can scale to a high number of concurrent connections, but experiments with high-profile Web servers show that memory consumption grows almost in direct proportion to the number of HTTP connections, because threads are relatively expensive in terms of memory use. Servers configured with a fixed number of threads can also suffer the thread starvation problem, whereby requests from new clients are rejected once all the threads in the pool are taken.
On the other hand, for many Web sites, users request pages from the server only sporadically. This is known as a page-by-page model. The connection threads are idling most of the time, which is a waste of resources.
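A minimal sketch of the thread-per-connection idea, using plain blocking I/O (hypothetical and greatly simplified; a real server takes threads from a managed pool rather than spawning one per connection):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnectionSketch {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(8080);
        while (true) {
            Socket connection = server.accept();
            // One dedicated thread per connection: it blocks on read() between
            // requests, so an idle keep-alive connection still costs a thread.
            new Thread(() -> handle(connection)).start();
        }
    }

    private static void handle(Socket connection) {
        try (Socket c = connection;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             OutputStream out = c.getOutputStream()) {
            String line;
            while ((line = in.readLine()) != null) {      // blocks while the client is idle
                if (line.isEmpty()) {                     // blank line ends the request headers
                    out.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok".getBytes());
                    out.flush();                          // keep-alive: loop and wait for the next request
                }
            }
        } catch (Exception ignored) { }
    }
}
```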
Thread per request
Thanks to the non-blocking I/O capability introduced in Java 1.4's New I/O APIs for the Java Platform (NIO) package, a persistent HTTP connection doesn't require that a thread be constantly attached to it. Threads can be allocated to connections only when requests are being processed. When a connection is idle between requests, the thread can be recycled, and the connection is placed in a centralized NIO select set to detect new requests without consuming a separate thread. This model, called thread per request, potentially allows Web servers to handle a growing number of user connections with a fixed number of threads. With the same hardware configuration, Web servers running in this mode scale much better than in the thread-per-connection mode. Today, popular Web servers -- including Tomcat, Jetty, GlassFish (Grizzly), WebLogic, and WebSphere -- all use thread per request through Java NIO. For application developers, the good news is that Web servers implement non-blocking I/O in a hidden manner, with no exposure whatsoever to applications through servlet APIs. (In other words, a Selector and channels are combined with a thread pool.)
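A minimal sketch of the mechanism behind thread per request (hypothetical and far simpler than what Tomcat or Jetty actually do): one selector thread watches all idle connections, and worker threads are borrowed from a fixed pool only while a request is actually being processed.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerRequestSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(8); // fixed pool, independent of connection count
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                           // one thread waits on ALL idle connections
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ); // idle connection parked in the select set
                } else if (key.isReadable()) {
                    key.interestOps(0);                  // stop selecting while a worker owns the channel
                    SocketChannel client = (SocketChannel) key.channel();
                    workers.submit(() -> handle(client)); // borrow a pool thread only for this request
                }
            }
        }
    }

    private static void handle(SocketChannel client) {
        try {
            ByteBuffer buf = ByteBuffer.allocate(4096);
            client.read(buf);                            // read the request (parsing and error handling omitted)
            client.write(ByteBuffer.wrap(
                "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok".getBytes()));
            client.close();                              // a real server would re-register the channel for keep-alive
        } catch (Exception ignored) { }
    }
}
```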
Server push
A more interesting and vital use case for the Servlet 3.0 asynchronous feature is server push. GTalk, a widget that lets GMail users chat online, is an example of server push. GTalk doesn't poll the server frequently to check if a new message is available to display. Instead it waits for the server to push back new messages. This approach has two obvious advantages: low-lag communication without requests being sent, and no waste of server resources and network bandwidth.
Ajax allows a user to interact with a page even if other requests from the same user are being processed at the same time. A common use case is to have a browser regularly poll the server for updates of state changes without interrupting the user. However, high polling frequencies waste server resources and network bandwidth. If the server could actively push data to browsers -- in other words, deliver asynchronous messages to clients on events (state changes) -- Ajax applications would perform better and save precious server and network resources.
The HTTP protocol is a request/response protocol. A client sends a request message to a server, and the server replies with a response message. The server can't initiate a connection with a client or send an unexpected message to the client. This aspect of the HTTP protocol seemingly makes server push impossible. But several ingenious techniques have been devised to circumvent this constraint:
- Service streaming (streaming) allows a server to send a message to a client when an event occurs, without an explicit request from the client. In real-world implementations, the client initiates a connection to the server through a request, and the response returns bits and pieces each time a server-side event occurs; the response lasts (theoretically) forever. Those bits and pieces can be interpreted by client-side JavaScript and displayed through the browser's incremental rendering ability.
- Long polling, also known as asynchronous polling, is a hybrid of pure server push and client pull. It is based on the Bayeux protocol, which uses a topic-based publish-subscribe scheme. As in streaming, a client subscribes to a connection channel on the server by sending a request. The server holds the request and waits for an event to happen. Once the event occurs (or after a predefined timeout), a complete response message is sent to the client. Upon receiving the response, the client immediately sends a new request. The server, then, almost always has an outstanding request that it can use to deliver data in response to a server-side event. Long polling is relatively easier to implement on the browser side than streaming.
- Passive piggyback: When the server has an update to send, it waits for the next time the browser makes a request and then sends its update along with the response that the browser was expecting.
Service streaming and long polling, implemented with Ajax, are known as Comet, or reverse Ajax. (Some developers call all interactive techniques reverse Ajax, including regular polling, Comet, and piggyback.)
Ajax improves single-user responsiveness. Server-push technologies like Comet improve application responsiveness for collaborative, multi-user applications without the overhead of regular polling.
The client aspects of the server push techniques -- such as hidden iframes, XMLHttpRequest streaming, and some Dojo and jQuery libraries that facilitate asynchronous communication -- are outside this article's scope. Instead, our interest is on the server side, specifically how the Servlet 3.0 specification helps implement interactive applications with server push.
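To make the server side concrete before going further, here is a minimal long-polling sketch against the Servlet 3.0 asynchronous API (the servlet name, URL pattern, and the WAITING registry are invented for illustration): the request is suspended with startAsync(), the container thread goes back to its pool, and the response is completed later, when an application event arrives or the timeout fires.

```java
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// asyncSupported = true is required before startAsync() may be called
@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    // Suspended long-poll requests waiting for the next server-side event
    static final Queue<AsyncContext> WAITING = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Suspend the request; the container thread is released back to its pool.
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000);  // typical long-poll window: about 30 seconds

        ctx.addListener(new AsyncListener() {
            @Override public void onTimeout(AsyncEvent e) throws IOException {
                // Nothing happened in time: finish with an empty response so the
                // client immediately starts the next long poll.
                WAITING.remove(e.getAsyncContext());
                e.getAsyncContext().complete();
            }
            @Override public void onError(AsyncEvent e)      { WAITING.remove(e.getAsyncContext()); }
            @Override public void onComplete(AsyncEvent e)   { }
            @Override public void onStartAsync(AsyncEvent e) { }
        });

        // Park the suspended context; a notifier (sketched later) will complete it.
        WAITING.add(ctx);
    }
}
```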
"Comet," "server-side push," and "reverse Ajax" are different names for the same thing, and you may already have heard of one or more of them. If not, a single sentence captures the whole idea: the server sends data to the client without the client having made a request.
That sentence sounds simple, but any developer who has spent much time building browser/server (B/S) applications knows it is not simple to implement; for a long time it was even considered impossible, because it runs completely counter to the traditional interaction model built on HTTP. Only socket-level applications let server and client communicate as equal peers. In an HTTP-based application, the server merely responds to requests from the client; it does not track client state and does not initiate requests of its own. HTTP is therefore described as a stateless, one-way protocol, and this style of interaction is called the request-response model.
The classic stateless, one-way request-response model has many advantages, such as efficiency and scalability. It suits applications that mainly respond passively to user requests, such as CMS, MIS, and ERP systems. But it copes poorly with requirements where the server must send data proactively: a chat room (other users' messages must be delivered even while a user is silent), or a log viewer (the server pushes new log output to the client without any client request). Such applications are a poor fit for the classic request-response model, and when "ill-suited" and "needed anyway" coexist, people naturally look for ways around the limitation.
Figure 1. The traditional Web application model compared with the Ajax-based model
- Simple polling
In the earliest Web applications, the client detected server-side changes by refreshing the page on a timer, using JavaScript or an HTML meta refresh tag. With timed refreshes the server is still responding passively to client requests; the requests are simply so frequent and continuous that the user has the illusion that the server is pushing information. The approach is easy to implement, but its flaws are equally obvious: most requests may be pointless, because the event the server is waiting for has not occurred and there is nothing to send, yet the server must repeatedly return the entire page to the browser. In addition, server-side changes are not delivered in "real time": if the refresh interval is too short, a great deal of capacity is wasted; if it is too long, event notifications may arrive later than the user expects.
Once most browsers supported the XHR (XMLHttpRequest) object, Ajax appeared and quickly became popular. Polling with Ajax no longer has to return the entire page on every request; if no event has occurred on the server, only a tiny HTTP response body is returned. Ajax saves much of the bandwidth wasted by polling, but it cannot reduce the number of requests, so Ajax-based simple polling retains the inherent limitations of polling: it eases the drawbacks to a degree but does not remove them.
- Long polling (hybrid polling)
The biggest difference between long polling and simple polling is how long the connection is held. With simple polling the connection is closed as soon as the page has been written out, whereas a long poll is typically held for 30 seconds or more; when the awaited event occurs on the server, the notification is written to the client immediately and the connection is closed, and the client then opens the next connection to begin a new long poll.
The advantage of long polling is that as soon as the awaited event occurs on the server, the data is returned to the client; while nothing is happening, no data is returned and no new requests are made during the long wait. The number of requests sent therefore drops sharply, while the latency of event notification improves to nearly "real time".
- Comet streaming (Forever Frame)
Comet streaming takes the long-polling approach one step further. If a long poll does not close the connection after sending an event notification back to the client, but instead keeps it open until a timeout occurs and only then establishes a new connection, that variant is what we call Comet streaming. The client can use the readyState property of the XMLHttpRequest object to tell whether the response is still Receiving or already Loaded. In theory, Comet streaming can deliver several server-side event notifications over a single connection, further reducing the number of requests sent to the server.
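A hedged sketch of what the streaming variant changes on the server side, reusing the hypothetical AsyncContext from the long-polling sketch shown earlier: each event is written and flushed, and the response is deliberately not completed per event.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.AsyncContext;

// Hypothetical helper: push one event over an already-suspended response.
public class StreamingPush {
    public static void push(AsyncContext ctx, String message) throws IOException {
        PrintWriter out = ctx.getResponse().getWriter();
        out.write(message + "\n");  // one "bit and piece" of the (theoretically) never-ending response
        out.flush();                // flushed at once so the browser can render it incrementally
        // ctx.complete() is deliberately NOT called here; the connection stays
        // open until a timeout or shutdown, unlike the long-polling variant.
    }
}
```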
Whether long polling or Comet streaming is used, both the server and the client must keep a connection open for a relatively long time. That is not much of a burden for the client, but the server must serve many clients at once. Under the classic request-response model, if every request ties up a Web thread until it completes, the Web container's thread pool is quickly exhausted, and most of those threads are simply sitting idle. This is why Comet-style services need asynchronous processing so badly: the goal is for Web threads not to handle client requests synchronously and one-to-one, but for a single Web thread to serve multiple client requests.
Figure 2. The server-push model based on long polling
Figure 3. The server-push model based on streaming
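To make that last point concrete, here is a hedged sketch of the notifier side, paired with the hypothetical LongPollServlet shown earlier: a single background thread walks the registry of suspended requests and completes each one, so none of those idle connections held a Web thread while it waited.

```java
import java.io.IOException;
import java.util.Queue;
import javax.servlet.AsyncContext;

// Hypothetical notifier for the LongPollServlet sketch shown earlier.
public class ChatNotifier {

    private final Queue<AsyncContext> waiting;  // e.g. LongPollServlet.WAITING

    public ChatNotifier(Queue<AsyncContext> waiting) {
        this.waiting = waiting;
    }

    /** Called from a single background thread when a new message arrives. */
    public void publish(String message) {
        AsyncContext ctx;
        // One thread delivers the event to every suspended client and releases it;
        // no container thread was parked on these connections while they waited.
        while ((ctx = waiting.poll()) != null) {
            try {
                ctx.getResponse().getWriter().write(message);
            } catch (IOException ignored) {
                // broken connection: fall through and complete it anyway
            }
            ctx.complete();  // ends this long-poll cycle; the client re-polls
        }
    }
}
```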
Jetty Continuations
Continuations will be replaced by standard Servlet 3.0 suspendable requests once the specification is finalized. Early releases of Jetty 7 are now available that implement the proposed standard suspend/resume API.
Background
With most web applications today, the number of simultaneous users can greatly exceed the number of connections to the server. This is because connections can be closed during the frequent pauses in the conversation while the user reads the content or completes a form. Thousands of users can be served with hundreds of connections.
But AJAX based web applications have very different traffic profiles to traditional webapps. While a user is filling out a form, AJAX requests to the server will be asking for entry validation and completion support. While a user is reading content, AJAX requests may be issued to asynchronously obtain new or updated content. Thus an AJAX application needs a connection to the server almost continuously, and it is no longer the case that the number of simultaneous users can greatly exceed the number of simultaneous TCP/IP connections.
If you want thousands of users you need thousands of connections, and if you want tens of thousands of users, then you need tens of thousands of simultaneous connections. It is a challenge for Java web containers to deal with significant numbers of connections, and you must look at your entire system, from your operating system, to your JVM, as well as your container implementation.
Thread per connection
One of the main challenges in building a scalable servlet server is how to handle threads and connections. The traditional IO model of Java associates a thread with every TCP/IP connection. If you have a few very active threads, this model can scale to a very high number of requests per second.
However, the traffic profile typical of many web applications is many persistent HTTP connections that are mostly idle while users read pages or search for the next link to click. With such profiles, the thread-per-connection model can have problems scaling to the thousands of threads required to support thousands of users on large scale deployments.
With a persistent connection -- for example, while the user is reading the page -- the connection stays open. Under thread per connection, a thread is bound to that connection, which means the thread is blocked and idle for the whole period. Why not let the thread serve other requests during that time? That is where thread per request comes in (implemented, of course, with select plus NIO): the idle connection can be added to an NIO select set to detect new requests.
Thread per request
The NIO libraries can help, as it allows asynchronous IO to be used and threads can be allocated to connections only when requests are being processed. When the connection is idle between requests, then the thread can be returned to a thread pool and the connection can be added to an NIO select set to detect new requests. This thread-per-request model allows much greater scaling of connections (users) at the expense of a reduced maximum requests per second for the server as a whole (in Jetty 6 this expense has been significantly reduced).
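A hedged sketch of how this looks with Jetty's Continuation API (shown in the org.eclipse.jetty.continuation style of Jetty 7; exact method names differ between Jetty versions, and the servlet and attribute names here are invented):

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.continuation.Continuation;
import org.eclipse.jetty.continuation.ContinuationSupport;

public class ContinuationPollServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        Continuation continuation = ContinuationSupport.getContinuation(req);

        if (continuation.isInitial()) {
            // First dispatch and no data yet: suspend the request and give the
            // thread back to the pool.
            continuation.setTimeout(30_000);
            continuation.suspend();
            // An application component would keep this continuation and later call
            // continuation.setAttribute("chat.message", msg) and continuation.resume()
            // when an event occurs (not shown here).
            return;
        }

        // Redispatched: either resumed by an event or expired after the timeout.
        Object msg = continuation.getAttribute("chat.message");  // hypothetical attribute name
        resp.getWriter().write(msg == null ? "" : msg.toString());
    }
}
```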
http://www.ibm.com/developerworks/cn/web/wa-lo-comet/
http://docs.codehaus.org/display/JETTY/Continuations
http://www.ibm.com/developerworks/cn/java/j-lo-comet/index.html