Chapter 6. Task Execution
Most concurrent applications are organized around the execution of tasks: abstract, discrete units of work. Dividing the work of an application into tasks simplifies program organization, facilitates error recovery by providing natural transaction boundaries, and promotes concurrency by providing a natural structure for parallelizing work.
6.1. Executing Tasks in Threads
The first step in organizing a program around task execution is identifying sensible task boundaries. Ideally, tasks are independent activities: work that doesn't depend on the state, result, or side effects of other tasks. Independence facilitates concurrency, as independent tasks can be executed in parallel if there are adequate processing resources. For greater flexibility in scheduling and load balancing tasks, each task should also represent a small fraction of your application's processing capacity.
Server applications should exhibit both good throughput and good responsiveness under normal load. Application providers want applications to support as many users as possible, so as to reduce provisioning costs per user; users want to get their response quickly. Further, applications should exhibit graceful degradation as they become overloaded, rather than simply falling over under heavy load. Choosing good task boundaries, coupled with a sensible task execution policy (see Section 6.2.2), can help achieve these goals.
Most server applications offer a natural choice of task boundary: individual client requests. Web servers, mail servers, file servers, EJB containers, and database servers all accept requests via network connections from remote clients. Using individual requests as task boundaries usually offers both independence and appropriate task sizing. For example, the result of submitting a message to a mail server is not affected by the other messages being processed at the same time, and handling a single message usually requires a very small percentage of the server's total capacity.
6.1.1. Executing Tasks Sequentially
There are a number of possible policies for scheduling tasks within an application, some of which exploit the potential for concurrency better than others. The simplest is to execute tasks sequentially in a single thread. SingleThreadWebServer in Listing 6.1 processes its tasks (HTTP requests arriving on port 80) sequentially. The details of the request processing aren't important; we're interested in characterizing the concurrency of various scheduling policies.
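Listing 6.1 itself is not reproduced in this post. As a socket-free sketch of the same sequential policy (the class and method names here are illustrative, not from the book), each task runs to completion on the calling thread before the next one starts:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the sequential scheduling policy: every task
// runs to completion on the calling thread before the next one begins.
public class SequentialExecutionDemo {
    static List<Integer> runSequentially(int taskCount) {
        List<Integer> completed = new ArrayList<>();
        for (int i = 1; i <= taskCount; i++) {
            final int id = i;
            Runnable task = () -> completed.add(id);
            task.run(); // runs inline in the main thread, never concurrently
        }
        return completed;
    }

    public static void main(String[] args) {
        System.out.println(runSequentially(3)); // tasks finish strictly in submission order
    }
}
```

No synchronization is needed here precisely because nothing is concurrent; that property disappears the moment tasks move off the main thread.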
SingleThreadedWebServer is simple and theoretically correct, but would perform poorly in production because it can handle only one request at a time. The main thread alternates between accepting connections and processing the associated request. While the server is handling a request, new connections must wait until it finishes the current request and calls accept again. This might work if request processing were so fast that handleRequest effectively returned immediately, but this doesn't describe any web server in the real world.
SingleThreadedWebServer比较简单,在理论上是正确的,但在执行中性能可能会很差,因为它只能一次处理一个请求。主线程接受连接,和处理相关的请求之间的交替。当服务器正在处理请求,新的连接必须等待,直到它完成当前请求,并再次接受新的请求。如果处理请求很快,能够立即返回处理结果,这可能会奏效。但是这并不能代表在现实世界中的任何Web服务器。
Processing a web request involves a mix of computation and I/O. The server must perform socket I/O to read the request and write the response, which can block due to network congestion or connectivity problems. It may also perform file I/O or make database requests, which can also block. In a single-threaded server, blocking not only delays completing the current request, but prevents pending requests from being processed at all. If one request blocks for an unusually long time, users might think the server is unavailable because it appears unresponsive. At the same time, resource utilization is poor, since the CPU sits idle while the single thread waits for its I/O to complete.
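The cost of blocking in a single thread can be illustrated with a tiny sketch (Thread.sleep stands in for blocking socket or file I/O; the 100 ms figure is arbitrary): two blocking "requests" handled by one thread cannot overlap, so their waits add up.

```java
public class SequentialBlockingDemo {
    // Simulates handling `requests` requests in one thread, where each
    // blocks for `blockMillis` (a stand-in for socket or file I/O).
    static long handleSequentially(int requests, long blockMillis) throws InterruptedException {
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            Thread.sleep(blockMillis); // while blocked, no other request makes progress
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long elapsedMs = handleSequentially(2, 100);
        System.out.println(elapsedMs >= 200); // the two waits add, rather than overlap
    }
}
```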
In server applications, sequential processing rarely provides either good throughput or good responsiveness. There are exceptions, such as when tasks are few and long-lived, or when the server serves a single client that makes only a single request at a time; but most server applications do not work this way.[1]
6.1.2. Explicitly Creating Threads for Tasks
A more responsive approach is to create a new thread for servicing each request, as shown in ThreadPerTaskWebServer in Listing 6.2.
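Listing 6.2 is likewise not reproduced here. A socket-free sketch of the thread-per-task policy (names are my own, not the book's) replaces the inline `task.run()` of the sequential version with `new Thread(task).start()`:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative thread-per-task sketch: the "main loop" starts a new
// thread for each task instead of running the task inline.
public class ThreadPerTaskDemo {
    static int runInThreads(int taskCount) throws InterruptedException {
        // The shared collection must be thread-safe, since the tasks
        // now run concurrently.
        List<Integer> completed = Collections.synchronizedList(new ArrayList<>());
        List<Thread> threads = new ArrayList<>();
        for (int i = 1; i <= taskCount; i++) {
            final int id = i;
            Thread t = new Thread(() -> completed.add(id)); // one new thread per task
            t.start();
            threads.add(t);
        }
        for (Thread t : threads) t.join(); // wait for every task to finish
        return completed.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runInThreads(3)); // all three tasks complete
    }
}
```

Note that completion order is no longer deterministic; only the count is.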
ThreadPerTaskWebServer is similar in structure to the single-threaded version: the main thread still alternates between accepting an incoming connection and dispatching the request. The difference is that for each connection, the main loop creates a new thread to process the request instead of processing it within the main thread. This has three main consequences:
Task processing is offloaded from the main thread, enabling the main loop to resume waiting for the next incoming connection more quickly. This enables new connections to be accepted before previous requests complete, improving responsiveness.
Tasks can be processed in parallel, enabling multiple requests to be serviced simultaneously. This may improve throughput if there are multiple processors, or if tasks need to block for any reason such as I/O completion, lock acquisition, or resource availability.
Task-handling code must be thread-safe, because it may be invoked concurrently for multiple tasks.
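For example (a hypothetical hit counter, not from the book), any state shared across request handlers must use a thread-safe construct such as AtomicLong, because `++` on a plain long is not atomic:

```java
import java.util.concurrent.atomic.AtomicLong;

public class SafeHitCounter {
    // AtomicLong rather than a plain long: with thread-per-task the
    // handler runs concurrently, so increments must be atomic.
    private static final AtomicLong hits = new AtomicLong();

    static long countConcurrently(int threads, int hitsPerThread) throws InterruptedException {
        hits.set(0);
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < hitsPerThread; j++) hits.incrementAndGet();
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        return hits.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Always 4000; with a plain `long hits` and `hits++`, some
        // increments could be lost under contention.
        System.out.println(countConcurrently(4, 1000));
    }
}
```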
Under light to moderate load, the thread-per-task approach is an improvement over sequential execution. As long as the request arrival rate does not exceed the server's capacity to handle requests, this approach offers better responsiveness and throughput.
6.1.3. Disadvantages of Unbounded Thread Creation
For production use, however, the thread-per-task approach has some practical drawbacks, especially when a large number of threads may be created:
Thread lifecycle overhead. Thread creation and teardown are not free. The actual overhead varies across platforms, but thread creation takes time, introducing latency into request processing, and requires some processing activity by the JVM and OS. If requests are frequent and lightweight, as in most server applications, creating a new thread for each request can consume significant computing resources.
Resource consumption. Active threads consume system resources, especially memory. When there are more runnable threads than available processors, threads sit idle. Having many idle threads can tie up a lot of memory, putting pressure on the garbage collector, and having many threads competing for the CPUs can impose other performance costs as well. If you have enough threads to keep all the CPUs busy, creating more threads won't help and may even hurt.
Stability. There is a limit on how many threads can be created. The limit varies by platform and is affected by factors including JVM invocation parameters, the requested stack size in the Thread constructor, and limits on threads placed by the underlying operating system.[2] When you hit this limit, the most likely result is an OutOfMemoryError. Trying to recover from such an error is very risky; it is far easier to structure your program to avoid hitting this limit.
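The stack-size factor mentioned above refers to the four-argument Thread constructor; per its Javadoc the value is only a hint, which some JVMs round up or ignore entirely:

```java
public class StackSizeHint {
    static String runOnSmallStack() throws InterruptedException {
        StringBuilder out = new StringBuilder();
        // Thread(group, target, name, stackSize): request a 64 KB stack.
        // The JVM is free to adjust or ignore the requested size.
        Thread t = new Thread(null, () -> out.append("task ran"), "small-stack", 64 * 1024);
        t.start();
        t.join(); // join() makes the child's write visible to this thread
        return out.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnSmallStack());
    }
}
```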
Up to a certain point, more threads can improve throughput, but beyond that point creating more threads just slows down your application, and creating one thread too many can cause your entire application to crash horribly. The way to stay out of danger is to place some bound on how many threads your application creates, and to test your application thoroughly to ensure that, even when this bound is reached, it does not run out of resources.
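One way to impose such a bound (a sketch of mine using Semaphore; the remedy this book actually develops is the Executor framework of Section 6.2) is to require a permit before each thread is started and release it when the task completes:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedThreadCreation {
    // Cap concurrent task threads at `bound`: acquire a permit before
    // starting a thread, release it when the task completes.
    static int maxConcurrent(int bound, int tasks) throws InterruptedException {
        Semaphore permits = new Semaphore(bound);
        AtomicInteger active = new AtomicInteger();
        AtomicInteger maxSeen = new AtomicInteger();
        Thread[] ts = new Thread[tasks];
        for (int i = 0; i < tasks; i++) {
            permits.acquire(); // blocks once `bound` task threads are running
            ts[i] = new Thread(() -> {
                maxSeen.accumulateAndGet(active.incrementAndGet(), Math::max);
                active.decrementAndGet();
                permits.release();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return maxSeen.get(); // highest number of threads observed at once
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(maxConcurrent(2, 6) <= 2); // never more than 2 at once
    }
}
```

This bounds concurrency but still pays the per-thread creation cost for every task, which is why a thread pool is the better tool.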
The problem with the thread-per-task approach is that nothing places any limit on the number of threads created except the rate at which remote users can throw HTTP requests at it. Like other concurrency hazards, unbounded thread creation may appear to work just fine during prototyping and development, with problems surfacing only when the application is deployed and under heavy load. So a malicious user, or enough ordinary users, can make your web server crash if the traffic load ever reaches a certain threshold. For a server application that is supposed to provide high availability and graceful degradation under load, this is a serious failing.
相关推荐
A light framework of task execution in distributed system Written with go 安装: go get github.com/foreversmart/distributed-task 更新: go get -u github.com/foreversmart/distributed-task 添加flag u ...
In order to solve the problem, this paper proposes a novel high level architecture support for automatic out-of-order (OoO) task execution on FPGA based heterogeneous MPSoCs. The architecture support...
数据库备份是保障数据安全的重要措施,但在实际操作中可能会遇到多种问题导致备份失败或恢复困难。以下是关于数据库备份可能出现的十个问题的详细分析: 1. RAID并非万能:RAID可以保护磁盘故障,但无法防止人为...
JBPM4 中 ProcessDefinition、ProcessInstance、Execution、Task 关系和区别 ProcessDefinition 是流程的定义,也可以理解为流程的规范。它有一个 id,这个 id 的格式为 {key}-{version},其中 key 和 version 之间...
Spring-task,也称为Spring的Task Execution and Scheduling模块,提供了一个统一的接口来创建、管理和执行任务。它可以处理一次性任务和周期性任务,支持基于时间(如cron表达式)或间隔时间的调度。 ### 二、注解...
### 解决Java_heap_space问题:深入理解与策略 在Java应用程序开发与运行过程中,经常会遇到一个常见的内存管理问题——“Java heap space”。这个问题通常表现为Java虚拟机(JVM)在执行过程中因可用堆内存不足而...
`@Scheduled`是Spring的Task Execution and Scheduling模块的一部分,允许开发者方便地定义定时任务。下面我们将深入探讨这个知识点: 1. **Spring Task Execution and Scheduling**: - Spring的任务调度框架允许...
Java提供了多种实现任务调度的方法,其中最常用的是Quartz和Spring的TaskExecution和TaskScheduling模块。 Quartz是一个开源的作业调度框架,它可以用来创建、调度和执行计划任务。Quartz允许开发者精确地定义任务...
简单任务执行管理器 (STEM) 是一种 OSGi UPnP 服务,当可用 UPnP 服务或设备上的一组条件评估为真时,它会在 UPnP 设备和服务上生成事件和/或执行操作。
public void beforeTask(TaskExecution taskExecution) { // 任务开始前执行 } @Override public void afterTask(TaskExecution taskExecution) { // 任务结束后执行 } // 其他接口方法... } ``` 结合...
scala 编译工具 sbt 安装包。...Parallel task execution, including parallel test execution Library management support: inline declarations, external Ivy or Maven configuration files, or manual management
3. **任务运行(Task Execution)**: 提供API来启动新任务,可以指定任务定义、任务数量、任务超时和其他相关参数。 4. **资源管理(Resource Management)**: 可能包括销毁或更新已部署的Fargate任务和服务。 5. ...
在Spring中,我们可以使用Spring的`TaskExecution`和`TaskScheduling`模块来实现这一目标,这两个模块提供了丰富的定时任务管理功能。 1. **Spring Task Execution**: Spring Task Execution模块提供了一个统一的...
首先,Spring提供了两种主要的任务调度机制:Spring内置的`TaskExecution`和`TaskScheduling`以及与Quartz Scheduler的集成。`TaskExecution`和`TaskScheduling`是Spring框架的基础任务调度API,它们允许开发者创建...
6. **Task Execution IAM Role(任务执行IAM角色)**:Fargate任务需要一个IAM角色来访问必要的AWS服务,如ECS和ECR(Amazon Elastic Container Registry)。`aws_cdk.aws_iam.Role`类将用于创建这个角色。 7. **...
Spring提供了TaskExecution和TaskScheduling两个接口,它们分别用于执行一次性任务和定时任务。这两个接口是Spring Task模块的核心,可以帮助我们构建异步和定时任务。 2. **@Scheduled Annotation** 在Java中,...
fail("Unexpected BuildException during task execution: " + e.getMessage()); } } } ``` 这个测试会确保`execute()`方法正确地打印了预期的消息。在实际项目中,可能需要更复杂的测试,比如检查任务是否正确...
2. **定时任务**:这可能是通过Spring的TaskExecution和TaskScheduling模块实现的,允许开发者定义周期性任务或者定时执行的任务,例如使用@Scheduled注解来调度任务,或者配置一个TaskExecutor来异步执行任务。...