1) The problems that you solve with concurrency can be roughly classified as "speed" and "design manageability."
2) From a performance standpoint, it makes no sense to use concurrency on a single-processor machine unless one of the tasks might block.
3) One very straightforward way to implement concurrency is at the operating system level, using processes. A process is a self-contained program running within its own address space. A multitasking operating system can run more than one process (program) at a time by periodically switching the CPU from one process to another, while making it look as if each process is chugging along on its own. Processes are very attractive because the operating system usually isolates one process from another so they cannot interfere with each other, which makes programming with processes relatively easy.
4) Instead of forking external processes in a multitasking operating system, Java threading creates tasks within the single process represented by the executing program. One advantage that this provided was operating system transparency.
5) Java’s threading is preemptive, which means that a scheduling mechanism provides time slices for each thread, periodically interrupting a thread and context switching to another thread so that each one is given a reasonable amount of time to drive its task. In a cooperative system, each task voluntarily gives up control, which requires the programmer to consciously insert some kind of yielding statement into each task. The advantage to a cooperative system is twofold: Context switching is typically much cheaper than with a preemptive system, and there is theoretically no limit to the number of independent tasks that can be running at once. When you are dealing with a large number of simulation elements, this can be the ideal solution. Note, however, that some cooperative systems are not designed to distribute tasks across processors, which can be very limiting.
6) Concurrent programming allows you to partition a program into separate, independently running tasks. Using multithreading, each of these independent tasks (also called subtasks) is driven by a thread of execution. A thread is a single sequential flow of control within a process. A single process can thus have multiple concurrently executing tasks, but you program as if each task has the CPU to itself. An underlying mechanism divides up the CPU time for you, but in general, you don’t need to think about it.
7) A thread drives a task, so you need a way to describe that task. This is provided by the Runnable interface. To define a task, simply implement Runnable and write a run( ) method to make the task do your bidding.
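A minimal sketch of such a task (the class name CountDownTask is illustrative, not from the notes above):

public class CountDownTask implements Runnable {
  private int countDown = 5;                 // loop until this reaches zero
  public void run() {
    while (countDown-- > 0) {
      System.out.println("#" + countDown);
      Thread.yield();                        // hint: another task may run now
    }
    // returning from run() ends the task
  }
}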
8) A task’s run( ) method usually has some kind of loop that continues until the task is no longer necessary, so you must establish the condition on which to break out of this loop (one option is to simply return from run( ) ). Often, run( ) is cast in the form of an infinite loop, which means that, barring some factor that causes run( ) to terminate, it will continue forever.
9) Thread.yield( ) is a suggestion to the thread scheduler (the part of the Java threading mechanism that moves the CPU from one thread to the next) that says, "I’ve done the important parts of my cycle and this would be a good time to switch to another task for a while."
10) The traditional way to turn a Runnable object into a working task is to hand it to a Thread constructor. A Thread constructor only needs a Runnable object. Calling a Thread object’s start( ) will perform the necessary initialization for the thread and then call that Runnable ’s run( ) method to start the task in the new thread. Each Thread "registers" itself so there is actually a reference to it someplace, and the garbage collector can’t clean it up until the task exits its run( ) and dies.
11) Java SE5 java.util.concurrent.Executors simplify concurrent programming by managing Thread objects for you. Executors provide a layer of indirection between a client and the execution of a task; instead of a client executing a task directly, an intermediate object executes the task. Executors allow you to manage the execution of asynchronous tasks without having to explicitly manage the lifecycle of threads. Executors are the preferred method for starting tasks in Java SE5/6.
12) An ExecutorService (an Executor with a service lifecycle—e.g., shutdown) knows how to build the appropriate context to execute Runnable objects. Note that an ExecutorService object is created using a static Executors method which determines the kind of Executor it will be. A FixedThreadPool uses a limited set of threads to execute the submitted tasks. You don’t overrun the available resources because the FixedThreadPool uses a bounded number of Thread objects. A CachedThreadPool will generally create as many threads as it needs during the execution of a program and then will stop creating new threads as it recycles the old ones. A SingleThreadExecutor is like a FixedThreadPool with a size of one thread. This is useful for anything you want to run in another thread continually (a long-lived task), such as a task that listens to incoming socket connections. If more than one task is submitted to a SingleThreadExecutor , the tasks will be queued and each task will run to completion before the next task is begun, all using the same thread. Note that in any of the thread pools, existing threads are automatically reused when possible.
13) Call execute( ) to pass a Runnable task to the Executor so that the task will be scheduled and run. The call to shutdown( ) prevents new tasks from being submitted to that Executor.
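A sketch of this idiom with a simple throwaway task (the class name ExecutorDemo and the printed message are made up for illustration):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorDemo {
  public static void main(String[] args) {
    // Could also be newCachedThreadPool() or newSingleThreadExecutor()
    ExecutorService exec = Executors.newFixedThreadPool(3);
    for (int i = 0; i < 5; i++) {
      exec.execute(new Runnable() {          // schedule the task for execution
        public void run() {
          System.out.println("running in " + Thread.currentThread().getName());
        }
      });
    }
    exec.shutdown();                         // no new tasks accepted; submitted tasks still finish
  }
}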
14) A Runnable is a separate task that performs work, but it doesn’t return a value. If you want the task to produce a value when it’s done, you can implement the Callable interface rather than the Runnable interface. Callable , introduced in Java SE5, is a generic with a type parameter representing the return value from the method call( ) (instead of run( ) ), and must be invoked using an ExecutorService.submit( ) method.
15) The submit( ) method produces a Future object, parameterized for the particular type of result returned by the Callable . You can query the Future with isDone( ) to see if it has completed. When the task is completed and has a result, you can call get( ) to fetch the result. You can simply call get( ) without checking isDone( ) , in which case get( ) will block until the result is ready. You can also call get( ) with a timeout.
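A small sketch of Callable plus Future (the computed value is just a placeholder):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService exec = Executors.newSingleThreadExecutor();
    Future<Integer> future = exec.submit(new Callable<Integer>() {
      public Integer call() {                // call() returns a value, unlike run()
        return 6 * 7;
      }
    });
    System.out.println("done yet? " + future.isDone());
    System.out.println("result = " + future.get());   // blocks until the result is ready
    exec.shutdown();
  }
}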
16) The overloaded Executors.callable( ) method takes a Runnable and produces a Callable (returning null). ExecutorService has some "invoke" methods that run collections of Callable objects.
17) A simple way to affect the behavior of your tasks is by calling TimeUnit.MILLISECONDS. sleep( ) to cease (block) the execution of that task for a given time. The call to sleep( ) can throw an InterruptedException.
18) The vast majority of the time, all threads should run at the default priority. Trying to manipulate thread priorities is usually a mistake. You can read the priority of an existing thread with getPriority( ) and change it at any time with setPriority( ) . The thread with Thread.MAX_PRIORITY is given a higher preference by the thread scheduler. Although the JDK has 10 priority levels, this doesn’t map well to many operating systems. The only portable approach is to stick to MAX_PRIORITY, NORM_PRIORITY, and MIN_PRIORITY when you’re adjusting priority levels.
Commented By Sean: higher priority only means a higher probability of being chosen to run.
19) Thread.toString( ) prints the thread name, the priority level, and the "thread group" that the thread belongs to. You can set the thread name yourself via the constructor; otherwise it's automatically generated (pool-1-thread-1, pool-1-thread-2, and so on). You can get a reference to the Thread object that is driving a task, inside that task, by calling Thread.currentThread( ). The priority should be set at the beginning of run( ); setting it in the constructor would do no good since the Executor has not begun the task at that point.
20) If you know that you’ve accomplished what you need to during one pass through a loop in your run( ) method, you can give a hint to the thread scheduling mechanism that you’ve done enough and that some other task might as well have the CPU. This hint (and it is a hint—there’s no guarantee your implementation will listen to it) takes the form of the yield( ) method. When you call yield( ) , you are suggesting that other threads of the same priority might be run.
21) A "daemon" thread is intended to provide a general service in the background as long as the program is running, but is not part of the essence of the program. Thus, when all of the non-daemon threads complete, the program is terminated, killing all daemon threads in the process. Conversely, if there are any non-daemon threads still running, the program doesn’t terminate. You must set the thread to be a daemon by calling setDaemon( ) before it is started. You can find out if a thread is a daemon by calling isDaemon( ) . If a thread is a daemon, then any threads it creates will automatically be daemons. You should be aware that daemon threads will terminate their run( ) methods without executing finally clauses.
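A small sketch of daemon behavior (the heartbeat message is illustrative); the JVM exits as soon as main( ) returns, killing the daemon:

import java.util.concurrent.TimeUnit;

public class DaemonDemo {
  public static void main(String[] args) throws InterruptedException {
    Thread daemon = new Thread(new Runnable() {
      public void run() {
        try {
          while (true) {                       // a background service loop
            TimeUnit.MILLISECONDS.sleep(5);
            System.out.println("heartbeat");
          }
        } catch (InterruptedException e) {
          // interrupted: fall out of the loop
        }
      }
    });
    daemon.setDaemon(true);                    // must be called before start()
    daemon.start();
    TimeUnit.MILLISECONDS.sleep(25);           // let the daemon run briefly
    // main() returns; no non-daemon threads remain, so the program terminates
  }
}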
22) It is possible to customize the attributes (daemon, priority, name) of threads created by Executors by writing a custom ThreadFactory. You can pass a ThreadFactory object to Executors.newCachedThreadPool( ); the factory's newThread(Runnable) will then be invoked to create each thread for the Executor.
23) You give the Thread objects specific names by calling the appropriate Thread constructor. This name is retrieved in toString( ) using getName( ) .
24) You should see by now that there’s a distinction between the task that’s being executed and the thread that drives it; this distinction is especially clear in the Java libraries because you don’t really have any control over the Thread class (and this separation is even clearer with executors, which take care of the creation and management of threads for you). You create tasks and somehow attach a thread to your task so that the thread will drive that task.
25) If a thread calls t.join() on another thread t , then the calling thread is suspended until the target thread t finishes (when t.isAlive( ) is false). You may also call join() with a timeout argument so that if the target thread doesn’t finish in that period of time, the call to join( ) returns anyway. The call to join( ) or sleep() may be aborted by calling interrupt( ) on the calling thread, so a try-catch clause (for InterruptedException ) is required.
26) When thread A calls interrupt( ) on thread B, a flag is set to indicate that thread B has been interrupted. However, this flag is cleared when the exception is caught in thread B, so the result of isInterrupted( ) will always be false inside the catch clause in thread B. The flag is used for other situations, where a thread may examine its interrupted state apart from the exception.
Commented by Sean: isInterrupted( ) is not static; it is called on a Thread object (typically by another thread) to check whether that thread's interrupted flag is set, and it does not clear the flag. interrupted( ) is static; it checks the interrupted flag of the current thread and then clears it.
27) Because of the nature of threads, you can’t catch an exception that has escaped from a thread. Once an exception gets outside of a task’s run( ) method, it will propagate out to the console unless you take special steps to capture such errant exceptions. Before Java SE5, you used thread groups to catch these exceptions, but with Java SE5 you can solve the problem with Executors .
28) Thread.UncaughtExceptionHandler is a new interface in Java SE5; it allows you to attach an exception handler to each Thread object. Thread.UncaughtExceptionHandler.uncaughtException(Thread t, Throwable e) is automatically called when that thread is about to die from an uncaught exception. To use it, we create a new type of ThreadFactory which attaches a new Thread.UncaughtExceptionHandler to each new Thread object it creates. We pass that factory to the Executors method that creates a new ExecutorService . You can set and get the handler via setUncaughtExceptionHandler() and getUncaughtExceptionHandler() of a thread.
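A sketch of the factory-plus-handler arrangement described above (class and message names are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class CaptureUncaught {
  public static void main(String[] args) {
    // A ThreadFactory that attaches a handler to every thread it creates
    ThreadFactory factory = new ThreadFactory() {
      public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        t.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
          public void uncaughtException(Thread thr, Throwable e) {
            System.out.println("caught " + e + " from " + thr.getName());
          }
        });
        return t;
      }
    };
    ExecutorService exec = Executors.newCachedThreadPool(factory);
    exec.execute(new Runnable() {
      public void run() { throw new RuntimeException("escaped from run()"); }
    });
    exec.shutdown();
  }
}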
29) Thread.setDefaultUncaughtExceptionHandler( ) sets the default uncaught exception handler, which is stored in a static field inside the Thread class. The system checks for a per-thread version, and if it doesn't find one it checks to see if the thread group specializes its uncaughtException( ) method; if not, it calls the defaultUncaughtExceptionHandler.
Commented By Sean: If a thread has an uncaught exception handler, that one will be called; otherwise its thread group (which also implements UncaughtExceptionHandler) will be called. ThreadGroup.uncaughtException( ) invokes the parent thread group's uncaughtException( ) if a parent exists; otherwise it prints the exception and stack trace to System.err, except for ThreadDeath, which is sent by Thread's stop( ) method (in that case nothing is done).
30) It's important to note that the increment operation (++ or --) itself requires multiple steps, and the task can be suspended by the threading mechanism in the midst of an increment—that is, increment is not an atomic operation in Java. So even a single increment isn't safe to do without protecting the task.
31) To solve the problem of thread collision, virtually all concurrency schemes serialize access to shared resources. This means that only one task at a time is allowed to access the shared resource. This is ordinarily accomplished by putting a clause around a piece of code that only allows one task at a time to pass through that piece of code. Because this clause produces mutual exclusion, a common name for such a mechanism is mutex.
32) To prevent collisions over resources, Java has built-in support in the form of the synchronized keyword. When a task wishes to execute a piece of code guarded by the synchronized keyword, it checks to see if the lock is available, then acquires it, executes the code, and releases it.
33) To control access to a shared resource, you first put it inside an object. You should make the data elements of the class private and access that memory only through methods. You can prevent collisions by declaring the methods that use the resource synchronized. If a task is in a call to one of the synchronized methods, all other tasks are blocked from entering any of the synchronized methods of that object until the first task returns from its call. All objects automatically contain a single lock (also referred to as a monitor). When you call any synchronized method, that object is locked and no other synchronized method of that object can be called by other tasks/threads until the first one finishes and releases the lock.
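A minimal sketch of a resource guarded by synchronized methods (the class name is illustrative):

public class SynchronizedCounter {
  private int count = 0;                     // the shared resource, kept private
  public synchronized void increment() {     // only one task at a time can be in
    count++;                                 // any synchronized method of this object
  }
  public synchronized int get() {            // the reader uses the same object lock
    return count;
  }
}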
34) There’s also a single lock per class (as part of the Class object for the class), so that synchronized static methods can lock each other out from simultaneous access of static data on a class-wide basis.
35) One task may acquire an object's lock multiple times. This happens if one method calls a second method on the same object, which in turn calls another method on the same object, etc. The JVM keeps track of the number of times the object has been locked. If the object is unlocked, it has a count of zero. Naturally, multiple lock acquisition is only allowed for the task that acquired the lock in the first place.
36) If you are writing a variable that might next be read by another thread, or reading a variable that might have last been written by another thread, you must use synchronization, and further, both the reader and the writer must synchronize using the same monitor lock.
37) When you are using Lock objects, it is important to internalize the idiom: right after the call to lock( ), you must place a try-finally statement with unlock( ) in the finally clause—this is the only way to guarantee that the lock is always released.
Commented By Sean: Lock is not AutoCloseable. An exception thrown from a synchronized method will release the lock automatically.
38) A ReentrantLock allows you to try and fail to acquire the lock with tryLock( ), so that if someone else already has the lock, you can decide to go off and do something else rather than waiting until it is free. The overloaded form of tryLock( ) lets you specify a timeout for acquiring the lock.
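A sketch of the Lock idioms from points 37 and 38 (class and field names are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockIdiom {
  private final ReentrantLock lock = new ReentrantLock();
  private int value = 0;

  public void increment() {
    lock.lock();                 // lock() ... try { } finally { unlock(); }
    try {
      value++;
    } finally {
      lock.unlock();             // always released, even if an exception is thrown
    }
  }

  public boolean tryIncrement() throws InterruptedException {
    // Give up if the lock isn't free within 100 ms, and do something else instead
    if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
      try {
        value++;
        return true;
      } finally {
        lock.unlock();
      }
    }
    return false;
  }
}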
39) Atomic operations do not need to be synchronized. An atomic operation is one that cannot be interrupted by the thread scheduler; if the operation begins, then it will run to completion before the possibility of a context switch. Relying on atomicity is tricky and dangerous—you should only try to use atomicity instead of synchronization if you are a concurrency expert, or you have help from such an expert.
40) Atomicity applies to "simple operations" on primitive types except for longs and doubles. Reading and writing primitive variables other than long and double is guaranteed to go to and from memory as indivisible (atomic) operations. The JVM is allowed to perform reads and writes of 64-bit quantities (long and double variables) as two separate 32-bit operations. However, you do get atomicity (for simple assignments and returns) if you use the volatile keyword when defining a long or double variable.
41) On multiprocessor systems (which are now appearing in the form of multicore processors—multiple CPUs on a single chip), visibility rather than atomicity is much more of an issue than on single-processor systems. Changes made by one task, even if they’re atomic in the sense of not being interruptible, might not be visible to other tasks (the changes might be temporarily stored in a local processor cache, for example). The synchronization mechanism, on the other hand, forces changes by one task on a multiprocessor system to be visible across the application.
42) The volatile keyword also ensures visibility across the application. If you declare a field to be volatile , this means that as soon as a write occurs for that field, all reads will see the change. This is true even if local caches are involved—volatile fields are immediately written through to main memory, and reads occur from main memory. If multiple tasks are accessing a field, that field should be volatile ; otherwise, the field should only be accessed via synchronization. Synchronization also causes flushing to main memory, so if a field is completely guarded by synchronized methods or blocks, it is not necessary to make it volatile .
43) volatile doesn’t work when the value of a field depends on its previous value (such as incrementing a counter), nor does it work on fields whose values are constrained by the values of other fields, such as the lower and upper bound of a Range class which must obey the constraint lower <= upper . It’s typically only safe to use volatile instead of synchronized if the class has only one mutable field.
44) It is possible for each thread to have a local stack and maintain copies of some variables there. If you define a variable as volatile, it tells the compiler not to do any optimizations that would remove reads and writes that keep the field in exact synchronization with the local data in the threads. In effect, reads and writes go directly to memory and are not cached; volatile also restricts compiler reordering of accesses during optimization. Basically, you should make a field volatile if that field could be simultaneously accessed by multiple tasks, and at least one of those accesses is a write.
45) Brian’s Rule of Synchronization: If you are writing a variable that might next be read by another thread, or reading a variable that might have last been written by another thread, you must use synchronization, and further, both the reader and the writer must synchronize using the same monitor lock.
46) Java SE5 introduces special atomic variable classes such as AtomicInteger, AtomicLong, AtomicReference, etc. that provide an atomic conditional update operation of the form: boolean compareAndSet(expectedValue, updateValue);
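A sketch of how compareAndSet( ) is typically used in a retry loop (the class name and the added amount are illustrative); this is the same "optimistic locking" pattern that point 91 describes:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
  private final AtomicInteger value = new AtomicInteger(0);

  public int addTen() {
    while (true) {
      int expected = value.get();
      int updated = expected + 10;
      // Succeeds only if no other task changed the value in the meantime
      if (value.compareAndSet(expected, updated))
        return updated;
      // otherwise retry with a freshly read value
    }
  }
}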
47) Sometimes, you only want to prevent multiple thread access to part of the code inside a method instead of the entire method. The section of code you want to isolate this way is called a critical section and is created using the synchronized keyword. Here, synchronized is used to specify the object whose lock is being used to synchronize the enclosed code:
synchronized(syncObject) {
  // This code can be accessed by only one task at a time
}
This is also called a synchronized block; before it can be entered, the lock must be acquired on syncObject . If some other task already has this lock, then the critical section cannot be entered until the lock is released.
48) The synchronized keyword is not part of the method signature and thus may be added during overriding.
49) Functionality implemented in the base class that uses one or more abstract methods defined in derived classes is called a Template Method in Design Patterns parlance. This design pattern allows you to encapsulate change in your code.
50) ThreadLocal objects are usually stored as static fields. When you create a ThreadLocal object, you are only able to access the contents of the object using the get( ) and set( ) methods. The get( ) method returns the copy of the object that is associated with the current thread, and set( ) replaces the value stored for the current thread with its argument.
Commented By Sean: Each thread keeps a map whose keys are the ThreadLocal variables and whose values are that thread's values for them.
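A small sketch of a ThreadLocal held in a static field (names are illustrative); each thread increments and reads only its own copy:

public class ThreadLocalDemo {
  private static final ThreadLocal<Integer> perThread = new ThreadLocal<Integer>() {
    @Override protected Integer initialValue() { return 0; }   // each thread starts at 0
  };

  public static void main(String[] args) {
    Runnable task = new Runnable() {
      public void run() {
        perThread.set(perThread.get() + 1);        // touches only this thread's copy
        System.out.println(Thread.currentThread().getName() + " -> " + perThread.get());
      }
    };
    new Thread(task).start();
    new Thread(task).start();                      // both print 1: no shared state
  }
}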
51) ExecutorService.awaitTermination( ) waits for each task to complete, and if they all complete before the timeout value, it returns true , otherwise it returns false to indicate that not all tasks have completed.
52) A thread can be in any one of four states:
a. New : A thread remains in this state only momentarily, as it is being created. It allocates any necessary system resources and performs initialization. At this point it becomes eligible to receive CPU time. The scheduler will then transition this thread to the runnable or blocked state.
b. Runnable : This means that a thread can be run when the time-slicing mechanism has CPU cycles available for the thread. Thus, the thread might or might not be running at any moment, but there’s nothing to prevent it from being run if the scheduler can arrange it. That is, it’s not dead or blocked.
c. Blocked : The thread can be run, but something prevents it. While a thread is in the blocked state, the scheduler will simply skip it and not give it any CPU time. Until a thread reenters the runnable state, it won’t perform any operations.
d. Dead : A thread in the dead or terminated state is no longer schedulable and will not receive any CPU time. Its task is completed, and it is no longer runnable. One way for a task to die is by returning from its run( ) method, but a task’s thread can also be interrupted.
Comment By Sean: Here "Blocked" includes Thread.State.BLOCKED, Thread.State.WAITING and Thread.State.TIMED_WAITING.
53) A task can become blocked for the following reasons:
a. You’ve put the task to sleep by calling sleep(milliseconds) , in which case it will not be run for the specified time.
b. You’ve suspended the execution of the thread with wait( ) . It will not become runnable again until the thread gets the notify( ) or notifyAll( ) message (or the equivalent signal( ) or signalAll( ) for the Java SE5 java.util.concurrent library tools).
c. The task is waiting for some I/O to complete.
d. The task is trying to call a synchronized method on another object, and that object’s lock is not available because it has already been acquired by another task.
54) In old code, you may also see suspend( ) and resume( ) used to block and unblock threads, but these are deprecated in modern Java (because they are deadlock-prone). The stop( ) method is also deprecated, because it doesn’t release the locks that the thread has acquired, and if the objects are in an inconsistent state ("damaged"), other tasks can view and modify them in that state.
Commented By Sean: A suspended thread also keeps its locks, and it can stay blocked forever if no one resumes it. A sleeping thread, by contrast, will certainly wake up after its interval, and a thread that waits on a condition first releases the corresponding lock, which is the same lock later used to notify it.
55) When you break out of a blocked task, you might need to clean up resources. Because of this, breaking out of the middle of a task’s run( ) is more like throwing an exception than anything else, so in Java threads, exceptions are used for this kind of abort. To return to a known good state when terminating a task this way, you must carefully consider the execution paths of your code and write your catch clause to properly clean everything up.
56) The Thread class contains the interrupt( ) method. This sets the interrupted status for that thread. A thread with its interrupted status set will throw an InterruptedException if it is already blocked or if it attempts a blocking operation. The interrupted status will be reset when the exception is thrown or if the task calls Thread.interrupted( ) . Thread.interrupted( ) provides a second way to leave your run( ) loop, without throwing an exception.
57) If you call shutdownNow( ) on an Executor , it will send an interrupt( ) call to each of the threads it has started. There are times when you may want to only interrupt a single task. If you’re using Executors , you can hold on to the context of a task when you start it by calling submit( ) instead of execute( ) . submit(Runnable r ) returns a generic Future<?> , with an unspecified parameter because you won’t ever call get( ) on it—the point of holding this kind of Future is that you can call cancel( ) on it and thus use it to interrupt a particular task. If you pass true to cancel( ) , it has permission to call interrupt( ) on that thread in order to stop it.
Commented By Sean: passing true to cancel( ) means that even if the task has already been scheduled onto a thread, that thread may be interrupted; otherwise the task is only canceled if it has not yet been scheduled.
58) I/O and waiting on a synchronized lock are not interruptible. You can interrupt a call to sleep( ) (or any call that requires you to catch InterruptedException ). However, you cannot interrupt a task that is trying to acquire a synchronized lock or one that is trying to perform I/O.
59) The task is unblocked once the underlying resource is closed. It's interesting to note that the interrupted status is true when you close the Socket but not when you close System.in.
60) The nio classes provide for more civilized interruption of I/O. Blocked nio channels automatically respond to interrupts. You will get ClosedByInterruptException when the thread is interrupted and AsynchronousCloseException when the underlying channel is closed.
61) One of the features added in the Java SE5 concurrency libraries is the ability for tasks blocked on ReentrantLocks ( using lockInterruptibly() ) to be interrupted, unlike tasks blocked on synchronized methods or critical sections.
62) You check for the interrupted status by calling interrupted( ) . This not only tells you whether interrupt( ) has been called, it also clears the interrupted status. Clearing the interrupted status ensures that the framework will not notify you twice about a task being interrupted. You will be notified via either a single InterruptedException (for blocked execution path) or a single successful Thread.interrupted( ) (for non-blocked execution path) test.
63) A class designed to respond to an interrupt( ) must establish a policy to ensure that it will remain in a consistent state. This generally means that the creation of all objects that require cleanup must be followed by try-finally clauses so that cleanup will occur regardless of how the run( ) loop exits.
64) The key issue when tasks are cooperating is handshaking between those tasks. To accomplish this handshaking, we use the same foundation: the mutex, which in this case guarantees that only one task can respond to a signal. This eliminates any possible race conditions.
65) wait( ) allows you to wait for a change in some condition that is outside the control of the forces in the current method. Often, this condition will be changed by another task. You don’t want to idly loop while testing the condition inside your task; this is called busy waiting, and it’s usually a bad use of CPU cycles. So wait( ) suspends the task while waiting for the world to change, and only when a notify( ) or notifyAll( ) occurs—suggesting that something of interest may have happened—does the task wake up and check for changes.
66) sleep( ) does not release the object lock when it is called, and neither does yield( ) . On the other hand, when a task enters a call to wait( ) inside a method, that thread’s execution is suspended, and the lock on that object is released. Because wait( ) releases the lock, it means that the lock can be acquired by another task, so other synchronized methods in the (now unlocked) object can be called during a wait( ) . This is essential, because those other methods are typically what cause the change that makes it interesting for the suspended task to reawaken. Thus, when you call wait( ) , you’re saying, "I’ve done all I can right now, so I’m going to wait right here, but I want to allow other synchronized operations to take place if they can."
67) There are two forms of wait( ) . One version takes an argument in milliseconds that has the same meaning as in sleep( ) : "Pause for this period of time." But unlike with sleep( ) , with wait(pause) :
a. The object lock is released during the wait( ) .
b. You can also come out of the wait( ) due to a notify( ) or notifyAll( ) , in addition to letting the clock run out.
The second, more commonly used form of wait( ) takes no arguments. This wait( ) continues indefinitely until the thread receives a notify( ) or notifyAll( ) .
68) wait( ) , notify( ) , and notifyAll( ) are part of the base class Object and not part of Thread . It’s essential because these methods manipulate the lock that’s also part of every object. In fact, the only place you can call wait( ) , notify( ) , or notifyAll( ) is within a synchronized method or block. If you call any of these within a method that’s not synchronized , you’ll get an IllegalMonitorStateException with the somewhat nonintuitive message "current thread not owner." This message means that the task calling wait( ) , notify( ) , or notifyAll( ) must "own" (acquire) the lock for the object before it can call any of those methods. You can ask another object to perform an operation that manipulates its own lock. To do this, you must first capture that object’s lock. In order for the task to wake up from a wait( ) , it must first reacquire the lock that it released when it entered the wait( ) . The task will not wake up until that lock becomes available.
69) You must surround a wait( ) with a while loop that checks the condition(s) of interest. This is important because:
a. You may have multiple tasks waiting on the same lock for the same reason, and the first task that wakes up might change the situation (even if you don’t do this someone might inherit from your class and do it). If that is the case, this task should be suspended again until its condition of interest changes.
b. By the time this task awakens from its wait( ) , it’s possible that some other task will have changed things such that this task is unable to perform or is uninterested in performing its operation at this time. Again, it should be resuspended by calling wait( ) again.
c. It’s also possible that tasks could be waiting on your object’s lock for different reasons (in which case you must use notifyAll( ) ). In this case, you need to check whether you’ve been woken up for the right reason, and if not, call wait( ) again.
Thus, it’s essential that you check for your particular condition of interest, and go back into wait( ) if that condition is not met. This is idiomatically written using a while . The only safe approach is to always use the following idiom for a wait( ) (within proper synchronization, of course, and programming against the possibility of missed signals):
synchronized(sharedMonitor) {
  while(conditionIsNotMet)
    sharedMonitor.wait();
}

/* Bad usage:
while(conditionIsNotMet)          // a thread switch may happen here
  synchronized(sharedMonitor) {
    sharedMonitor.wait();
  }
*/
70) Using notify( ) instead of notifyAll( ) is an optimization. Only one task of the possible many that are waiting on a lock will be awoken with notify( ) , so you must be certain that the right task will wake up if you try to use notify( ) . In addition, all tasks must be waiting on the same condition in order for you to use notify( ) , because if you have tasks that are waiting on different conditions, you don’t know if the right one will wake up. If you use notify( ) , only one task must benefit when the condition changes. Finally, these constraints must always be true for all possible subclasses. If any of these rules cannot be met, you must use notifyAll( ) rather than notify( ) .
Commented By Sean: A notify is not remembered; if no thread is waiting when notify( ) is called, any thread that waits afterwards will miss the signal. The interrupted status, by contrast, is kept: any interruptible method call made after interrupt( ) will still throw InterruptedException.
71) A call to notifyAll( ) or notify( ) must first capture the lock on the corresponding object. The call to wait( ) in that object automatically releases the lock, so this is possible. Because the lock must be owned in order for notifyAll( ) or notify( ) to be called, it's guaranteed that two tasks trying to call notifyAll( ) or notify( ) on one object won't step on each other's toes.
72) In the Java SE5 java.util.concurrent library, the basic class that uses a mutex and allows task suspension is the Condition , and you can suspend a task by calling await( ) on a Condition . When external state changes take place that might mean that a task should continue processing, you notify the task by calling signal( ) , to wake up one task, or signalAll( ) , to wake up all tasks that have suspended themselves on that Condition object. Lock.newCondition() can generate a Condition object. Each call to Lock.lock( ) must immediately be followed by a try-finally clause to guarantee that unlocking happens in all cases. As with the built-in versions, a task must own the lock before it can call await( ) , signal( ) or signalAll( ) .
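A sketch of the Lock/Condition version of the wait idiom (class and method names are illustrative):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
  private final Lock lock = new ReentrantLock();
  private final Condition ready = lock.newCondition();
  private boolean isReady = false;

  public void waitForReady() throws InterruptedException {
    lock.lock();
    try {
      while (!isReady)          // same while-loop idiom as with wait()
        ready.await();          // releases the lock while suspended
    } finally {
      lock.unlock();
    }
  }

  public void setReady() {
    lock.lock();
    try {
      isReady = true;
      ready.signalAll();        // wake every task suspended on this Condition
    } finally {
      lock.unlock();
    }
  }
}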
73) notify( ) and notifyAll( ) won't release the lock; they just move the threads waiting on that lock back to the Thread.State.BLOCKED state (trying to acquire the lock again). interrupt( ) will also wake a waiting thread. The interrupt only throws InterruptedException as the task attempts to enter an (interruptible) blocking operation; otherwise it can only be detected by the static Thread.interrupted( ) method, which clears the interrupt flag, or the non-static Thread.isInterrupted( ) method.
74) A synchronized queue allows only one task at a time to insert or remove an element. This is provided for you in the java.util.concurrent.BlockingQueue interface, which has a number of standard implementations. You'll usually use the LinkedBlockingQueue, which is an unbounded queue; the ArrayBlockingQueue has a fixed size, so you can only put so many elements in it before it blocks. These queues also suspend a consumer task if that task tries to get an object from the queue and the queue is empty, and resume it when more elements become available.
Commented By Sean: A synchronous queue (SynchronousQueue) does not have any internal capacity, not even a capacity of one. You cannot peek at a synchronous queue because an element is only present when you try to remove it; you cannot insert an element (using any method) unless another thread is trying to remove it; you cannot iterate as there is nothing to iterate. You can also specify a capacity for LinkedBlockingQueue in its constructor.
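A sketch of a producer handing work to a consumer through a LinkedBlockingQueue (the string items are placeholders); take( ) blocks while the queue is empty, so no explicit wait/notify is needed:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueDemo {
  public static void main(String[] args) throws InterruptedException {
    final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
    Thread consumer = new Thread(new Runnable() {
      public void run() {
        try {
          while (true)
            System.out.println("got " + queue.take());   // blocks if the queue is empty
        } catch (InterruptedException e) {
          // interrupted while waiting: exit the consumer loop
        }
      }
    });
    consumer.start();
    queue.put("item-1");
    queue.put("item-2");
    TimeUnit.MILLISECONDS.sleep(100);
    consumer.interrupt();        // take() is a blocking call, so it responds to interrupt
  }
}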
75) Threading libraries may provide support for inter-task I/O in the form of pipes. These exist in the Java I/O library as the classes PipedWriter (which allows a task to write into a pipe) and PipedReader (which allows a different task to read from the same pipe). When a PipedReader does a read( ), the pipe automatically blocks when there is no more data. The pipe is basically a blocking queue, which existed in versions of Java before BlockingQueue was introduced. An important difference between a PipedReader and normal I/O is that the PipedReader is interruptible, throwing InterruptedIOException.
Commented By Sean: The buffer is kept in the PipedReader; you can either pass a PipedReader to the constructor of PipedWriter or vice versa to connect the Reader and Writer.
76) Deadlock can occur if four conditions are simultaneously met:
a) Mutual exclusion. At least one resource used by the tasks must not be shareable.
b) At least one task must be holding a resource and waiting to acquire a resource currently held by another task.
c) A resource cannot be preemptively taken away from a task. Tasks only release resources as a normal event.
d) A circular wait can happen, whereby a task waits on a resource held by another task, which in turn is waiting on a resource held by another task, and so on, until one of the tasks is waiting on a resource held by the first task, thus gridlocking everything.
Because all these conditions must be met to cause deadlock, you only need to prevent one of them from occurring to prohibit deadlock.
77) CountDownLatch is used to synchronize one or more tasks by forcing them to wait for the completion of a set of operations being performed by other tasks. You give an initial count to a CountDownLatch object, and any task that calls await( ) on that object will block until the count reaches zero. Other tasks may call countDown( ) on the object to reduce the count, presumably when a task finishes its job. A CountDownLatch is designed to be used in a one-shot fashion; the count cannot be reset. The tasks that call countDown( ) are not blocked when they make that call. Only the call to await( ) is blocked until the count reaches zero. A typical use is to divide a problem into n independently solvable tasks and create a CountDownLatch with a value of n. When each task is finished it calls countDown( ) on the latch. Tasks waiting for the problem to be solved call await( ) on the latch to hold themselves back until it is completed. Random.nextInt( ) is thread-safe.
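A sketch of the n-part pattern described above (the work itself is elided):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchDemo {
  public static void main(String[] args) throws InterruptedException {
    final int n = 5;
    final CountDownLatch latch = new CountDownLatch(n);
    ExecutorService exec = Executors.newCachedThreadPool();
    for (int i = 0; i < n; i++) {
      exec.execute(new Runnable() {
        public void run() {
          // ... solve one independent piece of the problem ...
          latch.countDown();     // never blocks
        }
      });
    }
    latch.await();               // blocks until the count reaches zero
    System.out.println("all " + n + " parts finished");
    exec.shutdown();
  }
}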
78) A CyclicBarrier is used in situations where you want to create a group of tasks to perform work in parallel, and then wait until they are all finished before moving on to the next step. It brings all the parallel tasks into alignment at the barrier so you can move forward in unison. This is very similar to the CountDownLatch, except that a CountDownLatch is a one-shot event, whereas a CyclicBarrier can be reused over and over.
79) A CyclicBarrier can be given a "barrier action," which is a Runnable that is automatically executed when the count reaches zero—this is another distinction between CyclicBarrier and CountDownLatch. Once all the tasks have passed the barrier, it is automatically ready for the next round.
Commented By Sean: it's the last thread, the one that makes the count reach zero, that executes the barrier action.
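A sketch of a reusable barrier with a barrier action (the round count and messages are illustrative):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {
  public static void main(String[] args) {
    final int parties = 3;
    // The barrier action runs each time all parties have arrived
    final CyclicBarrier barrier = new CyclicBarrier(parties, new Runnable() {
      public void run() { System.out.println("--- everyone arrived, next round ---"); }
    });
    for (int i = 0; i < parties; i++) {
      new Thread(new Runnable() {
        public void run() {
          try {
            for (int round = 0; round < 2; round++) {   // the barrier is reusable
              // ... do this round's share of the work ...
              barrier.await();                          // wait for the other parties
            }
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
          } catch (BrokenBarrierException e) {
            // another party was interrupted; this round is abandoned
          }
        }
      }).start();
    }
  }
}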
80) DelayQueue is an unbounded BlockingQueue of objects that implement the Delayed interface. An object can only be taken from the queue when its delay has expired. Usually, the queue should be sorted so that the object at the head has a delay that has expired for the longest time. If the delay of the first element hasn't expired, then there is no head element and poll( ) will return null (because of this, you cannot place null elements in the queue). The Delayed interface has one method, getDelay( ), which tells how long it is until the delay time expires or how long ago the delay time has expired. In getDelay( ), the desired units are passed in as the unit argument, and you use TimeUnit.convert to convert the time difference from the trigger time to the units requested by the caller, without even knowing what those units are (this is a simple example of the Strategy design pattern, where part of the algorithm is passed in as an argument). The Delayed interface also inherits the Comparable interface, so compareTo( ) must be implemented so that it produces a reasonable comparison.
Commented By Sean: DelayQueue will only look at the first element to see whether its getDelay( ) method returns a negative value. If yes, this element will be returned when someone tries to extract an element from the queue; otherwise it won't look at follow-up elements. The sequence of the elements is decided by the compareTo method.
81) PriorityBlockingQueue is basically a priority queue that has blocking retrieval operations.
82) Executors.newScheduledThreadPool(int corePoolSize)
creates a thread pool that can schedule commands to run after a given delay, or to execute periodically, using either schedule( ) (to run a task once) or scheduleAtFixedRate( ) (to repeat a task at a regular interval).
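A small sketch of both scheduling styles (the delays and the tick message are arbitrary):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
  public static void main(String[] args) {
    ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
    Runnable tick = new Runnable() {
      public void run() { System.out.println("tick"); }
    };
    scheduler.schedule(tick, 2, TimeUnit.SECONDS);                 // run once after 2 s
    scheduler.scheduleAtFixedRate(tick, 0, 1, TimeUnit.SECONDS);   // repeat every second
    // call scheduler.shutdown() when the periodic task is no longer needed
  }
}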
83) A counting semaphore allows n tasks to access the resource at the same time. You can also think of a semaphore as handing out "permits" to use a resource, although no actual permit objects are used. You can pass the number of permits to the Semaphore constructor, and also a boolean to indicate whether to guarantee first-in first-out granting of permits under contention. acquire( ) acquires a permit from this semaphore, blocking until one is available or the thread is interrupted. release( ) releases a permit, increasing the number of available permits by one. There is no requirement that a thread releasing a permit must have acquired that permit by calling acquire( ), and more permits than the initial number can be released.
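A sketch of a resource bounded by a counting Semaphore (class and method names are illustrative):

import java.util.concurrent.Semaphore;

public class BoundedResource {
  private final Semaphore available;

  public BoundedResource(int maxUsers) {
    available = new Semaphore(maxUsers, true);   // true = fair (FIFO) granting of permits
  }

  public void useResource() throws InterruptedException {
    available.acquire();        // blocks until a permit is free, or until interrupted
    try {
      // ... at most maxUsers tasks are in here at the same time ...
    } finally {
      available.release();      // hand the permit back
    }
  }
}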
84) An Exchanger is a barrier that swaps objects between two tasks. When the tasks enter the barrier, they offer one object, and when they leave, they have the object that was formerly held and offered by the other task. Exchangers are typically used when one task is creating objects that are expensive to produce and another task is consuming those objects; this way, more objects can be created at the same time as they are being consumed. exchange( ) takes the object you offer and blocks until the other task provides its object, which is then returned to you.
85) SynchronousQueue is a blocking queue that has no internal capacity, so each put( ) must wait for a take( ), and vice versa. It’s as if you were handing an object to someone—there’s no table to put it on, so it only works if that person is holding a hand out, ready to receive the object.
86) Using Lock is usually significantly more efficient than using synchronized, and it also appears that the overhead of synchronized varies widely, while Locks are relatively consistent. However, the percentage of time spent in the critical section will probably be significantly bigger than the overhead of entering and exiting the mutex, and could overwhelm any benefit of speeding up the mutex. The synchronized keyword produces much more readable code than the lock try/finally-unlock idiom that Locks require. As a result, it makes sense to start with the synchronized keyword and only change to Lock objects when you are tuning for performance. Atomic objects are only useful in very simple cases, generally when you only have one Atomic object that's being modified and when that object is independent from all other objects. It's safer to start with more traditional mutexing approaches and only attempt to change to Atomic later, if performance requirements dictate.
87) Vector and Hashtable had many synchronized methods, which caused unacceptable overhead when they were not being used in multithreaded applications. The Collections class was given various static "synchronized" decoration methods to synchronize the different types of containers.
88) The general strategy behind these lock-free containers is: Modifications to the containers can happen at the same time that reads are occurring, as long as the readers can only see the results of completed modifications. A modification is performed on a separate copy of a portion of the data structure (or sometimes a copy of the whole thing), and this copy is invisible during the modification process. Only when the modification is complete is the modified structure atomically swapped with the "main" data structure, and after that readers will see the modification.
89) In CopyOnWriteArrayList, a write will cause a copy of the entire underlying array to be created. The original array is left in place so that reads can safely occur while the copied array is being modified. When the modification is complete, an atomic operation swaps the new array in so that new reads will see the new information. One of the benefits of CopyOnWriteArrayList is that it does not throw ConcurrentModificationException when multiple iterators are traversing and modifying the list. CopyOnWriteArraySet uses CopyOnWriteArrayList to achieve its lock-free behavior.
90) ConcurrentHashMap and ConcurrentLinkedQueue use similar techniques to allow concurrent reads and writes, but only portions of the container are copied and modified rather than the entire container. However, readers will still not see any modifications before they are complete. ConcurrentHashMap doesn’t throw ConcurrentModificationException.
91) Atomic classes also allow you to perform what is called "optimistic locking." This means that you do not actually use a mutex when you are performing a calculation, but after the calculation is finished and you’re ready to update the Atomic object, you use a method called compareAndSet( ). You hand it the old value and the new value, and if the old value doesn’t agree with the value it finds in the Atomic object, the operation fails—this means that some other task has modified the object in the meantime. By using an Atomic instead of synchronized or Lock, you might gain performance benefits.
92) ReadWriteLock optimizes the situation where you write to a data structure relatively infrequently, but multiple tasks read from it often. The ReadWriteLock allows you to have many readers at one time as long as no one is attempting to write. If the write lock is held, then no readers are allowed until the write lock is released. ReentrantReadWriteLock.getReadLockCount( ) returns the number of read locks currently held.
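A sketch of a read-mostly value guarded by a ReentrantReadWriteLock (names are illustrative):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadMostlyValue {
  private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
  private int value = 0;

  public int read() {
    rwLock.readLock().lock();       // shared: many readers may hold this at once
    try {
      return value;
    } finally {
      rwLock.readLock().unlock();
    }
  }

  public void write(int newValue) {
    rwLock.writeLock().lock();      // exclusive: blocks all readers and writers
    try {
      value = newValue;
    } finally {
      rwLock.writeLock().unlock();
    }
  }
}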
93) One concurrent programming model is called "active objects" or "actors". The reason the objects are called "active" is that each object maintains its own worker thread and message queue, and all requests to that object are enqueued, to be run one at a time. So with active objects, we serialize messages rather than methods, which means we no longer need to guard against problems that happen when a task is interrupted midway through its loop. With active objects:
a) Each object has its own (single) worker thread.
b) Each object maintains total control of its own fields (which is somewhat more rigorous than normal classes, which only have the option of guarding their fields).
c) All communication between active objects happens in the form of messages between those objects.
d) All messages between active objects are enqueued.