Java Concurrency In Practice
Brian Goetz
Tim Peierls
Joshua Bloch
Joseph Bowbeer
David Holmes
Doug Lea
Addison‐Wesley Professional
ISBN‐10: 0‐321‐34960‐1
ISBN‐13: 978‐0‐321‐34960‐6
Contents
Preface
How to Use this Book
Code Examples
Acknowledgments
Chapter 1. Introduction
1.1. A (Very) Brief History of Concurrency
1.2. Benefits of Threads
1.2.1. Exploiting Multiple Processors
1.2.2. Simplicity of Modeling
1.2.3. Simplified Handling of Asynchronous Events
1.2.4. More Responsive User Interfaces
1.3. Risks of Threads
1.3.1. Safety Hazards
1.3.2. Liveness Hazards
1.3.3. Performance Hazards
1.4. Threads are Everywhere
Part I: Fundamentals
Chapter 2. Thread Safety
2.1. What is Thread Safety?
2.2. Atomicity
2.3. Locking
2.4. Guarding State with Locks
2.5. Liveness and Performance
Chapter 3. Sharing Objects
3.1. Visibility
3.2. Publication and Escape
3.3. Thread Confinement
3.4. Immutability
3.5. Safe Publication
Chapter 4. Composing Objects
4.1. Designing a Thread‐safe Class
4.2. Instance Confinement
4.3. Delegating Thread Safety
4.4. Adding Functionality to Existing Thread‐safe Classes
4.5. Documenting Synchronization Policies
Chapter 5. Building Blocks
5.1. Synchronized Collections
5.2. Concurrent Collections
5.3. Blocking Queues and the Producer‐consumer Pattern
5.4. Blocking and Interruptible Methods
5.5. Synchronizers
5.6. Building an Efficient, Scalable Result Cache
Summary of Part I
Part II: Structuring Concurrent Applications
Chapter 6. Task Execution
6.1. Executing Tasks in Threads
6.2. The Executor Framework
6.3. Finding Exploitable Parallelism
Summary
Chapter 7. Cancellation and Shutdown
7.1. Task Cancellation
7.2. Stopping a Thread‐based Service
7.3. Handling Abnormal Thread Termination
7.4. JVM Shutdown
Summary
Chapter 8. Applying Thread Pools
8.1. Implicit Couplings Between Tasks and Execution Policies
8.2. Sizing Thread Pools
8.3. Configuring ThreadPoolExecutor
8.4. Extending ThreadPoolExecutor
8.5. Parallelizing Recursive Algorithms
Summary
Chapter 9. GUI Applications
9.1. Why are GUIs Single‐threaded?
9.2. Short‐running GUI Tasks
9.3. Long‐running GUI Tasks
9.4. Shared Data Models
9.5. Other Forms of Single‐threaded Subsystems
Summary
Part III: Liveness, Performance, and Testing
Chapter 10. Avoiding Liveness Hazards
10.1. Deadlock
10.2. Avoiding and Diagnosing Deadlocks
10.3. Other Liveness Hazards
Summary
Chapter 11. Performance and Scalability
11.1. Thinking about Performance
11.2. Amdahl's Law
11.3. Costs Introduced by Threads
11.4. Reducing Lock Contention
11.5. Example: Comparing Map Performance
11.6. Reducing Context Switch Overhead
Summary
Chapter 12. Testing Concurrent Programs
12.1. Testing for Correctness
12.2. Testing for Performance
12.3. Avoiding Performance Testing Pitfalls
12.4. Complementary Testing Approaches
Summary
Part IV: Advanced Topics
Chapter 13. Explicit Locks
13.1. Lock and ReentrantLock
13.2. Performance Considerations
13.3. Fairness
13.4. Choosing Between Synchronized and ReentrantLock
13.5. Read‐write Locks
Summary
Chapter 14. Building Custom Synchronizers
14.1. Managing State Dependence
14.2. Using Condition Queues
14.3. Explicit Condition Objects
14.4. Anatomy of a Synchronizer
14.5. AbstractQueuedSynchronizer
14.6. AQS in java.util.concurrent Synchronizer Classes
Summary
Chapter 15. Atomic Variables and Non-blocking Synchronization
15.1. Disadvantages of Locking
15.2. Hardware Support for Concurrency
15.3. Atomic Variable Classes
15.4. Non‐blocking Algorithms
Summary
Chapter 16. The Java Memory Model
16.1. What is a Memory Model, and Why would I Want One?
16.2. Publication
Summary
Appendix A. Annotations for Concurrency
A.1. Class Annotations
A.2. Field and Method Annotations
Bibliography
Listing and Image Index
Preface
Listing 1. Bad Way to Sort a List. Don't Do this.
Listing 2. Less than Optimal Way to Sort a List.
Chapter 1. Introduction
Listing 1.1. Non‐thread‐safe Sequence Generator.
Figure 1.1. Unlucky Execution of UnsafeSequence.nextValue.
Listing 1.2. Thread‐safe Sequence Generator.
Chapter 2. Thread Safety
Listing 2.1. A Stateless Servlet.
Listing 2.2. Servlet that Counts Requests without the Necessary Synchronization. Don't Do this.
Listing 2.3. Race Condition in Lazy Initialization. Don't Do this.
Listing 2.4. Servlet that Counts Requests Using AtomicLong.
Listing 2.5. Servlet that Attempts to Cache its Last Result without Adequate Atomicity. Don't Do this.
Listing 2.6. Servlet that Caches Last Result, But with Unacceptably Poor Concurrency. Don't Do this.
Listing 2.7. Code that would Deadlock if Intrinsic Locks were Not Reentrant.
Figure 2.1. Poor Concurrency of SynchronizedFactorizer.
Listing 2.8. Servlet that Caches its Last Request and Result.
Chapter 3. Sharing Objects
Listing 3.1. Sharing Variables without Synchronization. Don't Do this.
Listing 3.2. Non‐thread‐safe Mutable Integer Holder.
Listing 3.3. Thread‐safe Mutable Integer Holder.
Figure 3.1. Visibility Guarantees for Synchronization.
Listing 3.4. Counting Sheep.
Listing 3.5. Publishing an Object.
Listing 3.6. Allowing Internal Mutable State to Escape. Don't Do this.
Listing 3.7. Implicitly Allowing the this Reference to Escape. Don't Do this.
Listing 3.8. Using a Factory Method to Prevent the this Reference from Escaping During Construction.
Listing 3.9. Thread Confinement of Local Primitive and Reference Variables.
Listing 3.10. Using ThreadLocal to Ensure Thread Confinement.
Listing 3.11. Immutable Class Built Out of Mutable Underlying Objects.
Listing 3.12. Immutable Holder for Caching a Number and its Factors.
Listing 3.13. Caching the Last Result Using a Volatile Reference to an Immutable Holder Object.
Listing 3.14. Publishing an Object without Adequate Synchronization. Don't Do this.
Listing 3.15. Class at Risk of Failure if Not Properly Published.
Chapter 4. Composing Objects
Listing 4.1. Simple Thread‐safe Counter Using the Java Monitor Pattern.
Listing 4.2. Using Confinement to Ensure Thread Safety.
Listing 4.3. Guarding State with a Private Lock.
Listing 4.4. Monitor‐based Vehicle Tracker Implementation.
Listing 4.5. Mutable Point Class Similar to java.awt.Point.
Listing 4.6. Immutable Point Class Used by DelegatingVehicleTracker.
Listing 4.7. Delegating Thread Safety to a ConcurrentHashMap.
Listing 4.8. Returning a Static Copy of the Location Set Instead of a "Live" One.
Listing 4.9. Delegating Thread Safety to Multiple Underlying State Variables.
Listing 4.10. Number Range Class that does Not Sufficiently Protect Its Invariants. Don't Do this.
Listing 4.11. Thread‐safe Mutable Point Class.
Listing 4.12. Vehicle Tracker that Safely Publishes Underlying State.
Listing 4.13. Extending Vector to have a Put‐if‐absent Method.
Listing 4.14. Non‐thread‐safe Attempt to Implement Put‐if‐absent. Don't Do this.
Listing 4.15. Implementing Put‐if‐absent with Client‐side Locking.
Listing 4.16. Implementing Put‐if‐absent Using Composition.
Chapter 5. Building Blocks
Listing 5.1. Compound Actions on a Vector that may Produce Confusing Results.
Figure 5.1. Interleaving of getLast and deleteLast that throws ArrayIndexOutOfBoundsException.
Listing 5.2. Compound Actions on Vector Using Client‐side Locking.
Listing 5.3. Iteration that may Throw ArrayIndexOutOfBoundsException.
Listing 5.4. Iteration with Client‐side Locking.
Listing 5.5. Iterating a List with an Iterator.
Listing 5.6. Iteration Hidden within String Concatenation. Don't Do this.
Listing 5.7. ConcurrentMap Interface.
Listing 5.8. Producer and Consumer Tasks in a Desktop Search Application.
Listing 5.9. Starting the Desktop Search.
Listing 5.10. Restoring the Interrupted Status so as Not to Swallow the Interrupt.
Listing 5.11. Using CountDownLatch for Starting and Stopping Threads in Timing Tests.
Listing 5.12. Using FutureTask to Preload Data that is Needed Later.
Listing 5.13. Coercing an Unchecked Throwable to a RuntimeException.
Listing 5.14. Using Semaphore to Bound a Collection.
Listing 5.15. Coordinating Computation in a Cellular Automaton with CyclicBarrier.
Listing 5.16. Initial Cache Attempt Using HashMap and Synchronization.
Figure 5.2. Poor Concurrency of Memoizer1.
Figure 5.3. Two Threads Computing the Same Value When Using Memoizer2.
Listing 5.17. Replacing HashMap with ConcurrentHashMap.
Figure 5.4. Unlucky Timing that could Cause Memoizer3 to Calculate the Same Value Twice.
Listing 5.18. Memoizing Wrapper Using FutureTask.
Listing 5.19. Final Implementation of Memoizer.
Listing 5.20. Factorizing Servlet that Caches Results Using Memoizer.
Chapter 6. Task Execution
Listing 6.1. Sequential Web Server.
Listing 6.2. Web Server that Starts a New Thread for Each Request.
Listing 6.3. Executor Interface.
Listing 6.4. Web Server Using a Thread Pool.
Listing 6.5. Executor that Starts a New Thread for Each Task.
Listing 6.6. Executor that Executes Tasks Synchronously in the Calling Thread.
Listing 6.7. Lifecycle Methods in ExecutorService.
Listing 6.8. Web Server with Shutdown Support.
Listing 6.9. Class Illustrating Confusing Timer Behavior.
Listing 6.10. Rendering Page Elements Sequentially.
Listing 6.11. Callable and Future Interfaces.
Listing 6.12. Default Implementation of newTaskFor in ThreadPoolExecutor.
Listing 6.13. Waiting for Image Download with Future.
Listing 6.14. QueueingFuture Class Used By ExecutorCompletionService.
Listing 6.15. Using CompletionService to Render Page Elements as they Become Available.
Listing 6.16. Fetching an Advertisement with a Time Budget.
Listing 6.17. Requesting Travel Quotes Under a Time Budget.
Chapter 7. Cancellation and Shutdown
Listing 7.1. Using a Volatile Field to Hold Cancellation State.
Listing 7.2. Generating a Second's Worth of Prime Numbers.
Listing 7.3. Unreliable Cancellation that can Leave Producers Stuck in a Blocking Operation. Don't Do this.
Listing 7.4. Interruption Methods in Thread.
Listing 7.5. Using Interruption for Cancellation.
Listing 7.6. Propagating InterruptedException to Callers.
Listing 7.7. Non‐cancelable Task that Restores Interruption Before Exit.
Listing 7.8. Scheduling an Interrupt on a Borrowed Thread. Don't Do this.
Listing 7.9. Interrupting a Task in a Dedicated Thread.
Listing 7.10. Cancelling a Task Using Future.
Listing 7.11. Encapsulating Nonstandard Cancellation in a Thread by Overriding Interrupt.
Listing 7.12. Encapsulating Nonstandard Cancellation in a Task with newTaskFor.
Listing 7.13. Producer‐Consumer Logging Service with No Shutdown Support.
Listing 7.14. Unreliable Way to Add Shutdown Support to the Logging Service.
Listing 7.15. Adding Reliable Cancellation to LogWriter.
Listing 7.16. Logging Service that Uses an ExecutorService.
Listing 7.17. Shutdown with Poison Pill.
Listing 7.18. Producer Thread for IndexingService.
Listing 7.19. Consumer Thread for IndexingService.
Listing 7.20. Using a Private Executor Whose Lifetime is Bounded by a Method Call.
Listing 7.21. ExecutorService that Keeps Track of Cancelled Tasks After Shutdown.
Listing 7.22. Using TrackingExecutorService to Save Unfinished Tasks for Later Execution.
Listing 7.23. Typical Thread‐pool Worker Thread Structure.
Listing 7.24. UncaughtExceptionHandler Interface.
Listing 7.25. UncaughtExceptionHandler that Logs the Exception.
Listing 7.26. Registering a Shutdown Hook to Stop the Logging Service.
Chapter 8. Applying Thread Pools
Listing 8.1. Task that Deadlocks in a Single‐threaded Executor. Don't Do this.
Listing 8.2. General Constructor for ThreadPoolExecutor.
Listing 8.3. Creating a Fixed‐sized Thread Pool with a Bounded Queue and the Caller‐runs Saturation Policy.
Listing 8.4. Using a Semaphore to Throttle Task Submission.
Listing 8.5. ThreadFactory Interface.
Listing 8.6. Custom Thread Factory.
Listing 8.7. Custom Thread Base Class.
Listing 8.8. Modifying an Executor Created with the Standard Factories.
Listing 8.9. Thread Pool Extended with Logging and Timing.
Listing 8.10. Transforming Sequential Execution into Parallel Execution.
Listing 8.11. Transforming Sequential Tail‐recursion into Parallelized Recursion.
Listing 8.12. Waiting for Results to be Calculated in Parallel.
Listing 8.13. Abstraction for Puzzles Like the "Sliding Blocks Puzzle".
Listing 8.14. Link Node for the Puzzle Solver Framework.
Listing 8.15. Sequential Puzzle Solver.
Listing 8.16. Concurrent Version of Puzzle Solver.
Listing 8.17. Result‐bearing Latch Used by ConcurrentPuzzleSolver.
Listing 8.18. Solver that Recognizes when No Solution Exists.
Chapter 9. GUI Applications
Figure 9.1. Control Flow of a Simple Button Click.
Listing 9.1. Implementing SwingUtilities Using an Executor.
Listing 9.2. Executor Built Atop SwingUtilities.
Listing 9.3. Simple Event Listener.
Figure 9.2. Control Flow with Separate Model and View Objects.
Listing 9.4. Binding a Long‐running Task to a Visual Component.
Listing 9.5. Long‐running Task with User Feedback.
Listing 9.6. Cancelling a Long‐running Task.
Listing 9.7. Background Task Class Supporting Cancellation, Completion Notification, and Progress Notification.
Listing 9.8. Initiating a Long‐running, Cancellable Task with BackgroundTask.
Chapter 10. Avoiding Liveness Hazards
Figure 10.1. Unlucky Timing in LeftRightDeadlock.
Listing 10.1. Simple Lock‐ordering Deadlock. Don't Do this.
Listing 10.2. Dynamic Lock‐ordering Deadlock. Don't Do this.
Listing 10.3. Inducing a Lock Ordering to Avoid Deadlock.
Listing 10.4. Driver Loop that Induces Deadlock Under Typical Conditions.
Listing 10.5. Lock‐ordering Deadlock Between Cooperating Objects. Don't Do this.
Listing 10.6. Using Open Calls to Avoid Deadlock Between Cooperating Objects.
Listing 10.7. Portion of Thread Dump After Deadlock.
Chapter 11. Performance and Scalability
Figure 11.1. Maximum Utilization Under Amdahl's Law for Various Serialization Percentages.
Listing 11.1. Serialized Access to a Task Queue.
Figure 11.2. Comparing Queue Implementations.
Listing 11.2. Synchronization that has No Effect. Don't Do this.
Listing 11.3. Candidate for Lock Elision.
Listing 11.4. Holding a Lock Longer than Necessary.
Listing 11.5. Reducing Lock Duration.
Listing 11.6. Candidate for Lock Splitting.
Listing 11.7. ServerStatus Refactored to Use Split Locks.
Listing 11.8. Hash‐based Map Using Lock Striping.
Figure 11.3. Comparing Scalability of Map Implementations.
Chapter 12. Testing Concurrent Programs
Listing 12.1. Bounded Buffer Using Semaphore.
Listing 12.2. Basic Unit Tests for BoundedBuffer.
Listing 12.3. Testing Blocking and Responsiveness to Interruption.
Listing 12.4. Medium‐quality Random Number Generator Suitable for Testing.
Listing 12.5. Producer‐consumer Test Program for BoundedBuffer.
Listing 12.6. Producer and Consumer Classes Used in PutTakeTest.
Listing 12.7. Testing for Resource Leaks.
Listing 12.8. Thread Factory for Testing ThreadPoolExecutor.
Listing 12.9. Test Method to Verify Thread Pool Expansion.
Listing 12.10. Using Thread.yield to Generate More Interleavings.
Listing 12.11. Barrier‐based Timer.
Figure 12.1. TimedPutTakeTest with Various Buffer Capacities.
Listing 12.12. Testing with a Barrier‐based Timer.
Listing 12.13. Driver Program for TimedPutTakeTest.
Figure 12.2. Comparing Blocking Queue Implementations.
Figure 12.3. Completion Time Histogram for TimedPutTakeTest with Default (Non‐fair) and Fair Semaphores.
Figure 12.4. Completion Time Histogram for TimedPutTakeTest with Single‐item Buffers.
Figure 12.5. Results Biased by Dynamic Compilation.
Chapter 13. Explicit Locks
Listing 13.1. Lock Interface.
Listing 13.2. Guarding Object State Using ReentrantLock.
Listing 13.3. Avoiding Lock‐ordering Deadlock Using tryLock.
Listing 13.4. Locking with a Time Budget.
Listing 13.5. Interruptible Lock Acquisition.
Figure 13.1. Intrinsic Locking Versus ReentrantLock Performance on Java 5.0 and Java 6.
Figure 13.2. Fair Versus Non‐fair Lock Performance.
Listing 13.6. ReadWriteLock Interface.
Listing 13.7. Wrapping a Map with a Read‐write Lock.
Figure 13.3. Read‐write Lock Performance.
Chapter 14. Building Custom Synchronizers
Listing 14.1. Structure of Blocking State‐dependent Actions.
Listing 14.2. Base Class for Bounded Buffer Implementations.
Listing 14.3. Bounded Buffer that Balks When Preconditions are Not Met.
Listing 14.4. Client Logic for Calling GrumpyBoundedBuffer.
Figure 14.1. Thread Oversleeping Because the Condition Became True Just After It Went to Sleep.
Listing 14.5. Bounded Buffer Using Crude Blocking.
Listing 14.6. Bounded Buffer Using Condition Queues.
Listing 14.7. Canonical Form for State‐dependent Methods.
Listing 14.8. Using Conditional Notification in BoundedBuffer.put.
Listing 14.9. Recloseable Gate Using wait and notifyAll.
Listing 14.10. Condition Interface.
Listing 14.11. Bounded Buffer Using Explicit Condition Variables.
Listing 14.12. Counting Semaphore Implemented Using Lock.
Listing 14.13. Canonical Forms for Acquisition and Release in AQS.
Listing 14.14. Binary Latch Using AbstractQueuedSynchronizer.
Listing 14.15. tryAcquire Implementation From Non‐fair ReentrantLock.
Listing 14.16. tryAcquireShared and tryReleaseShared from Semaphore.
Chapter 15. Atomic Variables and Non-blocking Synchronization
Listing 15.1. Simulated CAS Operation.
Listing 15.2. Non‐blocking Counter Using CAS.
Listing 15.3. Preserving Multivariable Invariants Using CAS.
Figure 15.1. Lock and AtomicInteger Performance Under High Contention.
Figure 15.2. Lock and AtomicInteger Performance Under Moderate Contention.
Listing 15.4. Random Number Generator Using ReentrantLock.
Listing 15.5. Random Number Generator Using AtomicInteger.
Listing 15.6. Non‐blocking Stack Using Treiber's Algorithm (Treiber, 1986).
Figure 15.3. Queue with Two Elements in Quiescent State.
Figure 15.4. Queue in Intermediate State During Insertion.
Figure 15.5. Queue Again in Quiescent State After Insertion is Complete.
Listing 15.7. Insertion in the Michael‐Scott Non‐blocking Queue Algorithm (Michael and Scott, 1996).
Listing 15.8. Using Atomic Field Updaters in ConcurrentLinkedQueue.
Chapter 16. The Java Memory Model
Figure 16.1. Interleaving Showing Reordering in PossibleReordering.
Listing 16.1. Insufficiently Synchronized Program that can have Surprising Results. Don't Do this.
Figure 16.2. Illustration of Happens‐before in the Java Memory Model.
Listing 16.2. Inner Class of FutureTask Illustrating Synchronization Piggybacking.
Listing 16.3. Unsafe Lazy Initialization. Don't Do this.
Listing 16.4. Thread‐safe Lazy Initialization.
Listing 16.5. Eager Initialization.
Listing 16.6. Lazy Initialization Holder Class Idiom.
Listing 16.7. Double‐checked‐locking Anti‐pattern. Don't Do this.
Listing 16.8. Initialization Safety for Immutable Objects.
Preface
At this writing, multi‐core processors are just now becoming inexpensive enough for midrange desktop systems. Not
coincidentally, many development teams are noticing more and more threading‐related bug reports in their projects. In
a recent post on the NetBeans developer site, one of the core maintainers observed that a single class had been
patched over 14 times to fix threading‐related problems. Dion Almaer, former editor of TheServerSide, recently blogged
(after a painful debugging session that ultimately revealed a threading bug) that most Java programs are so rife with
concurrency bugs that they work only "by accident".
Indeed, developing, testing and debugging multithreaded programs can be extremely difficult because concurrency bugs
do not manifest themselves predictably. And when they do surface, it is often at the worst possible time: in production, under heavy load.
One of the challenges of developing concurrent programs in Java is the mismatch between the concurrency features
offered by the platform and how developers need to think about concurrency in their programs. The language provides
low‐level mechanisms such as synchronization and condition waits, but these mechanisms must be used consistently to
implement application‐level protocols or policies. Without such policies, it is all too easy to create programs that
compile and appear to work but are nevertheless broken. Many otherwise excellent books on concurrency fall short of
their goal by focusing excessively on low‐level mechanisms and APIs rather than design‐level policies and patterns.
Java 5.0 is a huge step forward for the development of concurrent applications in Java, providing new higher‐level
components and additional low‐level mechanisms that make it easier for novices and experts alike to build concurrent
applications. The authors are the primary members of the JCP Expert Group that created these facilities; in addition to
describing their behavior and features, we present the underlying design patterns and anticipated usage scenarios that
motivated their inclusion in the platform libraries.
Our goal is to give readers a set of design rules and mental models that make it easier and more fun to build correct,
performant concurrent classes and applications in Java.
We hope you enjoy Java Concurrency in Practice.
Brian Goetz
Williston, VT
March 2006
How to Use this Book
To address the abstraction mismatch between Java's low‐level mechanisms and the necessary design‐level policies, we
present a simplified set of rules for writing concurrent programs. Experts may look at these rules and say "Hmm, that's
not entirely true: class C is thread‐safe even though it violates rule R." While it is possible to write correct programs that
break our rules, doing so requires a deep understanding of the low‐level details of the Java Memory Model, and we
want developers to be able to write correct concurrent programs without having to master these details. Consistently
following our simplified rules will produce correct and maintainable concurrent programs.
We assume the reader already has some familiarity with the basic mechanisms for concurrency in Java. Java
Concurrency in Practice is not an introduction to concurrency; for that, see the threading chapter of any decent introductory volume, such as The Java Programming Language (Arnold et al., 2005). Nor is it an encyclopedic reference for All Things Concurrency; for that, see Concurrent Programming in Java (Lea, 2000). Rather, it offers practical design
rules to assist developers in the difficult process of creating safe and performant concurrent classes. Where appropriate,
we cross‐reference relevant sections of The Java Programming Language, Concurrent Programming in Java, The Java
Language Specification (Gosling et al., 2005), and Effective Java (Bloch, 2001) using the conventions [JPL n.m], [CPJ n.m],
[JLS n.m], and [EJ Item n].
After the introduction (Chapter 1), the book is divided into four parts:
Fundamentals. Part I (Chapters 2‐5) focuses on the basic concepts of concurrency and thread safety, and how to
compose thread‐safe classes out of the concurrent building blocks provided by the class library. A "cheat sheet"
summarizing the most important of the rules presented in Part I appears on page 110.
Chapters 2 (Thread Safety) and 3 (Sharing Objects) form the foundation for the book. Nearly all of the rules on avoiding
concurrency hazards, constructing thread‐safe classes, and verifying thread safety are here. Readers who prefer
"practice" to "theory" may be tempted to skip ahead to Part II, but make sure to come back and read Chapters 2 and 3
before writing any concurrent code!
Chapter 4 (Composing Objects) covers techniques for composing thread‐safe classes into larger thread‐safe classes.
Chapter 5 (Building Blocks) covers the concurrent building blocks ‐ thread‐safe collections and synchronizers ‐ provided
by the platform libraries.
Structuring Concurrent Applications. Part II (Chapters 6‐9) describes how to exploit threads to improve the throughput
or responsiveness of concurrent applications. Chapter 6 (Task Execution) covers identifying parallelizable tasks and
executing them within the task‐execution framework. Chapter 7 (Cancellation and Shutdown) deals with techniques for
convincing tasks and threads to terminate before they would normally do so; how programs deal with cancellation and
shutdown is often one of the factors that separate truly robust concurrent applications from those that merely work.
Chapter 8 (Applying Thread Pools) addresses some of the more advanced features of the task‐execution framework.
Chapter 9 (GUI Applications) focuses on techniques for improving responsiveness in single‐threaded subsystems.
Liveness, Performance, and Testing. Part III (Chapters 10‐12) concerns itself with ensuring that concurrent programs
actually do what you want them to do and do so with acceptable performance. Chapter 10 (Avoiding Liveness Hazards)
describes how to avoid liveness failures that can prevent programs from making forward progress. Chapter 11
(Performance and Scalability) covers techniques for improving the performance and scalability of concurrent code.
Chapter 12 (Testing Concurrent Programs) covers techniques for testing concurrent code for both correctness and
performance.
Advanced Topics. Part IV (Chapters 13‐16) covers topics that are likely to be of interest only to experienced developers:
explicit locks, atomic variables, non‐blocking algorithms, and developing custom synchronizers.
Code Examples
While many of the general concepts in this book are applicable to versions of Java prior to Java 5.0 and even to non‐Java
environments, most of the code examples (and all the statements about the Java Memory Model) assume Java 5.0 or
later. Some of the code examples may use library features added in Java 6.
The code examples have been compressed to reduce their size and to highlight the relevant portions. The full versions
of the code examples, as well as supplementary examples and errata, are available from the book's website,
http://www.javaconcurrencyinpractice.com.
The code examples are of three sorts: "good" examples, "not so good" examples, and "bad" examples. Good examples
illustrate techniques that should be emulated. Bad examples illustrate techniques that should definitely not be
emulated, and are identified with a "Mr. Yuk" icon[1] to make it clear that this is "toxic" code (see Listing 1). Not‐so‐good
examples illustrate techniques that are not necessarily wrong but are fragile, risky, or perform poorly, and are decorated
with a "Mr. CouldBeHappier" icon as in Listing 2.
[1] Mr. Yuk is a registered trademark of the Children's Hospital of Pittsburgh and appears by permission.
Listing 1. Bad Way to Sort a List. Don't Do this.
public <T extends Comparable<? super T>> void sort(List<T> list) {
    // Never returns the wrong answer!
    System.exit(0);
}
Some readers may question the role of the "bad" examples in this book; after all, a book should show how to do things
right, not wrong. The bad examples have two purposes. They illustrate common pitfalls, but more importantly they
demonstrate how to analyze a program for thread safety ‐ and the best way to do that is to see the ways in which
thread safety is compromised.
Listing 2. Less than Optimal Way to Sort a List.
public <T extends Comparable<? super T>> void sort(List<T> list) {
    for (int i = 0; i < 1000000; i++)
        doNothing();
    Collections.sort(list);
}
Acknowledgments
This book grew out of the development process for the java.util.concurrent package that was created by the Java
Community Process JSR 166 for inclusion in Java 5.0. Many others contributed to JSR 166; in particular we thank Martin
Buchholz for doing all the work related to getting the code into the JDK, and all the readers of the concurrencyinterest
mailing list who offered their suggestions and feedback on the draft APIs.
This book has been tremendously improved by the suggestions and assistance of a small army of reviewers, advisors,
cheerleaders, and armchair critics. We would like to thank Dion Almaer, Tracy Bialik, Cindy Bloch, Martin Buchholz, Paul
Christmann, Cliff Click, Stuart Halloway, David Hovemeyer, Jason Hunter, Michael Hunter, Jeremy Hylton, Heinz Kabutz,
Robert Kuhar, Ramnivas Laddad, Jared Levy, Nicole Lewis, Victor Luchangco, Jeremy Manson, Paul Martin, Berna
Massingill, Michael Maurer, Ted Neward, Kirk Pepperdine, Bill Pugh, Sam Pullara, Russ Rufer, Bill Scherer, Jeffrey Siegal,
Bruce Tate, Gil Tene, Paul Tyma, and members of the Silicon Valley Patterns Group who, through many interesting
technical conversations, offered guidance and made suggestions that helped make this book better.
We are especially grateful to Cliff Biffle, Barry Hayes, Dawid Kurzyniec, Angelika Langer, Doron Rajwan, and Bill Venners,
who reviewed the entire manuscript in excruciating detail, found bugs in the code examples, and suggested numerous
improvements.
We thank Katrina Avery for a great copy‐editing job and Rosemary Simpson for producing the index under unreasonable
time pressure. We thank Ami Dewar for doing the illustrations.
Thanks to the whole team at Addison‐Wesley who helped make this book a reality. Ann Sellers got the project launched
and Greg Doench shepherded it to a smooth completion; Elizabeth Ryan guided it through the production process.
We would also like to thank the thousands of software engineers who contributed indirectly by creating the software
used to create this book, including TeX, LaTeX, Adobe Acrobat, pic, grap, Adobe Illustrator, Perl, Apache Ant, IntelliJ
IDEA, GNU emacs, Subversion, TortoiseSVN, and of course, the Java platform and class libraries.
Chapter 1. Introduction
Writing correct programs is hard; writing correct concurrent programs is harder. There are simply more things that can
go wrong in a concurrent program than in a sequential one. So, why do we bother with concurrency? Threads are an
inescapable feature of the Java language, and they can simplify the development of complex systems by turning
complicated asynchronous code into simpler straight‐line code. In addition, threads are the easiest way to tap the
computing power of multiprocessor systems. And, as processor counts increase, exploiting concurrency effectively will
only become more important.
1.1. A (Very) Brief History of Concurrency
In the ancient past, computers didn't have operating systems; they executed a single program from beginning to end,
and that program had direct access to all the resources of the machine. Not only was it difficult to write programs that
ran on the bare metal, but running only a single program at a time was an inefficient use of expensive and scarce
computer resources.
Operating systems evolved to allow more than one program to run at once, running individual programs in processes:
isolated, independently executing programs to which the operating system allocates resources such as memory, file
handles, and security credentials. If they needed to, processes could communicate with one another through a variety
of coarse‐grained communication mechanisms: sockets, signal handlers, shared memory, semaphores, and files.
Several motivating factors led to the development of operating systems that allowed multiple programs to execute
simultaneously:
Resource utilization. Programs sometimes have to wait for external operations such as input or output, and while
waiting can do no useful work. It is more efficient to use that wait time to let another program run.
Fairness. Multiple users and programs may have equal claims on the machine's resources. It is preferable to let them
share the computer via finer‐grained time slicing than to let one program run to completion and then start another.
Convenience. It is often easier or more desirable to write several programs that each perform a single task and have
them coordinate with each other as necessary than to write a single program that performs all the tasks.
In early timesharing systems, each process was a virtual von Neumann computer; it had a memory space storing both
instructions and data, executing instructions sequentially according to the semantics of the machine language, and
interacting with the outside world via the operating system through a set of I/O primitives. For each instruction
executed there was a clearly defined "next instruction", and control flowed through the program according to the rules
of the instruction set. Nearly all widely used programming languages today follow this sequential programming model,
where the language specification clearly defines "what comes next" after a given action is executed.
The sequential programming model is intuitive and natural, as it models the way humans work: do one thing at a time,
in sequence, mostly. Get out of bed, put on your bathrobe, go downstairs and start the tea. As in programming
languages, each of these real‐world actions is an abstraction for a sequence of finer‐grained actions ‐ open the
cupboard, select a flavor of tea, measure some tea into the pot, see if there's enough water in the teakettle, if not put
some more water in, set it on the stove, turn the stove on, wait for the water to boil, and so on. This last step ‐ waiting
for the water to boil ‐ also involves a degree of asynchrony. While the water is heating, you have a choice of what to do ‐
just wait, or do other tasks in that time such as starting the toast (another asynchronous task) or fetching the
newspaper, while remaining aware that your attention will soon be needed by the teakettle. The manufacturers of
teakettles and toasters know their products are often used in an asynchronous manner, so they raise an audible signal
when they complete their task. Finding the right balance of sequentiality and asynchrony is often a characteristic of
efficient people ‐ and the same is true of programs.
The same concerns (resource utilization, fairness, and convenience) that motivated the development of processes also
motivated the development of threads. Threads allow multiple streams of program control flow to coexist within a
process. They share process‐wide resources such as memory and file handles, but each thread has its own program
counter, stack, and local variables. Threads also provide a natural decomposition for exploiting hardware parallelism on
multiprocessor systems; multiple threads within the same program can be scheduled simultaneously on multiple CPUs.
Threads are sometimes called lightweight processes, and most modern operating systems treat threads, not processes,
as the basic units of scheduling. In the absence of explicit coordination, threads execute simultaneously and
asynchronously with respect to one another. Since threads share the memory address space of their owning process, all
threads within a process have access to the same variables and allocate objects from the same heap, which allows finer‐grained data sharing than inter‐process mechanisms. But without explicit synchronization to coordinate access to
shared data, a thread may modify variables that another thread is in the middle of using, with unpredictable results.
1.2. Benefits of Threads
When used properly, threads can reduce development and maintenance costs and improve the performance of complex
applications. Threads make it easier to model how humans work and interact, by turning asynchronous workflows into
mostly sequential ones. They can also turn otherwise convoluted code into straight‐line code that is easier to write,
read, and maintain.
Threads are useful in GUI applications for improving the responsiveness of the user interface, and in server applications
for improving resource utilization and throughput. They also simplify the implementation of the JVM ‐ the garbage
collector usually runs in one or more dedicated threads. Most nontrivial Java applications rely to some degree on
threads for their organization.
1.2.1. Exploiting Multiple Processors
Multiprocessor systems used to be expensive and rare, found only in large data centers and scientific computing
facilities. Today they are cheap and plentiful; even low‐end server and midrange desktop systems often have multiple
processors. This trend will only accelerate; as it gets harder to scale up clock rates, processor manufacturers will instead
put more processor cores on a single chip. All the major chip manufacturers have begun this transition, and we are
already seeing machines with dramatically higher processor counts.
Since the basic unit of scheduling is the thread, a program with only one thread can run on at most one processor at a
time. On a two‐processor system, a single‐threaded program is giving up access to half the available CPU resources; on a
100‐processor system, it is giving up access to 99%. On the other hand, programs with multiple active threads can
execute simultaneously on multiple processors. When properly designed, multithreaded programs can improve
throughput by utilizing available processor resources more effectively.
Using multiple threads can also help achieve better throughput on single‐processor systems. If a program is single‐threaded, the processor remains idle while it waits for a synchronous I/O operation to complete. In a multithreaded
program, another thread can still run while the first thread is waiting for the I/O to complete, allowing the application to
still make progress during the blocking I/O. (This is like reading the newspaper while waiting for the water to boil, rather
than waiting for the water to boil before starting to read.)
1.2.2. Simplicity of Modeling
It is often easier to manage your time when you have only one type of task to perform (fix these twelve bugs) than
when you have several (fix the bugs, interview replacement candidates for the system administrator, complete your
team's performance evaluations, and create the slides for your presentation next week). When you have only one type
of task to do, you can start at the top of the pile and keep working until the pile is exhausted (or you are); you don't
have to spend any mental energy figuring out what to work on next. On the other hand, managing multiple priorities
and deadlines and switching from task to task usually carries some overhead.
The same is true for software: a program that processes one type of task sequentially is simpler to write, less error‐prone, and easier to test than one managing multiple different types of tasks at once. Assigning a thread to each type of
task or to each element in a simulation affords the illusion of sequentiality and insulates domain logic from the details of
scheduling, interleaved operations, asynchronous I/O, and resource waits. A complicated, asynchronous workflow can
be decomposed into a number of simpler, synchronous workflows each running in a separate thread, interacting only
with each other at specific synchronization points.
This benefit is often exploited by frameworks such as servlets or RMI (Remote Method Invocation). The framework
handles the details of request management, thread creation, and load balancing, dispatching portions of the request
handling to the appropriate application component at the appropriate point in the workflow. Servlet writers do not
need to worry about how many other requests are being processed at the same time or whether the socket input and
output streams block; when a servlet's service method is called in response to a web request, it can process the
request synchronously as if it were a single‐threaded program. This can simplify component development and reduce
the learning curve for using such frameworks.
1.2.3. Simplified Handling of Asynchronous Events
A server application that accepts socket connections from multiple remote clients may be easier to develop when each
connection is allocated its own thread and allowed to use synchronous I/O.
If an application goes to read from a socket when no data is available, read blocks until some data is available. In a
single‐threaded application, this means that not only does processing the corresponding request stall, but processing of
all requests stalls while the single thread is blocked. To avoid this problem, single‐threaded server applications are
forced to use non‐blocking I/O, which is far more complicated and error‐prone than synchronous I/O. However, if each
request has its own thread, then blocking does not affect the processing of other requests.
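For illustration, here is a minimal sketch of that thread-per-connection approach (the sketch is ours, not one of the book's listings; the class name and port number are made up):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative sketch: each accepted connection gets its own thread, so a
// blocking read on one connection cannot stall the handling of the others.
public class ThreadPerConnectionServer {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(8080); // port is arbitrary
        while (true) {
            final Socket connection = serverSocket.accept();
            new Thread(new Runnable() {
                public void run() {
                    handleRequest(connection); // may block on synchronous I/O
                }
            }).start();
        }
    }

    private static void handleRequest(Socket connection) {
        // read the request with blocking I/O and write a response (omitted)
    }
}

Chapter 6 develops this pattern properly, including its resource‐management weaknesses and the thread‐pool alternative.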
Historically, operating systems placed relatively low limits on the number of threads that a process could create, as few
as several hundred (or even less). As a result, operating systems developed efficient facilities for multiplexed I/O, such
as the Unix select and poll system calls, and to access these facilities, the Java class libraries acquired a set of
packages (java.nio) for non‐blocking I/O. However, operating system support for larger numbers of threads has
improved significantly, making the thread‐per‐client model practical even for large numbers of clients on some
platforms.[1]
[1] The NPTL threads package, now part of most Linux distributions, was designed to support hundreds of thousands of threads. Non‐blocking I/O
has its own benefits, but better OS support for threads means that there are fewer situations for which it is essential.
1.2.4. More Responsive User Interfaces
GUI applications used to be single‐threaded, which meant that you had to either frequently poll throughout the code for
input events (which is messy and intrusive) or execute all application code indirectly through a "main event loop". If
code called from the main event loop takes too long to execute, the user interface appears to "freeze" until that code
finishes, because subsequent user interface events cannot be processed until control is returned to the main event loop.
Modern GUI frameworks, such as the AWT and Swing toolkits, replace the main event loop with an event dispatch
thread (EDT). When a user interface event such as a button press occurs, application‐defined event handlers are called
in the event thread. Most GUI frameworks are single‐threaded subsystems, so the main event loop is effectively still
present, but it runs in its own thread under the control of the GUI toolkit rather than the application.
If only short‐lived tasks execute in the event thread, the interface remains responsive since the event thread is always
able to process user actions reasonably quickly. However, processing a long‐running task in the event thread, such as
spell‐checking a large document or fetching a resource over the network, impairs responsiveness. If the user performs
an action while this task is running, there is a long delay before the event thread can process or even acknowledge it. To
add insult to injury, not only does the UI become unresponsive, but it is impossible to cancel the offending task even if
the UI provides a cancel button because the event thread is busy and cannot handle the cancel button‐press event until
the lengthy task completes! If, however, the long‐running task is instead executed in a separate thread, the event
thread remains free to process UI events, making the UI more responsive.
1.3. Risks of Threads
Java's built‐in support for threads is a double‐edged sword. While it simplifies the development of concurrent
applications by providing language and library support and a formal cross‐platform memory model (it is this formal
cross‐platform memory model that makes possible the development of write‐once, run‐anywhere concurrent
applications in Java), it also raises the bar for developers because more programs will use threads. When threads were
more esoteric, concurrency was an "advanced" topic; now, mainstream developers must be aware of thread‐safety
issues.
1.3.1. Safety Hazards
Thread safety can be unexpectedly subtle because, in the absence of sufficient synchronization, the ordering of
operations in multiple threads is unpredictable and sometimes surprising. UnsafeSequence in Listing 1.1, which is
supposed to generate a sequence of unique integer values, offers a simple illustration of how the interleaving of actions
in multiple threads can lead to undesirable results. It behaves correctly in a single‐threaded environment, but in a
multithreaded environment does not.
Listing 1.1. Non‐thread‐safe Sequence Generator.
@NotThreadSafe
public class UnsafeSequence {
    private int value;

    /** Returns a unique value. */
    public int getNext() {
        return value++;
    }
}
The problem with UnsafeSequence is that with some unlucky timing, two threads could call getNext and receive the
same value. Figure 1.1 shows how this can happen. The increment notation, nextValue++, may appear to be a single
operation, but is in fact three separate operations: read the value, add one to it, and write out the new value. Since
operations in multiple threads may be arbitrarily interleaved by the runtime, it is possible for two threads to read the
value at the same time, both see the same value, and then both add one to it. The result is that the same sequence
number is returned from multiple calls in different threads.
Figure 1.1. Unlucky Execution of UnsafeSequence.nextValue.
Diagrams like Figure 1.1 depict possible interleavings of operations in different threads. In these diagrams, time runs
from left to right, and each line represents the activities of a different thread. These interleaving diagrams usually depict
the worst case[2] and are intended to show the danger of incorrectly assuming things will happen in a particular order.
[2] Actually, as we'll see in Chapter 3, the worst case can be even worse than these diagrams usually show because of the possibility of reordering.
UnsafeSequence uses a nonstandard annotation: @NotThreadSafe. This is one of several custom annotations used
throughout this book to document concurrency properties of classes and class members. (Other class‐level annotations
used in this way are @ThreadSafe and @Immutable; see Appendix A for details.) Annotations documenting thread safety
are useful to multiple audiences. If a class is annotated with @ThreadSafe, users can use it with confidence in a
multithreaded environment, maintainers are put on notice that it makes thread safety guarantees that must be
preserved, and software analysis tools can identify possible coding errors.
UnsafeSequence illustrates a common concurrency hazard called a race condition. Whether or not nextValue returns a
unique value when called from multiple threads, as required by its specification, depends on how the runtime
interleaves the operations ‐ which is not a desirable state of affairs.
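The hazard is easy to observe with a small driver of our own (not one of the book's listings): run two threads against UnsafeSequence and record every value returned; duplicates typically show up within a few thousand calls.

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative driver, not from the book: two threads draw values from the
// same UnsafeSequence; any value seen twice demonstrates the race condition.
public class UnsafeSequenceDemo {
    public static void main(String[] args) throws InterruptedException {
        final UnsafeSequence seq = new UnsafeSequence();
        final Set<Integer> seen =
            Collections.synchronizedSet(new HashSet<Integer>());
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++)
                    if (!seen.add(seq.getNext()))
                        System.out.println("duplicate value observed");
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}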
Because threads share the same memory address space and run concurrently, they can access or modify variables that
other threads might be using. This is a tremendous convenience, because it makes data sharing much easier than it would be with other inter‐thread communication mechanisms. But it is also a significant risk: threads can be confused by having data
change unexpectedly. Allowing multiple threads to access and modify the same variables introduces an element of nonsequentiality
into an otherwise sequential programming model, which can be confusing and difficult to reason about.
For a multithreaded program's behavior to be predictable, access to shared variables must be properly coordinated so
that threads do not interfere with one another. Fortunately, Java provides synchronization mechanisms to coordinate
such access.
UnsafeSequence can be fixed by making getNext a synchronized method, as shown in Sequence in Listing 1.2,[3] thus
preventing the unfortunate interaction in Figure 1.1. (Exactly why this works is the subject of Chapters 2 and 3.)
[3] @GuardedBy is described in Section 2.4; it documents the synchronization policy for Sequence.
Listing 1.2. Thread‐safe Sequence Generator.
@ThreadSafe
public class Sequence {
    @GuardedBy("this") private int nextValue;

    public synchronized int getNext() {
        return nextValue++;
    }
}
In the absence of synchronization, the compiler, hardware, and runtime are allowed to take substantial liberties with
the timing and ordering of actions, such as caching variables in registers or processor‐local caches where they are
temporarily (or even permanently) invisible to other threads. These tricks are in aid of better performance and are
generally desirable, but they place a burden on the developer to clearly identify where data is being shared across
threads so that these optimizations do not undermine safety. (Chapter 16 gives the gory details on exactly what
ordering guarantees the JVM makes and how synchronization affects those guarantees, but if you follow the rules in
Chapters 2 and 3, you can safely avoid these low‐level details.)
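Chapter 3 returns to this visibility problem in detail; its Listing 3.1 demonstrates it with a program along these lines (condensed here):

// Condensed sketch of Chapter 3's NoVisibility example: without
// synchronization, the reader thread may never see the write to ready (the
// loop could spin forever), or may see ready without seeing the write to
// number and print zero.
public class NoVisibility {
    private static boolean ready;
    private static int number;

    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready)
                Thread.yield();
            System.out.println(number);
        }
    }

    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;
        ready = true;
    }
}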
1.3.2. Liveness Hazards
It is critically important to pay attention to thread safety issues when developing concurrent code: safety cannot be
compromised. The importance of safety is not unique to multithreaded programs ‐ single‐threaded programs also must
take care to preserve safety and correctness ‐ but the use of threads introduces additional safety hazards not present in
single‐threaded programs. Similarly, the use of threads introduces additional forms of liveness failure that do not occur
in single‐threaded programs.
While safety means "nothing bad ever happens", liveness concerns the complementary goal that "something good
eventually happens". A liveness failure occurs when an activity gets into a state such that it is permanently unable to
make forward progress. One form of liveness failure that can occur in sequential programs is an inadvertent infinite
loop, where the code that follows the loop never gets executed. The use of threads introduces additional liveness risks.
For example, if thread A is waiting for a resource that thread B holds exclusively, and B never releases it, A will wait
forever. Chapter 10 describes various forms of liveness failures and how to avoid them, including deadlock (Section
10.1), starvation (Section 10.3.1), and livelock (Section 10.3.3). Like most concurrency bugs, bugs that cause liveness
failures can be elusive because they depend on the relative timing of events in different threads, and therefore do not
always manifest themselves in development or testing.
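The canonical two-lock case, treated as LeftRightDeadlock in Chapter 10, can be sketched in a few lines (condensed from the idea behind Listing 10.1):

// Condensed sketch of a lock-ordering deadlock: if one thread calls
// leftRight() while another calls rightLeft(), each can acquire its first
// lock and then wait forever for the lock the other thread holds.
public class LeftRightDeadlock {
    private final Object left = new Object();
    private final Object right = new Object();

    public void leftRight() {
        synchronized (left) {
            synchronized (right) {
                // use both resources
            }
        }
    }

    public void rightLeft() {
        synchronized (right) {
            synchronized (left) {
                // use both resources
            }
        }
    }
}

The standard remedy, discussed in Chapter 10, is to make all threads acquire the two locks in the same fixed order.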
1.3.3. Performance Hazards
Related to liveness is performance. While liveness means that something good eventually happens, eventually may not
be good enough ‐ we often want good things to happen quickly. Performance issues subsume a broad range of
problems, including poor service time, responsiveness, throughput, resource consumption, or scalability. Just as with
safety and liveness, multithreaded programs are subject to all the performance hazards of single‐threaded programs,
and to others as well that are introduced by the use of threads.
In well designed concurrent applications the use of threads is a net performance gain, but threads nevertheless carry
some degree of runtime overhead. Context switches ‐ when the scheduler suspends the active thread temporarily so
another thread can run ‐ are more frequent in applications with many threads, and have significant costs: saving and
restoring execution context, loss of locality, and CPU time spent scheduling threads instead of running them. When
threads share data, they must use synchronization mechanisms that can inhibit compiler optimizations, flush or
invalidate memory caches, and create synchronization traffic on the shared memory bus. All these factors introduce
additional performance costs; Chapter 11 covers techniques for analyzing and reducing these costs.
1.4. Threads are Everywhere
Even if your program never explicitly creates a thread, frameworks may create threads on your behalf, and code called
from these threads must be thread‐safe. This can place a significant design and implementation burden on developers,
since developing thread‐safe classes requires more care and analysis than developing non‐thread‐safe classes.
Every Java application uses threads. When the JVM starts, it creates threads for JVM housekeeping tasks (garbage
collection, finalization) and a main thread for running the main method. The AWT (Abstract Window Toolkit) and Swing
user interface frameworks create threads for managing user interface events. Timer creates threads for executing
deferred tasks. Component frameworks, such as servlets and RMI, create pools of threads and invoke component
methods in these threads.
If you use these facilities ‐ as many developers do ‐ you have to be familiar with concurrency and thread safety, because
these frameworks create threads and call your components from them. It would be nice to believe that concurrency is
an "optional" or "advanced" language feature, but the reality is that nearly all Java applications are multithreaded and
these frameworks do not insulate you from the need to properly coordinate access to application state.
When concurrency is introduced into an application by a framework, it is usually impossible to restrict the concurrency awareness to the framework code, because frameworks by their nature make callbacks to application components that
in turn access application state. Similarly, the need for thread safety does not end with the components called by the
framework ‐ it extends to all code paths that access the program state accessed by those components. Thus, the need
for thread safety is contagious.
Frameworks introduce concurrency into applications by calling application components from framework threads.
Components invariably access application state, thus requiring that all code paths accessing that state be thread‐safe.
The facilities described below all cause application code to be called from threads not managed by the application.
While the need for thread safety may start with these facilities, it rarely ends there; instead, it ripples through the
application.
Timer. Timer is a convenience mechanism for scheduling tasks to run at a later time, either once or periodically. The
introduction of a Timer can complicate an otherwise sequential program, because TimerTasks are executed in a thread
managed by the Timer, not the application. If a TimerTask accesses data that is also accessed by other application
threads, then not only must the TimerTask do so in a thread‐safe manner, but so must any other classes that access
that data. Often the easiest way to achieve this is to ensure that objects accessed by the TimerTask are themselves
thread‐safe, thus encapsulating the thread safety within the shared objects.
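As a hedged sketch of that advice (the class name and timing values here are illustrative, not from the book), the state shared between the application and the Timer thread can be confined to a thread‐safe object such as an AtomicLong:
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicLong;
public class HeartbeatCounter {
    private final AtomicLong beats = new AtomicLong(0); // thread-safe shared state
    public void start() {
        Timer timer = new Timer(true); // runs tasks in its own daemon thread
        timer.schedule(new TimerTask() {
            public void run() {
                beats.incrementAndGet(); // executed in the Timer's thread
            }
        }, 0, 1000); // illustrative period: once per second
    }
    public long getBeats() {
        return beats.get(); // safe to call from any application thread
    }
}
Because all shared state is encapsulated in the AtomicLong, neither the TimerTask nor the application threads need any additional locking.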
Servlets and JavaServer Pages (JSPs). The servlets framework is designed to handle all the infrastructure of deploying a
web application and dispatching requests from remote HTTP clients. A request arriving at the server is dispatched,
perhaps through a chain of filters, to the appropriate servlet or JSP. Each servlet represents a component of application
logic, and in high‐volume web sites, multiple clients may require the services of the same servlet at once. The servlets
specification requires that a servlet be prepared to be called simultaneously from multiple threads. In other words,
servlets need to be thread‐safe.
Even if you could guarantee that a servlet was only called from one thread at a time, you would still have to pay
attention to thread safety when building a web application. Servlets often access state information shared with other
servlets, such as application‐scoped objects (those stored in the ServletContext) or session‐scoped objects (those
stored in the per‐client HttpSession). When a servlet accesses objects shared across servlets or requests, it must
coordinate access to these objects properly, since multiple requests could be accessing them simultaneously from
separate threads. Servlets and JSPs, as well as servlet filters and objects stored in scoped containers like
ServletContext and HttpSession, simply have to be thread‐safe.
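A hedged sketch of one way to do this (the class and attribute names are ours, and unlike the book's simplified listings this one extends the real HttpServlet): application‐scoped state is held in a thread‐safe object so that concurrent requests can update it without extra locking.
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class VisitCountingServlet extends HttpServlet {
    public void init() {
        // ServletContext attributes are visible to every servlet and request
        getServletContext().setAttribute("visits", new AtomicLong(0));
    }
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        AtomicLong visits =
            (AtomicLong) getServletContext().getAttribute("visits");
        resp.getWriter().println("Visit #" + visits.incrementAndGet()); // atomic update
    }
}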
Remote Method Invocation. RMI lets you invoke methods on objects running in another JVM. When you call a remote
method with RMI, the method arguments are packaged (marshaled) into a byte stream and shipped over the network to
the remote JVM, where they are unpacked (unmarshaled) and passed to the remote method.
When the RMI code calls your remote object, in what thread does that call happen? You don't know, but it's definitely
not in a thread you created ‐ your object gets called in a thread managed by RMI. How many threads does RMI create?
Could the same remote method on the same remote object be called simultaneously in multiple RMI threads?[4]
[4] Answer: yes, but it's not all that clear from the Javadoc ‐ you have to read the RMI spec.
A remote object must guard against two thread safety hazards: properly coordinating access to state that may be
shared with other objects, and properly coordinating access to the state of the remote object itself (since the same
object may be called in multiple threads simultaneously). Like servlets, RMI objects should be prepared for multiple
simultaneous calls and must provide their own thread safety.
Swing and AWT. GUI applications are inherently asynchronous. Users may select a menu item or press a button at any
time, and they expect that the application will respond promptly even if it is in the middle of doing something else.
Swing and AWT address this problem by creating a separate thread for handling user‐initiated events and updating the
graphical view presented to the user.
Swing components, such as JTable, are not thread‐safe. Instead, Swing programs achieve their thread safety by
confining all access to GUI components to the event thread. If an application wants to manipulate the GUI from outside
the event thread, it must cause the code that will manipulate the GUI to run in the event thread instead.
When the user performs a UI action, an event handler is called in the event thread to perform whatever operation the
user requested. If the handler needs to access application state that is also accessed from other threads (such as a
document being edited), then the event handler, along with any other code that accesses that state, must do so in a
thread‐safe manner.
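A hedged sketch of the standard remedy (the wrapper class is ours; SwingUtilities.invokeLater is the real Swing API): code running on a non‐event thread hands GUI updates to the event thread instead of touching components directly.
import javax.swing.JLabel;
import javax.swing.SwingUtilities;
public class StatusUpdater {
    private final JLabel statusLabel;
    public StatusUpdater(JLabel statusLabel) {
        this.statusLabel = statusLabel;
    }
    // May be called from any thread, e.g. a worker thread that has finished a task
    public void setStatus(final String message) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                statusLabel.setText(message); // safe: runs in the event thread
            }
        });
    }
}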
Part I: Fundamentals
Chapter 2. Thread Safety
Chapter 3. Sharing Objects
Chapter 4. Composing Objects
Chapter 5. Building Blocks
Chapter 2. Thread Safety
Perhaps surprisingly, concurrent programming isn't so much about threads or locks, any more than civil engineering is
about rivets and I‐beams. Of course, building bridges that don't fall down requires the correct use of a lot of rivets and I‐beams, just as building concurrent programs requires the correct use of threads and locks. But these are just mechanisms ‐ means to an end. Writing thread‐safe code is, at its core, about managing access to state, and in particular to shared,
mutable state.
Informally, an object's state is its data, stored in state variables such as instance or static fields. An object's state may
include fields from other, dependent objects; a HashMap's state is partially stored in the HashMap object itself, but also in
many Map.Entry objects. An object's state encompasses any data that can affect its externally visible behavior.
By shared, we mean that a variable could be accessed by multiple threads; by mutable, we mean that its value could
change during its lifetime. We may talk about thread safety as if it were about code, but what we are really trying to do
is protect data from uncontrolled concurrent access.
Whether an object needs to be thread‐safe depends on whether it will be accessed from multiple threads. This is a
property of how the object is used in a program, not what it does. Making an object thread‐safe requires using
synchronization to coordinate access to its mutable state; failing to do so could result in data corruption and other
undesirable consequences.
Whenever more than one thread accesses a given state variable, and one of them might write to it, they all must
coordinate their access to it using synchronization. The primary mechanism for synchronization in Java is the
synchronized keyword, which provides exclusive locking, but the term "synchronization" also includes the use of
volatile variables, explicit locks, and atomic variables.
You should avoid the temptation to think that there are "special" situations in which this rule does not apply. A program
that omits needed synchronization might appear to work, passing its tests and performing well for years, but it is still
broken and may fail at any moment.
If multiple threads access the same mutable state variable without appropriate synchronization, your program is
broken. There are three ways to fix it:
• Don't share the state variable across threads;
• Make the state variable immutable; or
• Use synchronization whenever accessing the state variable.
If you haven't considered concurrent access in your class design, some of these approaches can require significant
design modifications, so fixing the problem might not be as trivial as this advice makes it sound. It is far easier to design
a class to be thread‐safe than to retrofit it for thread safety later.
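As a hedged illustration of the three approaches (the class names are ours, not the book's), here is a simple counter repaired each way:
// 1. Don't share it: confine the counter to a single thread with ThreadLocal.
class ConfinedCounter {
    private static final ThreadLocal<Long> count = new ThreadLocal<Long>() {
        protected Long initialValue() { return 0L; }
    };
    static void increment() { count.set(count.get() + 1); } // visible only to the calling thread
    static long get() { return count.get(); }
}
// 2. Make it immutable: replace mutation with construction of a new value object.
final class ImmutableCount {
    private final long value;
    ImmutableCount(long value) { this.value = value; }
    ImmutableCount increment() { return new ImmutableCount(value + 1); } // returns a new instance
    long get() { return value; }
}
// 3. Synchronize every access to the shared variable with the same lock.
class SynchronizedCounter {
    private long value; // guarded by "this"
    synchronized void increment() { ++value; }
    synchronized long get() { return value; }
}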
In a large program, identifying whether multiple threads might access a given variable can be complicated. Fortunately,
the same object‐oriented techniques that help you write well‐organized, maintainable classes ‐ such as encapsulation
and data hiding ‐ can also help you create thread‐safe classes. The less code that has access to a particular variable, the
easier it is to ensure that all of it uses the proper synchronization, and the easier it is to reason about the conditions
under which a given variable might be accessed. The Java language doesn't force you to encapsulate state ‐ it is
perfectly allowable to store state in public fields (even public static fields) or publish a reference to an otherwise
internal object ‐ but the better encapsulated your program state, the easier it is to make your program thread‐safe and
to help maintainers keep it that way.
When designing thread‐safe classes, good object‐oriented techniques ‐ encapsulation, immutability, and clear
specification of invariants ‐ are your best friends.
There will be times when good object‐oriented design techniques are at odds with real‐world requirements; it may be
necessary in these cases to compromise the rules of good design for the sake of performance or for the sake of
backward compatibility with legacy code. Sometimes abstraction and encapsulation are at odds with performance ‐
although not nearly as often as many developers believe ‐ but it is always a good practice first to make your code right,
and then make it fast. Even then, pursue optimization only if your performance measurements and requirements tell
you that you must, and if those same measurements tell you that your optimizations actually made a difference under
realistic conditions. [1]
[1] In concurrent code, this practice should be adhered to even more than usual. Because concurrency bugs are so difficult to reproduce and
debug, the benefit of a small performance gain on some infrequently used code path may well be dwarfed by the risk that the program will fail in
the field.
If you decide that you simply must break encapsulation, all is not lost. It is still possible to make your program thread‐safe; it is just a lot harder. Moreover, the thread safety of your program will be more fragile, increasing not only
development cost and risk but maintenance cost and risk as well. Chapter 4 characterizes the conditions under which it
is safe to relax encapsulation of state variables.
We've used the terms "thread‐safe class" and "thread‐safe program" nearly interchangeably thus far. Is a thread‐safe
program one that is constructed entirely of thread‐safe classes? Not necessarily ‐ a program that consists entirely of
thread‐safe classes may not be thread‐safe, and a thread‐safe program may contain classes that are not thread‐safe.
The issues surrounding the composition of thread‐safe classes are also taken up in Chapter 4. In any case, the concept of
a thread‐safe class makes sense only if the class encapsulates its own state. Thread safety may be a term that is applied
to code, but it is about state, and it can only be applied to the entire body of code that encapsulates its state, which may
be an object or an entire program.
2.1. What is Thread Safety?
Defining thread safety is surprisingly tricky. The more formal attempts are so complicated as to offer little practical
guidance or intuitive understanding, and the rest are informal descriptions that can seem downright circular. A quick
Google search turns up numerous "definitions" like these:
. . . can be called from multiple program threads without unwanted interactions between the threads.
. . . may be called by more than one thread at a time without requiring any other action on the caller's part.
Given definitions like these, it's no wonder we find thread safety confusing! They sound suspiciously like "a class is
thread‐safe if it can be used safely from multiple threads." You can't really argue with such a statement, but it doesn't
offer much practical help either. How do we tell a thread‐safe class from an unsafe one? What do we even mean by
"safe"?
At the heart of any reasonable definition of thread safety is the concept of correctness. If our definition of thread safety
is fuzzy, it is because we lack a clear definition of correctness.
Correctness means that a class conforms to its specification. A good specification defines invariants constraining an
object's state and post‐conditions describing the effects of its operations. Since we often don't write adequate
specifications for our classes, how can we possibly know they are correct? We can't, but that doesn't stop us from using
them anyway once we've convinced ourselves that "the code works". This "code confidence" is about as close as many
of us get to correctness, so let's just assume that single‐threaded correctness is something that "we know it when we
see it". Having optimistically defined "correctness" as something that can be recognized, we can now define thread
safety in a somewhat less circular way: a class is thread‐safe when it continues to behave correctly when accessed from
multiple threads.
A class is thread‐safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or
interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or
other coordination on the part of the calling code.
Since any single‐threaded program is also a valid multithreaded program, it cannot be thread‐safe if it is not even
correct in a single‐threaded environment. [2] If an object is correctly implemented, no sequence of operations ‐ calls to
public methods and reads or writes of public fields ‐ should be able to violate any of its invariants or post‐conditions. No
set of operations performed sequentially or concurrently on instances of a thread‐safe class can cause an instance to be
in an invalid state.
[2] If the loose use of "correctness" here bothers you, you may prefer to think of a thread‐safe class as one that is no more broken in a concurrent
environment than in a single‐threaded environment.
Thread‐safe classes encapsulate any needed synchronization so that clients need not provide their own.
2.1.1. Example: A Stateless Servlet
In Chapter 1, we listed a number of frameworks that create threads and call your components from those threads,
leaving you with the responsibility of making your components thread‐safe. Very often, thread‐safety requirements
stem not from a decision to use threads directly but from a decision to use a facility like the Servlets framework. We're
going to develop a simple example ‐ a servlet‐based factorization service ‐ and slowly extend it to add features while
preserving its thread safety.
Listing 2.1 shows our simple factorization servlet. It unpacks the number to be factored from the servlet request, factors
it, and packages the results into the servlet response.
Listing 2.1. A Stateless Servlet.
@ThreadSafe
public class StatelessFactorizer implements Servlet {
public void service(ServletRequest req, ServletResponse resp) {
BigInteger i = extractFromRequest(req);
BigInteger[] factors = factor(i);
encodeIntoResponse(resp, factors);
}
}
StatelessFactorizer is, like most servlets, stateless: it has no fields and references no fields from other classes. The
transient state for a particular computation exists solely in local variables that are stored on the thread's stack and are
accessible only to the executing thread. One thread accessing a StatelessFactorizer cannot influence the result of
another thread accessing the same StatelessFactorizer; because the two threads do not share state, it is as if they
were accessing different instances. Since the actions of a thread accessing a stateless object cannot affect the
correctness of operations in other threads, stateless objects are thread‐safe.
Stateless objects are always thread‐safe.
The fact that most servlets can be implemented with no state greatly reduces the burden of making servlets thread‐safe.
It is only when servlets want to remember things from one request to another that the thread safety requirement
becomes an issue.
2.2. Atomicity
What happens when we add one element of state to what was a stateless object? Suppose we want to add a "hit
counter" that measures the number of requests processed. The obvious approach is to add a long field to the servlet
and increment it on each request, as shown in UnsafeCountingFactorizer in Listing 2.2.
Unfortunately, UnsafeCountingFactorizer is not thread‐safe, even though it would work just fine in a single‐threaded
environment. Just like UnsafeSequence on page 6, it is susceptible to lost updates. While the increment operation,
++count, may look like a single action because of its compact syntax, it is not atomic, which means that it does not
execute as a single, indivisible operation. Instead, it is shorthand for a sequence of three discrete operations: fetch the
current value, add one to it, and write the new value back. This is an example of a read‐modify‐write operation, in which
the resulting state is derived from the previous state.
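A hedged illustration of that decomposition (the class is ours): written out long‐hand, the increment is three separate steps, and another thread can update count between any two of them.
class IncrementDecomposed {
    private long count;
    void unsafeIncrement() {
        long temp = count; // 1. fetch the current value
        temp = temp + 1;   // 2. add one to it
        count = temp;      // 3. write the new value back (may overwrite a concurrent update)
    }
}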
Listing 2.2. Servlet that Counts Requests without the Necessary Synchronization. Don't Do this.
@NotThreadSafe
public class UnsafeCountingFactorizer implements Servlet {
private long count = 0;
public long getCount() { return count; }
public void service(ServletRequest req, ServletResponse resp) {
BigInteger i = extractFromRequest(req);
BigInteger[] factors = factor(i);
++count;
encodeIntoResponse(resp, factors);
}
}
Figure 1.1 on page 6 shows what can happen if two threads try to increment a counter simultaneously without
synchronization. If the counter is initially 9, with some unlucky timing each thread could read the value, see that it is 9,
add one to it, and each set the counter to 10. This is clearly not what is supposed to happen; an increment got lost along
the way, and the hit counter is now permanently off by one.
You might think that having a slightly inaccurate count of hits in a web‐based service is an acceptable loss of accuracy,
and sometimes it is. But if the counter is being used to generate sequences or unique object identifiers, returning the
same value from multiple invocations could cause serious data integrity problems.[3] The possibility of incorrect results
in the presence of unlucky timing is so important in concurrent programming that it has a name: a race condition.
[3] The approach taken by UnsafeSequence and UnsafeCountingFactorizer has other serious problems, including the possibility of stale data
(Section 3.1.1).
2.2.1. Race Conditions
UnsafeCountingFactorizer has several race conditions that make its results unreliable. A race condition occurs when
the correctness of a computation depends on the relative timing or interleaving of multiple threads by the runtime; in
other words, when getting the right answer relies on lucky timing.[4] The most common type of race condition is check‐then‐act, where a potentially stale observation is used to make a decision on what to do next.
[4] The term race condition is often confused with the related term data race, which arises when synchronization is not used to coordinate all
access to a shared non‐final field. You risk a data race whenever a thread writes a variable that might next be read by another thread or reads a
variable that might have last been written by another thread if both threads do not use synchronization; code with data races has no useful
defined semantics under the Java Memory Model. Not all race conditions are data races, and not all data races are race conditions, but they both
can cause concurrent programs to fail in unpredictable ways. UnsafeCountingFactorizer has both race conditions and data races. See Chapter
16 for more on data races.
We often encounter race conditions in real life. Let's say you planned to meet a friend at noon at the Starbucks on
University Avenue. But when you get there, you realize there are two Starbucks on University Avenue, and you're not
sure which one you agreed to meet at. At 12:10, you don't see your friend at Starbucks A, so you walk over to Starbucks
B to see if he's there, but he isn't there either. There are a few possibilities: your friend is late and not at either
Starbucks; your friend arrived at Starbucks A after you left; or your friend was at Starbucks B, but went to look for you,
and is now en route to Starbucks A. Let's assume the worst and say it was the last possibility. Now it's 12:15, you've both
been to both Starbucks, and you're both wondering if you've been stood up. What do you do now? Go back to the other
Starbucks? How many times are you going to go back and forth? Unless you have agreed on a protocol, you could both
spend the day walking up and down University Avenue, frustrated and undercaffeinated.
The problem with the "I'll just nip up the street and see if he's at the other one" approach is that while you're walking up
the street, your friend might have moved. You look around Starbucks A, observe "he's not here", and go looking for him.
And you can do the same for Starbucks B, but not at the same time. It takes a few minutes to walk up the street, and
during those few minutes, the state of the system may have changed.
The Starbucks example illustrates a race condition because reaching the desired outcome (meeting your friend) depends
on the relative timing of events (when each of you arrives at one Starbucks or the other, how long you wait there before
switching, etc). The observation that he is not at Starbucks A becomes potentially invalid as soon as you walk out the
front door; he could have come in through the back door and you wouldn't know. It is this invalidation of observations
that characterizes most race conditions ‐ using a potentially stale observation to make a decision or perform a
computation. This type of race condition is called check‐then‐act: you observe something to be true (file X doesn't exist)
and then take action based on that observation (create X); but in fact the observation could have become invalid
between the time you observed it and the time you acted on it (someone else created X in the meantime), causing a
problem (unexpected exception, overwritten data, file corruption).
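A hedged sketch of the file example (java.io.File is the real API; the helper class is ours): the exists check can become stale before the create executes.
import java.io.File;
import java.io.IOException;
class FileCreationRace {
    void createIfAbsent(File file) throws IOException {
        if (!file.exists()) {      // check: this observation can go stale immediately
            file.createNewFile();  // act: someone else may have created the file meanwhile
        }
        // Note: createNewFile is itself atomic and returns false if the file already
        // exists, so the unsynchronized check adds a race without adding any safety.
    }
}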
2.2.2. Example: Race Conditions in Lazy Initialization
A common idiom that uses check‐then‐act is lazy initialization. The goal of lazy initialization is to defer initializing an
object until it is actually needed while at the same time ensuring that it is initialized only once. LazyInitRace in Listing
2.3 illustrates the lazy initialization idiom. The getInstance method first checks whether the ExpensiveObject has
already been initialized, in which case it returns the existing instance; otherwise it creates a new instance and returns it
after retaining a reference to it so that future invocations can avoid the more expensive code path.
Listing 2.3. Race Condition in Lazy Initialization. Don't Do this.
@NotThreadSafe
public class LazyInitRace {
private ExpensiveObject instance = null;
public ExpensiveObject getInstance() {
if (instance == null)
instance = new ExpensiveObject();
return instance;
}
}
LazyInitRace has race conditions that can undermine its correctness. Say that threads A and B execute getInstance at
the same time. A sees that instance is null, and instantiates a new ExpensiveObject. B also checks if instance is
null. Whether instance is null at this point depends unpredictably on timing, including the vagaries of scheduling and
how long A takes to instantiate the ExpensiveObject and set the instance field. If instance is null when B examines
it, the two callers to getInstance may receive two different results, even though getInstance is always supposed to
return the same instance.
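One hedged fix (a sketch of ours, not the book's listing): making getInstance synchronized turns the check‐then‐act sequence into an atomic operation, so every caller receives the same instance.
@ThreadSafe
public class SafeLazyInit {
    private ExpensiveObject instance = null;
    public synchronized ExpensiveObject getInstance() {
        if (instance == null)                // check and act now happen under one lock
            instance = new ExpensiveObject();
        return instance;
    }
}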
The hit‐counting operation in UnsafeCountingFactorizer has another sort of race condition. Read‐modify‐write
operations, like incrementing a counter, define a transformation of an object's state in terms of its previous state. To
increment a counter, you have to know its previous value and make sure no one else changes or uses that value while
you are in mid‐update.
Like most concurrency errors, race conditions don't always result in failure: some unlucky timing is also required. But
race conditions can cause serious problems. If LazyInitRace is used to instantiate an application‐wide registry, having it
return different instances from multiple invocations could cause registrations to be lost or multiple activities to have
inconsistent views of the set of registered objects. If UnsafeSequence is used to generate entity identifiers in a
persistence framework, two distinct objects could end up with the same ID, violating identity integrity constraints.
2.2.3. Compound Actions
Both LazyInitRace and UnsafeCountingFactorizer contained a sequence of operations that needed to be atomic, or
indivisible, relative to other operations on the same state. To avoid race conditions, there must be a way to prevent
other threads from using a variable while we're in the middle of modifying it, so we can ensure that other threads can
observe or modify the state only before we start or after we finish, but not in the middle.
Operations A and B are atomic with respect to each other if, from the perspective of a thread executing A, when
another thread executes B, either all of B has executed or none of it has. An atomic operation is one that is atomic with
respect to all operations, including itself, that operate on the same state.
If the increment operation in UnsafeSequence were atomic, the race condition illustrated in Figure 1.1 on page 6 could
not occur, and each execution of the increment operation would have the desired effect of incrementing the counter by
exactly one. To ensure thread safety, check‐then‐act operations (like lazy initialization) and read‐modify‐write
operations (like increment) must always be atomic. We refer collectively to check‐then‐act and read‐modify‐write
sequences as compound actions: sequences of operations that must be executed atomically in order to remain thread‐safe. In the next section, we'll consider locking, Java's built‐in mechanism for ensuring atomicity. For now, we're going to
fix the problem another way, by using an existing thread‐safe class, as shown in CountingFactorizer in Listing 2.4.
Listing 2.4. Servlet that Counts Requests Using AtomicLong.
@ThreadSafe
public class CountingFactorizer implements Servlet {
private final AtomicLong count = new AtomicLong(0);
public long getCount() { return count.get(); }
public void service(ServletRequest req, ServletResponse resp) {
BigInteger i = extractFromRequest(req);
BigInteger[] factors = factor(i);
count.incrementAndGet();
encodeIntoResponse(resp, factors);
}
}
The java.util.concurrent.atomic package contains atomic variable classes for effecting atomic state transitions on
numbers and object references. By replacing the long counter with an AtomicLong, we ensure that all actions that
access the counter state are atomic. [5] Because the state of the servlet is the state of the counter and the counter is
thread‐safe, our servlet is once again thread‐safe.
[5] CountingFactorizer calls incrementAndGet to increment the counter, which also returns the incremented value; in this case the return value is
ignored.
We were able to add a counter to our factoring servlet and maintain thread safety by using an existing thread‐safe class
to manage the counter state, AtomicLong. When a single element of state is added to a stateless class, the resulting
class will be thread‐safe if the state is entirely managed by a thread‐safe object. But, as we'll see in the next section,
going from one state variable to more than one is not necessarily as simple as going from zero to one.
Where practical, use existing thread‐safe objects, like AtomicLong, to manage your class's state. It is simpler to reason
about the possible states and state transitions for existing thread‐safe objects than it is for arbitrary state variables, and
this makes it easier to maintain and verify thread safety.
2.3. Locking
We were able to add one state variable to our servlet while maintaining thread safety by using a thread‐safe object to
manage the entire state of the servlet. But if we want to add more state to our servlet, can we just add more thread‐safe state variables?
Imagine that we want to improve the performance of our servlet by caching the most recently computed result, just in
case two consecutive clients request factorization of the same number. (This is unlikely to be an effective caching
strategy; we offer a better one in Section 5.6.) To implement this strategy, we need to remember two things: the last
number factored, and its factors.
We used AtomicLong to manage the counter state in a thread‐safe manner; could we perhaps use its cousin,
AtomicReference, [6] to manage the last number and its factors? An attempt at this is shown in
UnsafeCachingFactorizer in Listing 2.5.
[6] Just as AtomicLong is a thread‐safe holder class for a long integer, AtomicReference is a thread‐safe holder class for an object reference. Atomic
variables and their benefits are covered in Chapter 15.
Listing 2.5. Servlet that Attempts to Cache its Last Result without Adequate Atomicity. Don't Do this.
@NotThreadSafe
public class UnsafeCachingFactorizer implements Servlet {
private final AtomicReference<BigInteger> lastNumber
= new AtomicReference<BigInteger>();
private final AtomicReference<BigInteger[]> lastFactors
= new AtomicReference<BigInteger[]>();
public void service(ServletRequest req, ServletResponse resp) {
BigInteger i = extractFromRequest(req);
if (i.equals(lastNumber.get()))
encodeIntoResponse(resp, lastFactors.get() );
else {
BigInteger[] factors = factor(i);
lastNumber.set(i);
lastFactors.set(factors);
encodeIntoResponse(resp, factors);
}
}
}
Unfortunately, this approach does not work. Even though the atomic references are individually thread‐safe,
UnsafeCachingFactorizer has race conditions that could make it produce the wrong answer.
The definition of thread safety requires that invariants be preserved regardless of timing or interleaving of operations in
multiple threads. One invariant of UnsafeCachingFactorizer is that the product of the factors cached in lastFactors
equal the value cached in lastNumber; our servlet is correct only if this invariant always holds. When multiple variables
participate in an invariant, they are not independent: the value of one constrains the allowed value(s) of the others.
Thus when updating one, you must update the others in the same atomic operation.
With some unlucky timing, UnsafeCachingFactorizer can violate this invariant. Using atomic references, we cannot
update both lastNumber and lastFactors simultaneously, even though each call to set is atomic; there is still a
window of vulnerability when one has been modified and the other has not, and during that time other threads could
see that the invariant does not hold. Similarly, the two values cannot be fetched simultaneously: between the time
when thread A fetches the two values, thread B could have changed them, and again A may observe that the invariant
does not hold.
To preserve state consistency, update related state variables in a single atomic operation.
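One hedged way to follow this rule (a sketch of our own, anticipating a technique the book refines in Chapter 3): bundle the related values into an immutable holder so that a single atomic reference operation always reads or replaces a consistent pair.
import java.math.BigInteger;
import java.util.concurrent.atomic.AtomicReference;
public class ConsistentCache {
    private static final class Pair { // immutable: number and factors never disagree
        final BigInteger number;
        final BigInteger[] factors;
        Pair(BigInteger number, BigInteger[] factors) {
            this.number = number;
            this.factors = factors;
        }
    }
    private final AtomicReference<Pair> cache = new AtomicReference<Pair>();
    public BigInteger[] getFactorsIfCached(BigInteger n) {
        Pair p = cache.get(); // one atomic read observes a consistent pair
        return (p != null && p.number.equals(n)) ? p.factors : null;
    }
    public void put(BigInteger n, BigInteger[] factors) {
        cache.set(new Pair(n, factors)); // one atomic write replaces the pair
    }
}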
2.3.1. Intrinsic Locks
Java provides a built‐in locking mechanism for enforcing atomicity: the synchronized block. (There is also another
critical aspect to locking and other synchronization mechanisms ‐ visibility ‐ which is covered in Chapter 3.) A
synchronized block has two parts: a reference to an object that will serve as the lock, and a block of code to be
guarded by that lock. A synchronized method is shorthand for a synchronized block that spans an entire method
body, and whose lock is the object on which the method is being invoked. (Static synchronized methods use the Class
object for the lock.)
synchronized (lock) {
// Access or modify shared state guarded by lock
}
Every Java object can implicitly act as a lock for purposes of synchronization; these built‐in locks are called intrinsic locks
or monitor locks. The lock is automatically acquired by the executing thread before entering a synchronized block and
automatically released when control exits the synchronized block, whether by the normal control path or by throwing
an exception out of the block. The only way to acquire an intrinsic lock is to enter a synchronized block or method
guarded by that lock.
Intrinsic locks in Java act as mutexes (or mutual exclusion locks), which means that at most one thread may own the
lock. When thread A attempts to acquire a lock held by thread B, A must wait, or block, until B releases it. If B never
releases the lock, A waits forever.
Since only one thread at a time can execute a block of code guarded by a given lock, the synchronized blocks guarded
by the same lock execute atomically with respect to one another. In the context of concurrency, atomicity means the
same thing as it does in transactional applications ‐ that a group of statements appear to execute as a single, indivisible
unit. No thread executing a synchronized block can observe another thread to be in the middle of a synchronized
block guarded by the same lock.
The machinery of synchronization makes it easy to restore thread safety to the factoring servlet. Listing 2.6 makes the
service method synchronized, so only one thread may enter service at a time. SynchronizedFactorizer is now
thread‐safe; however, this approach is fairly extreme, since it inhibits multiple clients from using the factoring servlet
simultaneously at all ‐ resulting in unacceptably poor responsiveness. This problem ‐ which is a performance problem,
not a thread safety problem ‐ is addressed in Section 2.5.
Listing 2.6. Servlet that Caches Last Result, But with Unacceptably Poor Concurrency. Don't Do this.
@ThreadSafe
public class SynchronizedFactorizer implements Servlet {
@GuardedBy("this") private BigInteger lastNumber;
@GuardedBy("this") private BigInteger[] lastFactors;
public synchronized void service(ServletRequest req,
ServletResponse resp) {
BigInteger i = extractFromRequest(req);
if (i.equals(lastNumber))
encodeIntoResponse(resp, lastFactors);
else {
BigInteger[] factors = factor(i);
lastNumber = i;
lastFactors = factors;
encodeIntoResponse(resp, factors);
}
}
}
2.3.2. Reentrancy
When a thread requests a lock that is already held by another thread, the requesting thread blocks. But because
intrinsic locks are reentrant, if a thread tries to acquire a lock that it already holds, the request succeeds. Reentrancy
means that locks are acquired on a per‐thread rather than per‐invocation basis. [7] Reentrancy is implemented by
associating with each lock an acquisition count and an owning thread. When the count is zero, the lock is considered
unheld. When a thread acquires a previously unheld lock, the JVM records the owner and sets the acquisition count to
one. If that same thread acquires the lock again, the count is incremented, and when the owning thread exits the
synchronized block, the count is decremented. When the count reaches zero, the lock is released.
[7] This differs from the default locking behavior for pthreads (POSIX threads) mutexes, which are granted on a per‐invocation basis.
Reentrancy facilitates encapsulation of locking behavior, and thus simplifies the development of object‐oriented
concurrent code. Without reentrant locks, the very natural‐looking code in Listing 2.7, in which a subclass overrides a
synchronized method and then calls the superclass method, would deadlock. Because the doSomething methods in
Widget and LoggingWidget are both synchronized, each tries to acquire the lock on the Widget before proceeding.
But if intrinsic locks were not reentrant, the call to super.doSomething would never be able to acquire the lock because
it would be considered already held, and the thread would permanently stall waiting for a lock it can never acquire.
Reentrancy saves us from deadlock in situations like this.
Listing 2.7. Code that would Deadlock if Intrinsic Locks were Not Reentrant.
public class Widget {
public synchronized void doSomething() {
...
}
}
public class LoggingWidget extends Widget {
public synchronized void doSomething() {
System.out.println(toString() + ": calling doSomething");
super.doSomething();
}
}
2.4. Guarding State with Locks
Because locks enable serialized [8] access to the code paths they guard, we can use them to construct protocols for
guaranteeing exclusive access to shared state. Following these protocols consistently can ensure state consistency.
[8] Serializing access to an object has nothing to do with object serialization (turning an object into a byte stream); serializing access means that
threads take turns accessing the object exclusively, rather than doing so concurrently.
Compound actions on shared state, such as incrementing a hit counter (read‐modify‐write) or lazy initialization (check‐then‐act), must be made atomic to avoid race conditions. Holding a lock for the entire duration of a compound action
can make that compound action atomic. However, just wrapping the compound action with a synchronized block is not
sufficient; if synchronization is used to coordinate access to a variable, it is needed everywhere that variable is accessed.
Further, when using locks to coordinate access to a variable, the same lock must be used wherever that variable is
accessed.
It is a common mistake to assume that synchronization needs to be used only when writing to shared variables; this is
simply not true. (The reasons for this will become clearer in Section 3.1.)
For each mutable state variable that may be accessed by more than one thread, all accesses to that variable must be
performed with the same lock held. In this case, we say that the variable is guarded by that lock.
In SynchronizedFactorizer in Listing 2.6, lastNumber and lastFactors are guarded by the servlet object's intrinsic
lock; this is documented by the @GuardedBy annotation.
There is no inherent relationship between an object's intrinsic lock and its state; an object's fields need not be guarded
by its intrinsic lock, though this is a perfectly valid locking convention that is used by many classes. Acquiring the lock
associated with an object does not prevent other threads from accessing that object ‐ the only thing that acquiring a
lock prevents any other thread from doing is acquiring that same lock. The fact that every object has a built‐in lock is
just a convenience so that you needn't explicitly create lock objects. [9] It is up to you to construct locking protocols or
synchronization policies that let you access shared state safely, and to use them consistently throughout your program.
[9] In retrospect, this design decision was probably a bad one: not only can it be confusing, but it forces JVM implementers to make tradeoffs
between object size and locking performance.
Every shared, mutable variable should be guarded by exactly one lock. Make it clear to maintainers which lock that is.
A common locking convention is to encapsulate all mutable state within an object and to protect it from concurrent
access by synchronizing any code path that accesses mutable state using the object's intrinsic lock. This pattern is used
by many thread‐safe classes, such as Vector and other synchronized collection classes. In such cases, all the variables in
an object's state are guarded by the object's intrinsic lock. However, there is nothing special about this pattern, and
neither the compiler nor the runtime enforces this (or any other) pattern of locking. [10] It is also easy to subvert this
locking protocol accidentally by adding a new method or code path and forgetting to use synchronization.
[10] Code auditing tools like FindBugs can identify when a variable is frequently but not always accessed with a lock held, which may indicate a
bug.
Not all data needs to be guarded by locks ‐ only mutable data that will be accessed from multiple threads. In Chapter 1,
we described how adding a simple asynchronous event such as a TimerTask can create thread safety requirements that
ripple throughout your program, especially if your program state is poorly encapsulated. Consider a single‐threaded
program that processes a large amount of data. Single‐threaded programs require no synchronization, because no data
is shared across threads. Now imagine you want to add a feature to create periodic snapshots of its progress, so that it
does not have to start again from the beginning if it crashes or must be stopped. You might choose to do this with a
TimerTask that goes off every ten minutes, saving the program state to a file.
Since the TimerTask will be called from another thread (one managed by Timer), any data involved in the snapshot is
now accessed by two threads: the main program thread and the Timer thread. This means that not only must the
TimerTask code use synchronization when accessing the program state, but so must any code path in the rest of the
program that touches that same data. What used to require no synchronization now requires synchronization
throughout the program.
When a variable is guarded by a lock ‐ meaning that every access to that variable is performed with that lock held ‐
you've ensured that only one thread at a time can access that variable. When a class has invariants that involve more
than one state variable, there is an additional requirement: each variable participating in the invariant must be guarded
by the same lock. This allows you to access or update them in a single atomic operation, preserving the invariant.
SynchronizedFactorizer demonstrates this rule: both the cached number and the cached factors are guarded by the
servlet object's intrinsic lock.
For every invariant that involves more than one variable, all the variables involved in that invariant must be guarded by
the same lock.
If synchronization is the cure for race conditions, why not just declare every method synchronized? It turns out that
such indiscriminate application of synchronized might be either too much or too little synchronization. Merely
synchronizing every method, as Vector does, is not enough to render compound actions on a Vector atomic:
if (!vector.contains(element))
vector.add(element);
This attempt at a put‐if‐absent operation has a race condition, even though both contains and add are atomic. While
synchronized methods can make individual operations atomic, additional locking is required when multiple operations
are combined into a compound action. (See Section 4.4 for some techniques for safely adding additional atomic
operations to thread‐safe objects.) At the same time, synchronizing every method can lead to liveness or performance
problems, as we saw in SynchronizedFactorizer.
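A hedged sketch of one fix, using the client‐side locking technique that Section 4.4 develops (the helper class is ours): because Vector guards its own state with its intrinsic lock, holding that same lock across both calls makes the compound action atomic.
import java.util.Vector;
public class VectorHelper {
    public static <E> boolean putIfAbsent(Vector<E> vector, E element) {
        synchronized (vector) { // the same lock Vector's own methods use
            boolean absent = !vector.contains(element);
            if (absent)
                vector.add(element);
            return absent;
        }
    }
}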
2.5. Liveness and Performance
In UnsafeCachingFactorizer, we introduced some caching into our factoring servlet in the hope of improving
performance. Caching required some shared state, which in turn required synchronization to maintain the integrity of
that state. But the way we used synchronization in SynchronizedFactorizer makes it perform badly. The
synchronization policy for SynchronizedFactorizer is to guard each state variable with the servlet object's intrinsic
lock, and that policy was implemented by synchronizing the entirety of the service method. This simple, coarse‐grained approach restored safety, but at a high price.
Because service is synchronized, only one thread may execute it at once. This subverts the intended use of the servlet
framework ‐ that servlets be able to handle multiple requests simultaneously ‐ and can result in frustrated users if the
load is high enough. If the servlet is busy factoring a large number, other clients have to wait until the current request is
complete before the servlet can start on the new number. If the system has multiple CPUs, processors may remain idle
even if the load is high. In any case, even short‐running requests, such as those for which the value is cached, may take
an unexpectedly long time because they must wait for previous long‐running requests to complete.
Figure 2.1 shows what happens when multiple requests arrive for the synchronized factoring servlet: they queue up and
are handled sequentially. We would describe this web application as exhibiting poor concurrency: the number of
simultaneous invocations is limited not by the availability of processing resources, but by the structure of the
application itself. Fortunately, it is easy to improve the concurrency of the servlet while maintaining thread safety by
narrowing the scope of the synchronized block. You should be careful not to make the scope of the synchronized
block too small; you would not want to divide an operation that should be atomic into more than one synchronized
block. But it is reasonable to try to exclude from synchronized blocks long‐running operations that do not affect shared
state, so that other threads are not prevented from accessing the shared state while the long‐running operation is in
progress.
Figure 2.1. Poor Concurrency of SynchronizedFactorizer.
CachedFactorizer in Listing 2.8 restructures the servlet to use two separate synchronized blocks, each limited to a
short section of code. One guards the check‐then‐act sequence that tests whether we can just return the cached result,
and the other guards updating both the cached number and the cached factors. As a bonus, we've reintroduced the hit
counter and added a "cache hit" counter as well, updating them within the initial synchronized block. Because these
counters constitute shared mutable state as well, we must use synchronization everywhere they are accessed. The
portions of code that are outside the synchronized blocks operate exclusively on local (stack‐based) variables, which
are not shared across threads and therefore do not require synchronization.
Listing 2.8. Servlet that Caches its Last Request and Result.
@ThreadSafe
public class CachedFactorizer implements Servlet {
@GuardedBy("this") private BigInteger lastNumber;
@GuardedBy("this") private BigInteger[] lastFactors;
@GuardedBy("this") private long hits;
@GuardedBy("this") private long cacheHits;
public synchronized long getHits() { return hits; }
public synchronized double getCacheHitRatio() {
return (double) cacheHits / (double) hits;
}
public void service(ServletRequest req, ServletResponse resp) {
BigInteger i = extractFromRequest(req);
BigInteger[] factors = null;
synchronized (this) {
++hits;
if (i.equals(lastNumber)) {
++cacheHits;
factors = lastFactors.clone();
}
}
if (factors == null) {
factors = factor(i);
synchronized (this) {
lastNumber = i;
lastFactors = factors.clone();
}
}
encodeIntoResponse(resp, factors);
}
}
CachedFactorizer no longer uses AtomicLong for the hit counter, instead reverting to using a long field. It would be
safe to use AtomicLong here, but there is less benefit than there was in CountingFactorizer. Atomic variables are
useful for effecting atomic operations on a single variable, but since we are already using synchronized blocks to
construct atomic operations, using two different synchronization mechanisms would be confusing and would offer no
performance or safety benefit.
The restructuring of CachedFactorizer provides a balance between simplicity (synchronizing the entire method) and
concurrency (synchronizing the shortest possible code paths). Acquiring and releasing a lock has some overhead, so it is
undesirable to break down synchronized blocks too far (such as factoring ++hits into its own synchronized block),
even if this would not compromise atomicity. CachedFactorizer holds the lock when accessing state variables and for
the duration of compound actions, but releases it before executing the potentially long‐running factorization operation.
This preserves thread safety without unduly affecting concurrency; the code paths in each of the synchronized blocks
are "short enough".
Deciding how big or small to make synchronized blocks may require tradeoffs among competing design forces,
including safety (which must not be compromised), simplicity, and performance. Sometimes simplicity and performance
are at odds with each other, although as CachedFactorizer illustrates, a reasonable balance can usually be found.
There is frequently a tension between simplicity and performance. When implementing a synchronization policy, resist
the temptation to prematurely sacrifice simplicity (potentially compromising safety) for the sake of performance.
Whenever you use locking, you should be aware of what the code in the block is doing and how likely it is to take a long
time to execute. Holding a lock for a long time, either because you are doing something compute‐intensive or because
you execute a potentially blocking operation, introduces the risk of liveness or performance problems.
Avoid holding locks during lengthy computations or operations at risk of not completing quickly, such as network or console I/O.
Chapter 3. Sharing Objects
We stated at the beginning of Chapter 2 that writing correct concurrent programs is primarily about managing access to
shared, mutable state. That chapter was about using synchronization to prevent multiple threads from accessing the
same data at the same time; this chapter examines techniques for sharing and publishing objects so they can be safely
accessed by multiple threads. Together, they lay the foundation for building thread‐safe classes and safely structuring
concurrent applications using the java.util.concurrent library classes.
We have seen how synchronized blocks and methods can ensure that operations execute atomically, but it is a
common misconception that synchronized is only about atomicity or demarcating "critical sections". Synchronization
also has another significant, and subtle, aspect: memory visibility. We want not only to prevent one thread from
modifying the state of an object when another is using it, but also to ensure that when a thread modifies the state of an
object, other threads can actually see the changes that were made. But without synchronization, this may not happen.
You can ensure that objects are published safely either by using explicit synchronization or by taking advantage of the
synchronization built into library classes.
3.1. Visibility
Visibility is subtle because the things that can go wrong are so counterintuitive. In a single‐threaded environment, if you
write a value to a variable and later read that variable with no intervening writes, you can expect to get the same value
back. This seems only natural. It may be hard to accept at first, but when the reads and writes occur in different threads,
this is simply not the case. In general, there is no guarantee that the reading thread will see a value written by another
thread on a timely basis, or even at all. In order to ensure visibility of memory writes across threads, you must use
synchronization.
NoVisibility in Listing 3.1 illustrates what can go wrong when threads share data without synchronization. Two
threads, the main thread and the reader thread, access the shared variables ready and number. The main thread starts
the reader thread and then sets number to 42 and ready to true. The reader thread spins until it sees ready is true, and
then prints out number. While it may seem obvious that NoVisibility will print 42, it is in fact possible that it will print
zero, or never terminate at all! Because it does not use adequate synchronization, there is no guarantee that the values
of ready and number written by the main thread will be visible to the reader thread.
Listing 3.1. Sharing Variables without Synchronization. Don't Do this.
public class NoVisibility {
private static boolean ready;
private static int number;
private static class ReaderThread extends Thread {
public void run() {
while (!ready)
Thread.yield();
System.out.println(number);
}
}
public static void main(String[] args) {
new ReaderThread().start();
number = 42;
ready = true;
}
}
NoVisibility could loop forever because the value of ready might never become visible to the reader thread. Even
more strangely, NoVisibility could print zero because the write to ready might be made visible to the reader thread
before the write to number, a phenomenon known as reordering. There is no guarantee that operations in one thread
will be performed in the order given by the program, as long as the reordering is not detectable from within that thread
‐ even if the reordering is apparent to other threads.[1] When the main thread writes first to number and then to ready
without synchronization, the reader thread could see those writes happen in the opposite order ‐ or not at all.
[1] This may seem like a broken design, but it is meant to allow JVMs to take full advantage of the performance of modern multiprocessor
hardware. For example, in the absence of synchronization, the Java Memory Model permits the compiler to reorder operations and cache values in
registers, and permits CPUs to reorder operations and cache values in processor‐specific caches. For more details, see Chapter 16.
In the absence of synchronization, the compiler, processor, and runtime can do some downright weird things to the
order in which operations appear to execute. Attempts to reason about the order in which memory actions "must"
happen in insufficiently synchronized multithreaded programs will almost certainly be incorrect.
NoVisibility is about as simple as a concurrent program can get ‐ two threads and two shared variables ‐ and yet it is
still all too easy to come to the wrong conclusions about what it does or even whether it will terminate. Reasoning
about insufficiently synchronized concurrent programs is prohibitively difficult.
This may all sound a little scary, and it should. Fortunately, there's an easy way to avoid these complex issues: always
use the proper synchronization whenever data is shared across threads.
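As a hedged sketch (anticipating volatile, which the book covers in Section 3.1.4; the class name is ours): declaring ready volatile is one way to repair NoVisibility, because under the Java Memory Model the volatile write to ready also makes the earlier write to number visible to the reader.
public class FixedVisibility {
    private static volatile boolean ready; // volatile: reads always see the latest write
    private static int number;
    private static class ReaderThread extends Thread {
        public void run() {
            while (!ready)
                Thread.yield();
            System.out.println(number); // guaranteed to print 42
        }
    }
    public static void main(String[] args) {
        new ReaderThread().start();
        number = 42;  // ordinary write, published by the volatile write below
        ready = true; // volatile write: happens-before the reader's volatile read
    }
}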
3.1.1. Stale Data
NoVisibility demonstrated one of the ways that insufficiently synchronized programs can cause surprising results:
stale data. When the reader thread examines ready, it may see an out‐of‐date value. Unless synchronization is used
every time a variable is accessed, it is possible to see a stale value for that variable. Worse, staleness is not all‐or‐nothing: a thread can see an up‐to‐date value of one variable but a stale value of another variable that was written first.
When food is stale, it is usually still edible ‐ just less enjoyable. But stale data can be more dangerous. While an out‐of‐date hit counter in a web application might not be so bad,[2] stale values can cause serious safety or liveness failures. In
NoVisibility, stale values could cause it to print the wrong value or prevent the program from terminating. Things can
get even more complicated with stale values of object references, such as the link pointers in a linked list
implementation. Stale data can cause serious and confusing failures such as unexpected exceptions, corrupted data
structures, inaccurate computations, and infinite loops.
[2] Reading data without synchronization is analogous to using the READ_UNCOMMITTED isolation level in a database, where you are willing to
trade accuracy for performance. However, in the case of unsynchronized reads, you are trading away a greater degree of accuracy, since the visible
value for a shared variable can be arbitrarily stale.
MutableInteger in Listing 3.2 is not thread‐safe because the value field is accessed from both get and set without
synchronization. Among other hazards, it is susceptible to stale values: if one thread calls set, other threads calling get
may or may not see that update.
We can make MutableInteger thread safe by synchronizing the getter and setter as shown in SynchronizedInteger in
Listing 3.3. Synchronizing only the setter would not be sufficient: threads calling get would still be able to see stale
values.
Listing 3.2. Non-thread-safe Mutable Integer Holder.
@NotThreadSafe
public class MutableInteger {
private int value;
public int get() { return value; }
public void set(int value) { this.value = value; }
}
Listing 3.3. Thread-safe Mutable Integer Holder.
@ThreadSafe
public class SynchronizedInteger {
@GuardedBy("this") private int value;
public synchronized int get() { return value; }
public synchronized void set(int value) { this.value = value; }
}
3.1.2. Nonatomic 64-bit Operations
When a thread reads a variable without synchronization, it may see a stale value, but at least it sees a value that was
actually placed there by some thread rather than some random value. This safety guarantee is called out‐of‐thin‐air
safety.
Out‐of‐thin‐air safety applies to all variables, with one exception: 64‐bit numeric variables (double and long) that are
not declared volatile (see Section 3.1.4). The Java Memory Model requires fetch and store operations to be atomic,
but for nonvolatile long and double variables, the JVM is permitted to treat a 64‐bit read or write as two separate 32‐
bit operations. If the reads and writes occur in different threads, it is therefore possible to read a nonvolatile long and
get back the high 32 bits of one value and the low 32 bits of another.[3] Thus, even if you don't care about stale values, it
is not safe to use shared mutable long and double variables in multithreaded programs unless they are declared
volatile or guarded by a lock.
[3] When the Java Virtual Machine Specification was written, many widely used processor architectures could not efficiently provide atomic 64‐bit
arithmetic operations.
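To make the fix concrete, here is a minimal sketch (the holder class and its name are hypothetical, not from the text): declaring the shared long volatile obliges the JVM to treat its reads and writes as atomic 64-bit operations, eliminating the word-tearing risk described above.
public class WideValueHolder {
    // volatile: reads and writes of this long are guaranteed atomic,
    // so no thread can observe half of one write and half of another
    private volatile long value;

    public void set(long v) { value = v; }
    public long get() { return value; }
}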
3.1.3. Locking and Visibility
Intrinsic locking can be used to guarantee that one thread sees the effects of another in a predictable manner, as
illustrated by Figure 3.1. When thread A executes a synchronized block, and subsequently thread B enters a
synchronized block guarded by the same lock, the values of variables that were visible to A prior to releasing the lock
are guaranteed to be visible to B upon acquiring the lock. In other words, everything A did in or prior to a synchronized
block is visible to B when it executes a synchronized block guarded by the same lock. Without synchronization, there is
no such guarantee.
Figure 3.1. Visibility Guarantees for Synchronization.
We can now give the other reason for the rule requiring all threads to synchronize on the same lock when accessing a
shared mutable variable ‐ to guarantee that values written by one thread are made visible to other threads. Otherwise,
if a thread reads a variable without holding the appropriate lock, it might see a stale value.
Locking is not just about mutual exclusion; it is also about memory visibility. To ensure that all threads see the most up-to-date values of shared mutable variables, the reading and writing threads must synchronize on a common lock.
3.1.4. Volatile Variables
The Java language also provides an alternative, weaker form of synchronization, volatile variables, to ensure that
updates to a variable are propagated predictably to other threads. When a field is declared volatile, the compiler and
runtime are put on notice that this variable is shared and that operations on it should not be reordered with other
memory operations. Volatile variables are not cached in registers or in caches where they are hidden from other
processors, so a read of a volatile variable always returns the most recent write by any thread.
A good way to think about volatile variables is to imagine that they behave roughly like the SynchronizedInteger class
in Listing 3.3, replacing reads and writes of the volatile variable with calls to get and set.[4] Yet accessing a volatile
variable performs no locking and so cannot cause the executing thread to block, making volatile variables a lighter-weight synchronization mechanism than synchronized.[5]
[4] This analogy is not exact; the memory visibility effects of SynchronizedInteger are actually slightly stronger than those of volatile variables. See
Chapter 16.
[5] Volatile reads are only slightly more expensive than nonvolatile reads on most current processor architectures.
The visibility effects of volatile variables extend beyond the value of the volatile variable itself. When thread A writes to
a volatile variable and subsequently thread B reads that same variable, the values of all variables that were visible to A
prior to writing to the volatile variable become visible to B after reading the volatile variable. So from a memory visibility
perspective, writing a volatile variable is like exiting a synchronized block and reading a volatile variable is like entering
a synchronized block. However, we do not recommend relying too heavily on volatile variables for visibility; code that
relies on volatile variables for visibility of arbitrary state is more fragile and harder to understand than code that uses
locking.
Use volatile variables only when they simplify implementing and verifying your synchronization policy; avoid using
volatile variables when verifying correctness would require subtle reasoning about visibility. Good uses of volatile
variables include ensuring the visibility of their own state, that of the object they refer to, or indicating that an
important lifecycle event (such as initialization or shutdown) has occurred.
Listing 3.4 illustrates a typical use of volatile variables: checking a status flag to determine when to exit a loop. In this
example, our anthropomorphized thread is trying to get to sleep by the time‐honored method of counting sheep. For
this example to work, the asleep flag must be volatile. Otherwise, the thread might not notice when asleep has been
set by another thread.[6] We could instead have used locking to ensure visibility of changes to asleep, but that would
have made the code more cumbersome.
[6] Debugging tip: For server applications, be sure to always specify the -server JVM command line switch when invoking the JVM, even for
development and testing. The server JVM performs more optimization than the client JVM, such as hoisting variables out of a loop that are not
modified in the loop; code that might appear to work in the development environment (client JVM) can break in the deployment environment
(server JVM). For example, had we "forgotten" to declare the variable asleep as volatile in Listing 3.4, the server JVM could hoist the test out
of the loop (turning it into an infinite loop), but the client JVM would not. An infinite loop that shows up in development is far less costly than one
that only shows up in production.
Listing 3.4. Counting Sheep.
volatile boolean asleep;
...
while (!asleep)
countSomeSheep();
Volatile variables are convenient, but they have limitations. The most common use for volatile variables is as a
completion, interruption, or status flag, such as the asleep flag in Listing 3.4. Volatile variables can be used for other
kinds of state information, but more care is required when attempting this. For example, the semantics of volatile are
not strong enough to make the increment operation (count++) atomic, unless you can guarantee that the variable is
written only from a single thread. (Atomic variables do provide atomic read‐modify‐write support and can often be used
as "better volatile variables"; see Chapter 15.)
Locking can guarantee both visibility and atomicity; volatile variables can only guarantee visibility.
You can use volatile variables only when all the following criteria are met (see the sketch after this list):
• Writes to the variable do not depend on its current value, or you can ensure that only a single thread ever
updates the value;
• The variable does not participate in invariants with other state variables; and
• Locking is not required for any other reason while the variable is being accessed.
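A brief sketch (the class and its fields are hypothetical) contrasting a use of volatile that meets these criteria with one that violates the first:
public class VolatileExamples {
    // OK: writes do not depend on the current value, and the flag
    // participates in no invariants with other state
    private volatile boolean shutdownRequested;

    // NOT safe to increment concurrently: ++ is a read-modify-write
    private volatile int hitCount;

    public void requestShutdown() { shutdownRequested = true; }
    public boolean isShutdownRequested() { return shutdownRequested; }

    public void recordHit() {
        hitCount++;  // broken: two threads can interleave and lose an update
    }
}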
3.2. Publication and Escape
Publishing an object means making it available to code outside of its current scope, such as by storing a reference to it
where other code can find it, returning it from a non‐private method, or passing it to a method in another class. In many
situations, we want to ensure that objects and their internals are not published. In other situations, we do want to
publish an object for general use, but doing so in a thread‐safe manner may require synchronization. Publishing internal
state variables can compromise encapsulation and make it more difficult to preserve invariants; publishing objects
before they are fully constructed can compromise thread safety. An object that is published when it should not have
been is said to have escaped. Section 3.5 covers idioms for safe publication; right now, we look at how an object can
escape.
The most blatant form of publication is to store a reference in a public static field, where any class and thread could see
it, as in Listing 3.5. The initialize method instantiates a new HashSet and publishes it by storing a reference to it into
knownSecrets.
Listing 3.5. Publishing an Object.
public static Set<Secret> knownSecrets;
public void initialize() {
knownSecrets = new HashSet<Secret>();
}
Publishing one object may indirectly publish others. If you add a Secret to the published knownSecrets set, you've also
published that Secret, because any code can iterate the Set and obtain a reference to the new Secret. Similarly,
returning a reference from a non‐private method also publishes the returned object. UnsafeStates in Listing 3.6
publishes the supposedly private array of state abbreviations.
Listing 3.6. Allowing Internal Mutable State to Escape. Don't Do this.
class UnsafeStates {
private String[] states = new String[] {
"AK", "AL" ...
};
public String[] getStates() { return states; }
}
Publishing states in this way is problematic because any caller can modify its contents. In this case, the states array
has escaped its intended scope, because what was supposed to be private state has been effectively made public.
Publishing an object also publishes any objects referred to by its non‐private fields. More generally, any object that is
reachable from a published object by following some chain of non‐private field references and method calls has also
been published.
From the perspective of a class C, an alien method is one whose behavior is not fully specified by C. This includes
methods in other classes as well as overrideable methods (neither private nor final) in C itself. Passing an object to an
alien method must also be considered publishing that object. Since you can't know what code will actually be invoked,
you don't know that the alien method won't publish the object or retain a reference to it that might later be used from
another thread.
Whether another thread actually does something with a published reference doesn't really matter, because the risk of
misuse is still present.[7] Once an object escapes, you have to assume that another class or thread may, maliciously or
carelessly, misuse it. This is a compelling reason to use encapsulation: it makes it practical to analyze programs for
correctness and harder to violate design constraints accidentally.
[7] If someone steals your password and posts it on the alt.free‐passwords newsgroup, that information has escaped: whether or not someone has
(yet) used those credentials to create mischief, your account has still been compromised. Publishing a reference poses the same sort of risk.
A final mechanism by which an object or its internal state can be published is to publish an inner class instance, as
shown in ThisEscape in Listing 3.7. When ThisEscape publishes the EventListener, it implicitly publishes the
enclosing ThisEscape instance as well, because inner class instances contain a hidden reference to the enclosing
instance.
Listing 3.7. Implicitly Allowing the this Reference to Escape. Don't Do this.
public class ThisEscape {
public ThisEscape(EventSource source) {
source.registerListener(
new EventListener() {
public void onEvent(Event e) {
doSomething(e);
}
});
}
}
3.2.1. Safe Construction Practices
ThisEscape illustrates an important special case of escape ‐ when the this reference escapes during construction.
When the inner EventListener instance is published, so is the enclosing ThisEscape instance. But an object is in a
predictable, consistent state only after its constructor returns, so publishing an object from within its constructor can
publish an incompletely constructed object. This is true even if the publication is the last statement in the constructor. If
the this reference escapes during construction, the object is considered not properly constructed.[8]
[8] More specifically, the this reference should not escape from the thread until after the constructor returns. The this reference can be stored
somewhere by the constructor as long as it is not used by another thread until after construction. SafeListener in Listing 3.8 uses this technique.
Do not allow the this reference to escape during construction.
A common mistake that can let the this reference escape during construction is to start a thread from a constructor.
When an object creates a thread from its constructor, it almost always shares its this reference with the new thread,
either explicitly (by passing it to the constructor) or implicitly (because the Thread or Runnable is an inner class of the
owning object). The new thread might then be able to see the owning object before it is fully constructed. There's
nothing wrong with creating a thread in a constructor, but it is best not to start the thread immediately. Instead, expose
a start or initialize method that starts the owned thread. (See Chapter 7 for more on service lifecycle issues.)
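A hedged sketch of this advice (StartableService is a hypothetical class, not from the text): the constructor creates and configures the owned thread, but only an explicit lifecycle method starts it, so the new thread cannot observe the object mid-construction.
public class StartableService {
    private final Thread worker;

    public StartableService() {
        worker = new Thread(new Runnable() {
            public void run() {
                // ... perform the service's work ...
            }
        });
        // deliberately NOT started here; starting it would share 'this'
        // (via the inner class) with a running thread before construction completes
    }

    public void start() {
        worker.start();  // invoked by the client after the constructor returns
    }
}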
Calling an overrideable instance method (one that is neither private nor final) from the constructor can also allow the
this reference to escape.
If you are tempted to register an event listener or start a thread from a constructor, you can avoid the improper
construction by using a private constructor and a public factory method, as shown in SafeListener in Listing 3.8.
Listing 3.8. Using a Factory Method to Prevent the this Reference from Escaping During Construction.
public class SafeListener {
private final EventListener listener;
private SafeListener() {
listener = new EventListener() {
public void onEvent(Event e) {
doSomething(e);
}
};
}
public static SafeListener newInstance(EventSource source) {
SafeListener safe = new SafeListener();
source.registerListener(safe.listener);
return safe;
}
}
3.3. Thread Confinement
Accessing shared, mutable data requires using synchronization; one way to avoid this requirement is to not share. If
data is only accessed from a single thread, no synchronization is needed. This technique, thread confinement, is one of
the simplest ways to achieve thread safety. When an object is confined to a thread, such usage is automatically thread-safe
even if the confined object itself is not [CPJ 2.3.2].
Swing uses thread confinement extensively. The Swing visual components and data model objects are not thread safe;
instead, safety is achieved by confining them to the Swing event dispatch thread. To use Swing properly, code running in
threads other than the event thread should not access these objects. (To make this easier, Swing provides the
invokeLater mechanism to schedule a Runnable for execution in the event thread.) Many concurrency errors in Swing
applications stem from improper use of these confined objects from another thread.
Another common application of thread confinement is the use of pooled JDBC (Java Database Connectivity) Connection
objects. The JDBC specification does not require that Connection objects be thread‐safe.[9] In typical server applications,
a thread acquires a connection from the pool, uses it for processing a single request, and returns it. Since most requests,
such as servlet requests or EJB (Enterprise JavaBeans) calls, are processed synchronously by a single thread, and the
pool will not dispense the same connection to another thread until it has been returned, this pattern of connection
management implicitly confines the Connection to that thread for the duration of the request.
[9] The connection pool implementations provided by application servers are thread‐safe; connection pools are necessarily accessed from multiple
threads, so a non‐thread‐safe implementation would not make sense.
Just as the language has no mechanism for enforcing that a variable is guarded by a lock, it has no means of confining an
object to a thread. Thread confinement is an element of your program's design that must be enforced by its
implementation. The language and core libraries provide mechanisms that can help in maintaining thread confinement ‐
local variables and the ThreadLocal class ‐ but even with these, it is still the programmer's responsibility to ensure that
thread‐confined objects do not escape from their intended thread.
3.3.1. Ad-hoc Thread Confinement
Ad‐hoc thread confinement describes when the responsibility for maintaining thread confinement falls entirely on the
implementation. Ad‐hoc thread confinement can be fragile because none of the language features, such as visibility
modifiers or local variables, helps confine the object to the target thread. In fact, references to thread‐confined objects
such as visual components or data models in GUI applications are often held in public fields.
The decision to use thread confinement is often a consequence of the decision to implement a particular subsystem,
such as the GUI, as a single‐threaded subsystem. Single‐threaded subsystems can sometimes offer a simplicity benefit
that outweighs the fragility of ad‐hoc thread confinement.[10]
[10] Another reason to make a subsystem single‐threaded is deadlock avoidance; this is one of the primary reasons most GUI frameworks are
single‐threaded. Single‐threaded subsystems are covered in Chapter 9.
A special case of thread confinement applies to volatile variables. It is safe to perform read‐modify‐write operations on
shared volatile variables as long as you ensure that the volatile variable is only written from a single thread. In this case,
you are confining the modification to a single thread to prevent race conditions, and the visibility guarantees for volatile
variables ensure that other threads see the most up‐to‐date value.
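A sketch of this single-writer pattern (the sensor scenario is hypothetical): only one thread ever performs the read-modify-write, and volatile makes each result visible to all readers.
public class TemperatureTracker {
    // written only by the single sensor thread; read by any thread
    private volatile double lastReading;

    // must be called only from the sensor thread
    void updateFromSensor(double reading) {
        // a read-modify-write, safe here because writes are confined
        // to one thread; volatile guarantees visibility to readers
        lastReading = (lastReading * 9 + reading) / 10;
    }

    public double lastReading() { return lastReading; }
}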
Because of its fragility, ad‐hoc thread confinement should be used sparingly; if possible, use one of the stronger forms of
thread confinement (stack confinement or ThreadLocal) instead.
3.3.2. Stack Confinement
Stack confinement is a special case of thread confinement in which an object can only be reached through local
variables. Just as encapsulation can make it easier to preserve invariants, local variables can make it easier to confine
objects to a thread. Local variables are intrinsically confined to the executing thread; they exist on the executing
thread's stack, which is not accessible to other threads. Stack confinement (also called within‐thread or thread‐local
usage, but not to be confused with the ThreadLocal library class) is simpler to maintain and less fragile than ad‐hoc
thread confinement.
For primitively typed local variables, such as numPairs in loadTheArk in Listing 3.9, you cannot violate stack
confinement even if you tried. There is no way to obtain a reference to a primitive variable, so the language semantics
ensure that primitive local variables are always stack confined.
Listing 3.9. Thread Confinement of Local Primitive and Reference Variables.
public int loadTheArk(Collection<Animal> candidates) {
SortedSet<Animal> animals;
int numPairs = 0;
Animal candidate = null;
// animals confined to method, don't let them escape!
animals = new TreeSet<Animal>(new SpeciesGenderComparator());
animals.addAll(candidates);
for (Animal a : animals) {
if (candidate == null || !candidate.isPotentialMate(a))
candidate = a;
else {
ark.load(new AnimalPair(candidate, a));
++numPairs;
candidate = null;
}
}
return numPairs;
}
Maintaining stack confinement for object references requires a little more assistance from the programmer to ensure
that the referent does not escape. In loadTheArk, we instantiate a TreeSet and store a reference to it in animals. At
this point, there is exactly one reference to the Set, held in a local variable and therefore confined to the executing
thread. However, if we were to publish a reference to the Set (or any of its internals), the confinement would be
violated and the animals would escape.
Using a non‐thread‐safe object in a within‐thread context is still thread‐safe. However, be careful: the design
requirement that the object be confined to the executing thread, or the awareness that the confined object is not
thread‐safe, often exists only in the head of the developer when the code is written. If the assumption of within‐thread
usage is not clearly documented, future maintainers might mistakenly allow the object to escape.
3.3.3. ThreadLocal
A more formal means of maintaining thread confinement is ThreadLocal, which allows you to associate a per‐thread
value with a value‐holding object. ThreadLocal provides get and set accessor methods that maintain a separate copy
of the value for each thread that uses it, so a get returns the most recent value passed to set from the currently
executing thread.
Thread‐local variables are often used to prevent sharing in designs based on mutable Singletons or global variables. For
example, a single‐threaded application might maintain a global database connection that is initialized at startup to avoid
having to pass a Connection to every method. Since JDBC connections may not be thread‐safe, a multithreaded
application that uses a global connection without additional coordination is not thread‐safe either. By using a
ThreadLocal to store the JDBC connection, as in ConnectionHolder in Listing 3.10, each thread will have its own
connection.
Listing 3.10. Using ThreadLocal to Ensure Thread Confinement.
private static ThreadLocal<Connection> connectionHolder
    = new ThreadLocal<Connection>() {
        public Connection initialValue() {
            try {
                return DriverManager.getConnection(DB_URL);
            } catch (SQLException e) {
                // getConnection throws the checked SQLException;
                // rethrow unchecked so the initializer compiles as written
                throw new RuntimeException(e);
            }
        }
    };
public static Connection getConnection() {
return connectionHolder.get();
}
This technique can also be used when a frequently used operation requires a temporary object such as a buffer and
wants to avoid reallocating the temporary object on each invocation. For example, before Java 5.0, Integer.toString
used a ThreadLocal to store the 12‐byte buffer used for formatting its result, rather than using a shared static buffer
(which would require locking) or allocating a new buffer for each invocation.[11]
[11] This technique is unlikely to be a performance win unless the operation is performed very frequently or the allocation is unusually expensive.
In Java 5.0, it was replaced with the more straightforward approach of allocating a new buffer for every invocation, suggesting that for something
as mundane as a temporary buffer, it is not a performance win.
When a thread calls ThreadLocal.get for the first time, initialValue is consulted to provide the initial value for that
thread. Conceptually, you can think of a ThreadLocal<T> as holding a Map<Thread,T> that stores the thread‐specific
values, though this is not how it is actually implemented. The thread‐specific values are stored in the Thread object
itself; when the thread terminates, the thread‐specific values can be garbage collected.
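A sketch of the per-thread buffer idiom mentioned above (the formatter class is hypothetical): initialValue supplies each thread's private StringBuilder, which is then reused across calls without synchronization.
public class PerThreadFormatter {
    private static final ThreadLocal<StringBuilder> buffer =
        new ThreadLocal<StringBuilder>() {
            protected StringBuilder initialValue() {
                return new StringBuilder(32);
            }
        };

    public static String format(long value) {
        StringBuilder sb = buffer.get();  // this thread's own buffer
        sb.setLength(0);                  // reset before reuse
        return sb.append("value=").append(value).toString();
    }
}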
If you are porting a single‐threaded application to a multithreaded environment, you can preserve thread safety by
converting shared global variables into ThreadLocals, if the semantics of the shared globals permits this; an application-wide
cache would not be as useful if it were turned into a number of thread‐local caches.
ThreadLocal is widely used in implementing application frameworks. For example, J2EE containers associate a
transaction context with an executing thread for the duration of an EJB call. This is easily implemented using a static
ThreadLocal holding the transaction context: when framework code needs to determine what transaction is currently
running, it fetches the transaction context from this ThreadLocal. This is convenient in that it reduces the need to pass
execution context information into every method, but couples any code that uses this mechanism to the framework.
It is easy to abuse ThreadLocal by treating its thread confinement property as a license to use global variables or as a
means of creating "hidden" method arguments. Like global variables, thread‐local variables can detract from reusability
and introduce hidden couplings among classes, and should therefore be used with care.
3.4. Immutability
The other end‐run around the need to synchronize is to use immutable objects [EJ Item 13]. Nearly all the atomicity and
visibility hazards we've described so far, such as seeing stale values, losing updates, or observing an object to be in an
inconsistent state, have to do with the vagaries of multiple threads trying to access the same mutable state at the same
time. If an object's state cannot be modified, these risks and complexities simply go away.
An immutable object is one whose state cannot be changed after construction. Immutable objects are inherently
thread‐safe; their invariants are established by the constructor, and if their state cannot be changed, these invariants
always hold.
Immutable objects are always thread‐safe.
Immutable objects are simple. They can only be in one state, which is carefully controlled by the constructor. One of the
most difficult elements of program design is reasoning about the possible states of complex objects. Reasoning about
the state of immutable objects, on the other hand, is trivial.
Immutable objects are also safer. Passing a mutable object to untrusted code, or otherwise publishing it where
untrusted code could find it, is dangerous ‐ the untrusted code might modify its state, or, worse, retain a reference to it
and modify its state later from another thread. On the other hand, immutable objects cannot be subverted in this
manner by malicious or buggy code, so they are safe to share and publish freely without the need to make defensive
copies [EJ Item 24].
Neither the Java Language Specification nor the Java Memory Model formally defines immutability, but immutability is
not equivalent to simply declaring all fields of an object final. An object whose fields are all final may still be mutable,
since final fields can hold references to mutable objects.
An object is immutable if:
• Its state cannot be modified after construction;
• All its fields are final;[12] and
• It is properly constructed (the this reference does not escape during construction).
[12] It is technically possible to have an immutable object without all fields being final. String is such a class ‐ but this relies on delicate reasoning
about benign data races that requires a deep understanding of the Java Memory Model. (For the curious: String lazily computes the hash code the
first time hashCode is called and caches it in a non‐final field, but this works only because that field can take on only one non‐default value that is
the same every time it is computed because it is derived deterministically from immutable state. Don't try this at home.)
Immutable objects can still use mutable objects internally to manage their state, as illustrated by ThreeStooges in
Listing 3.11. While the Set that stores the names is mutable, the design of ThreeStooges makes it impossible to modify
that Set after construction. The stooges reference is final, so all object state is reached through a final field. The last
requirement, proper construction, is easily met since the constructor does nothing that would cause the this reference
to become accessible to code other than the constructor and its caller.
Listing 3.11. Immutable Class Built Out of Mutable Underlying Objects.
@Immutable
public final class ThreeStooges {
private final Set<String> stooges = new HashSet<String>();
public ThreeStooges() {
stooges.add("Moe");
stooges.add("Larry");
stooges.add("Curly");
}
public boolean isStooge(String name) {
return stooges.contains(name);
}
}
Because program state changes all the time, you might be tempted to think that immutable objects are of limited use,
but this is not the case. There is a difference between an object being immutable and the reference to it being
immutable. Program state stored in immutable objects can still be updated by "replacing" immutable objects with a new
instance holding new state; the next section offers an example of this technique.[13]
[13] Many developers fear that this approach will create performance problems, but these fears are usually unwarranted. Allocation is cheaper
than you might think, and immutable objects offer additional performance advantages such as reduced need for locking or defensive copies and
reduced impact on generational garbage collection.
3.4.1. Final Fields
The final keyword, a more limited version of the const mechanism from C++, supports the construction of immutable
objects. Final fields can't be modified (although the objects they refer to can be modified if they are mutable), but they
also have special semantics under the Java Memory Model. It is the use of final fields that makes possible the guarantee
of initialization safety (see Section 3.5.2) that lets immutable objects be freely accessed and shared without
synchronization.
Even if an object is mutable, making some fields final can still simplify reasoning about its state, since limiting the
mutability of an object restricts its set of possible states. An object that is "mostly immutable" but has one or two
mutable state variables is still simpler than one that has many mutable variables. Declaring fields final also documents
to maintainers that these fields are not expected to change.
Just as it is a good practice to make all fields private unless they need greater visibility [EJ Item 12], it is a good practice
to make all fields final unless they need to be mutable.
3.4.2. Example: Using Volatile to Publish Immutable Objects
In UnsafeCachingFactorizer on page 24, we tried to use two AtomicReferences to store the last number and last
factors, but this was not thread‐safe because we could not fetch or update the two related values atomically. Using
volatile variables for these values would not be thread‐safe for the same reason. However, immutable objects can
sometimes provide a weak form of atomicity.
The factoring servlet performs two operations that must be atomic: updating the cached result and conditionally
fetching the cached factors if the cached number matches the requested number. Whenever a group of related data
items must be acted on atomically, consider creating an immutable holder class for them, such as OneValueCache[14] in Listing 3.12.
[14] OneValueCache wouldn't be immutable without the copyOf calls in the constructor and getter. Arrays.copyOf was added as a convenience in
Java 6; clone would also work.
Race conditions in accessing or updating multiple related variables can be eliminated by using an immutable object to
hold all the variables. With a mutable holder object, you would have to use locking to ensure atomicity; with an
immutable one, once a thread acquires a reference to it, it need never worry about another thread modifying its state. If
the variables are to be updated, a new holder object is created, but any threads working with the previous holder still
see it in a consistent state.
Listing 3.12. Immutable Holder for Caching a Number and its Factors.
@Immutable
class OneValueCache {
private final BigInteger lastNumber;
private final BigInteger[] lastFactors;
public OneValueCache(BigInteger i,
BigInteger[] factors) {
lastNumber = i;
lastFactors = Arrays.copyOf(factors, factors.length);
}
public BigInteger[] getFactors(BigInteger i) {
if (lastNumber == null || !lastNumber.equals(i))
return null;
else
return Arrays.copyOf(lastFactors, lastFactors.length);
}
}
VolatileCachedFactorizer in Listing 3.13 uses a OneValueCache to store the cached number and factors. When a
thread sets the volatile cache field to reference a new OneValueCache, the new cached data becomes immediately
visible to other threads.
The cache‐related operations cannot interfere with each other because OneValueCache is immutable and the cache
field is accessed only once in each of the relevant code paths. This combination of an immutable holder object for
multiple state variables related by an invariant, and a volatile reference used to ensure its timely visibility, allows
VolatileCachedFactorizer to be thread‐safe even though it does no explicit locking.
3.5. Safe Publication
So far we have focused on ensuring that an object not be published, such as when it is supposed to be confined to a
thread or within another object. Of course, sometimes we do want to share objects across threads, and in this case we
must do so safely. Unfortunately, simply storing a reference to an object into a public field, as in Listing 3.14, is not
enough to publish that object safely.
Listing 3.13. Caching the Last Result Using a Volatile Reference to an Immutable Holder Object.
@ThreadSafe
public class VolatileCachedFactorizer implements Servlet {
private volatile OneValueCache cache =
new OneValueCache(null, null);
public void service(ServletRequest req, ServletResponse resp) {
BigInteger i = extractFromRequest(req);
BigInteger[] factors = cache.getFactors(i);
if (factors == null) {
factors = factor(i);
cache = new OneValueCache(i, factors);
}
encodeIntoResponse(resp, factors);
}
}
Listing 3.14. Publishing an Object without Adequate Synchronization. Don't Do this.
// Unsafe publication
public Holder holder;
public void initialize() {
holder = new Holder(42);
}
You may be surprised at how badly this harmless‐looking example could fail. Because of visibility problems, the Holder
could appear to another thread to be in an inconsistent state, even though its invariants were properly established by its
constructor! This improper publication could allow another thread to observe a partially constructed object.
3.5.1. Improper Publication: When Good Objects Go Bad
You cannot rely on the integrity of partially constructed objects. An observing thread could see the object in an
inconsistent state, and then later see its state suddenly change, even though it has not been modified since publication.
In fact, if the Holder in Listing 3.15 is published using the unsafe publication idiom in Listing 3.14, and a thread other
than the publishing thread were to call assertSanity, it could throw AssertionError![15]
[15] The problem here is not the Holder class itself, but that the Holder is not properly published. However, Holder can be made immune to
improper publication by declaring the n field to be final, which would make Holder immutable; see Section 3.5.2.
Listing 3.15. Class at Risk of Failure if Not Properly Published.
public class Holder {
private int n;
public Holder(int n) { this.n = n; }
public void assertSanity() {
if (n != n)
throw new AssertionError("This statement is false.");
}
}
Because synchronization was not used to make the Holder visible to other threads, we say the Holder was not properly
published. Two things can go wrong with improperly published objects. Other threads could see a stale value for the
holder field, and thus see a null reference or other older value even though a value has been placed in holder. But far
worse, other threads could see an up-to-date value for the holder reference, but stale values for the state of the
Holder.[16] To make things even less predictable, a thread may see a stale value the first time it reads a field and then a
more up‐to‐date value the next time, which is why assertSanity can throw AssertionError.
[16] While it may seem that field values set in a constructor are the first values written to those fields and therefore that there are no "older"
values to see as stale values, the Object constructor first writes the default values to all fields before subclass constructors run. It is therefore
possible to see the default value for a field as a stale value.
At the risk of repeating ourselves, some very strange things can happen when data is shared across threads without
sufficient synchronization.
3.5.2. Immutable Objects and Initialization Safety
Because immutable objects are so important, the Java Memory Model offers a special guarantee of initialization safety
for sharing immutable objects. As we've seen, that an object reference becomes visible to another thread does not
necessarily mean that the state of that object is visible to the consuming thread. In order to guarantee a consistent view
of the object's state, synchronization is needed.
Immutable objects, on the other hand, can be safely accessed even when synchronization is not used to publish the
object reference. For this guarantee of initialization safety to hold, all of the requirements for immutability must be met:
unmodifiable state, all fields are final, and proper construction. (If Holder in Listing 3.15 were immutable,
assertSanity could not throw AssertionError, even if the Holder was not properly published.)
Immutable objects can be used safely by any thread without additional synchronization, even when synchronization is
not used to publish them.
This guarantee extends to the values of all final fields of properly constructed objects; final fields can be safely accessed
without additional synchronization. However, if final fields refer to mutable objects, synchronization is still required to
access the state of the objects they refer to.
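Following footnote [15], a minimal sketch of the immune variant (the class name is ours, not the book's): with n final and no post-construction mutation, initialization safety guarantees that every thread sees the constructed value, even under the unsafe publication of Listing 3.14.
public class ImmutableHolder {
    private final int n;  // final field: covered by initialization safety

    public ImmutableHolder(int n) { this.n = n; }

    public int get() { return n; }  // never sees the pre-construction default
}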
3.5.3. Safe Publication Idioms
Objects that are not immutable must be safely published, which usually entails synchronization by both the publishing
and the consuming thread. For the moment, let's focus on ensuring that the consuming thread can see the object in its
as-published state; we'll deal with visibility of modifications made after publication soon.
To publish an object safely, both the reference to the object and the object's state must be made visible to other
threads at the same time. A properly constructed object can be safely published by:
• Initializing an object reference from a static initializer;
• Storing a reference to it into a volatile field or AtomicReference;
• Storing a reference to it into a final field of a properly constructed object; or
• Storing a reference to it into a field that is properly guarded by a lock.
The internal synchronization in thread‐safe collections means that placing an object in a thread‐safe collection, such as a
Vector or synchronizedList, fulfills the last of these requirements. If thread A places object X in a thread‐safe
collection and thread B subsequently retrieves it, B is guaranteed to see the state of X as A left it, even though the
application code that hands X off in this manner has no explicit synchronization. The thread‐safe library collections offer
the following safe publication guarantees, even if the Javadoc is less than clear on the subject:
• Placing a key or value in a Hashtable, synchronizedMap, or ConcurrentMap safely publishes it to any thread
that retrieves it from the Map (whether directly or via an iterator);
• Placing an element in a Vector, CopyOnWriteArrayList, CopyOnWriteArraySet, synchronizedList, or
synchronizedSet safely publishes it to any thread that retrieves it from the collection;
• Placing an element on a BlockingQueue or a ConcurrentLinkedQueue safely publishes it to any thread that
retrieves it from the queue.
Other handoff mechanisms in the class library (such as Future and Exchanger) also constitute safe publication; we will
identify these as providing safe publication as they are introduced.
Using a static initializer is often the easiest and safest way to publish objects that can be statically constructed:
public static Holder holder = new Holder(42);
Static initializers are executed by the JVM at class initialization time; because of internal synchronization in the JVM, this
mechanism is guaranteed to safely publish any objects initialized in this way [JLS 12.4.2].
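The same class-initialization guarantee underlies the lazy initialization holder idiom, sketched below (Resource is a hypothetical class standing in for something expensive to construct); the holder class is not initialized until getResource first references it, yet the publication is still safe.
public class ResourceFactory {
    private static class ResourceHolder {
        // initialized by the JVM, with proper internal synchronization,
        // on first use of ResourceHolder [JLS 12.4.2]
        static final Resource resource = new Resource();
    }

    public static Resource getResource() {
        return ResourceHolder.resource;  // safely published
    }
}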
3.5.4. Effectively Immutable Objects
Safe publication is sufficient for other threads to safely access objects that are not going to be modified after publication
without additional synchronization. The safe publication mechanisms all guarantee that the as‐published state of an
object is visible to all accessing threads as soon as the reference to it is visible, and if that state is not going to be
changed again, this is sufficient to ensure that any access is safe.
Objects that are not technically immutable, but whose state will not be modified after publication, are called effectively
immutable. They do not need to meet the strict definition of immutability in Section 3.4; they merely need to be treated
by the program as if they were immutable after they are published. Using effectively immutable objects can simplify
development and improve performance by reducing the need for synchronization.
Safely published effectively immutable objects can be used safely by any thread without additional synchronization.
For example, Date is mutable,[17] but if you use it as if it were immutable, you may be able to eliminate the locking that
would otherwise be required when sharing a Date across threads. Suppose you want to maintain a Map storing the last
login time of each user:
[17] This was probably a mistake in the class library design.
public Map<String, Date> lastLogin =
Collections.synchronizedMap(new HashMap<String, Date>());
If the Date values are not modified after they are placed in the Map, then the synchronization in the synchronizedMap
implementation is sufficient to publish the Date values safely, and no additional synchronization is needed when
accessing them.
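For illustration, hypothetical accessor methods for the lastLogin map above: each Date is written once and never modified afterward, so no locking beyond the synchronizedMap wrapper is needed.
public void recordLogin(String user) {
    lastLogin.put(user, new Date());  // safely published by synchronizedMap
}

public Date getLastLogin(String user) {
    return lastLogin.get(user);  // safe, provided callers never mutate the Date
}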
3.5.5. Mutable Objects
If an object may be modified after construction, safe publication ensures only the visibility of the as‐published state.
Synchronization must be used not only to publish a mutable object, but also every time the object is accessed to ensure
visibility of subsequent modifications. To share mutable objects safely, they must be safely published and be either
thread‐safe or guarded by a lock.
The publication requirements for an object depend on its mutability:
• Immutable objects can be published through any mechanism;
• Effectively immutable objects must be safely published;
• Mutable objects must be safely published, and must be either thread‐safe or guarded by a lock.
3.5.6. Sharing Objects Safely
Whenever you acquire a reference to an object, you should know what you are allowed to do with it. Do you need to
acquire a lock before using it? Are you allowed to modify its state, or only to read it? Many concurrency errors stem
from failing to understand these "rules of engagement" for a shared object. When you publish an object, you should
document how the object can be accessed.
The most useful policies for using and sharing objects in a concurrent program are:
Thread‐confined. A thread‐confined object is owned exclusively by and confined to one thread, and can be modified by
its owning thread.
Shared read‐only. A shared read‐only object can be accessed concurrently by multiple threads without additional
synchronization, but cannot be modified by any thread. Shared read‐only objects include immutable and effectively
immutable objects.
Shared thread‐safe. A thread‐safe object performs synchronization internally, so multiple threads can freely access it
through its public interface without further synchronization.
Guarded. A guarded object can be accessed only with a specific lock held. Guarded objects include those that are
encapsulated within other thread‐safe objects and published objects that are known to be guarded by a specific lock.
Chapter 4. Composing Objects
So far, we've covered the low‐level basics of thread safety and synchronization. But we don't want to have to analyze
each memory access to ensure that our program is thread‐safe; we want to be able to take thread‐safe components and
safely compose them into larger components or programs. This chapter covers patterns for structuring classes that can
make it easier to make them thread‐safe and to maintain them without accidentally undermining their safety
guarantees.
4.1. Designing a Thread-safe Class
While it is possible to write a thread‐safe program that stores all its state in public static fields, it is a lot harder to verify
its thread safety or to modify it so that it remains thread‐safe than one that uses encapsulation appropriately.
Encapsulation makes it possible to determine that a class is thread‐safe without having to examine the entire program.
The design process for a thread‐safe class should include these three basic elements:
• Identify the variables that form the object's state;
• Identify the invariants that constrain the state variables;
• Establish a policy for managing concurrent access to the object's state.
An object's state starts with its fields. If they are all of primitive type, the fields comprise the entire state. Counter in
Listing 4.1 has only one field, so the value field comprises its entire state. The state of an object with n primitive fields is
just the n‐tuple of its field values; the state of a 2D Point is its (x, y) value. If the object has fields that are references to
other objects, its state will encompass fields from the referenced objects as well. For example, the state of a
LinkedList includes the state of all the link node objects belonging to the list.
The synchronization policy defines how an object coordinates access to its state without violating its invariants or post-conditions.
It specifies what combination of immutability, thread confinement, and locking is used to maintain thread
safety, and which variables are guarded by which locks. To ensure that the class can be analyzed and maintained,
document the synchronization policy.
Listing 4.1. Simple Thread-safe Counter Using the Java Monitor Pattern.
@ThreadSafe
public final class Counter {
@GuardedBy("this") private long value = 0;
public synchronized long getValue() {
return value;
}
public synchronized long increment() {
if (value == Long.MAX_VALUE)
throw new IllegalStateException("counter overflow");
return ++value;
}
}
4.1.1. Gathering Synchronization Requirements
Making a class thread‐safe means ensuring that its invariants hold under concurrent access; this requires reasoning
about its state. Objects and variables have a state space: the range of possible states they can take on. The smaller this
state space, the easier it is to reason about. By using final fields wherever practical, you make it simpler to analyze the
possible states an object can be in. (In the extreme case, immutable objects can only be in a single state.)
Many classes have invariants that identify certain states as valid or invalid. The value field in Counter is a long. The
state space of a long ranges from Long.MIN_VALUE to Long.MAX_VALUE, but Counter places constraints on value;
negative values are not allowed.
Similarly, operations may have post‐conditions that identify certain state transitions as invalid. If the current state of a
Counter is 17, the only valid next state is 18. When the next state is derived from the current state, the operation is
necessarily a compound action. Not all operations impose state transition constraints; when updating a variable that
holds the current temperature, its previous state does not affect the computation.
Constraints placed on states or state transitions by invariants and post‐conditions create additional synchronization or
encapsulation requirements. If certain states are invalid, then the underlying state variables must be encapsulated,
otherwise client code could put the object into an invalid state. If an operation has invalid state transitions, it must be
made atomic. On the other hand, if the class does not impose any such constraints, we may be able to relax
encapsulation or serialization requirements to obtain greater flexibility or better performance.
A class can also have invariants that constrain multiple state variables. A number range class, like NumberRange in Listing
4.10, typically maintains state variables for the lower and upper bounds of the range. These variables must obey the
constraint that the lower bound be less than or equal to the upper bound. Multivariable invariants like this one create
atomicity requirements: related variables must be fetched or updated in a single atomic operation. You cannot update
one, release and reacquire the lock, and then update the others, since this could involve leaving the object in an invalid
state when the lock was released. When multiple variables participate in an invariant, the lock that guards them must
be held for the duration of any operation that accesses the related variables.
You cannot ensure thread safety without understanding an object's invariants and post‐conditions. Constraints on the
valid values or state transitions for state variables can create atomicity and encapsulation requirements.
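As a sketch of the multivariable rule (a simplified, corrected range class of our own, not the NumberRange of Listing 4.10): both bounds are guarded by the same lock and updated within a single lock acquisition, so no thread can ever observe lower > upper.
@ThreadSafe
public final class BoundedRange {
    @GuardedBy("this") private int lower = 0;
    @GuardedBy("this") private int upper = 0;

    public synchronized void setBounds(int newLower, int newUpper) {
        if (newLower > newUpper)
            throw new IllegalArgumentException("lower > upper");
        lower = newLower;  // both writes occur under one lock acquisition
        upper = newUpper;
    }

    public synchronized boolean contains(int i) {
        return lower <= i && i <= upper;
    }
}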
4.1.2. State-dependent Operations
Class invariants and method post‐conditions constrain the valid states and state transitions for an object. Some objects
also have methods with state‐based preconditions. For example, you cannot remove an item from an empty queue; a
queue must be in the "nonempty" state before you can remove an element. Operations with state‐based preconditions
are called state‐dependent [CPJ 3].
In a single‐threaded program, if a precondition does not hold, the operation has no choice but to fail. But in a
concurrent program, the precondition may become true later due to the action of another thread. Concurrent programs
add the possibility of waiting until the precondition becomes true, and then proceeding with the operation.
The built‐in mechanisms for efficiently waiting for a condition to become true ‐ wait and notify ‐ are tightly bound to
intrinsic locking, and can be difficult to use correctly. To create operations that wait for a precondition to become true
before proceeding, it is often easier to use existing library classes, such as blocking queues or semaphores, to provide
the desired state‐dependent behavior. Blocking library classes such as BlockingQueue, Semaphore, and other
synchronizers are covered in Chapter 5; creating state‐dependent classes using the low‐level mechanisms provided by
the platform and class library is covered in Chapter 14.
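A small sketch of that advice (TaskRunner is hypothetical): BlockingQueue.take supplies the wait-until-nonempty behavior, with no hand-rolled wait/notify code.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TaskRunner {
    private final BlockingQueue<Runnable> tasks =
        new LinkedBlockingQueue<Runnable>();

    public void submit(Runnable task) throws InterruptedException {
        tasks.put(task);  // blocks if a bounded queue is full
    }

    public void runNext() throws InterruptedException {
        tasks.take().run();  // blocks until an element is available
    }
}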
4.1.3. State Ownership
We implied in Section 4.1 that an object's state could be a subset of the fields in the object graph rooted at that object.
Why might it be a subset? Under what conditions are fields reachable from a given object not part of that object's state?
When defining which variables form an object's state, we want to consider only the data that object owns. Ownership is
not embodied explicitly in the language, but is instead an element of class design. If you allocate and populate a
HashMap, you are creating multiple objects: the HashMap object, a number of Map.Entry objects used by the
implementation of HashMap, and perhaps other internal objects as well. The logical state of a HashMap includes the state
of all its Map.Entry and internal objects, even though they are implemented as separate objects.
For better or worse, garbage collection lets us avoid thinking carefully about ownership. When passing an object to a
method in C++, you have to think fairly carefully about whether you are transferring ownership, engaging in a short-term loan, or envisioning long-term joint ownership. In Java, all these same ownership models are possible, but the
garbage collector reduces the cost of many of the common errors in reference sharing, enabling less‐than‐precise
thinking about ownership.
In many cases, ownership and encapsulation go together ‐ the object encapsulates the state it owns and owns the state
it encapsulates. It is the owner of a given state variable that gets to decide on the locking protocol used to maintain the
integrity of that variable's state. Ownership implies control, but once you publish a reference to a mutable object, you
no longer have exclusive control; at best, you might have "shared ownership". A class usually does not own the objects
passed to its methods or constructors, unless the method is designed to explicitly transfer ownership of objects passed
in (such as the synchronized collection wrapper factory methods).
Collection classes often exhibit a form of "split ownership", in which the collection owns the state of the collection
infrastructure, but client code owns the objects stored in the collection. An example is ServletContext from the servlet
framework. ServletContext provides a Map‐like object container service to servlets where they can register and
retrieve application objects by name with setAttribute and getAttribute. The ServletContext object implemented
by the servlet container must be thread‐safe, because it will necessarily be accessed by multiple threads. Servlets need
not use synchronization when calling setAttribute and getAttribute, but they may have to use synchronization
when using the objects stored in the ServletContext. These objects are owned by the application; they are being
stored for safekeeping by the servlet container on the application's behalf. Like all shared objects, they must be shared
safely; in order to prevent interference from multiple threads accessing the same object concurrently, they should
either be thread‐safe, effectively immutable, or explicitly guarded by a lock.[1]
[1] Interestingly, the HttpSession object, which performs a similar function in the servlet framework, may have stricter requirements. Because
the servlet container may access the objects in the HttpSession so they can be serialized for replication or passivation, they must be thread-safe because the container will be accessing them as well as the web application. (We say "may have" since replication and passivation is outside
of the servlet specification but is a common feature of servlet containers.)
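To make the split concrete, here is a small sketch (the class and attribute names are hypothetical; the servlet API calls are the standard javax.servlet ones): the container's attribute map is thread-safe, but the stored object must provide its own thread safety.

import javax.servlet.ServletContext;

// Hypothetical attribute object. The application owns it, so it must
// supply its own thread safety; the container only stores the reference.
public class VisitCounter {
    private long visits;   // guarded by "this"
    public synchronized void increment() { visits++; }
    public synchronized long get() { return visits; }
}

// Sketch of servlet-side usage; safe only because VisitCounter guards itself:
//   ServletContext ctx = getServletContext();
//   ctx.setAttribute("visits", new VisitCounter());
//   ((VisitCounter) ctx.getAttribute("visits")).increment();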
4.2. Instance Confinement
If an object is not thread‐safe, several techniques can still let it be used safely in a multithreaded program. You can
ensure that it is only accessed from a single thread (thread confinement), or that all access to it is properly guarded by a
lock.
Encapsulation simplifies making classes thread‐safe by promoting instance confinement, often just called confinement
[CPJ 2.3.3]. When an object is encapsulated within another object, all code paths that have access to the encapsulated
object are known and can therefore be analyzed more easily than if that object were accessible to the entire
program. Combining confinement with an appropriate locking discipline can ensure that otherwise non‐thread‐safe
objects are used in a thread‐safe manner.
Encapsulating data within an object confines access to the data to the object's methods, making it easier to ensure that
the data is always accessed with the appropriate lock held.
Confined objects must not escape their intended scope. An object may be confined to a class instance (such as a private
class member), a lexical scope (such as a local variable), or a thread (such as an object that is passed from method to
method within a thread, but not supposed to be shared across threads). Objects don't escape on their own, of course ‐
they need help from the developer, who assists by publishing the object beyond its intended scope.
PersonSet in Listing 4.2 illustrates how confinement and locking can work together to make a class thread‐safe even
when its component state variables are not. The state of PersonSet is managed by a HashSet, which is not thread‐safe.
But because mySet is private and not allowed to escape, the HashSet is confined to the PersonSet. The only code paths
that can access mySet are addPerson and containsPerson, and each of these acquires the lock on the PersonSet. All its
state is guarded by its intrinsic lock, making PersonSet thread‐safe.
Listing 4.2. Using Confinement to Ensure Thread Safety.
@ThreadSafe
public class PersonSet {
@GuardedBy("this")
private final Set<Person> mySet = new HashSet<Person>();
public synchronized void addPerson(Person p) {
mySet.add(p);
}
public synchronized boolean containsPerson(Person p) {
return mySet.contains(p);
}
}
This example makes no assumptions about the thread‐safety of Person, but if it is mutable, additional synchronization
will be needed when accessing a Person retrieved from a PersonSet. The most reliable way to do this would be to make
Person thread‐safe; less reliable would be to guard the Person objects with a lock and ensure that all clients follow the
protocol of acquiring the appropriate lock before accessing the Person.
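As a sketch of that less reliable protocol (the accessor and mutator here are hypothetical), every client must acquire the agreed-upon lock, in this case the Person instance itself, before touching its state:

// Hypothetical protocol: all clients agree to guard each Person with
// the Person instance's own lock before reading or writing its fields.
void updateAddress(Person p, String newAddress) {
    synchronized (p) {
        p.setAddress(newAddress);   // hypothetical mutator on Person
    }
}

The protocol works only if every client honors it; a single unguarded access undermines it, which is why making Person itself thread-safe is the more reliable choice.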
Instance confinement is one of the easiest ways to build thread‐safe classes. It also allows flexibility in the choice of
locking strategy; PersonSet happened to use its own intrinsic lock to guard its state, but any lock, consistently used,
would do just as well. Instance confinement also allows different state variables to be guarded by different locks. (For an
example of a class that uses multiple lock objects to guard its state, see ServerStatus on page 236.)
There are many examples of confinement in the platform class libraries, including some classes that exist solely to turn
non‐thread‐safe classes into thread‐safe ones. The basic collection classes such as ArrayList and HashMap are not
thread‐safe, but the class library provides wrapper factory methods (Collections.synchronizedList and friends) so
they can be used safely in multithreaded environments. These factories use the Decorator pattern (Gamma et al., 1995)
to wrap the collection with a synchronized wrapper object; the wrapper implements each method of the appropriate
interface as a synchronized method that forwards the request to the underlying collection object. So long as the
wrapper object holds the only reachable reference to the underlying collection (i.e., the underlying collection is confined
to the wrapper), the wrapper object is then thread‐safe. The Javadoc for these methods warns that all access to the
underlying collection must be made through the wrapper.
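A brief sketch of what this warning means in practice (variable names are illustrative):

// Correct: the backing ArrayList is confined to the wrapper because no
// direct reference to it is ever retained.
List<String> safe = Collections.synchronizedList(new ArrayList<String>());

// Broken: retaining the raw reference lets callers bypass the wrapper's lock.
ArrayList<String> raw = new ArrayList<String>();
List<String> wrapped = Collections.synchronizedList(raw);
raw.add("oops");   // unsynchronized access violates confinement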
Of course, it is still possible to violate confinement by publishing a supposedly confined object; if an object is intended
to be confined to a specific scope, then letting it escape from that scope is a bug. Confined objects can also escape by
publishing other objects such as iterators or inner class instances that may indirectly publish the confined objects.
Confinement makes it easier to build thread‐safe classes because a class that confines its state can be analyzed for
thread safety without having to examine the whole program.
4.2.1. The Java Monitor Pattern
Following the principle of instance confinement to its logical conclusion leads you to the Java monitor pattern.[2] An
object following the Java monitor pattern encapsulates all its mutable state and guards it with the object's own intrinsic
lock.
[2] The Java monitor pattern is inspired by Hoare's work on monitors (Hoare, 1974), though there are significant differences between this pattern
and a true monitor. The bytecode instructions for entering and exiting a synchronized block are even called monitorenter and monitorexit, and
Java's built‐in (intrinsic) locks are sometimes called monitor locks or monitors.
Counter in Listing 4.1 shows a typical example of this pattern. It encapsulates one state variable, value, and all access
to that state variable is through the methods of Counter, which are all synchronized.
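Listing 4.1 appears earlier in this chapter; a monitor-pattern counter along those lines looks roughly like this (a sketch, not the verbatim listing):

@ThreadSafe
public final class Counter {
    @GuardedBy("this") private long value = 0;

    public synchronized long getValue() {
        return value;
    }
    public synchronized long increment() {
        if (value == Long.MAX_VALUE)
            throw new IllegalStateException("counter overflow");
        return ++value;
    }
}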
The Java monitor pattern is used by many library classes, such as Vector and Hashtable. Sometimes a more
sophisticated synchronization policy is needed; Chapter 11 shows how to improve scalability through finer‐grained
locking strategies. The primary advantage of the Java monitor pattern is its simplicity.
The Java monitor pattern is merely a convention; any lock object could be used to guard an object's state so long as it is
used consistently. Listing 4.3 illustrates a class that uses a private lock to guard its state.
Listing 4.3. Guarding State with a Private Lock.
public class PrivateLock {
private final Object myLock = new Object();
@GuardedBy("myLock") Widget widget;
void someMethod() {
synchronized(myLock) {
// Access or modify the state of widget
}
}
}
There are advantages to using a private lock object instead of an object's intrinsic lock (or any other publicly accessible
lock). Making the lock object private encapsulates the lock so that client code cannot acquire it, whereas a publicly
accessible lock allows client code to participate in its synchronization policy ‐ correctly or incorrectly. Clients that
improperly acquire another object's lock could cause liveness problems, and verifying that a publicly accessible lock is
properly used requires examining the entire program rather than a single class.
4.2.2. Example: Tracking Fleet Vehicles
Counter in Listing 4.1 is a concise, but trivial, example of the Java monitor pattern. Let's build a slightly less trivial
example: a "vehicle tracker" for dispatching fleet vehicles such as taxicabs, police cars, or delivery trucks. We'll build it
first using the monitor pattern, and then see how to relax some of the encapsulation requirements while retaining
thread safety.
Each vehicle is identified by a String and has a location represented by (x, y) coordinates. The VehicleTracker classes
encapsulate the identity and locations of the known vehicles, making them well-suited as a data model in a
model-view-controller GUI application where it might be shared by a view thread and multiple updater threads. The view thread
would fetch the names and locations of the vehicles and render them on a display:
Map<String, Point> locations = vehicles.getLocations();
for (String key : locations.keySet())
renderVehicle(key, locations.get(key));
Similarly, the updater threads would modify vehicle locations with data received from GPS devices or entered manually
by a dispatcher through a GUI interface:
void vehicleMoved(VehicleMovedEvent evt) {
Point loc = evt.getNewLocation();
vehicles.setLocation(evt.getVehicleId(), loc.x, loc.y);
}
Since the view thread and the updater threads will access the data model concurrently, it must be thread‐safe. Listing
4.4 shows an implementation of the vehicle tracker using the Java monitor pattern that uses MutablePoint in Listing 4.5
for representing the vehicle locations.
Even though MutablePoint is not thread‐safe, the tracker class is. Neither the map nor any of the mutable points it
contains is ever published. When we need to return vehicle locations to callers, the appropriate values are copied
using either the MutablePoint copy constructor or deepCopy, which creates a new Map whose values are copies of
those in the old Map.[3]
[3] Note that deepCopy can't just wrap the Map with an unmodifiableMap, because that protects only the collection from modification; it does not
prevent callers from modifying the mutable objects stored in it. For the same reason, populating the HashMap in deepCopy via a copy constructor
wouldn't work either, because only the references to the points would be copied, not the point objects themselves.
This implementation maintains thread safety in part by copying mutable data before returning it to the client. This is
usually not a performance issue, but could become one if the set of vehicles is very large.[4] Another consequence of
copying the data on each call to getLocations is that the contents of the returned collection do not change even if the
underlying locations change. Whether this is good or bad depends on your requirements. It could be a benefit if there
are internal consistency requirements on the location set, in which case returning a consistent snapshot is critical, or a
drawback if callers require up‐to‐date information for each vehicle and therefore need to refresh their snapshot more
often.
[4] Because deepCopy is called from a synchronized method, the tracker's intrinsic lock is held for the duration of what might be a long‐running
copy operation, and this could degrade the responsiveness of the user interface when many vehicles are being tracked.
4.3. Delegating Thread Safety
All but the most trivial objects are composite objects. The Java monitor pattern is useful when building classes from
scratch or composing classes out of objects that are not thread‐safe. But what if the components of our class are already
thread‐safe? Do we need to add an additional layer of thread safety? The answer is . . . "it depends". In some cases a
composite made of thread-safe components is thread-safe (Listings 4.7 and 4.9), and in others it is merely a good start
(Listing 4.10).
In CountingFactorizer on page 23, we added an AtomicLong to an otherwise stateless object, and the resulting
composite object was still thread‐safe. Since the state of CountingFactorizer is the state of the thread‐safe
AtomicLong, and since CountingFactorizer imposes no additional validity constraints on the state of the counter, it is
easy to see that CountingFactorizer is thread‐safe. We could say that CountingFactorizer delegates its thread
safety responsibilities to the AtomicLong: CountingFactorizer is thread‐safe because AtomicLong is.[5]
[5] If count were not final, the thread safety analysis of CountingFactorizer would be more complicated. If CountingFactorizer could modify count
to reference a different AtomicLong, we would then have to ensure that this update was visible to all threads that might access the count, and that
there were no race conditions regarding the value of the count reference. This is another good reason to use final fields wherever practical.
Listing 4.4. Monitor-based Vehicle Tracker Implementation.
@ThreadSafe
public class MonitorVehicleTracker {
@GuardedBy("this")
private final Map<String, MutablePoint> locations;
public MonitorVehicleTracker(
Map<String, MutablePoint> locations) {
this.locations = deepCopy(locations);
}
public synchronized Map<String, MutablePoint> getLocations() {
return deepCopy(locations);
}
public synchronized MutablePoint getLocation(String id) {
MutablePoint loc = locations.get(id);
return loc == null ? null : new MutablePoint(loc);
}
public synchronized void setLocation(String id, int x, int y) {
MutablePoint loc = locations.get(id);
if (loc == null)
throw new IllegalArgumentException("No such ID: " + id);
loc.x = x;
loc.y = y;
}
private static Map<String, MutablePoint> deepCopy(
Map<String, MutablePoint> m) {
Map<String, MutablePoint> result =
new HashMap<String, MutablePoint>();
for (String id : m.keySet())
result.put(id, new MutablePoint(m.get(id)));
return Collections.unmodifiableMap(result);
}
}
public class MutablePoint { /* Listing 4.5 */ }
Listing 4.5. Mutable Point Class Similar to java.awt.Point.
@NotThreadSafe
public class MutablePoint {
public int x, y;
public MutablePoint() { x = 0; y = 0; }
public MutablePoint(MutablePoint p) {
this.x = p.x;
this.y = p.y;
}
}
4.3.1. Example: Vehicle Tracker Using Delegation
As a more substantial example of delegation, let's construct a version of the vehicle tracker that delegates to a
thread-safe class. We store the locations in a Map, so we start with a thread-safe Map implementation, ConcurrentHashMap. We
also store the location using an immutable Point class instead of MutablePoint, shown in Listing 4.6.
Listing 4.6. Immutable Point class used by DelegatingVehicleTracker.
@Immutable
public class Point {
public final int x, y;
public Point(int x, int y) {
this.x = x;
this.y = y;
}
}
Point is thread‐safe because it is immutable. Immutable values can be freely shared and published, so we no longer
need to copy the locations when returning them.
DelegatingVehicleTracker in Listing 4.7 does not use any explicit synchronization; all access to state is managed by
ConcurrentHashMap, and all the keys and values of the Map are immutable.
If we had used the original MutablePoint class instead of Point, we would be breaking encapsulation by letting
getLocations publish a reference to mutable state that is not thread‐safe. Notice that we've changed the behavior of
the vehicle tracker class slightly; while the monitor version returned a snapshot of the locations, the delegating version
returns an unmodifiable but "live" view of the vehicle locations. This means that if thread A calls getLocations and
thread B later modifies the location of some of the points, those changes are reflected in the Map returned to thread A.
As we remarked earlier, this can be a benefit (more up‐to‐date data) or a liability (potentially inconsistent view of the
fleet), depending on your requirements.
If an unchanging view of the fleet is required, getLocations could instead return a shallow copy of the locations map.
Since the contents of the Map are immutable, only the structure of the Map, not the contents, must be copied, as shown
in Listing 4.8 (which returns a plain HashMap, since getLocations did not promise to return a thread‐safe Map).
Listing 4.7. Delegating Thread Safety to a ConcurrentHashMap.
@ThreadSafe
public class DelegatingVehicleTracker {
private final ConcurrentMap<String, Point> locations;
private final Map<String, Point> unmodifiableMap;
public DelegatingVehicleTracker(Map<String, Point> points) {
locations = new ConcurrentHashMap<String, Point>(points);
unmodifiableMap = Collections.unmodifiableMap(locations);
}
public Map<String, Point> getLocations() {
return unmodifiableMap;
}
public Point getLocation(String id) {
return locations.get(id);
}
public void setLocation(String id, int x, int y) {
if (locations.replace(id, new Point(x, y)) == null)
throw new IllegalArgumentException(
"invalid vehicle name: " + id);
}
}
Listing 4.8. Returning a Static Copy of the Location Set Instead of a "Live" One.
public Map<String, Point> getLocations() {
return Collections.unmodifiableMap(
new HashMap<String, Point>(locations));
}
4.3.2. Independent State Variables
The delegation examples so far delegate to a single, thread‐safe state variable. We can also delegate thread safety to
more than one underlying state variable as long as those underlying state variables are independent, meaning that the
composite class does not impose any invariants involving the multiple state variables.
VisualComponent in Listing 4.9 is a graphical component that allows clients to register listeners for mouse and
keystroke events. It maintains a list of registered listeners of each type, so that when an event occurs the appropriate
listeners can be invoked. But there is no relationship between the set of mouse listeners and key listeners; the two are
independent, and therefore VisualComponent can delegate its thread safety obligations to two underlying thread‐safe
lists.
VisualComponent uses a CopyOnWriteArrayList to store each listener list; this is a thread‐safe List implementation
particularly suited for managing listener lists (see Section 5.2.3). Each List is thread‐safe, and because there are no
constraints coupling the state of one to the state of the other, VisualComponent can delegate its thread safety
responsibilities to the underlying mouseListeners and keyListeners objects.
Listing 4.9. Delegating Thread Safety to Multiple Underlying State Variables.
public class VisualComponent {
private final List<KeyListener> keyListeners
= new CopyOnWriteArrayList<KeyListener>();
private final List<MouseListener> mouseListeners
= new CopyOnWriteArrayList<MouseListener>();
public void addKeyListener(KeyListener listener) {
keyListeners.add(listener);
}
public void addMouseListener(MouseListener listener) {
mouseListeners.add(listener);
}
public void removeKeyListener(KeyListener listener) {
keyListeners.remove(listener);
}
public void removeMouseListener(MouseListener listener) {
mouseListeners.remove(listener);
}
}
4.3.3. When Delegation Fails
Most composite classes are not as simple as VisualComponent: they have invariants that relate their component state
variables. NumberRange in Listing 4.10 uses two AtomicIntegers to manage its state, but imposes an additional
constraint ‐ that the first number be less than or equal to the second.
NumberRange is not thread‐safe; it does not preserve the invariant that constrains lower and upper. The setLower and
setUpper methods attempt to respect this invariant, but do so poorly. Both setLower and setUpper are check‐then‐act
sequences, but they do not use sufficient locking to make them atomic. If the number range holds (0, 10), and one
thread calls setLower(5) while another thread calls setUpper(4), with some unlucky timing both will pass the checks
in the setters and both modifications will be applied. The result is that the range now holds (5, 4), an invalid state. So
while the underlying AtomicIntegers are thread‐safe, the composite class is not. Because the underlying state variables
lower and upper are not independent, NumberRange cannot simply delegate thread safety to its thread‐safe state
variables.
NumberRange could be made thread‐safe by using locking to maintain its invariants, such as guarding lower and upper
with a common lock. It must also avoid publishing lower and upper to prevent clients from subverting its invariants.
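One way to sketch that repair (the class name SafeNumberRange is hypothetical): replace the atomics with plain fields and guard both fields, and the check-then-act sequences that relate them, with the object's intrinsic lock.

@ThreadSafe
public class SafeNumberRange {
    // INVARIANT: lower <= upper; both fields guarded by "this"
    @GuardedBy("this") private int lower = 0;
    @GuardedBy("this") private int upper = 0;

    public synchronized void setLower(int i) {
        if (i > upper)
            throw new IllegalArgumentException(
                "can't set lower to " + i + " > upper");
        lower = i;
    }
    public synchronized void setUpper(int i) {
        if (i < lower)
            throw new IllegalArgumentException(
                "can't set upper to " + i + " < lower");
        upper = i;
    }
    public synchronized boolean isInRange(int i) {
        return i >= lower && i <= upper;
    }
}

Because each setter now checks and updates under the same lock, the (5, 4) interleaving described above is impossible.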
If a class has compound actions, as NumberRange does, delegation alone is again not a suitable approach for thread
safety. In these cases, the class must provide its own locking to ensure that compound actions are atomic, unless the
entire compound action can also be delegated to the underlying state variables.
If a class is composed of multiple independent thread‐safe state variables and has no operations that have any invalid
state transitions, then it can delegate thread safety to the underlying state variables.
Listing 4.10. Number Range Class that does Not Sufficiently Protect Its Invariants. Don't Do this.
public class NumberRange {
// INVARIANT: lower <= upper
private final AtomicInteger lower = new AtomicInteger(0);
private final AtomicInteger upper = new AtomicInteger(0);
public void setLower(int i) {
// Warning -- unsafe check-then-act
if (i > upper.get())
throw new IllegalArgumentException(
"can't set lower to " + i + " > upper");
lower.set(i);
}
public void setUpper(int i) {
// Warning -- unsafe check-then-act
if (i < lower.get())
throw new IllegalArgumentException(
"can't set upper to " + i + " < lower");
upper.set(i);
}
public boolean isInRange(int i) {
return (i >= lower.get() && i <= upper.get());
}
}
The problem that prevented NumberRange from being thread‐safe even though its state components were thread‐safe is
very similar to one of the rules about volatile variables described in Section 3.1.4: a variable is suitable for being
declared volatile only if it does not participate in invariants involving other state variables.
4.3.4. Publishing Underlying State Variables
When you delegate thread safety to an object's underlying state variables, under what conditions can you publish those
variables so that other classes can modify them as well? Again, the answer depends on what invariants your class
imposes on those variables. While the underlying value field in Counter could take on any integer value, Counter
constrains it to take on only positive values, and the increment operation constrains the set of valid next states given
any current state. If you were to make the value field public, clients could change it to an invalid value, so publishing it
would render the class incorrect. On the other hand, if a variable represents the current temperature or the ID of the
last user to log on, then having another class modify this value at any time probably would not violate any invariants, so
publishing this variable might be acceptable. (It still may not be a good idea, since publishing mutable variables
constrains future development and opportunities for subclassing, but it would not necessarily render the class not
thread‐safe.)
If a state variable is thread‐safe, does not participate in any invariants that constrain its value, and has no prohibited
state transitions for any of its operations, then it can safely be published.
For example, it would be safe to publish mouseListeners or keyListeners in VisualComponent. Because
VisualComponent does not impose any constraints on the valid states of its listener lists, these fields could be made
public or otherwise published without compromising thread safety.
4.3.5. Example: Vehicle Tracker that Publishes Its State
Let's construct another version of the vehicle tracker that publishes its underlying mutable state. Again, we need to
modify the interface a little bit to accommodate this change, this time using mutable but thread‐safe points.
Listing 4.11. Thread-safe Mutable Point Class.
@ThreadSafe
public class SafePoint {
@GuardedBy("this") private int x, y;
private SafePoint(int[] a) { this(a[0], a[1]); }
public SafePoint(SafePoint p) { this(p.get()); }
public SafePoint(int x, int y) {
this.x = x;
this.y = y;
}
public synchronized int[] get() {
return new int[] { x, y };
}
public synchronized void set(int x, int y) {
this.x = x;
this.y = y;
}
}
SafePoint in Listing 4.11 provides a getter that retrieves both the x and y values at once by returning a two‐element
array.[6] If we provided separate getters for x and y, then the values could change between the time one coordinate is
retrieved and the other, resulting in a caller seeing an inconsistent value: an (x, y) location where the vehicle never was.
Using SafePoint, we can construct a vehicle tracker that publishes the underlying mutable state without undermining
thread safety, as shown in the PublishingVehicleTracker class in Listing 4.12.
[6] The private constructor exists to avoid the race condition that would occur if the copy constructor were implemented as this(p.x, p.y); this is an
example of the private constructor capture idiom (Bloch and Gafter, 2005).
PublishingVehicleTracker derives its thread safety from delegation to an underlying ConcurrentHashMap, but this
time the contents of the Map are thread-safe mutable points rather than immutable ones. The getLocations method
returns an unmodifiable view of the underlying Map. Callers cannot add or remove vehicles, but could change the
location of one of the vehicles by mutating the SafePoint values in the returned Map. Again, the "live" nature of the Map
may be a benefit or a drawback, depending on the requirements. PublishingVehicleTracker is thread‐safe, but would
not be so if it imposed any additional constraints on the valid values for vehicle locations. If it needed to be able to
"veto" changes to vehicle locations or to take action when a location changes, the approach taken by
PublishingVehicleTracker would not be appropriate.
Listing 4.12. Vehicle Tracker that Safely Publishes Underlying State.
@ThreadSafe
public class PublishingVehicleTracker {
private final Map<String, SafePoint> locations;
private final Map<String, SafePoint> unmodifiableMap;
public PublishingVehicleTracker(
Map<String, SafePoint> locations) {
this.locations
= new ConcurrentHashMap<String, SafePoint>(locations);
this.unmodifiableMap
= Collections.unmodifiableMap(this.locations);
}
public Map<String, SafePoint> getLocations() {
return unmodifiableMap;
}
public SafePoint getLocation(String id) {
return locations.get(id);
}
public void setLocation(String id, int x, int y) {
if (!locations.containsKey(id))
throw new IllegalArgumentException(
"invalid vehicle name: " + id);
locations.get(id).set(x, y);
}
}
4.4. Adding Functionality to Existing Thread-safe Classes
The Java class library contains many useful "building block" classes. Reusing existing classes is often preferable to
creating new ones: reuse can reduce development effort, development risk (because the existing components are
already tested), and maintenance cost. Sometimes a thread‐safe class that supports all of the operations we want
already exists, but often the best we can find is a class that supports almost all the operations we want, and then we
need to add a new operation to it without undermining its thread safety.
As an example, let's say we need a thread‐safe List with an atomic put‐if‐absent operation. The synchronized List
implementations nearly do the job, since they provide the contains and add methods from which we can construct a
put‐if‐absent operation.
The concept of put‐if‐absent is straightforward enough ‐ check to see if an element is in the collection before adding it,
and do not add it if it is already there. (Your "check‐then‐act" warning bells should be going off now.) The requirement
that the class be thread‐safe implicitly adds another requirement ‐ that operations like put‐if‐absent be atomic. Any
reasonable interpretation suggests that, if you take a List that does not contain object X, and add X twice with
put-if-absent, the resulting collection contains only one copy of X. But, if put-if-absent were not atomic, with some unlucky
timing two threads could both see that X was not present and both add X, resulting in two copies of X.
The safest way to add a new atomic operation is to modify the original class to support the desired operation, but this is
not always possible because you may not have access to the source code or may not be free to modify it. If you can
modify the original class, you need to understand the implementation's synchronization policy so that you can enhance
it in a manner consistent with its original design. Adding the new method directly to the class means that all the code
that implements the synchronization policy for that class is still contained in one source file, facilitating easier
comprehension and maintenance.
Another approach is to extend the class, assuming it was designed for extension. BetterVector in Listing 4.13 extends
Vector to add a putIfAbsent method. Extending Vector is straightforward enough, but not all classes expose enough
of their state to subclasses to admit this approach.
Extension is more fragile than adding code directly to a class, because the implementation of the synchronization policy
is now distributed over multiple, separately maintained source files. If the underlying class were to change its
synchronization policy by choosing a different lock to guard its state variables, the subclass would subtly and silently
break, because it no longer used the right lock to control concurrent access to the base class's state. (The
synchronization policy of Vector is fixed by its specification, so BetterVector would not suffer from this problem.)
Listing 4.13. Extending Vector to have a Put-if-absent Method.
@ThreadSafe
public class BetterVector<E> extends Vector<E> {
public synchronized boolean putIfAbsent(E x) {
boolean absent = !contains(x);
if (absent)
add(x);
return absent;
}
}
4.4.1. Client-side Locking
For an ArrayList wrapped with a Collections.synchronizedList wrapper, neither of these approaches ‐ adding a
method to the original class or extending the class ‐ works because the client code does not even know the class of the
List object returned from the synchronized wrapper factories. A third strategy is to extend the functionality of the class
without extending the class itself by placing extension code in a "helper" class.
Listing 4.14 shows a failed attempt to create a helper class with an atomic put‐if‐absent operation for operating on a
thread‐safe List.
Listing 4.14. Non-thread-safe Attempt to Implement Put-if-absent. Don't Do this.
@NotThreadSafe
public class ListHelper<E> {
public List<E> list =
Collections.synchronizedList(new ArrayList<E>());
...
public synchronized boolean putIfAbsent(E x) {
boolean absent = !list.contains(x);
if (absent)
list.add(x);
return absent;
}
}
Why wouldn't this work? After all, putIfAbsent is synchronized, right? The problem is that it synchronizes on the
wrong lock. Whatever lock the List uses to guard its state, it sure isn't the lock on the ListHelper. ListHelper
provides only the illusion of synchronization; the various list operations, while all synchronized, use different locks,
which means that putIfAbsent is not atomic relative to other operations on the List. So there is no guarantee that
another thread won't modify the list while putIfAbsent is executing.
To make this approach work, we have to use the same lock that the List uses by using client‐side locking or external
locking. Client‐side locking entails guarding client code that uses some object X with the lock X uses to guard its own
state. In order to use client‐side locking, you must know what lock X uses.
The documentation for Vector and the synchronized wrapper classes states, albeit obliquely, that they support
client-side locking, by using the intrinsic lock for the Vector or the wrapper collection (not the wrapped collection). Listing
4.15 shows a putIfAbsent operation on a thread‐safe List that correctly uses client‐side locking.
Listing 4.15. Implementing Put-if-absent with Client-side Locking.
@ThreadSafe
public class ListHelper<E> {
public List<E> list =
Collections.synchronizedList(new ArrayList<E>());
...
public boolean putIfAbsent(E x) {
synchronized (list) {
boolean absent = !list.contains(x);
if (absent)
list.add(x);
return absent;
}
}
}
If extending a class to add another atomic operation is fragile because it distributes the locking code for a class over
multiple classes in an object hierarchy, client‐side locking is even more fragile because it entails putting locking code for
class C into classes that are totally unrelated to C. Exercise care when using client‐side locking on classes that do not
commit to their locking strategy.
Client‐side locking has a lot in common with class extension ‐ they both couple the behavior of the derived class to the
implementation of the base class. Just as extension violates encapsulation of implementation [EJ Item 14], client‐side
locking violates encapsulation of synchronization policy.
4.4.2. Composition
There is a less fragile alternative for adding an atomic operation to an existing class: composition. ImprovedList in
Listing 4.16 implements the List operations by delegating them to an underlying List instance, and adds an atomic
putIfAbsent method. (Like Collections.synchronizedList and other collections wrappers, ImprovedList assumes
that once a list is passed to its constructor, the client will not use the underlying list directly again, accessing it only
through the ImprovedList.)
ImprovedList adds an additional level of locking using its own intrinsic lock. It does not care whether the underlying
List is thread‐safe, because it provides its own consistent locking that provides thread safety even if the List is not
thread‐safe or changes its locking implementation. While the extra layer of synchronization may add some small
performance penalty,[7] the implementation in ImprovedList is less fragile than attempting to mimic the locking
strategy of another object. In effect, we've used the Java monitor pattern to encapsulate an existing List, and this is
guaranteed to provide thread safety so long as our class holds the only outstanding reference to the underlying List.
Listing 4.16. Implementing Put-if-absent Using Composition.
@ThreadSafe
public class ImprovedList<T> implements List<T> {
private final List<T> list;
public ImprovedList(List<T> list) { this.list = list; }
public synchronized boolean putIfAbsent(T x) {
boolean contains = list.contains(x);
if (!contains)
list.add(x);
return !contains;
}
public synchronized void clear() { list.clear(); }
// ... similarly delegate other List methods
}
[7] The penalty will be small because the synchronization on the underlying List is guaranteed to be uncontended and therefore fast; see Chapter
11.
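A brief usage sketch under the wrapper's stated assumption, namely that the caller hands over the list and never touches it directly again:

// The ArrayList is confined to the wrapper; use only the ImprovedList.
ImprovedList<String> list = new ImprovedList<String>(new ArrayList<String>());
list.putIfAbsent("a");   // returns true; "a" is added
list.putIfAbsent("a");   // returns false; still only one copy of "a"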
4.5. Documenting Synchronization Policies
Documentation is one of the most powerful (and, sadly, most underutilized) tools for managing thread safety. Users
look to the documentation to find out if a class is thread‐safe, and maintainers look to the documentation to understand
the implementation strategy so they can maintain it without inadvertently compromising safety. Unfortunately, both of
these constituencies usually find less information in the documentation than they'd like.
Document a class's thread safety guarantees for its clients; document its synchronization policy for its maintainers.
Each use of synchronized, volatile, or any thread‐safe class reflects a synchronization policy defining a strategy for
ensuring the integrity of data in the face of concurrent access. That policy is an element of your program's design, and
should be documented. Of course, the best time to document design decisions is at design time. Weeks or months later,
the details may be a blur ‐ so write it down before you forget.
Crafting a synchronization policy requires a number of decisions: which variables to make volatile, which variables to
guard with locks, which lock(s) guard which variables, which variables to make immutable or confine to a thread, which
operations must be atomic, etc. Some of these are strictly implementation details and should be documented for the
sake of future maintainers, but some affect the publicly observable locking behavior of your class and should be
documented as part of its specification.
At the very least, document the thread safety guarantees made by a class. Is it thread‐safe? Does it make callbacks with
a lock held? Are there any specific locks that affect its behavior? Don't force clients to make risky guesses. If you don't
want to commit to supporting client‐side locking, that's fine, but say so. If you want clients to be able to create new
atomic operations on your class, as we did in Section 4.4, you need to document which locks they should acquire to do
so safely. If you use locks to guard state, document this for future maintainers, because it's so easy ‐ the @GuardedBy
annotation will do the trick. If you use more subtle means to maintain thread safety, document them because they may
not be obvious to maintainers.
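As a sketch of what such documentation might look like (EventLog is a hypothetical class):

/**
 * Thread-safe. All mutable state is guarded by this object's intrinsic
 * lock; clients may synchronize on an EventLog instance to make
 * compound actions atomic.
 */
@ThreadSafe
public class EventLog {
    @GuardedBy("this")
    private final List<String> entries = new ArrayList<String>();

    public synchronized void append(String entry) {
        entries.add(entry);
    }
    public synchronized List<String> snapshot() {
        return new ArrayList<String>(entries);   // defensive copy
    }
}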
The current state of affairs in thread safety documentation, even in the platform library classes, is not encouraging. How
many times have you looked at the Javadoc for a class and wondered whether it was thread‐safe?[8] Most classes don't
offer any clue either way. Many official Java technology specifications, such as servlets and JDBC, woefully
underdocument their thread safety promises and requirements.
[8] If you've never wondered this, we admire your optimism.
While prudence suggests that we not assume behaviors that aren't part of the specification, we have work to get done,
and we are often faced with a choice of bad assumptions. Should we assume an object is thread‐safe because it seems
that it ought to be? Should we assume that access to an object can be made thread‐safe by acquiring its lock first? (This
risky technique works only if we control all the code that accesses that object; otherwise, it provides only the illusion of
thread safety.) Neither choice is very satisfying.
To make matters worse, our intuition may often be wrong on which classes are "probably thread‐safe" and which are
not. As an example, java.text.SimpleDateFormat isn't thread‐safe, but the Javadoc neglected to mention this until
JDK 1.4. That this particular class isn't thread-safe comes as a surprise to many developers. How many programs
mistakenly create a shared instance of a non-thread-safe object and use it from multiple threads, unaware that this
might cause erroneous results under heavy load?
The problem with SimpleDateFormat could be avoided by not assuming a class is thread‐safe if it doesn't say so. On the
other hand, it is impossible to develop a servlet‐based application without making some pretty questionable
assumptions about the thread safety of container‐provided objects like HttpSession. Don't make your customers or
colleagues have to make guesses like this.
4.5.1. Interpreting Vague Documentation
Many Java technology specifications are silent, or at least unforthcoming, about thread safety guarantees and
requirements for interfaces such as ServletContext, HttpSession, or DataSource.[9] Since these interfaces are
implemented by your container or database vendor, you often can't look at the code to see what it does. Besides, you
don't want to rely on the implementation details of one particular JDBC driver ‐ you want to be compliant with the
standard so your code works properly with any JDBC driver. But the words "thread" and "concurrent" do not appear at
all in the JDBC specification, and appear frustratingly rarely in the servlet specification. So what do you do?
[9] We find it particularly frustrating that these omissions persist despite multiple major revisions of the specifications.
You are going to have to guess. One way to improve the quality of your guess is to interpret the specification from the
perspective of someone who will implement it (such as a container or database vendor), as opposed to someone who
will merely use it. Servlets are always called from a container‐managed thread, and it is safe to assume that if there is
more than one such thread, the container knows this. The servlet container makes available certain objects that provide
service to multiple servlets, such as HttpSession or ServletContext. So the servlet container should expect to have
these objects accessed concurrently, since it has created multiple threads and called methods like Servlet.service
from them that could reasonably be expected to access the ServletContext.
Since it is impossible to imagine a single‐threaded context in which these objects would be useful, one has to assume
that they have been made thread‐safe, even though the specification does not explicitly require this. Besides, if they
required client‐side locking, on what lock should the client code synchronize? The documentation doesn't say, and it
seems absurd to guess. This "reasonable assumption" is further bolstered by the examples in the specification and
official tutorials that show how to access ServletContext or HttpSession and do not use any client‐side
synchronization.
On the other hand, the objects placed in the ServletContext or HttpSession with setAttribute are owned by the
web application, not the servlet container. The servlet specification does not suggest any mechanism for coordinating
concurrent access to shared attributes. So attributes stored by the container on behalf of the web application should be
thread‐safe or effectively immutable. If all the container did was store these attributes on behalf of the web application,
another option would be to ensure that they are consistently guarded by a lock when accessed from servlet application
code. But because the container may want to serialize objects in the HttpSession for replication or passivation
purposes, and the servlet container can't possibly know your locking protocol, you should make them thread‐safe.
One can make a similar inference about the JDBC DataSource interface, which represents a pool of reusable database
connections. A DataSource provides service to an application, and it doesn't make much sense in the context of a
single‐threaded application. It is hard to imagine a use case that doesn't involve calling getConnection from multiple
threads. And, as with servlets, the examples in the JDBC specification do not suggest the need for any client‐side locking
in the many code examples using DataSource. So, even though the specification doesn't promise that DataSource is
thread‐safe or require container vendors to provide a thread‐safe implementation, by the same "it would be absurd if it
weren't" argument, we have no choice but to assume that DataSource.getConnection does not require additional
client‐side locking.
On the other hand, we would not make the same argument about the JDBC Connection objects dispensed by the
DataSource, since these are not necessarily intended to be shared by other activities until they are returned to the pool.
So if an activity that obtains a JDBC Connection spans multiple threads, it must take responsibility for ensuring that
access to the Connection is properly guarded by synchronization. (In most applications, activities that use a JDBC
Connection are implemented so as to confine the Connection to a specific thread anyway.)
Chapter 5. Building Blocks
The last chapter explored several techniques for constructing thread‐safe classes, including delegating thread safety to
existing thread‐safe classes. Where practical, delegation is one of the most effective strategies for creating thread‐safe
classes: just let existing thread‐safe classes manage all the state.
The platform libraries include a rich set of concurrent building blocks, such as thread‐safe collections and a variety of
synchronizers that can coordinate the control flow of cooperating threads. This chapter covers the most useful
concurrent building blocks, especially those introduced in Java 5.0 and Java 6, and some patterns for using them to
structure concurrent applications.
5.1. Synchronized Collections
The synchronized collection classes include Vector and Hashtable, part of the original JDK, as well as their cousins
added in JDK 1.2, the synchronized wrapper classes created by the Collections.synchronizedXxx factory methods.
These classes achieve thread safety by encapsulating their state and synchronizing every public method so that only one
thread at a time can access the collection state.
5.1.1. Problems with Synchronized Collections
The synchronized collections are thread‐safe, but you may sometimes need to use additional client‐side locking to guard
compound actions. Common compound actions on collections include iteration (repeatedly fetch elements until the
collection is exhausted), navigation (find the next element after this one according to some order), and conditional
operations such as put‐if‐absent (check if a Map has a mapping for key K, and if not, add the mapping (K,V)). With a
synchronized collection, these compound actions are still technically thread‐safe even without client‐side locking, but
they may not behave as you might expect when other threads can concurrently modify the collection.
Listing 5.1 shows two methods that operate on a Vector, getLast and deleteLast, both of which are check-then-act
sequences. Each calls size to determine the size of the array and uses the resulting value to retrieve or remove the last
element.
Listing 5.1. Compound Actions on a Vector that may Produce Confusing Results.
public static Object getLast(Vector list) {
int lastIndex = list.size() - 1;
return list.get(lastIndex);
}
public static void deleteLast(Vector list) {
int lastIndex = list.size() - 1;
list.remove(lastIndex);
}
These methods seem harmless, and in a sense they are ‐ they can't corrupt the Vector, no matter how many threads
call them simultaneously. But the caller of these methods might have a different opinion. If thread A calls getLast on a
Vector with ten elements, thread B calls deleteLast on the same Vector, and the operations are interleaved as shown
in Figure 5.1, getLast throws ArrayIndexOutOfBoundsException. Between the call to size and the subsequent call to
get in getLast, the Vector shrank and the index computed in the first step is no longer valid. This is perfectly consistent
with the specification of Vector - it throws an exception if asked for a nonexistent element. But this is not what a caller
expects getLast to do, even in the face of concurrent modification, unless perhaps the Vector was empty to begin
with.
Figure 5.1. Interleaving of getLast and deleteLast that Throws ArrayIndexOutOfBoundsException.
Because the synchronized collections commit to a synchronization policy that supports client‐side locking, [1] it is
possible to create new operations that are atomic with respect to other collection operations as long as we know which
lock to use. The synchronized collection classes guard each method with the lock on the synchronized collection object
itself. By acquiring the collection lock we can make getLast and deleteLast atomic, ensuring that the size of the
Vector does not change between calling size and get, as shown in Listing 5.2.
[1] This is documented only obliquely in the Java 5.0 Javadoc, as an example of the correct iteration idiom.
The risk that the size of the list might change between a call to size and the corresponding call to get is also present
when we iterate through the elements of a Vector as shown in Listing 5.3.
This iteration idiom relies on a leap of faith that other threads will not modify the Vector between the calls to size and
get. In a single‐threaded environment, this assumption is perfectly valid, but when other threads may concurrently
modify the Vector it can lead to trouble. Just as with getLast, if another thread deletes an element while you are
iterating through the Vector and the operations are interleaved unluckily, this iteration idiom throws
ArrayIndexOutOfBoundsException.
Listing 5.2. Compound Actions on Vector Using Client-side Locking.
public static Object getLast(Vector list) {
synchronized (list) {
int lastIndex = list.size() - 1;
return list.get(lastIndex);
}
}
public static void deleteLast(Vector list) {
synchronized (list) {
int lastIndex = list.size() - 1;
list.remove(lastIndex);
}
}
Listing 5.3. Iteration that may Throw ArrayIndexOutOfBoundsException.
for (int i = 0; i < vector.size(); i++)
doSomething(vector.get(i));
Even though the iteration in Listing 5.3 can throw an exception, this doesn't mean Vector isn't thread‐safe. The state of
the Vector is still valid and the exception is in fact in conformance with its specification. However, the fact that something
as mundane as fetching the last element or iterating can throw an exception is clearly undesirable.
The problem of unreliable iteration can again be addressed by client‐side locking, at some additional cost to scalability.
By holding the Vector lock for the duration of iteration, as shown in Listing 5.4, we prevent other threads from
modifying the Vector while we are iterating it. Unfortunately, we also prevent other threads from accessing it at all
during this time, impairing concurrency.
Listing 5.4. Iteration with Client-side Locking.
synchronized (vector) {
for (int i = 0; i < vector.size(); i++)
doSomething(vector.get(i));
}
5.1.2. Iterators and ConcurrentModificationException
We use Vector for the sake of clarity in many of our examples, even though it is considered a "legacy" collection class.
But the more "modern" collection classes do not eliminate the problem of compound actions. The standard way to
iterate a Collection is with an Iterator, either explicitly or through the for‐each loop syntax introduced in Java 5.0,
but using iterators does not obviate the need to lock the collection during iteration if other threads can concurrently
modify it. The iterators returned by the synchronized collections are not designed to deal with concurrent modification,
and they are fail‐fast ‐ meaning that if they detect that the collection has changed since iteration began, they throw the
unchecked ConcurrentModificationException.
These fail-fast iterators are not designed to be foolproof - they are designed to catch concurrency errors on a
"good-faith-effort" basis and thus act only as early-warning indicators for concurrency problems. They are implemented by
associating a modification count with the collection: if the modification count changes during iteration, hasNext or next
throws ConcurrentModificationException. However, this check is done without synchronization, so there is a risk of
seeing a stale value of the modification count and therefore that the iterator does not realize a modification has been
made. This was a deliberate design tradeoff to reduce the performance impact of the concurrent modification detection
code.[2]
[2] ConcurrentModificationException can arise in single‐threaded code as well; this happens when objects are removed from the collection directly
rather than through Iterator.remove.
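A conceptual sketch of the mechanism (not the actual JDK source; FailFastBox is a made-up container): the iterator snapshots the modification count when it is created and compares it, without synchronization, on each access.

import java.util.ConcurrentModificationException;
import java.util.Iterator;

public class FailFastBox<E> implements Iterable<E> {
    private Object[] elements = new Object[0];
    private int modCount = 0;   // bumped on every structural modification

    public void add(E e) {
        Object[] bigger = new Object[elements.length + 1];
        System.arraycopy(elements, 0, bigger, 0, elements.length);
        bigger[elements.length] = e;
        elements = bigger;
        modCount++;
    }

    public Iterator<E> iterator() {
        return new Iterator<E>() {
            private final int expectedModCount = modCount;  // snapshot
            private int cursor = 0;

            public boolean hasNext() { return cursor < elements.length; }

            @SuppressWarnings("unchecked")
            public E next() {
                if (modCount != expectedModCount)   // unsynchronized check
                    throw new ConcurrentModificationException();
                return (E) elements[cursor++];
            }
            public void remove() { throw new UnsupportedOperationException(); }
        };
    }
}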
Listing 5.5 illustrates iterating a collection with the for‐each loop syntax. Internally, javac generates code that uses an
Iterator, repeatedly calling hasNext and next to iterate the List. Just as with iterating the Vector, the way to prevent
ConcurrentModificationException is to hold the collection lock for the duration of the iteration.
Listing 5.5. Iterating a List with an Iterator.
List<Widget> widgetList
= Collections.synchronizedList(new ArrayList<Widget>());
...
// May throw ConcurrentModificationException
for (Widget w : widgetList)
doSomething(w);
There are several reasons, however, why locking a collection during iteration may be undesirable. Other threads that
need to access the collection will block until the iteration is complete; if the collection is large or the task performed for
each element is lengthy, they could wait a long time. Also, if the collection is locked as in Listing 5.4, doSomething is
being called with a lock held, which is a risk factor for deadlock (see Chapter 10). Even in the absence of starvation or
deadlock risk, locking collections for significant periods of time hurts application scalability. The longer a lock is held, the
more likely it is to be contended, and if many threads are blocked waiting for a lock throughput and CPU utilization can
suffer (see Chapter 11).
An alternative to locking the collection during iteration is to clone the collection and iterate the copy instead. Since the
clone is thread‐confined, no other thread can modify it during iteration, eliminating the possibility of
ConcurrentModificationException. (The collection still must be locked during the clone operation itself.) Cloning the
collection has an obvious performance cost; whether this is a favorable tradeoff depends on many factors including the
size of the collection, how much work is done for each element, the relative frequency of iteration compared to other
collection operations, and responsiveness and throughput requirements.
5.1.3. Hidden Iterators
While locking can prevent iterators from throwing ConcurrentModificationException, you have to remember to use
locking everywhere a shared collection might be iterated. This is trickier than it sounds, as iterators are sometimes
hidden, as in HiddenIterator in Listing 5.6. There is no explicit iteration in HiddenIterator, but the string
concatenation in the println call entails iteration just the same. The concatenation gets turned by the compiler into a call to
StringBuilder.append(Object), which in turn invokes the collection's toString method ‐ and the implementation of
toString in the standard collections iterates the collection and calls toString on each element to produce a nicely
formatted representation of the collection's contents.
The addTenThings method could throw ConcurrentModificationException, because the collection is being iterated
by toString in the process of preparing the debugging message. Of course, the real problem is that HiddenIterator is
not thread‐safe; the HiddenIterator lock should be acquired before using set in the println call, but debugging and
logging code commonly neglect to do this.
The real lesson here is that the greater the distance between the state and the synchronization that guards it, the more
likely that someone will forget to use proper synchronization when accessing that state. If HiddenIterator wrapped
the HashSet with a synchronizedSet, encapsulating the synchronization, this sort of error would not occur.
Just as encapsulating an object's state makes it easier to preserve its invariants, encapsulating its synchronization makes
it easier to enforce its synchronization policy.
Listing 5.6. Iteration Hidden within String Concatenation. Don't Do this.
public class HiddenIterator {
@GuardedBy("this")
private final Set<Integer> set = new HashSet<Integer>();
public synchronized void add(Integer i) { set.add(i); }
public synchronized void remove(Integer i) { set.remove(i); }
public void addTenThings() {
Random r = new Random();
for (int i = 0; i < 10; i++)
add(r.nextInt());
System.out.println("DEBUG: added ten elements to " + set);
}
}
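Had HiddenIterator delegated to a synchronizedSet wrapper as suggested above, the error could not occur, because the wrapper synchronizes every operation, including the iteration hidden inside its toString, on the same lock. A sketch of that variant (SaferIterator is a hypothetical name):

import java.util.Collections;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class SaferIterator {
    private final Set<Integer> set =
        Collections.synchronizedSet(new HashSet<Integer>());

    public void add(Integer i) { set.add(i); }
    public void remove(Integer i) { set.remove(i); }

    public void addTenThings() {
        Random r = new Random();
        for (int i = 0; i < 10; i++)
            add(r.nextInt());
        // The wrapper's toString iterates under the wrapper's lock,
        // so no synchronization can be forgotten here.
        System.out.println("DEBUG: added ten elements to " + set);
    }
}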
Iteration is also indirectly invoked by the collection's hashCode and equals methods, which may be called if the
collection is used as an element or key of another collection. Similarly, the containsAll, removeAll, and retainAll
methods, as well as the constructors that take collections as arguments, also iterate the collection. All of these indirect
uses of iteration can cause ConcurrentModificationException.
5.2. Concurrent Collections
Java 5.0 improves on the synchronized collections by providing several concurrent collection classes. Synchronized
collections achieve their thread safety by serializing all access to the collection's state. The cost of this approach is poor
concurrency; when multiple threads contend for the collection‐wide lock, throughput suffers.
The concurrent collections, on the other hand, are designed for concurrent access from multiple threads. Java 5.0 adds
ConcurrentHashMap, a replacement for synchronized hash‐based Map implementations, and CopyOnWriteArrayList, a
replacement for synchronized List implementations for cases where traversal is the dominant operation. The new
ConcurrentMap interface adds support for common compound actions such as put‐if‐absent, replace, and conditional
remove.
Replacing synchronized collections with concurrent collections can offer dramatic scalability improvements with little
risk.
Java 5.0 also adds two new collection types, Queue and BlockingQueue. A Queue is intended to hold a set of elements
temporarily while they await processing. Several implementations are provided, including ConcurrentLinkedQueue, a
traditional FIFO queue, and PriorityQueue, a (non concurrent) priority ordered queue. Queue operations do not block;
if the queue is empty, the retrieval operation returns null. While you can simulate the behavior of a Queue with a
List - in fact, LinkedList also implements Queue - the Queue classes were added because eliminating the random-access
requirements of List admits more efficient concurrent implementations.
BlockingQueue extends Queue to add blocking insertion and retrieval operations. If the queue is empty, a retrieval
blocks until an element is available, and if the queue is full (for bounded queues) an insertion blocks until there is space
available. Blocking queues are extremely useful in producer‐consumer designs, and are covered in greater detail in
Section 5.3.
Just as ConcurrentHashMap is a concurrent replacement for a synchronized hash‐based Map, Java 6 adds
ConcurrentSkipListMap and ConcurrentSkipListSet, which are concurrent replacements for a synchronized
SortedMap or SortedSet (such as TreeMap or TreeSet wrapped with synchronizedMap).
5.2.1. ConcurrentHashMap
The synchronized collections classes hold a lock for the duration of each operation. Some operations, such as
HashMap.get or List.contains, may involve more work than is initially obvious: traversing a hash bucket or list to find
a specific object entails calling equals (which itself may involve a fair amount of computation) on a number of candidate
objects. In a hash‐based collection, if hashCode does not spread out hash values well, elements may be unevenly
distributed among buckets; in the degenerate case, a poor hash function will turn a hash table into a linked list.
Traversing a long list and calling equals on some or all of the elements can take a long time, and during that time no
other thread can access the collection.
ConcurrentHashMap is a hash‐based Map like HashMap, but it uses an entirely different locking strategy that offers better
concurrency and scalability. Instead of synchronizing every method on a common lock, restricting access to a single
thread at a time, it uses a finer‐grained locking mechanism called lock striping (see Section 11.4.3) to allow a greater
degree of shared access. Arbitrarily many reading threads can access the map concurrently, readers can access the map
concurrently with writers, and a limited number of writers can modify the map concurrently. The result is far higher
throughput under concurrent access, with little performance penalty for single‐threaded access.
ConcurrentHashMap, along with the other concurrent collections, further improves on the synchronized collection
classes by providing iterators that do not throw ConcurrentModificationException, thus eliminating the need to lock
the collection during iteration. The iterators returned by ConcurrentHashMap are weakly consistent instead of fail‐fast.
A weakly consistent iterator can tolerate concurrent modification, traverses elements as they existed when the iterator
was constructed, and may (but is not guaranteed to) reflect modifications to the collection after the construction of the
iterator.
As with all improvements, there are still a few tradeoffs. The semantics of methods that operate on the entire Map, such
as size and isEmpty, have been slightly weakened to reflect the concurrent nature of the collection. Since the result of
size could be out of date by the time it is computed, it is really only an estimate, so size is allowed to return an
approximation instead of an exact count. While at first this may seem disturbing, in reality methods like size and
isEmpty are far less useful in concurrent environments because these quantities are moving targets. So the
requirements for these operations were weakened to enable performance optimizations for the most important
operations, primarily get, put, containsKey, and remove.
The one feature offered by the synchronized Map implementations but not by ConcurrentHashMap is the ability to lock
the map for exclusive access. With Hashtable and synchronizedMap, acquiring the Map lock prevents any other thread
from accessing it. This might be necessary in unusual cases such as adding several mappings atomically, or iterating the
Map several times and needing to see the same elements in the same order. On the whole, though, this is a reasonable
tradeoff: concurrent collections should be expected to change their contents continuously.
Because it has so many advantages and so few disadvantages compared to Hashtable or synchronizedMap, replacing
synchronized Map implementations with ConcurrentHashMap in most cases results only in better scalability. Only if your
application needs to lock the map for exclusive access [3] is ConcurrentHashMap not an appropriate drop‐in
replacement.
[3] Or if you are relying on the synchronization side effects of the synchronized Map implementations.
5.2.2. Additional Atomic Map Operations
Since a ConcurrentHashMap cannot be locked for exclusive access, we cannot use client‐side locking to create new
atomic operations such as put‐if‐absent, as we did for Vector in Section 4.4.1. Instead, a number of common compound
operations such as put‐if‐absent, remove‐if‐equal, and replace‐if‐equal are implemented as atomic operations and
specified by the ConcurrentMap interface, shown in Listing 5.7. If you find yourself adding such functionality to an
existing synchronized Map implementation, it is probably a sign that you should consider using a ConcurrentMap instead.
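As a hedged illustration (the class and method names here are ours, not the book's), the difference shows up when
installing a canonical value: with putIfAbsent, exactly one of any number of racing inserts wins, and every thread
observes the same winner.
import java.util.concurrent.*;
public class Canonicalizer {
    private final ConcurrentMap<String, String> cache =
        new ConcurrentHashMap<String, String>();
    public String canonicalize(String s) {
        // Atomic put-if-absent: no lock, and no check-then-act window.
        String existing = cache.putIfAbsent(s, s);
        return (existing == null) ? s : existing;
    }
}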
5.2.3. CopyOnWriteArrayList
CopyOnWriteArrayList is a concurrent replacement for a synchronized List that offers better concurrency in some
common situations and eliminates the need to lock or copy the collection during iteration. (Similarly,
CopyOnWriteArraySet is a concurrent replacement for a synchronized Set.)
The copy‐on‐write collections derive their thread safety from the fact that as long as an effectively immutable object is
properly published, no further synchronization is required when accessing it. They implement mutability by creating and
republishing a new copy of the collection every time it is modified. Iterators for the copy‐on‐write collections retain a
reference to the backing array that was current at the start of iteration, and since this will never change, they need to
synchronize only briefly to ensure visibility of the array contents. As a result, multiple threads can iterate the collection
without interference from one another or from threads wanting to modify the collection. The iterators returned by the
copy‐on‐write collections do not throw ConcurrentModificationException and return the elements exactly as they
were at the time the iterator was created, regardless of subsequent modifications.
Listing 5.7. ConcurrentMap Interface.
public interface ConcurrentMap<K,V> extends Map<K,V> {
// Insert into map only if no value is mapped from K
V putIfAbsent(K key, V value);
// Remove only if K is mapped to V
boolean remove(K key, V value);
// Replace value only if K is mapped to oldValue
boolean replace(K key, V oldValue, V newValue);
// Replace value only if K is mapped to some value
V replace(K key, V newValue);
}
Obviously, there is some cost to copying the backing array every time the collection is modified, especially if the
collection is large; the copy‐on‐write collections are reasonable to use only when iteration is far more common than
modification. This criterion exactly describes many event‐notification systems: delivering a notification requires iterating
the list of registered listeners and calling each one of them, and in most cases registering or unregistering an event
listener is far less common than receiving an event notification. (See [CPJ 2.4.4] for more information on copy‐on‐write.)
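A minimal sketch of such a listener registry (Listener here is a hypothetical callback interface): registration and
removal are rare and pay the copying cost, while the frequent notification path iterates a stable snapshot.
import java.util.*;
import java.util.concurrent.*;
public class EventSource {
    public interface Listener { void onEvent(String event); }
    private final List<Listener> listeners =
        new CopyOnWriteArrayList<Listener>();
    public void register(Listener l) { listeners.add(l); }       // copies the array
    public void unregister(Listener l) { listeners.remove(l); }  // copies the array
    public void fire(String event) {
        // Iterates the snapshot current when the loop began; concurrent
        // register/unregister calls cannot disturb it.
        for (Listener l : listeners)
            l.onEvent(event);
    }
}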
5.3. Blocking Queues and the Producer-consumer Pattern
Blocking queues provide blocking put and take methods as well as the timed equivalents offer and poll. If the queue
is full, put blocks until space becomes available; if the queue is empty, take blocks until an element is available. Queues
can be bounded or unbounded; unbounded queues are never full, so a put on an unbounded queue never blocks.
Blocking queues support the producer‐consumer design pattern. A producer‐consumer design separates the
identification of work to be done from the execution of that work by placing work items on a "to do" list for later
processing, rather than processing them immediately as they are identified. The producer‐consumer pattern simplifies
development because it removes code dependencies between producer and consumer classes, and simplifies workload
management by decoupling activities that may produce or consume data at different or variable rates.
In a producer‐consumer design built around a blocking queue, producers place data onto the queue as it becomes
available, and consumers retrieve data from the queue when they are ready to take the appropriate action. Producers
don't need to know anything about the identity or number of consumers, or even whether they are the only producer ‐
all they have to do is place data items on the queue. Similarly, consumers need not know who the producers are or
where the work came from. BlockingQueue simplifies the implementation of producer‐consumer designs with any
number of producers and consumers. One of the most common producer‐consumer designs is a thread pool coupled
with a work queue; this pattern is embodied in the Executor task execution framework that is the subject of Chapters 6
and 8.
The familiar division of labor for two people washing the dishes is an example of a producer‐consumer design: one
person washes the dishes and places them in the dish rack, and the other person retrieves the dishes from the rack and
dries them. In this scenario, the dish rack acts as a blocking queue; if there are no dishes in the rack, the consumer waits
until there are dishes to dry, and if the rack fills up, the producer has to stop washing until there is more space. This
analogy extends to multiple producers (though there may be contention for the sink) and multiple consumers; each
worker interacts only with the dish rack. No one needs to know how many producers or consumers there are, or who
produced a given item of work.
The labels "producer" and "consumer" are relative; an activity that acts as a consumer in one context may act as a
producer in another. Drying the dishes "consumes" clean wet dishes and "produces" clean dry dishes. A third person
wanting to help might put away the dry dishes, in which case the drier is both a consumer and a producer, and there are
now two shared work queues (each of which may block the drier from proceeding).
Blocking queues simplify the coding of consumers, since take blocks until data is available. If the producers don't
generate work fast enough to keep the consumers busy, the consumers just wait until more work is available.
Sometimes this is perfectly acceptable (as in a server application when no client is requesting service), and sometimes it
indicates that the ratio of producer threads to consumer threads should be adjusted to achieve better utilization (as in a
web crawler or other application in which there is effectively infinite work to do).
If the producers consistently generate work faster than the consumers can process it, eventually the application will run
out of memory because work items will queue up without bound. Again, the blocking nature of put greatly simplifies
coding of producers; if we use a bounded queue, then when the queue fills up the producers block, giving the
consumers time to catch up because a blocked producer cannot generate more work.
Blocking queues also provide an offer method, which returns a failure status if the item cannot be enqueued. This
enables you to create more flexible policies for dealing with overload, such as shedding load, serializing excess work
items and writing them to disk, reducing the number of producer threads, or throttling producers in some other
manner.
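For example, a producer might use the timed offer to shed load when consumers fall behind; this sketch is ours, and
the queue bound, timeout, and overload policy are all illustrative choices.
import java.util.concurrent.*;
public class ThrottledProducer<T> {
    private final BlockingQueue<T> queue =
        new LinkedBlockingQueue<T>(1000);   // bounded, so producers feel back-pressure
    public void submit(T item) throws InterruptedException {
        // Wait briefly for space; if the queue stays full, fall back
        // to an overload policy instead of blocking indefinitely.
        if (!queue.offer(item, 100, TimeUnit.MILLISECONDS))
            handleOverload(item);
    }
    private void handleOverload(T item) {
        // Hypothetical policy: drop the item, log it, or write it to disk.
    }
}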
Bounded queues are a powerful resource management tool for building reliable applications: they make your program
more robust to overload by throttling activities that threaten to produce more work than can be handled.
While the producer‐consumer pattern enables producer and consumer code to be decoupled from each other, their
behavior is still coupled indirectly through the shared work queue. It is tempting to assume that the consumers will
always keep up, so that you need not place any bounds on the size of work queues, but this is a prescription for
rearchitecting your system later. Build resource management into your design early using blocking queues ‐ it is a lot
easier to do this up front than to retrofit it later. Blocking queues make this easy for a number of situations, but if
blocking queues don't fit easily into your design, you can create other blocking data structures using Semaphore (see
Section 5.5.3).
The class library contains several implementations of BlockingQueue. LinkedBlockingQueue and ArrayBlockingQueue
are FIFO queues, analogous to LinkedList and ArrayList but with better concurrent performance than a synchronized
List. PriorityBlockingQueue is a priority‐ordered queue, which is useful when you want to process elements in an
order other than FIFO. Just like other sorted collections, PriorityBlockingQueue can compare elements according to
their natural order (if they implement Comparable) or using a Comparator.
The last BlockingQueue implementation, SynchronousQueue, is not really a queue at all, in that it maintains no storage
space for queued elements. Instead, it maintains a list of queued threads waiting to enqueue or dequeue an element. In
the dish‐washing analogy, this would be like having no dish rack, but instead handing the washed dishes directly to the
next available dryer. While this may seem a strange way to implement a queue, it reduces the latency associated with
moving data from producer to consumer because the work can be handed off directly. (In a traditional queue, the
enqueue and dequeue operations must complete sequentially before a unit of work can be handed off.) The direct
handoff also feeds back more information about the state of the task to the producer; when the handoff is accepted, it
knows a consumer has taken responsibility for it, rather than simply letting it sit on a queue somewhere ‐ much like the
difference between handing a document to a colleague and merely putting it in her mailbox and hoping she gets it soon.
Since a SynchronousQueue has no storage capacity, put and take will block unless another thread is already waiting to
participate in the handoff. Synchronous queues are generally suitable only when there are enough consumers that there
nearly always will be one ready to take the handoff.
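The dish-washing analogy translates almost directly into code. In this toy demo (ours, not the book's), put blocks in
the washer thread until the main thread's take arrives to accept the handoff:
import java.util.concurrent.*;
public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        final SynchronousQueue<String> rack = new SynchronousQueue<String>();
        new Thread(new Runnable() {
            public void run() {
                try {
                    rack.put("plate");    // blocks until a "dryer" is ready
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();
        System.out.println("dried: " + rack.take());  // accepts the handoff
    }
}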
5.3.1. Example: Desktop Search
One type of program that is amenable to decomposition into producers and consumers is an agent that scans local
drives for documents and indexes them for later searching, similar to Google Desktop or the Windows Indexing service.
FileCrawler in Listing 5.8 shows a producer task that searches a file hierarchy for files meeting an indexing criterion
and puts their names on the work queue; Indexer in Listing 5.8 shows the consumer task that takes file names from the
queue and indexes them.
The producer‐consumer pattern offers a thread‐friendly means of decomposing the desktop search problem into
simpler components. Factoring file‐crawling and indexing into separate activities results in code that is more readable
and reusable than with a monolithic activity that does both; each of the activities has only a single task to do, and the
blocking queue handles all the flow control, so the code for each is simpler and clearer.
The producer‐consumer pattern also enables several performance benefits. Producers and consumers can execute
concurrently; if one is I/O‐bound and the other is CPU‐bound, executing them concurrently yields better overall
throughput than executing them sequentially. If the producer and consumer activities are parallelizable to different
degrees, tightly coupling them reduces parallelizability to that of the less parallelizable activity.
Listing 5.9 starts several crawlers and indexers, each in their own thread. As written, the consumer threads never exit,
which prevents the program from terminating; we examine several techniques for addressing this problem in Chapter 7.
While this example uses explicitly managed threads, many producer‐consumer designs can be expressed using the
Executor task execution framework, which itself uses the producer‐consumer pattern.
5.3.2. Serial Thread Confinement
The blocking queue implementations in java.util.concurrent all contain sufficient internal synchronization to safely
publish objects from a producer thread to the consumer thread.
For mutable objects, producer‐consumer designs and blocking queues facilitate serial thread confinement for handing
off ownership of objects from producers to consumers. A thread‐confined object is owned exclusively by a single thread,
but that ownership can be "transferred" by publishing it safely where only one other thread will gain access to it and
ensuring that the publishing thread does not access it after the handoff. The safe publication ensures that the object's
state is visible to the new owner, and since the original owner will not touch it again, it is now confined to the new
thread. The new owner may modify it freely since it has exclusive access.
Object pools exploit serial thread confinement, "lending" an object to a requesting thread. As long as the pool contains
sufficient internal synchronization to publish the pooled object safely, and as long as the clients do not themselves
publish the pooled object or use it after returning it to the pool, ownership can be transferred safely from thread to
thread.
One could also use other publication mechanisms for transferring ownership of a mutable object, but it is necessary to
ensure that only one thread receives the object being handed off. Blocking queues make this easy; with a little more
work, it could also be done with the atomic remove method of ConcurrentMap or the compareAndSet method of
AtomicReference.
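A small sketch of the AtomicReference variant (the class is ours): compareAndSet guarantees that at most one
consumer claims the object, and the reference's volatile semantics make the handoff a safe publication.
import java.util.concurrent.atomic.*;
public class Handoff<T> {
    private final AtomicReference<T> slot = new AtomicReference<T>();
    // Producer: publish the object and never touch it again afterwards.
    public boolean offer(T item) {
        return slot.compareAndSet(null, item);   // fails if the slot is occupied
    }
    // Consumer: atomically claim the object; at most one caller wins.
    public T poll() {
        T item = slot.get();
        if (item != null && slot.compareAndSet(item, null))
            return item;                         // this thread now owns item exclusively
        return null;
    }
}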
Listing 5.8. Producer and Consumer Tasks in a Desktop Search Application.
public class FileCrawler implements Runnable {
private final BlockingQueue<File> fileQueue;
private final FileFilter fileFilter;
private final File root;
...
public void run() {
try {
crawl(root);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
private void crawl(File root) throws InterruptedException {
File[] entries = root.listFiles(fileFilter);
if (entries != null) {
for (File entry : entries)
if (entry.isDirectory())
crawl(entry);
else if (!alreadyIndexed(entry))
fileQueue.put(entry);
}
}
}
public class Indexer implements Runnable {
private final BlockingQueue<File> queue;
public Indexer(BlockingQueue<File> queue) {
this.queue = queue;
}
public void run() {
try {
while (true)
indexFile(queue.take());
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
Listing 5.9. Starting the Desktop Search.
public static void startIndexing(File[] roots) {
BlockingQueue<File> queue = new LinkedBlockingQueue<File>(BOUND);
FileFilter filter = new FileFilter() {
public boolean accept(File file) { return true; }
};
for (File root : roots)
new Thread(new FileCrawler(queue, filter, root)).start();
for (int i = 0; i < N_CONSUMERS; i++)
new Thread(new Indexer(queue)).start();
}
5.3.3. Deques and Work Stealing
Java 6 also adds two new collection types, Deque (pronounced "deck") and BlockingDeque, that extend Queue and
BlockingQueue. A Deque is a double‐ended queue that allows efficient insertion and removal from both the head and
the tail. Implementations include ArrayDeque and LinkedBlockingDeque.
Just as blocking queues lend themselves to the producer‐consumer pattern, deques lend themselves to a related
pattern called work stealing. A producer‐consumer design has one shared work queue for all consumers; in a work
stealing design, every consumer has its own deque. If a consumer exhausts the work in its own deque, it can steal work
from the tail of someone else's deque. Work stealing can be more scalable than a traditional producer‐consumer design
because workers don't contend for a shared work queue; most of the time they access only their own deque, reducing
contention. When a worker has to access another's queue, it does so from the tail rather than the head, further
reducing contention.
Work stealing is well suited to problems in which consumers are also producers ‐ when performing a unit of work is
likely to result in the identification of more work. For example, processing a page in a web crawler usually results in the
identification of new pages to be crawled. Similarly, many graph‐exploring algorithms, such as marking the heap during
garbage collection, can be efficiently parallelized using work stealing. When a worker identifies a new unit of work, it
places it at the end of its own deque (or alternatively, in a work sharing design, on that of another worker); when its
deque is empty, it looks for work at the end of someone else's deque, ensuring that each worker stays busy.
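Java 6 provides the deques but not a work-stealing framework, so the following worker loop is only a toy sketch
under our own assumptions: each worker owns one thread-safe deque (a LinkedBlockingDeque, say), takes from the head
of its own, and steals from the tail of the others.
import java.util.*;
public class StealingWorker implements Runnable {
    private final List<Deque<Runnable>> deques;  // one deque per worker
    private final int me;
    public StealingWorker(List<Deque<Runnable>> deques, int me) {
        this.deques = deques;
        this.me = me;
    }
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Runnable task = deques.get(me).pollFirst();   // own work: head
            for (int i = 0; task == null && i < deques.size(); i++)
                if (i != me)
                    task = deques.get(i).pollLast();      // steal: tail
            if (task != null)
                task.run();
            // A real implementation would park briefly when idle
            // instead of spinning.
        }
    }
}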
5.4. Blocking and Interruptible Methods
Threads may block, or pause, for several reasons: waiting for I/O completion, waiting to acquire a lock, waiting to wake
up from Thread.sleep, or waiting for the result of a computation in another thread. When a thread blocks, it is usually
suspended and placed in one of the blocked thread states (BLOCKED, WAITING, or TIMED_WAITING). The distinction
between a blocking operation and an ordinary operation that merely takes a long time to finish is that a blocked thread
must wait for an event that is beyond its control before it can proceed ‐ the I/O completes, the lock becomes available,
or the external computation finishes. When that external event occurs, the thread is placed back in the RUNNABLE state
and becomes eligible again for scheduling.
The put and take methods of BlockingQueue throw the checked InterruptedException, as do a number of other
library methods such as Thread.sleep. When a method can throw InterruptedException, it is telling you that it is a
blocking method, and further that if it is interrupted, it will make an effort to stop blocking early.
Thread provides the interrupt method for interrupting a thread and for querying whether a thread has been
interrupted. Each thread has a boolean property that represents its interrupted status; interrupting a thread sets this
status.
Interruption is a cooperative mechanism. One thread cannot force another to stop what it is doing and do something
else; when thread A interrupts thread B, A is merely requesting that B stop what it is doing when it gets to a convenient
stopping point ‐ if it feels like it. While there is nothing in the API or language specification that demands any specific
application‐level semantics for interruption, the most sensible use for interruption is to cancel an activity. Blocking
methods that are responsive to interruption make it easier to cancel long‐running activities on a timely basis.
When your code calls a method that throws InterruptedException, then your method is a blocking method too, and
must have a plan for responding to interruption. For library code, there are basically two choices:
Propagate the InterruptedException. This is often the most sensible policy if you can get away with it ‐ just propagate
the InterruptedException to your caller. This could involve not catching InterruptedException, or catching it and
throwing it again after performing some brief activity‐specific cleanup.
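A sketch of the propagation choice (the class and names are illustrative): the method declares
InterruptedException and simply lets the blocking call's interruption flow to its caller.
import java.util.concurrent.*;
public class TaskSource<T> {
    private final BlockingQueue<T> queue = new LinkedBlockingQueue<T>();
    // Declaring InterruptedException makes this a blocking method too;
    // an interrupt during take propagates directly to the caller.
    public T getNextTask() throws InterruptedException {
        return queue.take();
    }
}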
Restore the interrupt. Sometimes you cannot throw InterruptedException, for instance when your code is part of a
Runnable. In these situations, you must catch InterruptedException and restore the interrupted status by calling
interrupt on the current thread, so that code higher up the call stack can see that an interrupt was issued, as
demonstrated in Listing 5.10.
You can get much more sophisticated with interruption, but these two approaches should work in the vast majority of
situations. But there is one thing you should not do with InterruptedException - catch it and do nothing in response.
This deprives code higher up on the call stack of the opportunity to act on the interruption, because the evidence that
the thread was interrupted is lost. The only situation in which it is acceptable to swallow an interrupt is when you are
extending Thread and therefore control all the code higher up on the call stack. Cancellation and interruption are
covered in greater detail in Chapter 7.
Listing 5.10. Restoring the Interrupted Status so as Not to Swallow the Interrupt.
public class TaskRunnable implements Runnable {
BlockingQueue<Task> queue;
...
public void run() {
try {
processTask(queue.take());
} catch (InterruptedException e) {
// restore interrupted status
Thread.currentThread().interrupt();
}
}
}
5.5. Synchronizers
Blocking queues are unique among the collections classes: not only do they act as containers for objects, but they can
also coordinate the control flow of producer and consumer threads because take and put block until the queue enters
the desired state (not empty or not full).
A synchronizer is any object that coordinates the control flow of threads based on its state. Blocking queues can act as
synchronizers; other types of synchronizers include semaphores, barriers, and latches. There are a number of
synchronizer classes in the platform library; if these do not meet your needs, you can also create your own using the
mechanisms described in Chapter 14.
All synchronizers share certain structural properties: they encapsulate state that determines whether threads arriving at
the synchronizer should be allowed to pass or forced to wait, provide methods to manipulate that state, and provide
methods to wait efficiently for the synchronizer to enter the desired state.
5.5.1. Latches
A latch is a synchronizer that can delay the progress of threads until it reaches its terminal state [CPJ 3.4.2]. A latch acts
as a gate: until the latch reaches the terminal state the gate is closed and no thread can pass, and in the terminal state
the gate opens, allowing all threads to pass. Once the latch reaches the terminal state, it cannot change state again, so it
remains open forever. Latches can be used to ensure that certain activities do not proceed until other one‐time
activities complete, such as:
• Ensuring that a computation does not proceed until resources it needs have been initialized. A simple binary
(two‐state) latch could be used to indicate "Resource R has been initialized", and any activity that requires R
would wait first on this latch.
• Ensuring that a service does not start until other services on which it depends have started. Each service would
have an associated binary latch; starting service S would involve first waiting on the latches for other services on
which S depends, and then releasing the S latch after startup completes so any services that depend on S can
then proceed.
• Waiting until all the parties involved in an activity, for instance the players in a multi‐player game, are ready to
proceed. In this case, the latch reaches the terminal state after all the players are ready.
CountDownLatch is a flexible latch implementation that can be used in any of these situations; it allows one or more
threads to wait for a set of events to occur. The latch state consists of a counter initialized to a positive number,
representing the number of events to wait for. The countDown method decrements the counter, indicating that an event
has occurred, and the await methods wait for the counter to reach zero, which happens when all the events have
occurred. If the counter is nonzero on entry, await blocks until the counter reaches zero, the waiting thread is
interrupted, or the wait times out.
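For instance, the binary "Resource R has been initialized" latch from the list above might be sketched like this (the
class is ours):
import java.util.concurrent.*;
public class ResourceGate<R> {
    private final CountDownLatch ready = new CountDownLatch(1);
    private volatile R resource;
    // Called once by the initializing thread.
    public void initialize(R r) {
        resource = r;
        ready.countDown();    // opens the gate permanently
    }
    // Called by any activity that requires the resource.
    public R awaitResource() throws InterruptedException {
        ready.await();        // blocks until initialize has run
        return resource;
    }
}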
TestHarness in Listing 5.11 illustrates two common uses for latches. TestHarness creates a number of threads that run
a given task concurrently. It uses two latches, a "starting gate" and an "ending gate". The starting gate is initialized with
a count of one; the ending gate is initialized with a count equal to the number of worker threads. The first thing each
worker thread does is wait on the starting gate; this ensures that none of them starts working until they all are ready to
start. The last thing each does is count down on the ending gate; this allows the master thread to wait efficiently until
the last of the worker threads has finished, so it can calculate the elapsed time.
Why did we bother with the latches in TestHarness instead of just starting the threads immediately after they are
created? Presumably, we wanted to measure how long it takes to run a task n times concurrently. If we simply created
and started the threads, the threads started earlier would have a "head start" on the later threads, and the degree of
contention would vary over time as the number of active threads increased or decreased. Using a starting gate allows
the master thread to release all the worker threads at once, and the ending gate allows the master thread to wait for
the last thread to finish rather than waiting sequentially for each thread to finish.
5.5.2. FutureTask
FutureTask also acts like a latch. (FutureTask implements Future, which describes an abstract result‐bearing
computation [CPJ 4.3.3].) A computation represented by a FutureTask is implemented with a Callable, the result-bearing
equivalent of Runnable, and can be in one of three states: waiting to run, running, or completed. Completion
subsumes all the ways a computation can complete, including normal completion, cancellation, and exception. Once a
FutureTask enters the completed state, it stays in that state forever.
The behavior of Future.get depends on the state of the task. If it is completed, get returns the result immediately, and
otherwise blocks until the task transitions to the completed state and then returns the result or throws an exception.
FutureTask conveys the result from the thread executing the computation to the thread(s) retrieving the result; the
specification of FutureTask guarantees that this transfer constitutes a safe publication of the result.
Listing 5.11. Using CountDownLatch for Starting and Stopping Threads in Timing Tests.
public class TestHarness {
public long timeTasks(int nThreads, final Runnable task)
throws InterruptedException {
final CountDownLatch startGate = new CountDownLatch(1);
final CountDownLatch endGate = new CountDownLatch(nThreads);
for (int i = 0; i < nThreads; i++) {
Thread t = new Thread() {
public void run() {
try {
startGate.await();
try {
task.run();
} finally {
endGate.countDown();
}
} catch (InterruptedException ignored) { }
}
};
t.start();
}
long start = System.nanoTime();
startGate.countDown();
endGate.await();
long end = System.nanoTime();
return end-start;
}
}
FutureTask is used by the Executor framework to represent asynchronous tasks, and can also be used to represent any
potentially lengthy computation that can be started before the results are needed. Preloader in Listing 5.12 uses
FutureTask to perform an expensive computation whose results are needed later; by starting the computation early,
you reduce the time you would have to wait later when you actually need the results.
Listing 5.12. Using FutureTask to Preload Data that is Needed Later.
public class Preloader {
private final FutureTask<ProductInfo> future =
new FutureTask<ProductInfo>(new Callable<ProductInfo>() {
public ProductInfo call() throws DataLoadException {
return loadProductInfo();
}
});
private final Thread thread = new Thread(future);
public void start() { thread.start(); }
public ProductInfo get()
throws DataLoadException, InterruptedException {
try {
return future.get();
} catch (ExecutionException e) {
Throwable cause = e.getCause();
if (cause instanceof DataLoadException)
throw (DataLoadException) cause;
else
throw launderThrowable(cause);
}
}
}
Preloader creates a FutureTask that describes the task of loading product information from a database and a thread in
which the computation will be performed. It provides a start method to start the thread, since it is inadvisable to start
a thread from a constructor or static initializer. When the program later needs the ProductInfo, it can call get, which
returns the loaded data if it is ready, or waits for the load to complete if not.
Tasks described by Callable can throw checked and unchecked exceptions, and any code can throw an Error.
Whatever the task code may throw, it is wrapped in an ExecutionException and rethrown from Future.get. This
complicates code that calls get, not only because it must deal with the possibility of ExecutionException (and the
unchecked CancellationException), but also because the cause of the ExecutionException is returned as a
Throwable, which is inconvenient to deal with.
When get throws an ExecutionException in Preloader, the cause will fall into one of three categories: a checked
exception thrown by the Callable, a RuntimeException, or an Error. We must handle each of these cases separately,
but we will use the launderThrowable utility method in Listing 5.13 to encapsulate some of the messier exception-handling
logic. Before calling launderThrowable, Preloader tests for the known checked exceptions and rethrows
them. That leaves only unchecked exceptions, which Preloader handles by calling launderThrowable and throwing the
result. If the Throwable passed to launderThrowable is an Error, launderThrowable rethrows it directly; if it is not a
RuntimeException, it throws an IllegalStateException to indicate a logic error. That leaves only RuntimeException,
which launderThrowable returns to its caller, and which the caller generally rethrows.
Listing 5.13. Coercing an Unchecked Throwable to a RuntimeException.
/** If the Throwable is an Error, throw it; if it is a
* RuntimeException return it, otherwise throw IllegalStateException
*/
public static RuntimeException launderThrowable(Throwable t) {
if (t instanceof RuntimeException)
return (RuntimeException) t;
else if (t instanceof Error)
throw (Error) t;
else
throw new IllegalStateException("Not unchecked", t);
}
5.5.3. Semaphores
Counting semaphores are used to control the number of activities that can access a certain resource or perform a given
action at the same time [CPJ 3.4.1]. Counting semaphores can be used to implement resource pools or to impose a
bound on a collection.
A Semaphore manages a set of virtual permits; the initial number of permits is passed to the Semaphore constructor.
Activities can acquire permits (as long as some remain) and release permits when they are done with them. If no permit
is available, acquire blocks until one is (or until interrupted or the operation times out). The release method returns a
permit to the semaphore. [4] A degenerate case of a counting semaphore is a binary semaphore, a Semaphore with an
initial count of one. A binary semaphore can be used as a mutex with non‐reentrant locking semantics; whoever holds
the sole permit holds the mutex.
[4] The implementation has no actual permit objects, and Semaphore does not associate dispensed permits with threads, so a permit acquired in
one thread can be released from another thread. You can think of acquire as consuming a permit and release as creating one; a Semaphore is not
limited to the number of permits it was created with.
Semaphores are useful for implementing resource pools such as database connection pools. While it is easy to construct
a fixed‐sized pool that fails if you request a resource from an empty pool, what you really want is to block if the pool is
empty and unblock when it becomes nonempty again. If you initialize a Semaphore to the pool size, acquire a permit
before trying to fetch a resource from the pool, and release the permit after putting a resource back in the pool,
acquire blocks until the pool becomes nonempty. This technique is used in the bounded buffer class in Chapter 12. (An
easier way to construct a blocking object pool would be to use a BlockingQueue to hold the pooled resources.)
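A compact sketch of such a semaphore-guarded pool (ours, not the bounded buffer from Chapter 12): a permit is
acquired before removing an item, so checkOut blocks exactly when the pool is empty.
import java.util.*;
import java.util.concurrent.*;
public class SemaphorePool<T> {
    private final List<T> items;      // thread-safe storage for pooled objects
    private final Semaphore available;
    public SemaphorePool(Collection<T> initial) {
        this.items = Collections.synchronizedList(new LinkedList<T>(initial));
        this.available = new Semaphore(initial.size());
    }
    public T checkOut() throws InterruptedException {
        available.acquire();          // blocks while the pool is empty
        return items.remove(0);       // holding a permit guarantees an item
    }
    public void checkIn(T item) {
        items.add(item);              // make the item available first...
        available.release();          // ...then wake a blocked checkOut
    }
}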
Similarly, you can use a Semaphore to turn any collection into a blocking bounded collection, as illustrated by
BoundedHashSet in Listing 5.14. The semaphore is initialized to the desired maximum size of the collection. The add
operation acquires a permit before adding the item into the underlying collection. If the underlying add operation does
not actually add anything, it releases the permit immediately. Similarly, a successful remove operation releases a permit,
enabling more elements to be added. The underlying Set implementation knows nothing about the bound; this is
handled by BoundedHashSet.
5.5.4. Barriers
We have seen how latches can facilitate starting a group of related activities or waiting for a group of related activities
to complete. Latches are single‐use objects; once a latch enters the terminal state, it cannot be reset.
Barriers are similar to latches in that they block a group of threads until some event has occurred [CPJ 4.4.3]. The key
difference is that with a barrier, all the threads must come together at a barrier point at the same time in order to
proceed. Latches are for waiting for events; barriers are for waiting for other threads. A barrier implements the protocol
some families use to rendezvous during a day at the mall: "Everyone meet at McDonald's at 6:00; once you get there,
stay there until everyone shows up, and then we'll figure out what we're doing next."
CyclicBarrier allows a fixed number of parties to rendezvous repeatedly at a barrier point and is useful in parallel
iterative algorithms that break down a problem into a fixed number of independent subproblems. Threads call await
when they reach the barrier point, and await blocks until all the threads have reached the barrier point. If all threads
meet at the barrier point, the barrier has been successfully passed, in which case all threads are released and the barrier
is reset so it can be used again. If a call to await times out or a thread blocked in await is interrupted, then the barrier is
considered broken and all outstanding calls to await terminate with BrokenBarrierException. If the barrier is
successfully passed, await returns a unique arrival index for each thread, which can be used to "elect" a leader that
takes some special action in the next iteration. CyclicBarrier also lets you pass a barrier action to the constructor;
this is a Runnable that is executed (in one of the subtask threads) when the barrier is successfully passed but before the
blocked threads are released.
Listing 5.14. Using Semaphore to Bound a Collection.
public class BoundedHashSet<T> {
private final Set<T> set;
private final Semaphore sem;
public BoundedHashSet(int bound) {
this.set = Collections.synchronizedSet(new HashSet<T>());
sem = new Semaphore(bound);
}
public boolean add(T o) throws InterruptedException {
sem.acquire();
boolean wasAdded = false;
try {
wasAdded = set.add(o);
return wasAdded;
}
finally {
if (!wasAdded)
sem.release();
}
}
public boolean remove(Object o) {
boolean wasRemoved = set.remove(o);
if (wasRemoved)
sem.release();
return wasRemoved;
}
}
Barriers are often used in simulations, where the work to calculate one step can be done in parallel but all the work
associated with a given step must complete before advancing to the next step. For example, in n‐body particle
simulations, each step calculates an update to the position of each particle based on the locations and other attributes
of the other particles. Waiting on a barrier between each update ensures that all updates for step k have completed
before moving on to step k + 1.
CellularAutomata in Listing 5.15 demonstrates using a barrier to compute a cellular automata simulation, such as
Conway's Life game (Gardner, 1970). When parallelizing a simulation, it is generally impractical to assign a separate
thread to each element (in the case of Life, a cell); this would require too many threads, and the overhead of
coordinating them would dwarf the computation. Instead, it makes sense to partition the problem into a number of
subparts, let each thread solve a subpart, and then merge the results. CellularAutomata partitions the board into Ncpu
parts, where Ncpu is the number of CPUs available, and assigns each part to a thread. [5] At each step, the worker threads
calculate new values for all the cells in their part of the board. When all worker threads have reached the barrier, the
barrier action commits the new values to the data model. After the barrier action runs, the worker threads are released
to compute the next step of the calculation, which includes consulting an isDone method to determine whether further
iterations are required.
[5] For computational problems like this that do no I/O and access no shared data, Ncpu or Ncpu + 1 threads yield optimal throughput; more threads
do not help, and may in fact degrade performance as the threads compete for CPU and memory resources.
Another form of barrier is Exchanger, a two‐party barrier in which the parties exchange data at the barrier point [CPJ
3.4.3]. Exchangers are useful when the parties perform asymmetric activities, for example when one thread fills a buffer
with data and the other thread consumes the data from the buffer; these threads could use an Exchanger to meet and
exchange a full buffer for an empty one. When two threads exchange objects via an Exchanger, the exchange
constitutes a safe publication of both objects to the other party.
The timing of the exchange depends on the responsiveness requirements of the application. The simplest approach is
that the filling task exchanges when the buffer is full, and the emptying task exchanges when the buffer is empty; this
minimizes the number of exchanges but can delay processing of some data if the arrival rate of new data is
unpredictable. Another approach would be that the filler exchanges when the buffer is full, but also when the buffer is
partially filled and a certain amount of time has elapsed.
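A sketch of the buffer-swap idiom (the buffer type, fill threshold, and the readData and process helpers are all
assumptions of ours): each party calls exchange and receives the other's buffer in return.
import java.util.concurrent.*;
public class BufferSwap {
    private final Exchanger<StringBuilder> exchanger =
        new Exchanger<StringBuilder>();
    Runnable filler = new Runnable() {
        public void run() {
            StringBuilder buf = new StringBuilder();
            try {
                while (true) {
                    buf.append(readData());              // hypothetical source
                    if (buf.length() >= 1024)
                        buf = exchanger.exchange(buf);   // full out, empty back
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    };
    Runnable emptier = new Runnable() {
        public void run() {
            StringBuilder buf = new StringBuilder();
            try {
                while (true) {
                    buf = exchanger.exchange(buf);       // empty out, full back
                    process(buf);                        // hypothetical sink
                    buf.setLength(0);                    // drain before reuse
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    };
    private String readData() { return "data"; }         // stub for illustration
    private void process(StringBuilder buf) { }          // stub for illustration
}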
5.6. Building an Efficient, Scalable Result Cache
Nearly every server application uses some form of caching. Reusing the results of a previous computation can reduce
latency and increase throughput, at the cost of some additional memory usage.
Listing 5.15. Coordinating Computation in a Cellular Automaton with CyclicBarrier.
public class CellularAutomata {
private final Board mainBoard;
private final CyclicBarrier barrier;
private final Worker[] workers;
public CellularAutomata(Board board) {
this.mainBoard = board;
int count = Runtime.getRuntime().availableProcessors();
this.barrier = new CyclicBarrier(count,
new Runnable() {
public void run() {
mainBoard.commitNewValues();
}});
this.workers = new Worker[count];
for (int i = 0; i < count; i++)
workers[i] = new Worker(mainBoard.getSubBoard(count, i));
}
private class Worker implements Runnable {
private final Board board;
public Worker(Board board) { this.board = board; }
public void run() {
while (!board.hasConverged()) {
for (int x = 0; x < board.getMaxX(); x++)
for (int y = 0; y < board.getMaxY(); y++)
board.setNewValue(x, y, computeValue(x, y));
try {
barrier.await();
} catch (InterruptedException ex) {
return;
} catch (BrokenBarrierException ex) {
return;
}
}
}
}
public void start() {
for (int i = 0; i < workers.length; i++)
new Thread(workers[i]).start();
mainBoard.waitForConvergence();
}
}
Like many other frequently reinvented wheels, caching often looks simpler than it is. A naive cache implementation is
likely to turn a performance bottleneck into a scalability bottleneck, even if it does improve single‐threaded
performance. In this section we develop an efficient and scalable result cache for a computationally expensive function.
Let's start with the obvious approach - a simple HashMap - and then look at some of its concurrency disadvantages and
how to fix them.
The Computable<A,V> interface in Listing 5.16 describes a function with input of type A and result of type V.
ExpensiveFunction, which implements Computable, takes a long time to compute its result; we'd like to create a
Computable wrapper that remembers the results of previous computations and encapsulates the caching process. (This
technique is known as memoization.)
Listing 5.16. Initial Cache Attempt Using HashMap and Synchronization.
public interface Computable<A, V> {
V compute(A arg) throws InterruptedException;
}
public class ExpensiveFunction
implements Computable<String, BigInteger> {
public BigInteger compute(String arg) {
// after deep thought...
return new BigInteger(arg);
}
}
public class Memorizer1<A, V> implements Computable<A, V> {
@GuardedBy("this")
private final Map<A, V> cache = new HashMap<A, V>();
private final Computable<A, V> c;
public Memorizer1(Computable<A, V> c) {
this.c = c;
}
public synchronized V compute(A arg) throws InterruptedException {
V result = cache.get(arg);
if (result == null) {
result = c.compute(arg);
cache.put(arg, result);
}
return result;
}
}
Memorizer1 in Listing 5.16 shows a first attempt: using a HashMap to store the results of previous computations. The
compute method first checks whether the desired result is already cached, and returns the pre‐computed value if it is.
Otherwise, the result is computed and cached in the HashMap before returning.
HashMap is not thread‐safe, so to ensure that two threads do not access the HashMap at the same time, Memorizer1
takes the conservative approach of synchronizing the entire compute method. This ensures thread safety but has an
obvious scalability problem: only one thread at a time can execute compute at all. If another thread is busy computing a
result, other threads calling compute may be blocked for a long time. If multiple threads are queued up waiting to
compute values not already computed, compute may actually take longer than it would have without memoization.
Figure 5.2 illustrates what could happen when several threads attempt to use a function memorized with this approach.
This is not the sort of performance improvement we had hoped to achieve through caching.
Figure 5.2. Poor Concurrency of Memorizer1.
Memorizer2 in Listing 5.17 improves on the awful concurrent behavior of Memorizer1 by replacing the HashMap with a
ConcurrentHashMap. Since ConcurrentHashMap is thread‐safe, there is no need to synchronize when accessing the
backing Map, thus eliminating the serialization induced by synchronizing compute in Memorizer1.
Memorizer2 certainly has better concurrent behavior than Memorizer1: multiple threads can actually use it
concurrently. But it still has some defects as a cache ‐ there is a window of vulnerability in which two threads calling
compute at the same time could end up computing the same value. In the case of memoization, this is merely
inefficient ‐ the purpose of a cache is to prevent the same data from being calculated multiple times. For a more
general‐purpose caching mechanism, it is far worse; for an object cache that is supposed to provide once‐and‐only‐once
initialization, this vulnerability would also pose a safety risk.
The problem with Memorizer2 is that if one thread starts an expensive computation, other threads are not aware that
the computation is in progress and so may start the same computation, as illustrated in Figure 5.3. We'd like to
somehow represent the notion that "thread X is currently computing f(27)", so that if another thread arrives looking for
f(27), it knows that the most efficient way to find it is to head over to Thread X's house, hang out there until X is
finished, and then ask "Hey, what did you get for f(27)?"
Figure 5.3. Two Threads Computing the Same Value When Using Memorizer2.
Listing 5.17. Replacing HashMap with ConcurrentHashMap.
public class Memorizer2<A, V> implements Computable<A, V> {
private final Map<A, V> cache = new ConcurrentHashMap<A, V>();
private final Computable<A, V> c;
public Memorizer2(Computable<A, V> c) { this.c = c; }
public V compute(A arg) throws InterruptedException {
V result = cache.get(arg);
if (result == null) {
result = c.compute(arg);
cache.put(arg, result);
}
return result;
}
}
We've already seen a class that does almost exactly this: FutureTask. FutureTask represents a computational process
that may or may not already have completed. FutureTask.get returns the result of the computation immediately if it is
available; otherwise it blocks until the result has been computed and then returns it.
Memorizer3 in Listing 5.18 redefines the backing Map for the value cache as a ConcurrentHashMap<A,Future<V>>
instead of a ConcurrentHashMap<A,V>. Memorizer3 first checks to see if the appropriate calculation has been started
(as opposed to finished, as in Memorizer2). If not, it creates a FutureTask, registers it in the Map, and starts the
computation; otherwise it waits for the result of the existing computation. The result might be available immediately or
might be in the process of being computed ‐ but this is transparent to the caller of Future.get.
The Memorizer3 implementation is almost perfect: it exhibits very good concurrency (mostly derived from the excellent
concurrency of ConcurrentHashMap), the result is returned efficiently if it is already known, and if the computation is in
progress by another thread, newly arriving threads wait patiently for the result. It has only one defect ‐ there is still a
small window of vulnerability in which two threads might compute the same value. This window is far smaller than in
Memorizer2, but because the if block in compute is still a non‐atomic check‐then‐act sequence, it is possible for two
threads to call compute with the same value at roughly the same time, both see that the cache does not contain the
desired value, and both start the computation. This unlucky timing is illustrated in Figure 5.4.
Figure 5.4. Unlucky Timing that could Cause Memorizer3 to Calculate the Same Value Twice.
Listing 5.18. Memorizing Wrapper Using FutureTask.
public class Memorizer3<A, V> implements Computable<A, V> {
private final Map<A, Future<V>> cache
= new ConcurrentHashMap<A, Future<V>>();
private final Computable<A, V> c;
public Memorizer3(Computable<A, V> c) { this.c = c; }
public V compute(final A arg) throws InterruptedException {
Future<V> f = cache.get(arg);
if (f == null) {
Callable<V> eval = new Callable<V>() {
public V call() throws InterruptedException {
return c.compute(arg);
}
};
FutureTask<V> ft = new FutureTask<V>(eval);
f = ft;
cache.put(arg, ft);
ft.run(); // call to c.compute happens here
}
try {
return f.get();
} catch (ExecutionException e) {
throw launderThrowable(e.getCause());
}
}
}
Memorizer3 is vulnerable to this problem because a compound action (put‐if‐absent) is performed on the backing map
that cannot be made atomic using locking. Memorizer in Listing 5.19 takes advantage of the atomic putIfAbsent
method of ConcurrentMap, closing the window of vulnerability in Memorizer3.
Caching a Future instead of a value creates the possibility of cache pollution: if a computation is cancelled or fails,
future attempts to compute the result will also indicate cancellation or failure. To avoid this, Memorizer removes the
Future from the cache if it detects that the computation was cancelled; it might also be desirable to remove the Future
upon detecting a RuntimeException if the computation might succeed on a future attempt. Memorizer also does not
address cache expiration, but this could be accomplished by using a subclass of FutureTask that associates an
expiration time with each result and periodically scanning the cache for expired entries. (Similarly, it does not address
cache eviction, where old entries are removed to make room for new ones so that the cache does not consume too
much memory.)
With our concurrent cache implementation complete, we can now add real caching to the factorizing servlet from
Chapter 2, as promised. Factorizer in Listing 5.20 uses Memorizer to cache previously computed values efficiently and
scalably.
Listing 5.19. Final Implementation of Memorizer.
public class Memorizer<A, V> implements Computable<A, V> {
private final ConcurrentMap<A, Future<V>> cache
= new ConcurrentHashMap<A, Future<V>>();
private final Computable<A, V> c;
public Memorizer(Computable<A, V> c) { this.c = c; }
public V compute(final A arg) throws InterruptedException {
while (true) {
Future<V> f = cache.get(arg);
if (f == null) {
Callable<V> eval = new Callable<V>() {
public V call() throws InterruptedException {
return c.compute(arg);
}
};
FutureTask<V> ft = new FutureTask<V>(eval);
f = cache.putIfAbsent(arg, ft);
if (f == null) { f = ft; ft.run(); }
}
try {
return f.get();
} catch (CancellationException e) {
cache.remove(arg, f);
} catch (ExecutionException e) {
throw launderThrowable(e.getCause());
}
}
}
}
Listing 5.20. Factorizing Servlet that Caches Results Using Memorizer.
@ThreadSafe
public class Factorizer implements Servlet {
private final Computable<BigInteger, BigInteger[]> c =
new Computable<BigInteger, BigInteger[]>() {
public BigInteger[] compute(BigInteger arg) {
return factor(arg);
}
};
private final Computable<BigInteger, BigInteger[]> cache
= new Memoizer<BigInteger, BigInteger[]>(c);
public void service(ServletRequest req,
ServletResponse resp) {
try {
BigInteger i = extractFromRequest(req);
encodeIntoResponse(resp, cache.compute(i));
} catch (InterruptedException e) {
encodeError(resp, "factorization interrupted");
}
}
}
Summary of Part I
We've covered a lot of material so far! The following "concurrency cheat sheet" summarizes the main concepts and
rules presented in Part I.
• It's the mutable state, stupid. [1]
All concurrency issues boil down to coordinating access to mutable state. The less mutable state, the easier it is to
ensure thread safety.
• Make fields final unless they need to be mutable.
• Immutable objects are automatically thread‐safe.
Immutable objects simplify concurrent programming tremendously. They are simpler and safer, and can be shared
freely without locking or defensive copying.
• Encapsulation makes it practical to manage the complexity.
You could write a thread‐safe program with all data stored in global variables, but why would you want to?
Encapsulating data within objects makes it easier to preserve their invariants; encapsulating synchronization within
objects makes it easier to comply with their synchronization policy.
• Guard each mutable variable with a lock.
• Guard all variables in an invariant with the same lock.
• Hold locks for the duration of compound actions.
• A program that accesses a mutable variable from multiple threads without synchronization is a broken program.
• Don't rely on clever reasoning about why you don't need to synchronize.
• Include thread safety in the design process, or explicitly document that your class is not thread-safe.
• Document your synchronization policy.
[1] During the 1992 U.S. presidential election, electoral strategist James Carville hung a sign in Bill Clinton's campaign headquarters reading "The
economy, stupid", to keep the campaign on message.
Part II: Structuring Concurrent Applications
Chapter 6. Task Execution
Chapter 7. Cancellation and Shutdown
Chapter 8. Applying Thread Pools
Chapter 9. GUI Applications
Chapter 6. Task Execution
Most concurrent applications are organized around the execution of tasks: abstract, discrete units of work. Dividing the
work of an application into tasks simplifies program organization, facilitates error recovery by providing natural
transaction boundaries, and promotes concurrency by providing a natural structure for parallelizing work.
6.1. Executing Tasks in Threads
The first step in organizing a program around task execution is identifying sensible task boundaries. Ideally, tasks are
independent activities: work that doesn't depend on the state, result, or side effects of other tasks. Independence
facilitates concurrency, as independent tasks can be executed in parallel if there are adequate processing resources. For
greater flexibility in scheduling and load balancing tasks, each task should also represent a small fraction of your
application's processing capacity.
Server applications should exhibit both good throughput and good responsiveness under normal load. Application
providers want applications to support as many users as possible, so as to reduce provisioning costs per user; users
want to get their response quickly. Further, applications should exhibit graceful degradation as they become
overloaded, rather than simply falling over under heavy load. Choosing good task boundaries, coupled with a sensible
task execution policy (see Section 6.2.2), can help achieve these goals.
Most server applications offer a natural choice of task boundary: individual client requests. Web servers, mail servers,
file servers, EJB containers, and database servers all accept requests via network connections from remote clients. Using
individual requests as task boundaries usually offers both independence and appropriate task sizing. For example, the
result of submitting a message to a mail server is not affected by the other messages being processed at the same time,
and handling a single message usually requires a very small percentage of the server's total capacity.
6.1.1. Executing Tasks Sequentially
There are a number of possible policies for scheduling tasks within an application, some of which exploit the potential
for concurrency better than others. The simplest is to execute tasks sequentially in a single thread.
SingleThreadWebServer in Listing 6.1 processes its tasks ‐ HTTP requests arriving on port 80 ‐ sequentially. The details of the request
processing aren't important; we're interested in characterizing the concurrency of various scheduling policies.
Listing 6.1. Sequential Web Server.
class SingleThreadWebServer {
public static void main(String[] args) throws IOException {
ServerSocket socket = new ServerSocket(80);
while (true) {
Socket connection = socket.accept();
handleRequest(connection);
}
}
}
SingleThreadWebServer is simple and theoretically correct, but would perform poorly in production because it can
handle only one request at a time. The main thread alternates between accepting connections and processing the
associated request. While the server is handling a request, new connections must wait until it finishes the current
request and calls accept again. This might work if request processing were so fast that handleRequest effectively
returned immediately, but this doesn't describe any web server in the real world.
Processing a web request involves a mix of computation and I/O. The server must perform socket I/O to read the
request and write the response, which can block due to network congestion or connectivity problems. It may also
perform file I/O or make database requests, which can also block. In a single‐threaded server, blocking not only delays
completing the current request, but prevents pending requests from being processed at all. If one request blocks for an
unusually long time, users might think the server is unavailable because it appears unresponsive. At the same time,
resource utilization is poor, since the CPU sits idle while the single thread waits for its I/O to complete.
In server applications, sequential processing rarely provides either good throughput or good responsiveness. There are
exceptions ‐ such as when tasks are few and long‐lived, or when the server serves a single client that makes only a single
request at a time ‐ but most server applications do not work this way.[1]
[1] In some situations, sequential processing may offer a simplicity or safety advantage; most GUI frameworks process tasks sequentially using a
single thread. We return to the sequential model in Chapter 9.
6.1.2. Explicitly Creating Threads for Tasks
A more responsive approach is to create a new thread for servicing each request, as shown in
ThreadPerTaskWebServer in Listing 6.2.
Listing 6.2. Web Server that Starts a New Thread for Each Request.
class ThreadPerTaskWebServer {
public static void main(String[] args) throws IOException {
ServerSocket socket = new ServerSocket(80);
while (true) {
final Socket connection = socket.accept();
Runnable task = new Runnable() {
public void run() {
handleRequest(connection);
}
};
new Thread(task).start();
}
}
}
ThreadPerTaskWebServer is similar in structure to the single‐threaded version ‐ the main thread still alternates
between accepting an incoming connection and dispatching the request. The difference is that for each connection, the
main loop creates a new thread to process the request instead of processing it within the main thread. This has three
main consequences:
• Task processing is offloaded from the main thread, enabling the main loop to resume waiting for the next
incoming connection more quickly. This enables new connections to be accepted before previous requests
complete, improving responsiveness.
• Tasks can be processed in parallel, enabling multiple requests to be serviced simultaneously. This may improve
throughput if there are multiple processors, or if tasks need to block for any reason such as I/O completion, lock
acquisition, or resource availability.
• Task‐handling code must be thread‐safe, because it may be invoked concurrently for multiple tasks.
Under light to moderate load, the thread‐per‐task approach is an improvement over sequential execution. As long as
the request arrival rate does not exceed the server's capacity to handle requests, this approach offers better
responsiveness and throughput.
6.1.3. Disadvantages of Unbounded Thread Creation
For production use, however, the thread‐per‐task approach has some practical drawbacks, especially when a large
number of threads may be created:
Thread lifecycle overhead. Thread creation and teardown are not free. The actual overhead varies across platforms, but
thread creation takes time, introducing latency into request processing, and requires some processing activity by the
JVM and OS. If requests are frequent and lightweight, as in most server applications, creating a new thread for each
request can consume significant computing resources.
Resource consumption. Active threads consume system resources, especially memory. When there are more runnable
threads than available processors, threads sit idle. Having many idle threads can tie up a lot of memory, putting
pressure on the garbage collector, and having many threads competing for the CPUs can impose other performance
costs as well. If you have enough threads to keep all the CPUs busy, creating more threads won't help and may even
hurt.
Stability. There is a limit on how many threads can be created. The limit varies by platform and is affected by factors
including JVM invocation parameters, the requested stack size in the Thread constructor, and limits on threads placed
by the underlying operating system.[2] When you hit this limit, the most likely result is an OutOfMemoryError. Trying to
recover from such an error is very risky; it is far easier to structure your program to avoid hitting this limit.
[2] On 32‐bit machines, a major limiting factor is address space for thread stacks. Each thread maintains two execution stacks, one for Java code
and one for native code. Typical JVM defaults yield a combined stack size of around half a megabyte. (You can change this with the -Xss JVM flag
or through the Thread constructor.) If you divide the per-thread stack size into 2^32, you get a limit of a few thousand or tens of thousands of
threads. Other factors, such as OS limitations, may impose stricter limits.
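As a back-of-the-envelope check using the half-megabyte default mentioned above: 2^32 bytes of address space divided by 2^19 bytes (512 KB) per stack is 2^13, or about 8,000 threads, before accounting for the heap and everything else in the process.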
Up to a certain point, more threads can improve throughput, but beyond that point creating more threads just slows
down your application, and creating one thread too many can cause your entire application to crash horribly. The way
to stay out of danger is to place some bound on how many threads your application creates, and to test your application
thoroughly to ensure that, even when this bound is reached, it does not run out of resources.
The problem with the thread‐per‐task approach is that nothing places any limit on the number of threads created
except the rate at which remote users can throw HTTP requests at it. Like other concurrency hazards, unbounded
thread creation may appear to work just fine during prototyping and development, with problems surfacing only when
the application is deployed and under heavy load. So a malicious user, or enough ordinary users, can make your web
server crash if the traffic load ever reaches a certain threshold. For a server application that is supposed to provide high
availability and graceful degradation under load, this is a serious failing.
6.2. The Executor Framework
Tasks are logical units of work, and threads are a mechanism by which tasks can run asynchronously. We've examined
two policies for executing tasks using threads ‐ execute tasks sequentially in a single thread, and execute each task in its
own thread. Both have serious limitations: the sequential approach suffers from poor responsiveness and throughput,
and the thread‐per‐task approach suffers from poor resource management.
In Chapter 5, we saw how to use bounded queues to prevent an overloaded application from running out of memory.
Thread pools offer the same benefit for thread management, and java.util.concurrent provides a flexible thread
pool implementation as part of the Executor framework. The primary abstraction for task execution in the Java class
libraries is not Thread, but Executor, shown in Listing 6.3.
Listing 6.3. Executor Interface.
public interface Executor {
void execute(Runnable command);
}
Executor may be a simple interface, but it forms the basis for a flexible and powerful framework for asynchronous task
execution that supports a wide variety of task execution policies. It provides a standard means of decoupling task
submission from task execution, describing tasks with Runnable. The Executor implementations also provide lifecycle
support and hooks for adding statistics gathering, application management, and monitoring.
Executor is based on the producer‐consumer pattern, where activities that submit tasks are the producers (producing
units of work to be done) and the threads that execute tasks are the consumers (consuming those units of work). Using
an Executor is usually the easiest path to implementing a producer‐consumer design in your application.
6.2.1. Example: Web Server Using Executor
Building a web server with an Executor is easy. TaskExecutionWebServer in Listing 6.4 replaces the hard‐coded thread
creation with an Executor. In this case, we use one of the standard Executor implementations, a fixed‐size thread pool
with 100 threads.
In TaskExecutionWebServer, submission of the request‐handling task is decoupled from its execution using an
Executor, and its behavior can be changed merely by substituting a different Executor implementation. Changing
Executor implementations or configuration is far less invasive than changing the way tasks are submitted; Executor
configuration is generally a one‐time event and can easily be exposed for deployment‐time configuration, whereas task
submission code tends to be strewn throughout the program and harder to expose.
Listing 6.4. Web Server Using a Thread Pool.
class TaskExecutionWebServer {
private static final int NTHREADS = 100;
private static final Executor exec
= Executors.newFixedThreadPool(NTHREADS);
public static void main(String[] args) throws IOException {
ServerSocket socket = new ServerSocket(80);
while (true) {
final Socket connection = socket.accept();
Runnable task = new Runnable() {
public void run() {
handleRequest(connection);
}
};
exec.execute(task);
}
}
}
We can easily modify TaskExecutionWebServer to behave like ThreadPerTaskWebServer by substituting an Executor
that creates a new thread for each request. Writing such an Executor is trivial, as shown in ThreadPerTaskExecutor in
Listing 6.5.
Listing 6.5. Executor that Starts a New Thread for Each Task.
public class ThreadPerTaskExecutor implements Executor {
public void execute(Runnable r) {
new Thread(r).start();
};
}
Similarly, it is also easy to write an Executor that would make TaskExecutionWebServer behave like the single-threaded
version, executing each task synchronously before returning from execute, as shown in
WithinThreadExecutor in Listing 6.6.
6.2.2. Execution Policies
The value of decoupling submission from execution is that it lets you easily specify, and subsequently change without
great difficulty, the execution policy for a given class of tasks. An execution policy specifies the "what, where, when, and
how" of task execution, including:
Listing 6.6. Executor that Executes Tasks Synchronously in the Calling Thread.
public class WithinThreadExecutor implements Executor {
public void execute(Runnable r) {
r.run();
};
}
• In what thread will tasks be executed?
• In what order should tasks be executed (FIFO, LIFO, priority order)?
• How many tasks may execute concurrently?
• How many tasks may be queued pending execution?
• If a task has to be rejected because the system is overloaded, which task should be selected as the victim, and
how should the application be notified?
• What actions should be taken before or after executing a task?
Execution policies are a resource management tool, and the optimal policy depends on the available computing
resources and your quality‐of‐service requirements. By limiting the number of concurrent tasks, you can ensure that the
application does not fail due to resource exhaustion or suffer performance problems due to contention for scarce
resources.[3] Separating the specification of execution policy from task submission makes it practical to select an
execution policy at deployment time that is matched to the available hardware.
[3] This is analogous to one of the roles of a transaction monitor in an enterprise application: it can throttle the rate at which transactions are
allowed to proceed so as not to exhaust or overstress limited resources.
Whenever you see code of the form:
new Thread(runnable).start()
and you think you might at some point want a more flexible execution policy, seriously consider replacing it with the use
of an Executor.
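A minimal sketch of the substitution (the pool size and the runnable variable are illustrative): the task code is unchanged, and only the execution policy moves behind the Executor interface.
Executor exec = Executors.newFixedThreadPool(10);
exec.execute(runnable);  // instead of new Thread(runnable).start()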
6.2.3. Thread Pools
A thread pool, as its name suggests, manages a homogeneous pool of worker threads. A thread pool is tightly bound to
a work queue holding tasks waiting to be executed. Worker threads have a simple life: request the next task from the
work queue, execute it, and go back to waiting for another task.
Executing tasks in pool threads has a number of advantages over the thread‐per‐task approach. Reusing an existing
thread instead of creating a new one amortizes thread creation and teardown costs over multiple requests. As an added
bonus, since the worker thread often already exists at the time the request arrives, the latency associated with thread
creation does not delay task execution, thus improving responsiveness. By properly tuning the size of the thread pool,
you can have enough threads to keep the processors busy while not having so many that your application runs out of
memory or thrashes due to competition among threads for resources.
The class library provides a flexible thread pool implementation along with some useful predefined configurations. You
can create a thread pool by calling one of the static factory methods in Executors:
newFixedThreadPool. A fixed‐size thread pool creates threads as tasks are submitted, up to the maximum pool size,
and then attempts to keep the pool size constant (adding new threads if a thread dies due to an unexpected
Exception).
newCachedThreadPool. A cached thread pool has more flexibility to reap idle threads when the current size of the pool
exceeds the demand for processing, and to add new threads when demand increases, but places no bounds on the size
of the pool.
newSingleThreadExecutor. A single‐threaded executor creates a single worker thread to process tasks, replacing it if it
dies unexpectedly. Tasks are guaranteed to be processed sequentially according to the order imposed by the task queue
(FIFO, LIFO, priority order).[4]
[4] Single‐threaded executors also provide sufficient internal synchronization to guarantee that any memory writes made by tasks are visible to
subsequent tasks; this means that objects can be safely confined to the "task thread" even though that thread may be replaced with another from
time to time.
newScheduledThreadPool. A fixed‐size thread pool that supports delayed and periodic task execution, similar to Timer.
(See Section 6.2.5.)
The newFixedThreadPool and newCachedThreadPool factories return instances of the general‐purpose
ThreadPoolExecutor, which can also be used directly to construct more specialized executors. We discuss thread pool
configuration options in depth in Chapter 8.
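For reference, a sketch of creating each of these flavors via the Executors factory methods (the sizes are arbitrary):
ExecutorService fixed  = Executors.newFixedThreadPool(100);
ExecutorService cached = Executors.newCachedThreadPool();
ExecutorService single = Executors.newSingleThreadExecutor();
ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(4);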
The web server in TaskExecutionWebServer uses an Executor with a bounded pool of worker threads. Submitting a
task with execute adds the task to the work queue, and the worker threads repeatedly dequeue tasks from the work
queue and execute them.
Switching from a thread‐per‐task policy to a pool‐based policy has a big effect on application stability: the web server
will no longer fail under heavy load.[5] It also degrades more gracefully, since it does not create thousands of threads
that compete for limited CPU and memory resources. And using an Executor opens the door to all sorts of additional
opportunities for tuning, management, monitoring, logging, error reporting, and other possibilities that would have
been far more difficult to add without a task execution framework.
[5] While the server may not fail due to the creation of too many threads, if the task arrival rate exceeds the task service rate for long enough it is
still possible (just harder) to run out of memory because of the growing queue of Runnables awaiting execution. This can be addressed within
the Executor framework by using a bounded work queue ‐ see Section 8.3.2.
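A sketch of the bounded-queue variant mentioned in the note above (the sizes are illustrative; Section 8.3.2 discusses the trade-offs, including what happens when the queue fills up):
ExecutorService exec = new ThreadPoolExecutor(
    100, 100,                                  // core and maximum pool size
    0L, TimeUnit.MILLISECONDS,                 // keep-alive time for excess threads
    new LinkedBlockingQueue<Runnable>(1000));  // bound on queued tasks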
6.2.4. Executor Lifecycle
We've seen how to create an Executor but not how to shut one down. An Executor implementation is likely to create
threads for processing tasks. But the JVM can't exit until all the (non‐daemon) threads have terminated, so failing to
shut down an Executor could prevent the JVM from exiting.
Because an Executor processes tasks asynchronously, at any given time the state of previously submitted tasks is not
immediately obvious. Some may have completed, some may be currently running, and others may be queued awaiting
execution. In shutting down an application, there is a spectrum from graceful shutdown (finish what you've started but
don't accept any new work) to abrupt shutdown (turn off the power to the machine room), and various points in
between. Since Executors provide a service to applications, they should be able to be shut down as well, both gracefully
and abruptly, and feedback information to the application about the status of tasks that were affected by the shutdown.
To address the issue of execution service lifecycle, the ExecutorService interface extends Executor, adding a number
of methods for lifecycle management (as well as some convenience methods for task submission). The lifecycle
management methods of ExecutorService are shown in Listing 6.7.
Listing 6.7. Lifecycle Methods in ExecutorService.
public interface ExecutorService extends Executor {
void shutdown();
List<Runnable> shutdownNow();
boolean isShutdown();
boolean isTerminated();
boolean awaitTermination(long timeout, TimeUnit unit)
throws InterruptedException;
// ... additional convenience methods for task submission
}
The lifecycle implied by ExecutorService has three states ‐ running, shutting down, and terminated.
ExecutorServices are initially created in the running state. The shutdown method initiates a graceful shutdown: no
new tasks are accepted but previously submitted tasks are allowed to complete ‐ including those that have not yet
begun execution. The shutdownNow method initiates an abrupt shutdown: it attempts to cancel outstanding tasks and
does not start any tasks that are queued but not begun.
Tasks submitted to an ExecutorService after it has been shut down are handled by the rejected execution handler (see
Section 8.3.3), which might silently discard the task or might cause execute to throw the unchecked
RejectedExecutionException. Once all tasks have completed, the ExecutorService transitions to the terminated
state. You can wait for an ExecutorService to reach the terminated state with awaitTermination, or poll for whether
it has yet terminated with isTerminated. It is common to follow shutdown immediately by awaitTermination, creating
the effect of synchronously shutting down the ExecutorService. (Executor shutdown and task cancellation are
covered in more detail in Chapter 7.)
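A minimal sketch of that synchronous-shutdown idiom (the timeout is illustrative, and the caller must handle InterruptedException):
exec.shutdown();  // stop accepting new tasks
if (!exec.awaitTermination(60, TimeUnit.SECONDS))
    exec.shutdownNow();  // give up waiting and attempt to cancel outstanding tasks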
LifecycleWebServer in Listing 6.8 extends our web server with lifecycle support. It can be shut down in two ways:
programmatically by calling stop, and through a client request by sending the web server a specially formatted HTTP
request.
Listing 6.8. Web Server with Shutdown Support.
class LifecycleWebServer {
private final ExecutorService exec = ...;
public void start() throws IOException {
ServerSocket socket = new ServerSocket(80);
while (!exec.isShutdown()) {
try {
final Socket conn = socket.accept();
exec.execute(new Runnable() {
public void run() { handleRequest(conn); }
});
} catch (RejectedExecutionException e) {
if (!exec.isShutdown())
log("task submission rejected", e);
}
}
}
public void stop() { exec.shutdown(); }
void handleRequest(Socket connection) {
Request req = readRequest(connection);
if (isShutdownRequest(req))
stop();
else
dispatchRequest(req);
}
}
6.2.5. Delayed and Periodic Tasks
The Timer facility manages the execution of deferred ("run this task in 100 ms") and periodic ("run this task every 10
ms") tasks. However, Timer has some drawbacks, and ScheduledThreadPoolExecutor should be thought of as its
replacement.[6] You can construct a ScheduledThreadPoolExecutor through its constructor or through the
newScheduledThreadPool factory.
[6] Timer does have support for scheduling based on absolute, not relative time, so that tasks can be sensitive to changes in the system clock;
ScheduledThreadPoolExecutor supports only relative time.
A Timer creates only a single thread for executing timer tasks. If a timer task takes too long to run, the timing accuracy
of other TimerTasks can suffer. If a recurring TimerTask is scheduled to run every 10 ms and another TimerTask takes
40 ms to run, the recurring task either (depending on whether it was scheduled at fixed rate or fixed delay) gets called
four times in rapid succession after the long‐running task completes, or "misses" four invocations completely. Scheduled
thread pools address this limitation by letting you provide multiple threads for executing deferred and periodic tasks.
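A brief sketch of a scheduled pool standing in for Timer (the pool size and delays are illustrative; task is assumed to be a Runnable):
ScheduledExecutorService sched = Executors.newScheduledThreadPool(2);
sched.schedule(task, 100, TimeUnit.MILLISECONDS);               // run once, 100 ms from now
sched.scheduleAtFixedRate(task, 0, 10, TimeUnit.MILLISECONDS);  // run every 10 ms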
Another problem with Timer is that it behaves poorly if a TimerTask throws an unchecked exception. The Timer thread
doesn't catch the exception, so an unchecked exception thrown from a TimerTask terminates the timer thread. Timer
also doesn't resurrect the thread in this situation; instead, it erroneously assumes the entire Timer was cancelled. In this
case, TimerTasks that are already scheduled but not yet executed are never run, and new tasks cannot be scheduled.
(This problem, called "thread leakage", is described in Section 7.3, along with techniques for avoiding it.)
OutOfTime in Listing 6.9 illustrates how a Timer can become confused in this manner and, as confusion loves company,
how the Timer shares its confusion with the next hapless caller that tries to submit a TimerTask. You might expect the
program to run for six seconds and exit, but what actually happens is that it terminates after one second with an
IllegalStateException whose message text is "Timer already cancelled". ScheduledThreadPoolExecutor deals
properly with ill‐behaved tasks; there is little reason to use Timer in Java 5.0 or later.
If you need to build your own scheduling service, you may still be able to take advantage of the library by using a
DelayQueue, a BlockingQueue implementation that provides the scheduling functionality of
ScheduledThreadPoolExecutor. A DelayQueue manages a collection of Delayed objects. A Delayed has a delay time
associated with it: DelayQueue lets you take an element only if its delay has expired. Objects are returned from a
DelayQueue ordered by the time associated with their delay.
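A minimal sketch of a Delayed element (the class and field names are hypothetical); a DelayQueue of these would release each element from take only after its trigger time has passed:
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class DelayedElement implements Delayed {
    private final long triggerNanos;  // when this element becomes available
    DelayedElement(long delayNanos) {
        this.triggerNanos = System.nanoTime() + delayNanos;
    }
    public long getDelay(TimeUnit unit) {
        return unit.convert(triggerNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }
    public int compareTo(Delayed other) {
        long diff = getDelay(TimeUnit.NANOSECONDS) - other.getDelay(TimeUnit.NANOSECONDS);
        return (diff < 0) ? -1 : (diff > 0) ? 1 : 0;
    }
}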
6.3. Finding Exploitable Parallelism
The Executor framework makes it easy to specify an execution policy, but in order to use an Executor, you have to be
able to describe your task as a Runnable. In most server applications, there is an obvious task boundary: a single client
request. But sometimes good task boundaries are not quite so obvious, as in many desktop applications. There may also
be exploitable parallelism within a single client request in server applications, as is sometimes the case in database
servers. (For a further discussion of the competing design forces in choosing task boundaries, see [CPJ 4.4.1.1].)
Listing 6.9. Class Illustrating Confusing Timer Behavior.
public class OutOfTime {
public static void main(String[] args) throws Exception {
Timer timer = new Timer();
timer.schedule(new ThrowTask(), 1);
SECONDS.sleep(1);
timer.schedule(new ThrowTask(), 1);
SECONDS.sleep(5);
}
static class ThrowTask extends TimerTask {
public void run() { throw new RuntimeException(); }
}
}
In this section we develop several versions of a component that admit varying degrees of concurrency. Our sample
component is the page‐rendering portion of a browser application, which takes a page of HTML and renders it into an
image buffer. To keep it simple, we assume that the HTML consists only of marked up text interspersed with image
elements with pre‐specified dimensions and URLs.
6.3.1. Example: Sequential Page Renderer
The simplest approach is to process the HTML document sequentially. As text markup is encountered, render it into the
image buffer; as image references are encountered, fetch the image over the network and draw it into the image buffer
as well. This is easy to implement and requires touching each element of the input only once (it doesn't even require
buffering the document), but is likely to annoy the user, who may have to wait a long time before all the text is
rendered.
A less annoying but still sequential approach involves rendering the text elements first, leaving rectangular placeholders
for the images, and after completing the initial pass on the document, going back and downloading the images and
drawing them into the associated placeholder. This approach is shown in SingleThreadRenderer in Listing 6.10.
Downloading an image mostly involves waiting for I/O to complete, and during this time the CPU does little work. So the
sequential approach may underutilize the CPU, and also makes the user wait longer than necessary to see the finished
page. We can achieve better utilization and responsiveness by breaking the problem into independent tasks that can
execute concurrently.
Listing 6.10. Rendering Page Elements Sequentially.
public class SingleThreadRenderer {
void renderPage(CharSequence source) {
renderText(source);
List<ImageData> imageData = new ArrayList<ImageData>();
for (ImageInfo imageInfo : scanForImageInfo(source))
imageData.add(imageInfo.downloadImage());
for (ImageData data : imageData)
renderImage(data);
}
}
6.3.2. Result-bearing Tasks: Callable and Future
The Executor framework uses Runnable as its basic task representation. Runnable is a fairly limiting abstraction; run
cannot return a value or throw checked exceptions, although it can have side effects such as writing to a log file or
placing a result in a shared data structure.
Many tasks are effectively deferred computations ‐ executing a database query, fetching a resource over the network,
or computing a complicated function. For these types of tasks, Callable is a better abstraction: it expects that the main
entry point, call, will return a value and anticipates that it might throw an exception.[7] Executors includes several
utility methods for wrapping other types of tasks, including Runnable and java.security.PrivilegedAction, with a
Callable.
[7] To express a non‐value‐returning task with Callable, use Callable<Void>.
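For example, Executors.callable adapts a Runnable into a Callable (someRunnable is a placeholder):
Callable<Object> c1 = Executors.callable(someRunnable);          // call returns null
Callable<String> c2 = Executors.callable(someRunnable, "done");  // call returns "done"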
Runnable and Callable describe abstract computational tasks. Tasks are usually finite: they have a clear starting point
and they eventually terminate. The lifecycle of a task executed by an Executor has four phases: created, submitted,
started, and completed. Since tasks can take a long time to run, we also want to be able to cancel a task. In the
Executor framework, tasks that have been submitted but not yet started can always be cancelled, and tasks that have
started can sometimes be cancelled if they are responsive to interruption. Cancelling a task that has already completed
has no effect. (Cancellation is covered in greater detail in Chapter 7.)
Future represents the lifecycle of a task and provides methods to test whether the task has completed or been
cancelled, retrieve its result, and cancel the task. Callable and Future are shown in Listing 6.11. Implicit in the
specification of Future is that task lifecycle can only move forwards, not backwards ‐ just like the ExecutorService
lifecycle. Once a task is completed, it stays in that state forever.
The behavior of get varies depending on the task state (not yet started, running, completed). It returns immediately or
throws an Exception if the task has already completed, but if not it blocks until the task completes. If the task
completes by throwing an exception, get rethrows it wrapped in an ExecutionException; if it was cancelled, get
throws CancellationException. If get throws ExecutionException, the underlying exception can be retrieved with
getCause.
Listing 6.11. Callable and Future Interfaces.
public interface Callable<V> {
V call() throws Exception;
}
public interface Future<V> {
boolean cancel(boolean mayInterruptIfRunning);
boolean isCancelled();
boolean isDone();
V get() throws InterruptedException, ExecutionException,
CancellationException;
V get(long timeout, TimeUnit unit)
throws InterruptedException, ExecutionException,
CancellationException, TimeoutException;
}
There are several ways to create a Future to describe a task. The submit methods in ExecutorService all return a
Future, so that you can submit a Runnable or a Callable to an executor and get back a Future that can be used to
retrieve the result or cancel the task. You can also explicitly instantiate a FutureTask for a given Runnable or Callable.
(Because FutureTask implements Runnable, it can be submitted to an Executor for execution or executed directly by
calling its run method.)
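A minimal sketch of using FutureTask directly (the computation is a placeholder, and the caller must handle InterruptedException and ExecutionException):
FutureTask<String> ft = new FutureTask<String>(new Callable<String>() {
    public String call() { return expensiveComputation(); }  // placeholder
});
ft.run();             // run in the current thread, or hand ft to an Executor
String s = ft.get();  // returns immediately here, since the task has completed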
As of Java 6, ExecutorService implementations can override newTaskFor in AbstractExecutorService to control
instantiation of the Future corresponding to a submitted Callable or Runnable. The default implementation just
creates a new FutureTask, as shown in Listing 6.12.
Listing 6.12. Default Implementation of newTaskFor in ThreadPoolExecutor.
protected <T> RunnableFuture<T> newTaskFor(Callable<T> task) {
return new FutureTask<T>(task);
}
Submitting a Runnable or Callable to an Executor constitutes a safe publication (see Section 3.5) of the Runnable or
Callable from the submitting thread to the thread that will eventually execute the task. Similarly, setting the result
value for a Future constitutes a safe publication of the result from the thread in which it was computed to any thread
that retrieves it via get.
6.3.3. Example: Page Renderer with Future
As a first step towards making the page renderer more concurrent, let's divide it into two tasks, one that renders the
text and one that downloads all the images. (Because one task is largely CPU‐bound and the other is largely I/O‐bound,
this approach may yield improvements even on single‐CPU systems.)
Callable and Future can help us express the interaction between these cooperating tasks. In FutureRenderer in
Listing 6.13, we create a Callable to download all the images, and submit it to an ExecutorService. This returns a
Future describing the task's execution; when the main task gets to the point where it needs the images, it waits for the
result by calling Future.get. If we're lucky, the results will already be ready by the time we ask; otherwise, at least we
got a head start on downloading the images.
The state‐dependent nature of get means that the caller need not be aware of the state of the task, and the safe
publication properties of task submission and result retrieval make this approach thread‐safe. The exception handling
code surrounding Future.get deals with two possible problems: that the task encountered an Exception, or the thread
calling get was interrupted before the results were available. (See Sections 5.5.2 and 5.4.)
FutureRenderer allows the text to be rendered concurrently with downloading the image data. When all the images
are downloaded, they are rendered onto the page. This is an improvement in that the user sees a result quickly and it
exploits some parallelism, but we can do considerably better. There is no need for users to wait for all the images to be
downloaded; they would probably prefer to see individual images drawn as they become available.
6.3.4. Limitations of Parallelizing Heterogeneous Tasks
In the last example, we tried to execute two different types of tasks in parallel ‐ downloading the images and rendering
the page. But obtaining significant performance improvements by trying to parallelize sequential heterogeneous tasks
can be tricky.
Two people can divide the work of cleaning the dinner dishes fairly effectively: one person washes while the other dries.
However, assigning a different type of task to each worker does not scale well; if several more people show up, it is not
obvious how they can help without getting in the way or significantly restructuring the division of labor. Without finding
finer‐grained parallelism among similar tasks, this approach will yield diminishing returns.
A further problem with dividing heterogeneous tasks among multiple workers is that the tasks may have disparate sizes.
If you divide tasks A and B between two workers but A takes ten times as long as B, you've only speeded up the total
process by 9%. Finally, dividing a task among multiple workers always involves some amount of coordination overhead;
for the division to be worthwhile, this overhead must be more than compensated by productivity improvements due to
parallelism.
FutureRenderer uses two tasks: one for rendering text and one for downloading the images. If rendering the text is
much faster than downloading the images, as is entirely possible, the resulting performance is not much different from
the sequential version, but the code is a lot more complicated. And the best we can do with two threads is speed things
up by a factor of two. Thus, trying to increase concurrency by parallelizing heterogeneous activities can be a lot of work,
and there is a limit to how much additional concurrency you can get out of it. (See Sections 11.4.2 and 11.4.3 for
another example of the same phenomenon.)
Listing 6.13. Waiting for Image Download with Future.
public class FutureRenderer {
private final ExecutorService executor = ...;
void renderPage(CharSequence source) {
final List<ImageInfo> imageInfos = scanForImageInfo(source);
Callable<List<ImageData>> task =
new Callable<List<ImageData>>() {
public List<ImageData> call() {
List<ImageData> result
= new ArrayList<ImageData>();
for (ImageInfo imageInfo : imageInfos)
result.add(imageInfo.downloadImage());
return result;
}
};
Future<List<ImageData>> future = executor.submit(task);
renderText(source);
try {
List<ImageData> imageData = future.get();
for (ImageData data : imageData)
renderImage(data);
} catch (InterruptedException e) {
// Re-assert the thread's interrupted status
Thread.currentThread().interrupt();
// We don't need the result, so cancel the task too
future.cancel(true);
} catch (ExecutionException e) {
throw launderThrowable(e.getCause());
}
}
}
The real performance payoff of dividing a program's workload into tasks comes when there are a large number of
independent, homogeneous tasks that can be processed concurrently.
6.3.5. CompletionService: Executor Meets BlockingQueue
If you have a batch of computations to submit to an Executor and you want to retrieve their results as they become
available, you could retain the Future associated with each task and repeatedly poll for completion by calling get with a
timeout of zero. This is possible, but tedious. Fortunately there is a better way: a completion service.
CompletionService combines the functionality of an Executor and a BlockingQueue. You can submit Callable tasks
to it for execution and use the queue‐like methods take and poll to retrieve completed results, packaged as Futures,
as they become available. ExecutorCompletionService implements CompletionService, delegating the computation
to an Executor.
The implementation of ExecutorCompletionService is quite straightforward. The constructor creates a
BlockingQueue to hold the completed results. FutureTask has a done method that is called when the computation
completes. When a task is submitted, it is wrapped with a QueueingFuture, a subclass of FutureTask that overrides
done to place the result on the BlockingQueue, as shown in Listing 6.14. The take and poll methods delegate to the
BlockingQueue, blocking if results are not yet available.
Listing 6.14. QueueingFuture Class Used By ExecutorCompletionService.
private class QueueingFuture<V> extends FutureTask<V> {
QueueingFuture(Callable<V> c) { super(c); }
QueueingFuture(Runnable t, V r) { super(t, r); }
protected void done() {
completionQueue.add(this);
}
}
6.3.6. Example: Page Renderer with CompletionService
We can use a CompletionService to improve the performance of the page renderer in two ways: shorter total runtime
and improved responsiveness. We can create a separate task for downloading each image and execute them in a thread
pool, turning the sequential download into a parallel one: this reduces the amount of time to download all the images.
And by fetching results from the CompletionService and rendering each image as soon as it is available, we can give
the user a more dynamic and responsive user interface. This implementation is shown in Renderer in Listing 6.15.
Listing 6.15. Using CompletionService to Render Page Elements as they Become Available.
public class Renderer {
private final ExecutorService executor;
Renderer(ExecutorService executor) { this.executor = executor; }
void renderPage(CharSequence source) {
final List<ImageInfo> info = scanForImageInfo(source);
CompletionService<ImageData> completionService =
new ExecutorCompletionService<ImageData>(executor);
for (final ImageInfo imageInfo : info)
completionService.submit(new Callable<ImageData>() {
public ImageData call() {
return imageInfo.downloadImage();
}
});
renderText(source);
try {
for (int t = 0, n = info.size(); t < n; t++) {
Future<ImageData> f = completionService.take();
ImageData imageData = f.get();
renderImage(imageData);
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} catch (ExecutionException e) {
throw launderThrowable(e.getCause());
}
}
}
Multiple ExecutorCompletionServices can share a single Executor, so it is perfectly sensible to create an
ExecutorCompletionService that is private to a particular computation while sharing a common Executor. When used
in this way, a CompletionService acts as a handle for a batch of computations in much the same way that a Future
acts as a handle for a single computation. By remembering how many tasks were submitted to the CompletionService
and counting how many completed results are retrieved, you can know when all the results for a given batch have been
retrieved, even if you use a shared Executor.
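A sketch of that counting idiom over a shared Executor (the Result type, sharedExec, batch, and process are placeholders; exception handling is elided):
CompletionService<Result> cs =
    new ExecutorCompletionService<Result>(sharedExec);
int submitted = 0;
for (Callable<Result> task : batch) {
    cs.submit(task);
    submitted++;
}
for (int i = 0; i < submitted; i++)
    process(cs.take().get());  // every result retrieved here belongs to this batch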
6.3.7. Placing Time Limits on Tasks
Sometimes, if an activity does not complete within a certain amount of time, the result is no longer needed and the
activity can be abandoned. For example, a web application may fetch its advertisements from an external ad server, but
if the ad is not available within two seconds, it instead displays a default advertisement so that ad unavailability does
not undermine the site's responsiveness requirements. Similarly, a portal site may fetch data in parallel from multiple
data sources, but may be willing to wait only a certain amount of time for data to be available before rendering the page
without it.
The primary challenge in executing tasks within a time budget is making sure that you don't wait longer than the time
budget to get an answer or find out that one is not forthcoming. The timed version of Future.get supports this
requirement: it returns as soon as the result is ready, but throws TimeoutException if the result is not ready within the
timeout period.
A secondary problem when using timed tasks is to stop them when they run out of time, so they do not waste
computing resources by continuing to compute a result that will not be used. This can be accomplished by having the
task strictly manage its own time budget and abort if it runs out of time, or by cancelling the task if the timeout expires.
Again, Future can help; if a timed get completes with a TimeoutException, you can cancel the task through the
Future. If the task is written to be cancellable (see Chapter 7), it can be terminated early so as not to consume excessive
resources. This technique is used in Listings 6.13 and 6.16.
Listing 6.16 shows a typical application of a timed Future.get. It generates a composite web page that contains the
requested content plus an advertisement fetched from an ad server. It submits the ad‐fetching task to an executor,
computes the rest of the page content, and then waits for the ad until its time budget runs out.[8] If the get times out, it
cancels[9] the ad‐fetching task and uses a default advertisement instead.
[8] The timeout passed to get is computed by subtracting the current time from the deadline; this may in fact yield a negative number, but all the
timed methods in java.util.concurrent treat negative timeouts as zero, so no extra code is needed to deal with this case.
[9] The true parameter to Future.cancel means that the task thread can be interrupted if the task is currently running; see Chapter 7.
6.3.8. Example: A Travel Reservations Portal
The time‐budgeting approach in the previous section can be easily generalized to an arbitrary number of tasks. Consider
a travel reservation portal: the user enters travel dates and requirements and the portal fetches and displays bids from
a number of airlines, hotels or car rental companies. Depending on the company, fetching a bid might involve invoking a
web service, consulting a database, performing an EDI transaction, or some other mechanism. Rather than have the
response time for the page be driven by the slowest response, it may be preferable to present only the information
available within a given time budget. For providers that do not respond in time, the page could either omit them
completely or display a placeholder such as "Did not hear from Air Java in time."
Listing 6.16. Fetching an Advertisement with a Time Budget.
Page renderPageWithAd() throws InterruptedException {
long endNanos = System.nanoTime() + TIME_BUDGET;
Future<Ad> f = exec.submit(new FetchAdTask());
// Render the page while waiting for the ad
Page page = renderPageBody();
Ad ad;
try {
// Only wait for the remaining time budget
long timeLeft = endNanos - System.nanoTime();
ad = f.get(timeLeft, NANOSECONDS);
} catch (ExecutionException e) {
ad = DEFAULT_AD;
} catch (TimeoutException e) {
ad = DEFAULT_AD;
f.cancel(true);
}
page.setAd(ad);
return page;
}
Fetching a bid from one company is independent of fetching bids from another, so fetching a single bid is a sensible task
boundary that allows bid retrieval to proceed concurrently. It would be easy enough to create n tasks, submit them to a
thread pool, retain the Futures, and use a timed get to fetch each result sequentially via its Future, but there is an
even easier way ‐ invokeAll.
Listing 6.17 uses the timed version of invokeAll to submit multiple tasks to an ExecutorService and retrieve the
results. The invokeAll method takes a collection of tasks and returns a collection of Futures. The two collections have
identical structures; invokeAll adds the Futures to the returned collection in the order imposed by the task collection's
iterator, thus allowing the caller to associate a Future with the Callable it represents. The timed version of invokeAll
will return when all the tasks have completed, the calling thread is interrupted, or the timeout expires. Any tasks that
are not complete when the timeout expires are cancelled. On return from invokeAll, each task will have either
completed normally or been cancelled; the client code can call get or isCancelled to find out which.
Summary
Structuring applications around the execution of tasks can simplify development and facilitate concurrency. The
Executor framework permits you to decouple task submission from execution policy and supports a rich variety of
execution policies; whenever you find yourself creating threads to perform tasks, consider using an Executor instead.
To maximize the benefit of decomposing an application into tasks, you must identify sensible task boundaries. In some
applications, the obvious task boundaries work well, whereas in others some analysis may be required to uncover finer-grained
exploitable parallelism.
Listing 6.17. Requesting Travel Quotes Under a Time Budget.
private class QuoteTask implements Callable<TravelQuote> {
private final TravelCompany company;
private final TravelInfo travelInfo;
...
public TravelQuote call() throws Exception {
return company.solicitQuote(travelInfo);
}
}
public List<TravelQuote> getRankedTravelQuotes(
TravelInfo travelInfo, Set<TravelCompany> companies,
Comparator<TravelQuote> ranking, long time, TimeUnit unit)
throws InterruptedException {
List<QuoteTask> tasks = new ArrayList<QuoteTask>();
for (TravelCompany company : companies)
tasks.add(new QuoteTask(company, travelInfo));
List<Future<TravelQuote>> futures =
exec.invokeAll(tasks, time, unit);
List<TravelQuote> quotes =
new ArrayList<TravelQuote>(tasks.size());
Iterator<QuoteTask> taskIter = tasks.iterator();
for (Future<TravelQuote> f : futures) {
QuoteTask task = taskIter.next();
try {
quotes.add(f.get());
} catch (ExecutionException e) {
quotes.add(task.getFailureQuote(e.getCause()));
} catch (CancellationException e) {
quotes.add(task.getTimeoutQuote(e));
}
}
Collections.sort(quotes, ranking);
return quotes;
}
Chapter 7. Cancellation and Shutdown
It is easy to start tasks and threads. Most of the time we allow them to decide when to stop by letting them run to
completion. Sometimes, however, we want to stop tasks or threads earlier than they would on their own, perhaps
because the user cancelled an operation or the application needs to shut down quickly.
Getting tasks and threads to stop safely, quickly, and reliably is not always easy. Java does not provide any mechanism
for safely forcing a thread to stop what it is doing.[1] Instead, it provides interruption, a cooperative mechanism that lets
one thread ask another to stop what it is doing.
[1] The deprecated Thread.stop and suspend methods were an attempt to provide such a mechanism, but were quickly realized to be seriously
flawed and should be avoided. See
http://java.sun.com/j2se/1.5.0/docs/guide/misc/threadPrimitiveDeprecation.html for an explanation of the
problems with these methods.
The cooperative approach is required because we rarely want a task, thread, or service to stop immediately, since that
could leave shared data structures in an inconsistent state. Instead, tasks and services can be coded so that, when
requested, they clean up any work currently in progress and then terminate. This provides greater flexibility, since the
task code itself is usually better able to assess the cleanup required than is the code requesting cancellation.
End‐of‐lifecycle issues can complicate the design and implementation of tasks, services, and applications, and this
important element of program design is too often ignored. Dealing well with failure, shutdown, and cancellation is one
of the characteristics that distinguish a well‐behaved application from one that merely works. This chapter addresses
mechanisms for cancellation and interruption, and how to code tasks and services to be responsive to cancellation
requests.
7.1. Task Cancellation
An activity is cancellable if external code can move it to completion before its normal completion. There are a number
of reasons why you might want to cancel an activity:
User‐requested cancellation. The user clicked on the "cancel" button in a GUI application, or requested cancellation
through a management interface such as JMX (Java Management Extensions).
Time‐limited activities. An application searches a problem space for a finite amount of time and chooses the best
solution found within that time. When the timer expires, any tasks still searching are cancelled.
Application events. An application searches a problem space by decomposing it so that different tasks search different
regions of the problem space. When one task finds a solution, all other tasks still searching are cancelled.
Errors. A web crawler searches for relevant pages, storing pages or summary data to disk. When a crawler task
encounters an error (for example, the disk is full), other crawling tasks are cancelled, possibly recording their current
state so that they can be restarted later.
Shutdown. When an application or service is shut down, something must be done about work that is currently being
processed or queued for processing. In a graceful shutdown, tasks currently in progress might be allowed to complete;
in a more immediate shutdown, currently executing tasks might be cancelled.
There is no safe way to preemptively stop a thread in Java, and therefore no safe way to preemptively stop a task. There
are only cooperative mechanisms, by which the task and the code requesting cancellation follow an agreed‐upon
protocol.
One such cooperative mechanism is setting a "cancellation requested" flag that the task checks periodically; if it finds
the flag set, the task terminates early. PrimeGenerator in Listing 7.1, which enumerates prime numbers until it is
cancelled, illustrates this technique. The cancel method sets the cancelled flag, and the main loop polls this flag
before searching for the next prime number. (For this to work reliably, cancelled must be volatile.)
Listing 7.2 shows a sample use of this class that lets the prime generator run for one second before cancelling it. The
generator won't necessarily stop after exactly one second, since there may be some delay between the time that
cancellation is requested and the time that the run loop next checks for cancellation. The cancel method is called from
a finally block to ensure that the prime generator is cancelled even if the call to sleep is interrupted. If cancel were
not called, the prime‐seeking thread would run forever, consuming CPU cycles and preventing the JVM from exiting.
A task that wants to be cancellable must have a cancellation policy that specifies the "how", "when", and "what" of
cancellation ‐ how other code can request cancellation, when the task checks whether cancellation has been requested,
and what actions the task takes in response to a cancellation request.
Consider the real-world example of stopping payment on a check. Banks have rules about how to submit a stop-payment
request, what responsiveness guarantees the bank makes in processing such requests, and what procedures it follows
when payment is actually stopped (such as notifying the other bank involved in the transaction and assessing a fee
against the payer's account). Taken together, these procedures and guarantees comprise the cancellation policy for
check payment.
Listing 7.1. Using a Volatile Field to Hold Cancellation State.
@ThreadSafe
public class PrimeGenerator implements Runnable {
@GuardedBy("this")
private final List<BigInteger> primes
= new ArrayList<BigInteger>();
private volatile boolean cancelled;
public void run() {
BigInteger p = BigInteger.ONE;
while (!cancelled) {
p = p.nextProbablePrime();
synchronized (this) {
primes.add(p);
}
}
}
public void cancel() { cancelled = true; }
public synchronized List<BigInteger> get() {
return new ArrayList<BigInteger>(primes);
}
}
Listing 7.2. Generating a Second's Worth of Prime Numbers.
List<BigInteger> aSecondOfPrimes() throws InterruptedException {
    PrimeGenerator generator = new PrimeGenerator();
    new Thread(generator).start();
    try {
        SECONDS.sleep(1);
    } finally {
        generator.cancel();
    }
    return generator.get();
}
PrimeGenerator uses a simple cancellation policy: client code requests cancellation by calling cancel; PrimeGenerator checks for cancellation once per prime found and exits when it detects that cancellation has been requested.
7.1.1. Interruption
The cancellation mechanism in PrimeGenerator will eventually cause the prime‐seeking task to exit, but it might take a
while. If, however, a task that uses this approach calls a blocking method such as BlockingQueue.put, we could have a
more serious problem ‐ the task might never check the cancellation flag and therefore might never terminate.
BrokenPrimeProducer in Listing 7.3 illustrates this problem. The producer thread generates primes and places them on a blocking queue. If the producer gets ahead of the consumer, the queue will fill up and put will block. What happens if the consumer tries to cancel the producer task while it is blocked in put? It can call cancel, which will set the cancelled flag - but the producer will never check the flag because it will never emerge from the blocking put (because the consumer has stopped retrieving primes from the queue).
As we hinted in Chapter 5, certain blocking library methods support interruption. Thread interruption is a cooperative
mechanism for a thread to signal another thread that it should, at its convenience and if it feels like it, stop what it is
doing and do something else.
There is nothing in the API or language specification that ties interruption to any specific cancellation semantics, but in
practice, using interruption for anything but cancellation is fragile and difficult to sustain in larger applications.
Each thread has a boolean interrupted status; interrupting a thread sets its interrupted status to true. Thread contains
methods for interrupting a thread and querying the interrupted status of a thread, as shown in Listing 7.4. The
interrupt method interrupts the target thread, and isInterrupted returns the interrupted status of the target thread.
The poorly named static interrupted method clears the interrupted status of the current thread and returns its
previous value; this is the only way to clear the interrupted status.
Blocking library methods like Thread.sleep and Object.wait try to detect when a thread has been interrupted and
return early. They respond to interruption by clearing the interrupted status and throwing InterruptedException,
indicating that the blocking operation completed early due to interruption. The JVM makes no guarantees on how
quickly a blocking method will detect interruption, but in practice this happens reasonably quickly.
Listing 7.3. Unreliable Cancellation that can Leave Producers Stuck in a Blocking Operation. Don't Do this.
class BrokenPrimeProducer extends Thread {
    private final BlockingQueue<BigInteger> queue;
    private volatile boolean cancelled = false;

    BrokenPrimeProducer(BlockingQueue<BigInteger> queue) {
        this.queue = queue;
    }

    public void run() {
        try {
            BigInteger p = BigInteger.ONE;
            while (!cancelled)
                queue.put(p = p.nextProbablePrime());
        } catch (InterruptedException consumed) { }
    }

    public void cancel() { cancelled = true; }
}

void consumePrimes() throws InterruptedException {
    BlockingQueue<BigInteger> primes = ...;
    BrokenPrimeProducer producer = new BrokenPrimeProducer(primes);
    producer.start();
    try {
        while (needMorePrimes())
            consume(primes.take());
    } finally {
        producer.cancel();
    }
}
Listing 7.4. Interruption Methods in Thread.
public class Thread {
    public void interrupt() { ... }
    public boolean isInterrupted() { ... }
    public static boolean interrupted() { ... }
    ...
}
If a thread is interrupted when it is not blocked, its interrupted status is set, and it is up to the activity being cancelled to poll the interrupted status to detect interruption. In this way interruption is "sticky": if it doesn't trigger an InterruptedException, evidence of interruption persists until someone deliberately clears the interrupted status.
Calling interrupt does not necessarily stop the target thread from doing what it is doing; it merely delivers the
message that interruption has been requested.
A good way to think about interruption is that it does not actually interrupt a running thread; it just requests that the
thread interrupt itself at the next convenient opportunity. (These opportunities are called cancellation points.) Some
methods, such as wait, sleep, and join, take such requests seriously, throwing an exception when they receive an
interrupt request or encounter an already-set interrupt status upon entry. Well-behaved methods may totally ignore
such requests so long as they leave the interruption request in place so that calling code can do something with it.
Poorly behaved methods swallow the interrupt request, thus denying code further up the call stack the opportunity to
act on it.
The static interrupted method should be used with caution, because it clears the current thread's interrupted status. If
you call interrupted and it returns true, unless you are planning to swallow the interruption, you should do something
with it ‐ either throw InterruptedException or restore the interrupted status by calling interrupt again, as in Listing
5.10 on page 94.
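That discipline can be packaged as tiny helpers. The sketch below is not from the book (the class and method names are invented for illustration); it merely restates the rule in code: once interrupted returns true, either convert the request into an InterruptedException or put the status back.
public class InterruptUtils {
    // Hypothetical helper: poll-and-clear, converting the request
    // into an InterruptedException for the caller to handle.
    public static void throwIfInterrupted() throws InterruptedException {
        if (Thread.interrupted())               // clears the status
            throw new InterruptedException();
    }

    // Hypothetical helper: poll without losing the request by
    // re-asserting the status if it was set.
    public static boolean pollInterrupted() {
        if (Thread.interrupted()) {             // clears the status...
            Thread.currentThread().interrupt(); // ...so restore it for callers
            return true;
        }
        return false;
    }
}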
BrokenPrimeProducer illustrates how custom cancellation mechanisms do not always interact well with blocking library
methods. If you code your tasks to be responsive to interruption, you can use interruption as your cancellation
mechanism and take advantage of the interruption support provided by many library classes.
Interruption is usually the most sensible way to implement cancellation.
BrokenPrimeProducer can be easily fixed (and simplified) by using interruption instead of a boolean flag to request
cancellation, as shown in Listing 7.5. There are two points in each loop iteration where interruption may be detected: in
the blocking put call, and by explicitly polling the interrupted status in the loop header. The explicit test is not strictly
necessary here because of the blocking put call, but it makes PrimeProducer more responsive to interruption because
it checks for interruption before starting the lengthy task of searching for a prime, rather than after. When calls to
interruptible blocking methods are not frequent enough to deliver the desired responsiveness, explicitly testing the
interrupted status can help.
Listing 7.5. Using Interruption for Cancellation.
class PrimeProducer extends Thread {
    private final BlockingQueue<BigInteger> queue;

    PrimeProducer(BlockingQueue<BigInteger> queue) {
        this.queue = queue;
    }

    public void run() {
        try {
            BigInteger p = BigInteger.ONE;
            while (!Thread.currentThread().isInterrupted())
                queue.put(p = p.nextProbablePrime());
        } catch (InterruptedException consumed) {
            /* Allow thread to exit */
        }
    }

    public void cancel() { interrupt(); }
}
7.1.2. Interruption Policies
Just as tasks should have a cancellation policy, threads should have an interruption policy. An interruption policy
determines how a thread interprets an interruption request ‐ what it does (if anything) when one is detected, what units
of work are considered atomic with respect to interruption, and how quickly it reacts to interruption.
The most sensible interruption policy is some form of thread‐level or service‐level cancellation: exit as quickly as
practical, cleaning up if necessary, and possibly notifying some owning entity that the thread is exiting. It is possible to
establish other interruption policies, such as pausing or resuming a service, but threads or thread pools with
nonstandard interruption policies may need to be restricted to tasks that have been written with an awareness of the
policy.
It is important to distinguish between how tasks and threads should react to interruption. A single interrupt request may have more than one desired recipient - interrupting a worker thread in a thread pool can mean both "cancel the current task" and "shut down the worker thread".
Tasks do not execute in threads they own; they borrow threads owned by a service such as a thread pool. Code that
doesn't own the thread (for a thread pool, any code outside of the thread pool implementation) should be careful to
preserve the interrupted status so that the owning code can eventually act on it, even if the "guest" code acts on the
interruption as well. (If you are house‐sitting for someone, you don't throw out the mail that comes while they're away ‐
you save it and let them deal with it when they get back, even if you do read their magazines.)
This is why most blocking library methods simply throw InterruptedException in response to an interrupt. They will
never execute in a thread they own, so they implement the most reasonable cancellation policy for task or library code:
get out of the way as quickly as possible and communicate the interruption back to the caller so that code higher up on
the call stack can take further action.
A task needn't necessarily drop everything when it detects an interruption request ‐ it can choose to postpone it until a
more opportune time by remembering that it was interrupted, finishing the task it was performing, and then throwing
InterruptedException or otherwise indicating interruption. This technique can protect data structures from
corruption when an activity is interrupted in the middle of an update.
A task should not assume anything about the interruption policy of its executing thread unless it is explicitly designed to
run within a service that has a specific interruption policy. Whether a task interprets interruption as cancellation or
takes some other action on interruption, it should take care to preserve the executing thread's interruption status. If it is
not simply going to propagate InterruptedException to its caller, it should restore the interruption status after
catching InterruptedException:
Thread.currentThread().interrupt();
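A minimal sketch of this postponement technique follows (not from the book; moreWork and doAtomicUnitOfWork are hypothetical placeholders): the task remembers the interrupt, finishes the current unit of work, and only then stops, restoring the status for its owner.
class DeferredInterruptTask implements Runnable {
    public void run() {
        boolean interrupted = false;
        try {
            while (moreWork() && !interrupted) {
                doAtomicUnitOfWork();       // never abandoned mid-update
                if (Thread.interrupted())   // poll and clear after each unit
                    interrupted = true;     // remember; stop before the next unit
            }
        } finally {
            if (interrupted)
                Thread.currentThread().interrupt(); // restore for the owner
        }
    }
    private boolean moreWork() { return true; }  // placeholder
    private void doAtomicUnitOfWork() { }        // placeholder
}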
Just as task code should not make assumptions about what interruption means to its executing thread, cancellation
code should not make assumptions about the interruption policy of arbitrary threads. A thread should be interrupted
only by its owner; the owner can encapsulate knowledge of the thread's interruption policy in an appropriate
cancellation mechanism such as a shutdown method.
Because each thread has its own interruption policy, you should not interrupt a thread unless you know what
interruption means to that thread.
Critics have derided the Java interruption facility because it does not provide a preemptive interruption capability and
yet forces developers to handle InterruptedException. However, the ability to postpone an interruption request
enables developers to craft flexible interruption policies that balance responsiveness and robustness as appropriate for
the application.
7.1.3. Responding to Interruption
As mentioned in Section 5.4, when you call an interruptible blocking method such as Thread.sleep or
BlockingQueue.put, there are two practical strategies for handling InterruptedException:
• Propagate the exception (possibly after some task‐specific cleanup), making your method an interruptible
blocking method, too; or
• Restore the interruption status so that code higher up on the call stack can deal with it.
Propagating InterruptedException can be as easy as adding InterruptedException to the throws clause, as shown
by getNextTask in Listing 7.6.
Listing 7.6. Propagating InterruptedException to Callers.
BlockingQueue<Task> queue;
...
public Task getNextTask() throws InterruptedException {
    return queue.take();
}
If you don't want to or cannot propagate InterruptedException (perhaps because your task is defined by a Runnable),
you need to find another way to preserve the interruption request. The standard way to do this is to restore the
interrupted status by calling interrupt again. What you should not do is swallow the InterruptedException by
catching it and doing nothing in the catch block, unless your code is actually implementing the interruption policy for a
thread. PrimeProducer swallows the interrupt, but does so with the knowledge that the thread is about to terminate
and that therefore there is no code higher up on the call stack that needs to know about the interruption. Most code
does not know what thread it will run in and so should preserve the interrupted status.
Only code that implements a thread's interruption policy may swallow an interruption request. General‐purpose task
and library code should never swallow interruption requests.
Activities that do not support cancellation but still call interruptible blocking methods will have to call them in a loop,
retrying when interruption is detected. In this case, they should save the interruption status locally and restore it just
before returning, as shown in Listing 7.7, rather than immediately upon catching InterruptedException. Setting the
interrupted status too early could result in an infinite loop, because most interruptible blocking methods check the
interrupted status on entry and throw InterruptedException immediately if it is set. (Interruptible methods usually
poll for interruption before blocking or doing any significant work, so as to be as responsive to interruption as possible.)
If your code does not call interruptible blocking methods, it can still be made responsive to interruption by polling the
current thread's interrupted status throughout the task code. Choosing a polling frequency is a tradeoff between
efficiency and responsiveness. If you have high responsiveness requirements, you cannot call potentially long‐running
methods that are not themselves responsive to interruption, potentially restricting your options for calling library code.
Cancellation can involve state other than the interruption status; interruption can be used to get the thread's attention,
and information stored elsewhere by the interrupting thread can be used to provide further instructions for the
interrupted thread. (Be sure to use synchronization when accessing that information.)
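For instance, a canceller might record a reason before interrupting, as in this sketch (not from the book; the field and method names are assumptions). Because the field is volatile, the interrupted thread reliably sees the reason when it wakes.
class ReasonedWorker extends Thread {
    private volatile String cancelReason;   // extra cancellation state

    public void cancel(String reason) {
        cancelReason = reason;  // record the reason before interrupting...
        interrupt();            // ...then get the worker's attention
    }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted())
                Thread.sleep(100);          // stand-in for real work
        } catch (InterruptedException e) {
            System.out.println("cancelled: " + cancelReason);
        }
    }
}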
Listing 7.7. Noncancelable Task that Restores Interruption Before Exit.
public Task getNextTask(BlockingQueue<Task> queue) {
    boolean interrupted = false;
    try {
        while (true) {
            try {
                return queue.take();
            } catch (InterruptedException e) {
                interrupted = true;
                // fall through and retry
            }
        }
    } finally {
        if (interrupted)
            Thread.currentThread().interrupt();
    }
}
For example, when a worker thread owned by a ThreadPoolExecutor detects interruption, it checks whether the pool
is being shut down. If so, it performs some pool cleanup before terminating; otherwise it may create a new thread to
restore the thread pool to the desired size.
7.1.4. Example: Timed Run
Many problems can take forever to solve (e.g., enumerate all the prime numbers); for others, the answer might be
found reasonably quickly but also might take forever. Being able to say "spend up to ten minutes looking for the
answer" or "enumerate all the answers you can in ten minutes" can be useful in these situations.
The aSecondOfPrimes method in Listing 7.2 starts a PrimeGenerator and interrupts it after a second. While the
PrimeGenerator might take somewhat longer than a second to stop, it will eventually notice the interrupt and stop,
allowing the thread to terminate. But another aspect of executing a task is that you want to find out if the task throws
an exception. If PrimeGenerator throws an unchecked exception before the timeout expires, it will probably go
unnoticed, since the prime generator runs in a separate thread that does not explicitly handle exceptions.
Listing 7.8 shows an attempt at running an arbitrary Runnable for a given amount of time. It runs the task in the calling
thread and schedules a cancellation task to interrupt it after a given time interval. This addresses the problem of
unchecked exceptions thrown from the task, since they can then be caught by the caller of timedRun.
This is an appealingly simple approach, but it violates the rules: you should know a thread's interruption policy before
interrupting it. Since timedRun can be called from an arbitrary thread, it cannot know the calling thread's interruption
policy. If the task completes before the timeout, the cancellation task that interrupts the thread in which timedRun was
called could go off after timedRun has returned to its caller. We don't know what code will be running when that
happens, but the result won't be good. (It is possible but surprisingly tricky to eliminate this risk by using the
ScheduledFuture returned by schedule to cancel the cancellation task.)
Listing 7.8. Scheduling an Interrupt on a Borrowed Thread. Don't Do this.
private static final ScheduledExecutorService cancelExec = ...;

public static void timedRun(Runnable r,
                            long timeout, TimeUnit unit) {
    final Thread taskThread = Thread.currentThread();
    cancelExec.schedule(new Runnable() {
        public void run() { taskThread.interrupt(); }
    }, timeout, unit);
    r.run();
}
Further, if the task is not responsive to interruption, timedRun will not return until the task finishes, which may be long
after the desired timeout (or even not at all). A timed run service that doesn't return after the specified time is likely to
be irritating to its callers.
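For completeness, here is a sketch of the ScheduledFuture-based mitigation mentioned parenthetically above (an assumption-laden sketch reusing cancelExec from Listing 7.8, not the book's code). Cancelling the interruptor on the way out prevents most late interrupts, but a small race remains, and clearing the status afterward could swallow a genuine interrupt aimed at the caller - part of why this is tricky and why the dedicated-thread version below is preferable.
public static void timedRun(Runnable r,
                            long timeout, TimeUnit unit) {
    final Thread taskThread = Thread.currentThread();
    ScheduledFuture<?> interruptor = cancelExec.schedule(new Runnable() {
        public void run() { taskThread.interrupt(); }
    }, timeout, unit);
    try {
        r.run();
    } finally {
        interruptor.cancel(false);  // best effort: stop a pending interrupt
        Thread.interrupted();       // clear one that may already have fired
    }
}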
Listing 7.9 addresses the exception‐handling problem of aSecondOfPrimes and the problems with the previous attempt.
The thread created to run the task can have its own execution policy, and even if the task doesn't respond to the
interrupt, the timed run method can still return to its caller. After starting the task thread, timedRun executes a timed
join with the newly created thread. After join returns, it checks if an exception was thrown from the task and if so,
rethrows it in the thread calling timedRun. The saved Throwable is shared between the two threads, and so is declared
volatile to safely publish it from the task thread to the timedRun thread.
This version addresses the problems in the previous examples, but because it relies on a timed join, it shares a
deficiency with join: we don't know if control was returned because the thread exited normally or because the join
timed out.[2]
[2] This is a flaw in the Thread API, because whether or not the join completes successfully has memory visibility consequences in the Java
Memory Model, but join does not return a status indicating whether it was successful.
7.1.5. Cancellation Via Future
We've already used an abstraction for managing the lifecycle of a task, dealing with exceptions, and facilitating cancellation: Future. Following the general principle that it is better to use existing library classes than to roll your own, let's build timedRun using Future and the task execution framework.
Listing 7.9. Interrupting a Task in a Dedicated Thread.
public static void timedRun(final Runnable r,
                            long timeout, TimeUnit unit)
        throws InterruptedException {
    class RethrowableTask implements Runnable {
        private volatile Throwable t;
        public void run() {
            try { r.run(); }
            catch (Throwable t) { this.t = t; }
        }
        void rethrow() {
            if (t != null)
                throw launderThrowable(t);
        }
    }

    RethrowableTask task = new RethrowableTask();
    final Thread taskThread = new Thread(task);
    taskThread.start();
    cancelExec.schedule(new Runnable() {
        public void run() { taskThread.interrupt(); }
    }, timeout, unit);
    taskThread.join(unit.toMillis(timeout));
    task.rethrow();
}
ExecutorService.submit returns a Future describing the task. Future has a cancel method that takes a boolean
argument, mayInterruptIfRunning, and returns a value indicating whether the cancellation attempt was successful.
(This tells you only whether it was able to deliver the interruption, not whether the task detected and acted on it.)
When mayInterruptIfRunning is true and the task is currently running in some thread, then that thread is
interrupted. Setting this argument to false means "don't run this task if it hasn't started yet", and should be used for
tasks that are not designed to handle interruption.
Since you shouldn't interrupt a thread unless you know its interruption policy, when is it OK to call cancel with an
argument of true? The task execution threads created by the standard Executor implementations implement an
interruption policy that lets tasks be cancelled using interruption, so it is safe to set mayInterruptIfRunning when
cancelling tasks through their Futures when they are running in a standard Executor. You should not interrupt a pool
thread directly when attempting to cancel a task, because you won't know what task is running when the interrupt
request is delivered ‐ do this only through the task's Future. This is yet another reason to code tasks to treat
interruption as a cancellation request: then they can be cancelled through their Futures.
Listing 7.10 shows a version of timedRun that submits the task to an ExecutorService and retrieves the result with a
timed Future.get. If get terminates with a TimeoutException, the task is cancelled via its Future. (To simplify coding,
this version calls Future.cancel unconditionally in a finally block, taking advantage of the fact that cancelling a
completed task has no effect.) If the underlying computation throws an exception prior to cancellation, it is rethrown
from timedRun, which is the most convenient way for the caller to deal with the exception. Listing 7.10 also illustrates
another good practice: cancelling tasks whose result is no longer needed. (This technique was also used in Listing 6.13
on page 128 and Listing 6.16 on page 132.)
Listing 7.10. Cancelling a Task Using Future.
public static void timedRun(Runnable r,
                            long timeout, TimeUnit unit)
        throws InterruptedException {
    Future<?> task = taskExec.submit(r);
    try {
        task.get(timeout, unit);
    } catch (TimeoutException e) {
        // task will be cancelled below
    } catch (ExecutionException e) {
        // exception thrown in task; rethrow
        throw launderThrowable(e.getCause());
    } finally {
        // Harmless if task already completed
        task.cancel(true); // interrupt if running
    }
}
When Future.get throws InterruptedException or TimeoutException and you know that the result is no longer
needed by the program, cancel the task with Future.cancel.
7.1.6. Dealing with Non‐interruptible Blocking
Many blocking library methods respond to interruption by returning early and throwing InterruptedException, which
makes it easier to build tasks that are responsive to cancellation. However, not all blocking methods or blocking
mechanisms are responsive to interruption; if a thread is blocked performing synchronous socket I/O or waiting to
acquire an intrinsic lock, interruption has no effect other than setting the thread's interrupted status. We can
sometimes convince threads blocked in non‐interruptible activities to stop by means similar to interruption, but this
requires greater awareness of why the thread is blocked.
Synchronous socket I/O in java.io. The common form of blocking I/O in server applications is reading or writing to a
socket. Unfortunately, the read and write methods in InputStream and OutputStream are not responsive to
interruption, but closing the underlying socket makes any threads blocked in read or write throw a SocketException.
Synchronous I/O in java.nio. Interrupting a thread waiting on an InterruptibleChannel causes it to throw
ClosedByInterruptException and close the channel (and also causes all other threads blocked on the channel to
throw ClosedByInterruptException). Closing an InterruptibleChannel causes threads blocked on channel
operations to throw AsynchronousCloseException. Most standard Channels implement InterruptibleChannel.
Asynchronous I/O with Selector. If a thread is blocked in Selector.select (in java.nio.channels), calling wakeup causes it to return prematurely; closing the selector likewise wakes any blocked threads, and subsequent attempts to use the selector throw ClosedSelectorException.
Lock acquisition. If a thread is blocked waiting for an intrinsic lock, there is nothing you can do to stop it short of
ensuring that it eventually acquires the lock and makes enough progress that you can get its attention some other way.
However, the explicit Lock classes offer the lockInterruptibly method, which allows you to wait for a lock and still be
responsive to interrupts ‐ see Chapter 13.
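A minimal sketch of interruptible lock acquisition (not from the book): with lockInterruptibly, the wait for the lock itself becomes a cancellation point, which is impossible with a synchronized block.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLocking {
    private final Lock lock = new ReentrantLock();

    public void doGuardedWork() throws InterruptedException {
        lock.lockInterruptibly(); // throws InterruptedException while waiting
        try {
            // access state guarded by lock
        } finally {
            lock.unlock();
        }
    }
}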
ReaderThread in Listing 7.11 shows a technique for encapsulating nonstandard cancellation. ReaderThread manages a
single socket connection, reading synchronously from the socket and passing any data received to processBuffer. To
facilitate terminating a user connection or shutting down the server, ReaderThread overrides interrupt to both deliver
a standard interrupt and close the underlying socket; thus interrupting a ReaderThread makes it stop what it is doing
whether it is blocked in read or in an interruptible blocking method.
7.1.7. Encapsulating Nonstandard Cancellation with newTaskFor
The technique used in ReaderThread to encapsulate nonstandard cancellation can be refined using the newTaskFor
hook added to ThreadPoolExecutor in Java 6. When a Callable is submitted to an ExecutorService, submit returns
a Future that can be used to cancel the task. The newTaskFor hook is a factory method that creates the Future
representing the task. It returns a RunnableFuture, an interface that extends both Future and Runnable (and is
implemented by FutureTask).
Customizing the task Future allows you to override Future.cancel. Custom cancellation code can perform logging or
gather statistics on cancellation, and can also be used to cancel activities that are not responsive to interruption.
ReaderThread encapsulates cancellation of socket‐using threads by overriding interrupt; the same can be done for
tasks by overriding Future.cancel.
CancellableTask in Listing 7.12 defines a CancellableTask interface that extends Callable and adds a cancel
method and a newTask factory method for constructing a RunnableFuture. CancellingExecutor extends
ThreadPoolExecutor, and overrides newTaskFor to let a CancellableTask create its own Future.
Listing 7.11. Encapsulating Nonstandard Cancellation in a Thread by Overriding Interrupt.
public class ReaderThread extends Thread {
    private final Socket socket;
    private final InputStream in;

    public ReaderThread(Socket socket) throws IOException {
        this.socket = socket;
        this.in = socket.getInputStream();
    }

    public void interrupt() {
        try {
            socket.close();
        } catch (IOException ignored) {
        } finally {
            super.interrupt();
        }
    }

    public void run() {
        try {
            byte[] buf = new byte[BUFSZ];
            while (true) {
                int count = in.read(buf);
                if (count < 0)
                    break;
                else if (count > 0)
                    processBuffer(buf, count);
            }
        } catch (IOException e) { /* Allow thread to exit */ }
    }
}
SocketUsingTask implements CancellableTask and defines Future.cancel to close the socket as well as call
super.cancel. If a SocketUsingTask is cancelled through its Future, the socket is closed and the executing thread is
interrupted. This increases the task's responsiveness to cancellation: not only can it safely call interruptible blocking
methods while remaining responsive to cancellation, but it can also call blocking socket I/O methods.
7.2. Stopping a Thread-based Service
Applications commonly create services that own threads, such as thread pools, and the lifetime of these services is
usually longer than that of the method that creates them. If the application is to shut down gracefully, the threads
owned by these services need to be terminated. Since there is no preemptive way to stop a thread, they must instead
be persuaded to shut down on their own.
Sensible encapsulation practices dictate that you should not manipulate a thread ‐ interrupt it, modify its priority, etc. ‐
unless you own it. The thread API has no formal concept of thread ownership: a thread is represented with a Thread
object that can be freely shared like any other object. However, it makes sense to think of a thread as having an owner,
and this is usually the class that created the thread. So a thread pool owns its worker threads, and if those threads need
to be interrupted, the thread pool should take care of it.
As with any other encapsulated object, thread ownership is not transitive: the application may own the service and the
service may own the worker threads, but the application doesn't own the worker threads and therefore should not
attempt to stop them directly. Instead, the service should provide lifecycle methods for shutting itself down that also
shut down the owned threads; then the application can shut down the service, and the service can shut down the
threads. ExecutorService provides the shutdown and shutdownNow methods; other thread‐owning services should
provide a similar shutdown mechanism.
Provide lifecycle methods whenever a thread‐owning service has a lifetime longer than that of the method that created
it.
7.2.1. Example: A Logging Service
Most server applications use logging, which can be as simple as inserting println statements into the code. Stream
classes like PrintWriter are thread‐safe, so this simple approach would require no explicit synchronization.[3] However,
as we'll see in Section 11.6, inline logging can have some performance costs in high-volume applications. Another alternative is to have the log call queue the log message for processing by another thread.
[3] If you are logging multiple lines as part of a single log message, you may need to use additional client‐side locking to prevent undesirable
interleaving of output from multiple threads. If two threads logged multiline stack traces to the same stream with one println call per line, the
results would be interleaved unpredictably, and could easily look like one large but meaningless stack trace.
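The client-side locking the footnote describes could look like the following sketch (not from the book; the helper class is hypothetical). As long as every thread emits multi-line messages through this object, each message prints as one uninterleaved burst.
public class MultiLineLogger {
    private final PrintWriter writer;

    public MultiLineLogger(PrintWriter writer) { this.writer = writer; }

    public void logStackTrace(Throwable t) {
        synchronized (this) {   // one message, one atomic burst
            writer.println(t);
            for (StackTraceElement frame : t.getStackTrace())
                writer.println("  at " + frame);
        }
    }
}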
Listing 7.12. Encapsulating Nonstandard Cancellation in a Task with newTaskFor.
public interface CancellableTask<T> extends Callable<T> {
    void cancel();
    RunnableFuture<T> newTask();
}

@ThreadSafe
public class CancellingExecutor extends ThreadPoolExecutor {
    ...
    protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) {
        if (callable instanceof CancellableTask)
            return ((CancellableTask<T>) callable).newTask();
        else
            return super.newTaskFor(callable);
    }
}

public abstract class SocketUsingTask<T>
        implements CancellableTask<T> {
    @GuardedBy("this") private Socket socket;

    protected synchronized void setSocket(Socket s) { socket = s; }

    public synchronized void cancel() {
        try {
            if (socket != null)
                socket.close();
        } catch (IOException ignored) { }
    }

    public RunnableFuture<T> newTask() {
        return new FutureTask<T>(this) {
            public boolean cancel(boolean mayInterruptIfRunning) {
                try {
                    SocketUsingTask.this.cancel();
                } finally {
                    return super.cancel(mayInterruptIfRunning);
                }
            }
        };
    }
}
LogWriter in Listing 7.13 shows a simple logging service in which the logging activity is moved to a separate logger
thread. Instead of having the thread that produces the message write it directly to the output stream, LogWriter hands
it off to the logger thread via a BlockingQueue and the logger thread writes it out. This is a multiple-producer, single-consumer design: any activity calling log is acting as a producer, and the background logger thread is the consumer. If
the logger thread falls behind, the BlockingQueue eventually blocks the producers until the logger thread catches up.
Listing 7.13. Producer-Consumer Logging Service with No Shutdown Support.
public class LogWriter {
    private final BlockingQueue<String> queue;
    private final LoggerThread logger;

    public LogWriter(Writer writer) {
        this.queue = new LinkedBlockingQueue<String>(CAPACITY);
        this.logger = new LoggerThread(writer);
    }

    public void start() { logger.start(); }

    public void log(String msg) throws InterruptedException {
        queue.put(msg);
    }

    private class LoggerThread extends Thread {
        private final PrintWriter writer;
        ...
        public void run() {
            try {
                while (true)
                    writer.println(queue.take());
            } catch (InterruptedException ignored) {
            } finally {
                writer.close();
            }
        }
    }
}
For a service like LogWriter to be useful in production, we need a way to terminate the logger thread so it does not
prevent the JVM from shutting down normally. Stopping the logger thread is easy enough, since it repeatedly calls take,
which is responsive to interruption; if the logger thread is modified to exit on catching InterruptedException, then
interrupting the logger thread stops the service.
However, simply making the logger thread exit is not a very satisfying shutdown mechanism. Such an abrupt shutdown discards log messages that might be waiting to be written to the log, but, more importantly, threads blocked in log because the queue is full will never become unblocked. Cancelling a producer-consumer activity requires cancelling both the producers and the consumers. Interrupting the logger thread deals with the consumer, but because the producers in this case are not dedicated threads, cancelling them is harder.
Another approach to shutting down LogWriter would be to set a "shutdown requested" flag to prevent further
messages from being submitted, as shown in Listing 7.14. The consumer could then drain the queue upon being notified
that shutdown has been requested, writing out any pending messages and unblocking any producers blocked in log.
However, this approach has race conditions that make it unreliable. The implementation of log is a check‐then‐act
sequence: producers could observe that the service has not yet been shut down but still queue messages after the
shutdown, again with the risk that the producer might get blocked in log and never become unblocked. There are tricks
that reduce the likelihood of this (like having the consumer wait several seconds before declaring the queue drained),
but these do not change the fundamental problem, merely the likelihood that it will cause a failure.
Listing 7.14. Unreliable Way to Add Shutdown Support to the Logging Service.
public void log(String msg) throws InterruptedException {
    if (!shutdownRequested)
        queue.put(msg);
    else
        throw new IllegalStateException("logger is shut down");
}
The way to provide reliable shutdown for LogWriter is to fix the race condition, which means making the submission of
a new log message atomic. But we don't want to hold a lock while trying to enqueue the message, since put could block.
Instead, we can atomically check for shutdown and conditionally increment a counter to "reserve" the right to submit a
message, as shown in LogService in Listing 7.15.
7.2.2. ExecutorService Shutdown
In Section 6.2.4, we saw that ExecutorService offers two ways to shut down: graceful shutdown with shutdown, and
abrupt shutdown with shutdownNow. In an abrupt shutdown, shutdownNow returns the list of tasks that had not yet
started after attempting to cancel all actively executing tasks.
Listing 7.15. Adding Reliable Cancellation to LogWriter.
public class LogService {
    private final BlockingQueue<String> queue;
    private final LoggerThread loggerThread;
    private final PrintWriter writer;
    @GuardedBy("this") private boolean isShutdown;
    @GuardedBy("this") private int reservations;

    public void start() { loggerThread.start(); }

    public void stop() {
        synchronized (this) { isShutdown = true; }
        loggerThread.interrupt();
    }

    public void log(String msg) throws InterruptedException {
        synchronized (this) {
            if (isShutdown)
                throw new IllegalStateException(...);
            ++reservations;
        }
        queue.put(msg);
    }

    private class LoggerThread extends Thread {
        public void run() {
            try {
                while (true) {
                    try {
                        // Lock the LogService, not this LoggerThread:
                        // isShutdown and reservations are guarded by the
                        // service's intrinsic lock.
                        synchronized (LogService.this) {
                            if (isShutdown && reservations == 0)
                                break;
                        }
                        String msg = queue.take();
                        synchronized (LogService.this) { --reservations; }
                        writer.println(msg);
                    } catch (InterruptedException e) { /* retry */ }
                }
            } finally {
                writer.close();
            }
        }
    }
}
The two different termination options offer a tradeoff between safety and responsiveness: abrupt termination is faster
but riskier because tasks may be interrupted in the middle of execution, and normal termination is slower but safer
because the ExecutorService does not shut down until all queued tasks are processed. Other thread‐owning services
should consider providing a similar choice of shutdown modes.
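One common way to offer both modes (a sketch, not the book's code) is a two-phase shutdown: try the graceful mode first, and escalate to shutdownNow if the executor does not drain in time.
public static void shutdownAndAwait(ExecutorService exec,
                                    long timeout, TimeUnit unit)
        throws InterruptedException {
    exec.shutdown();                              // no new tasks accepted
    if (!exec.awaitTermination(timeout, unit)) {  // wait for queued tasks
        exec.shutdownNow();                       // escalate: cancel running tasks
        exec.awaitTermination(timeout, unit);     // wait for cancellation to finish
    }
}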
Simple programs can get away with starting and shutting down a global ExecutorService from main. More
sophisticated programs are likely to encapsulate an ExecutorService behind a higher‐level service that provides its
own lifecycle methods, such as the variant of LogService in Listing 7.16 that delegates to an ExecutorService instead
of managing its own threads. Encapsulating an ExecutorService extends the ownership chain from application to
service to thread by adding another link; each member of the chain manages the lifecycle of the services or threads it
owns.
Listing 7.16. Logging Service that Uses an ExecutorService.
public class LogService {
    private final ExecutorService exec = newSingleThreadExecutor();
    ...
    public void start() { }

    public void stop() throws InterruptedException {
        try {
            exec.shutdown();
            exec.awaitTermination(TIMEOUT, UNIT);
        } finally {
            writer.close();
        }
    }

    public void log(String msg) {
        try {
            exec.execute(new WriteTask(msg));
        } catch (RejectedExecutionException ignored) { }
    }
}
7.2.3. Poison Pills
Another way to convince a producer‐consumer service to shut down is with a poison pill: a recognizable object placed
on the queue that means "when you get this, stop." With a FIFO queue, poison pills ensure that consumers finish the
work on their queue before shutting down, since any work submitted prior to submitting the poison pill will be retrieved
before the pill; producers should not submit any work after putting a poison pill on the queue. IndexingService in
Listings 7.17, 7.18, and 7.19 shows a single‐producer, single‐consumer version of the desktop search example from
Listing 5.8 on page 91 that uses a poison pill to shut down the service.
Listing 7.17. Shutdown with Poison Pill.
public class IndexingService {
    private static final File POISON = new File("");
    private final IndexerThread consumer = new IndexerThread();
    private final CrawlerThread producer = new CrawlerThread();
    private final BlockingQueue<File> queue;
    private final FileFilter fileFilter;
    private final File root;

    class CrawlerThread extends Thread { /* Listing 7.18 */ }
    class IndexerThread extends Thread { /* Listing 7.19 */ }

    public void start() {
        producer.start();
        consumer.start();
    }

    public void stop() { producer.interrupt(); }

    public void awaitTermination() throws InterruptedException {
        consumer.join();
    }
}
Poison pills work only when the number of producers and consumers is known. The approach in IndexingService can be extended to multiple producers by having each producer place a pill on the queue and having the consumer stop only when it receives one pill per producer, as sketched below. It can be extended to multiple consumers by having each producer place one pill per consumer on the queue, though this can get unwieldy with large numbers of producers and consumers. Poison pills work reliably only with unbounded queues.
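A pill-counting consumer for the multiple-producer case might look like this sketch (not from the book; the types are illustrative). Each producer puts one POISON on the queue when it finishes, and the consumer exits only after collecting one pill per producer.
class CountingConsumer extends Thread {
    private final BlockingQueue<String> queue;
    private final int producers;  // number of producers, known in advance
    // Shared with the producers; compared by identity, like the File pill above.
    static final String POISON = new String("POISON");

    CountingConsumer(BlockingQueue<String> queue, int producers) {
        this.queue = queue;
        this.producers = producers;
    }

    public void run() {
        int pills = 0;
        try {
            while (pills < producers) {
                String item = queue.take();
                if (item == POISON)   // identity comparison, not equals
                    ++pills;
                else
                    process(item);
            }
        } catch (InterruptedException consumed) { }
    }

    private void process(String item) { /* consume the item */ }
}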
7.2.4. Example: A One-shot Execution Service
If a method needs to process a batch of tasks and does not return until all the tasks are finished, it can simplify service
lifecycle management by using a private Executor whose lifetime is bounded by that method. (The invokeAll and
invokeAny methods can often be useful in such situations.)
The checkMail method in Listing 7.20 checks for new mail in parallel on a number of hosts. It creates a private executor and submits a task for each host; it then shuts down the executor and waits for termination, which occurs when all the mail-checking tasks have completed.[4]
[4] The reason an AtomicBoolean is used instead of a volatile boolean is that in order to access the hasNewMail flag from the inner
Runnable, it would have to be final, which would preclude modifying it.
Listing 7.18. Producer Thread for IndexingService.
public class CrawlerThread extends Thread {
    public void run() {
        try {
            crawl(root);
        } catch (InterruptedException e) { /* fall through */
        } finally {
            while (true) {
                try {
                    queue.put(POISON);
                    break;
                } catch (InterruptedException e1) { /* retry */ }
            }
        }
    }

    private void crawl(File root) throws InterruptedException {
        ...
    }
}
Listing 7.19. Consumer Thread for IndexingService.
public class IndexerThread extends Thread {
    public void run() {
        try {
            while (true) {
                File file = queue.take();
                if (file == POISON)
                    break;
                else
                    indexFile(file);
            }
        } catch (InterruptedException consumed) { }
    }
}
Listing 7.20. Using a Private Executor Whose Lifetime is Bounded by a Method Call.
boolean checkMail(Set<String> hosts, long timeout, TimeUnit unit)
        throws InterruptedException {
    ExecutorService exec = Executors.newCachedThreadPool();
    final AtomicBoolean hasNewMail = new AtomicBoolean(false);
    try {
        for (final String host : hosts)
            exec.execute(new Runnable() {
                public void run() {
                    if (checkMail(host))
                        hasNewMail.set(true);
                }
            });
    } finally {
        exec.shutdown();
        exec.awaitTermination(timeout, unit);
    }
    return hasNewMail.get();
}
7.2.5. Limitations of shutdownNow
When an ExecutorService is shut down abruptly with shutdownNow, it attempts to cancel the tasks currently in
progress and returns a list of tasks that were submitted but never started so that they can be logged or saved for later
processing.[5]
[5] The Runnable objects returned by shutdownNow might not be the same objects that were submitted to the ExecutorService: they
might be wrapped instances of the submitted tasks.
However, there is no general way to find out which tasks started but did not complete. This means that there is no way
of knowing the state of the tasks in progress at shutdown time unless the tasks themselves perform some sort of
checkpointing. To know which tasks have not completed, you need to know not only which tasks didn't start, but also
which tasks were in progress when the executor was shut down.[6]
[6] Unfortunately, there is no shutdown option in which tasks not yet started are returned to the caller but tasks in progress are allowed to
complete; such an option would eliminate this uncertain intermediate state.
TrackingExecutor in Listing 7.21 shows a technique for determining which tasks were in progress at shutdown time. By encapsulating an ExecutorService and instrumenting execute (and similarly submit, not shown) to remember which tasks were cancelled after shutdown, TrackingExecutor can identify which tasks started but did not complete normally. After the executor terminates, getCancelledTasks returns the list of cancelled tasks. In order for this
technique to work, the tasks must preserve the thread's interrupted status when they return, which well-behaved tasks will do anyway.
Listing 7.21. ExecutorService that Keeps Track of Cancelled Tasks After Shutdown.
public class TrackingExecutor extends AbstractExecutorService {
    private final ExecutorService exec;
    private final Set<Runnable> tasksCancelledAtShutdown =
            Collections.synchronizedSet(new HashSet<Runnable>());
    ...
    public List<Runnable> getCancelledTasks() {
        if (!exec.isTerminated())
            throw new IllegalStateException(...);
        return new ArrayList<Runnable>(tasksCancelledAtShutdown);
    }

    public void execute(final Runnable runnable) {
        exec.execute(new Runnable() {
            public void run() {
                try {
                    runnable.run();
                } finally {
                    if (isShutdown()
                            && Thread.currentThread().isInterrupted())
                        tasksCancelledAtShutdown.add(runnable);
                }
            }
        });
    }

    // delegate other ExecutorService methods to exec
}
WebCrawler in Listing 7.22 shows an application of TrackingExecutor. The work of a web crawler is often unbounded,
so if a crawler must be shut down we might want to save its state so it can be restarted later. CrawlTask provides a
getPage method that identifies what page it is working on. When the crawler is shut down, both the tasks that did not
start and those that were cancelled are scanned and their URLs recorded, so that page‐crawling tasks for those URLs can
be added to the queue when the crawler restarts.
TrackingExecutor has an unavoidable race condition that could make it yield false positives: tasks that are identified as
cancelled but actually completed. This arises because the thread pool could be shut down between when the last
instruction of the task executes and when the pool records the task as complete. This is not a problem if tasks are
idempotent (if performing them twice has the same effect as performing them once), as they typically are in a web
crawler. Otherwise, the application retrieving the cancelled tasks must be aware of this risk and be prepared to deal
with false positives.
Listing 7.22. Using TrackingExecutor to Save Unfinished Tasks for Later Execution.
public abstract class WebCrawler {
    private volatile TrackingExecutor exec;
    @GuardedBy("this")
    private final Set<URL> urlsToCrawl = new HashSet<URL>();
    ...
    public synchronized void start() {
        exec = new TrackingExecutor(
                Executors.newCachedThreadPool());
        for (URL url : urlsToCrawl) submitCrawlTask(url);
        urlsToCrawl.clear();
    }

    public synchronized void stop() throws InterruptedException {
        try {
            saveUncrawled(exec.shutdownNow());
            if (exec.awaitTermination(TIMEOUT, UNIT))
                saveUncrawled(exec.getCancelledTasks());
        } finally {
            exec = null;
        }
    }

    protected abstract List<URL> processPage(URL url);

    private void saveUncrawled(List<Runnable> uncrawled) {
        for (Runnable task : uncrawled)
            urlsToCrawl.add(((CrawlTask) task).getPage());
    }

    private void submitCrawlTask(URL u) {
        exec.execute(new CrawlTask(u));
    }

    private class CrawlTask implements Runnable {
        private final URL url;
        ...
        public void run() {
            for (URL link : processPage(url)) {
                if (Thread.currentThread().isInterrupted())
                    return;
                submitCrawlTask(link);
            }
        }
        public URL getPage() { return url; }
    }
}
7.3. Handling Abnormal Thread Termination
It is obvious when a single-threaded console application terminates due to an uncaught exception: the program stops running and produces a stack trace that is very different from typical program output. Failure of a thread in a concurrent
application is not always so obvious. The stack trace may be printed on the console, but no one may be watching the
console. Also, when a thread fails, the application may appear to continue to work, so its failure could go unnoticed.
Fortunately, there are means of both detecting and preventing threads from "leaking" from an application.
The leading cause of premature thread death is RuntimeException. Because these exceptions indicate a programming
error or other unrecoverable problem, they are generally not caught. Instead they propagate all the way up the stack, at
which point the default behavior is to print a stack trace on the console and let the thread terminate.
The consequences of abnormal thread death range from benign to disastrous, depending on the thread's role in the
application. Losing a thread from a thread pool can have performance consequences, but an application that runs well
with a 50‐thread pool will probably run fine with a 49‐thread pool too. But losing the event dispatch thread in a GUI
application would be quite noticeable ‐ the application would stop processing events and the GUI would freeze.
OutOfTime on page 124 showed a serious consequence of thread leakage: the service represented by the Timer is permanently out of commission.
Just about any code can throw a RuntimeException. Whenever you call another method, you are taking a leap of faith
that it will return normally or throw one of the checked exceptions its signature declares. The less familiar you are with
the code being called, the more skeptical you should be about its behavior.
Task‐processing threads such as the worker threads in a thread pool or the Swing event dispatch thread spend their
whole life calling unknown code through an abstraction barrier like Runnable, and these threads should be very
skeptical that the code they call will be well behaved. It would be very bad if a service like the Swing event thread failed
just because some poorly written event handler threw a NullPointerException. Accordingly, these facilities should call
tasks within a try-catch block that catches unchecked exceptions, or within a try-finally block to ensure that if the
thread exits abnormally the framework is informed of this and can take corrective action. This is one of the few times
when you might want to consider catching RuntimeException - when you are calling unknown, untrusted code through
an abstraction such as Runnable.[7]
[7] There is some controversy over the safety of this technique; when a thread throws an unchecked exception, the entire application may possibly
be compromised. But the alternative ‐ shutting down the entire application ‐ is usually not practical.
Listing 7.23 illustrates a way to structure a worker thread within a thread pool. If a task throws an unchecked exception,
it allows the thread to die, but not before notifying the framework that the thread has died. The framework may then
replace the worker thread with a new thread, or may choose not to because the thread pool is being shut down or there
are already enough worker threads to meet current demand. ThreadPoolExecutor and Swing use this technique to
ensure that a poorly behaved task doesn't prevent subsequent tasks from executing. If you are writing a worker thread
class that executes submitted tasks, or calling untrusted external code (such as dynamically loaded plugins), use one of
these approaches to prevent a poorly written task or plugin from taking down the thread that happens to call it.
Listing 7.23. Typical Thread-pool Worker Thread Structure.
public void run() {
    Throwable thrown = null;
    try {
        while (!isInterrupted())
            runTask(getTaskFromWorkQueue());
    } catch (Throwable e) {
        thrown = e;
    } finally {
        threadExited(this, thrown);
    }
}
7.3.1. Uncaught Exception Handlers
The previous section offered a proactive approach to the problem of unchecked exceptions. The Thread API also
provides the UncaughtExceptionHandler facility, which lets you detect when a thread dies due to an uncaught
exception. The two approaches are complementary: taken together, they provide defense-in-depth against thread
leakage.
When a thread exits due to an uncaught exception, the JVM reports this event to an application‐provided
UncaughtExceptionHandler (see Listing 7.24); if no handler exists, the default behavior is to print the stack trace to
System.err.[8]
[8] Before Java 5.0, the only way to control the UncaughtExceptionHandler was by subclassing ThreadGroup. In Java 5.0 and later, you
can set an UncaughtExceptionHandler on a per‐thread basis with Thread.setUncaughtExceptionHandler, and can also set the
default UncaughtExceptionHandler with Thread.setDefaultUncaughtExceptionHandler. However, only one of these
handlers is called: first the JVM looks for a per-thread handler, then for a ThreadGroup handler. The default handler implementation in ThreadGroup delegates to its parent thread group, and so on up the chain until one of the ThreadGroup handlers deals with the uncaught exception or it bubbles up to the top-level thread group. The top-level thread group handler delegates to the default system handler (if one exists;
the default is none) and otherwise prints the stack trace to the console.
Listing 7.24. UncaughtExceptionHandler Interface.
public interface UncaughtExceptionHandler {
    void uncaughtException(Thread t, Throwable e);
}
What the handler should do with an uncaught exception depends on your quality‐of‐service requirements. The most
common response is to write an error message and stack trace to the application log, as shown in Listing 7.25. Handlers
can also take more direct action, such as trying to restart the thread, shutting down the application, paging an operator,
or other corrective or diagnostic action.
Listing 7.25. UncaughtExceptionHandler that Logs the Exception.
public class UEHLogger implements Thread.UncaughtExceptionHandler {
    public void uncaughtException(Thread t, Throwable e) {
        Logger logger = Logger.getAnonymousLogger();
        logger.log(Level.SEVERE,
                "Thread terminated with exception: " + t.getName(),
                e);
    }
}
In long‐running applications, always use uncaught exception handlers for all threads that at least log the exception.
To set an UncaughtExceptionHandler for pool threads, provide a ThreadFactory to the ThreadPoolExecutor
constructor. (As with all thread manipulation, only the thread's owner should change its UncaughtExceptionHandler.)
The standard thread pools allow an uncaught task exception to terminate the pool thread, but use a try-finally block
to be notified when this happens so the thread can be replaced. Without an uncaught exception handler or other failure
notification mechanism, tasks can appear to fail silently, which can be very confusing. If you want to be notified when a
task fails due to an exception so that you can take some task‐specific recovery action, either wrap the task with a
Runnable or Callable that catches the exception or override the afterExecute hook in ThreadPoolExecutor.
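Wiring a handler into a pool might look like this sketch (not from the book), reusing UEHLogger from Listing 7.25:
public class LoggingThreadFactory implements ThreadFactory {
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        // Install the logging handler on every pool thread we create.
        t.setUncaughtExceptionHandler(new UEHLogger());
        return t;
    }
}
// Usage:
// ExecutorService exec =
//     Executors.newFixedThreadPool(10, new LoggingThreadFactory());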
Somewhat confusingly, exceptions thrown from tasks make it to the uncaught exception handler only for tasks
submitted with execute; for tasks submitted with submit, any thrown exception, checked or not, is considered to be
part of the task's return status. If a task submitted with submit terminates with an exception, it is rethrown by
Future.get, wrapped in an ExecutionException.
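A sketch of that difference (exec and someTask are assumed names): with submit, the only place the task's exception surfaces is the Future.
Future<?> f = exec.submit(someTask);    // exception is captured, not printed
try {
    f.get();                            // rethrows it, wrapped
} catch (ExecutionException e) {
    Throwable cause = e.getCause();     // what the task actually threw
    // take task-specific recovery action here
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the status and return
}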
7.4. JVM Shutdown
The JVM can shut down in either an orderly or abrupt manner. An orderly shutdown is initiated when the last "normal"
(non‐daemon) thread terminates, someone calls System.exit, or by other platform‐specific means (such as sending a
SIGINT or hitting Ctrl-C). While this is the standard and preferred way for the JVM to shut down, it can also be shut
down abruptly by calling Runtime.halt or by killing the JVM process through the operating system (such as sending a
SIGKILL).
7.4.1. Shutdown Hooks
In an orderly shutdown, the JVM first starts all registered shutdown hooks. Shutdown hooks are unstarted threads that
are registered with Runtime.addShutdownHook. The JVM makes no guarantees on the order in which shutdown hooks
are started. If any application threads (daemon or nondaemon) are still running at shutdown time, they continue to run
concurrently with the shutdown process. When all shutdown hooks have completed, the JVM may choose to run
finalizers if runFinalizersOnExit is true, and then halts. The JVM makes no attempt to stop or interrupt any
application threads that are still running at shutdown time; they are abruptly terminated when the JVM eventually halts.
If the shutdown hooks or finalizers don't complete, then the orderly shutdown process "hangs" and the JVM must be shut down abruptly. In an abrupt shutdown, the JVM is not required to do anything other than halt; shutdown hooks will not run.
Shutdown hooks should be thread‐safe: they must use synchronization when accessing shared data and should be
careful to avoid deadlock, just like any other concurrent code. Further, they should not make assumptions about the
state of the application (such as whether other services have shut down already or all normal threads have completed)
or about why the JVM is shutting down, and must therefore be coded extremely defensively. Finally, they should exit as
quickly as possible, since their existence delays JVM termination at a time when the user may be expecting the JVM to
terminate quickly.
Shutdown hooks can be used for service or application cleanup, such as deleting temporary files or cleaning up
resources that are not automatically cleaned up by the OS. Listing 7.26 shows how LogService in Listing 7.16 could
register a shutdown hook from its start method to ensure the log file is closed on exit.
Because shutdown hooks all run concurrently, closing the log file could cause trouble for other shutdown hooks that want to use the logger. To avoid this problem, shutdown hooks should not rely on services that can be shut down by the
application or other shutdown hooks. One way to accomplish this is to use a single shutdown hook for all services,
rather than one for each service, and have it call a series of shutdown actions. This ensures that shutdown actions
execute sequentially in a single thread, thus avoiding the possibility of race conditions or deadlock between shutdown
actions. This technique can be used whether or not you use shutdown hooks; executing shutdown actions sequentially
rather than concurrently eliminates many potential sources of failure. In applications that maintain explicit dependency
information among services, this technique can also ensure that shutdown actions are performed in the right order.
Listing 7.26. Registering a Shutdown Hook to Stop the Logging Service.
public void start() {
    Runtime.getRuntime().addShutdownHook(new Thread() {
        public void run() {
            try { LogService.this.stop(); }
            catch (InterruptedException ignored) {}
        }
    });
}
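As a hedged illustration of the single-hook technique described above (not from the book; the actions shown are hypothetical stand-ins for real service shutdowns):

import java.util.Arrays;
import java.util.List;

public class CombinedShutdownHook {
    public static void main(String[] args) {
        // Hypothetical shutdown actions standing in for real services
        List<Runnable> shutdownActions = Arrays.asList(
            () -> System.out.println("stopping log service"),
            () -> System.out.println("closing connection pool"));
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            for (Runnable action : shutdownActions) {
                try {
                    action.run();  // sequential: no races or deadlocks between actions
                } catch (RuntimeException e) {
                    System.err.println("Shutdown action failed: " + e);
                }
            }
        }));
        System.out.println("running; actions execute in order at JVM exit");
    }
}

Catching RuntimeException per action ensures that one failed action does not prevent the remaining ones from running.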
7.4.2. Daemon Threads
Sometimes you want to create a thread that performs some helper function but you don't want the existence of this
thread to prevent the JVM from shutting down. This is what daemon threads are for.
Threads are divided into two types: normal threads and daemon threads. When the JVM starts up, all the threads it
creates (such as garbage collector and other housekeeping threads) are daemon threads, except the main thread. When
a new thread is created, it inherits the daemon status of the thread that created it, so by default any threads created by
the main thread are also normal threads.
Normal threads and daemon threads differ only in what happens when they exit. When a thread exits, the JVM
performs an inventory of running threads, and if the only threads that are left are daemon threads, it initiates an orderly
shutdown. When the JVM halts, any remaining daemon threads are abandoned ‐ finally blocks are not executed,
stacks are not unwound ‐ the JVM just exits.
Daemon threads should be used sparingly ‐ few processing activities can be safely abandoned at any time with no
cleanup. In particular, it is dangerous to use daemon threads for tasks that might perform any sort of I/O. Daemon
threads are best saved for "housekeeping" tasks, such as a background thread that periodically removes expired entries
from an in‐memory cache.
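A minimal sketch of such a housekeeping thread (the cleanup logic is hypothetical); note that setDaemon must be called before start:

Thread cacheCleaner = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
        // ... hypothetical: remove expired entries from an in-memory cache ...
        try {
            Thread.sleep(60_000);
        } catch (InterruptedException e) {
            return;  // exit quietly; a daemon thread gets no cleanup guarantees anyway
        }
    }
});
cacheCleaner.setDaemon(true);  // must be set before start()
cacheCleaner.start();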
Daemon threads are not a good substitute for properly managing the lifecycle of services within an application.
7.4.3. Finalizers
The garbage collector does a good job of reclaiming memory resources when they are no longer needed, but some
resources, such as file or socket handles, must be explicitly returned to the operating system when no longer needed. To
assist in this, the garbage collector treats objects that have a nontrivial finalize method specially: after they are
reclaimed by the collector, finalize is called so that persistent resources can be released.
Since finalizers can run in a thread managed by the JVM, any state accessed by a finalizer will be accessed by more than
one thread and therefore must be accessed with synchronization. Finalizers offer no guarantees on when or even if they
run, and they impose a significant performance cost on objects with nontrivial finalizers. They are also extremely
difficult to write correctly.[9] In most cases, the combination of finally blocks and explicit close methods does a better
job of resource management than finalizers; the sole exception is when you need to manage objects that hold resources
acquired by native methods. For these reasons and others, work hard to avoid writing or using classes with finalizers
(other than the platform library classes) [EJ Item 6].
[9] See (Boehm, 2005) for some of the challenges involved in writing finalizers.
Avoid finalizers.
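A minimal sketch of the recommended alternative, using an explicit close in a finally block (the method is illustrative):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public static int readFirstByte(String path) throws IOException {
    InputStream in = new FileInputStream(path);
    try {
        return in.read();  // use the resource
    } finally {
        in.close();        // released promptly and deterministically, no finalizer needed
    }
}

On Java 7 and later, try-with-resources expresses the same pattern more compactly.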
Summary
End‐of‐lifecycle issues for tasks, threads, services, and applications can add complexity to their design and
implementation. Java does not provide a preemptive mechanism for cancelling activities or terminating threads.
Instead, it provides a cooperative interruption mechanism that can be used to facilitate cancellation, but it is up to you
to construct protocols for cancellation and use them consistently. Using FutureTask and the Executor framework
simplifies building cancellable tasks and services.
Chapter 8. Applying Thread Pools
Chapter 6 introduced the task execution framework, which simplifies management of task and thread lifecycles and
provides a simple and flexible means for decoupling task submission from execution policy. Chapter 7 covered some of
the messy details of service lifecycle that arise from using the task execution framework in real applications. This
chapter looks at advanced options for configuring and tuning thread pools, describes hazards to watch for when using
the task execution framework, and offers some more advanced examples of using Executor.
8.1. Implicit Couplings Between Tasks and Execution Policies
We claimed earlier that the Executor framework decouples task submission from task execution. Like many attempts at
decoupling complex processes, this was a bit of an overstatement. While the Executor framework offers substantial
flexibility in specifying and modifying execution policies, not all tasks are compatible with all execution policies. Types of
tasks that require specific execution policies include:
Dependent tasks. The most well behaved tasks are independent: those that do not depend on the timing, results, or side
effects of other tasks. When executing independent tasks in a thread pool, you can freely vary the pool size and
configuration without affecting anything but performance. On the other hand, when you submit tasks that depend on
other tasks to a thread pool, you implicitly create constraints on the execution policy that must be carefully managed to
avoid liveness problems (see Section 8.1.1).
Tasks that exploit thread confinement. Single‐threaded executors make stronger promises about concurrency than do
arbitrary thread pools. They guarantee that tasks are not executed concurrently, which allows you to relax the thread
safety of task code. Objects can be confined to the task thread, thus enabling tasks designed to run in that thread to
access those objects without synchronization, even if those resources are not thread‐safe. This forms an implicit
coupling between the task and the execution policy ‐ the tasks require their executor to be single‐threaded.[1] In this
case, if you changed the Executor from a single‐threaded one to a thread pool, thread safety could be lost (see the first sketch after this list).
[1] The requirement is not quite this strong; it would be enough to ensure only that tasks not execute concurrently and provide enough
synchronization so that the memory effects of one task are guaranteed to be visible to the next task ‐ which is precisely the guarantee offered by
newSingleThreadExecutor.
Response‐time‐sensitive tasks. GUI applications are sensitive to response time: users are annoyed at long delays
between a button click and the corresponding visual feedback. Submitting a long‐running task to a single‐threaded
executor, or submitting several long‐running tasks to a thread pool with a small number of threads, may impair the
responsiveness of the service managed by that Executor.
Tasks that use ThreadLocal. ThreadLocal allows each thread to have its own private "version" of a variable. However,
executors are free to reuse threads as they see fit. The standard Executor implementations may reap idle threads when
demand is low and add new ones when demand is high, and also replace a worker thread with a fresh one if an
unchecked exception is thrown from a task. ThreadLocal makes sense to use in pool threads only if the thread‐local
value has a lifetime that is bounded by that of a task; ThreadLocal should not be used in pool threads to communicate
values between tasks (see the second sketch after this list).
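First sketch: a minimal illustration of thread confinement via a single‐threaded executor. SimpleDateFormat is a standard-library class that is not thread‐safe, but confining it to the tasks of a newSingleThreadExecutor is safe; the class and method names here are otherwise illustrative.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConfinedFormatter {
    // Not thread-safe, but only ever touched from the single pool thread
    private static final SimpleDateFormat format =
        new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    // The tasks below require this executor to stay single-threaded
    private static final ExecutorService exec =
        Executors.newSingleThreadExecutor();

    public static void logTimestamp() {
        exec.execute(() -> System.out.println(format.format(new Date())));
    }
}

Swapping exec for a multi-threaded pool would silently break this class, which is exactly the implicit coupling the text warns about.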
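Second sketch: bounding a thread-local value's lifetime to a single task by removing it in a finally block; the names are illustrative.

private static final ThreadLocal<StringBuilder> scratch =
    ThreadLocal.withInitial(StringBuilder::new);

public void run() {
    try {
        scratch.get().append("per-task data");
        // ... use the buffer within this task only ...
    } finally {
        scratch.remove();  // don't leak state to the next task on this thread
    }
}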
Thread pools work best when tasks are homogeneous and independent. Mixing long‐running and short‐running tasks
risks "clogging" the pool unless it is very large; submitting tasks that depend on other tasks risks deadlock unless the
pool is unbounded. Fortunately, requests in typical network‐based server applications ‐ web servers, mail servers, file
servers ‐ usually meet these guidelines.
Some tasks have characteristics that require or preclude a specific execution policy. Tasks that depend on other tasks
require that the thread pool be large enough that tasks are never queued or rejected; tasks that exploit thread
confinement require sequential execution. Document these requirements so that future maintainers do not undermine
safety or liveness by substituting an incompatible execution policy.
8.1.1. Thread Starvation Deadlock
If tasks that depend on other tasks execute in a thread pool, they can deadlock. In a single‐threaded executor, a task
that submits another task to the same executor and waits for its result will always deadlock. The second task sits on the
work queue until the first task completes, but the first will not complete because it is waiting for the result of the
second task. The same thing can happen in larger thread pools if all threads are executing tasks that are blocked waiting
for other tasks still on the work queue. This is called thread starvation deadlock, and can occur whenever a pool task
initiates an unbounded blocking wait for some resource or condition that can succeed only through the action of
another pool task, such as waiting for the return value or side effect of another task, unless you can guarantee that the
pool is large enough.
ThreadDeadlock in Listing 8.1 illustrates thread starvation deadlock. RenderPageTask submits two additional tasks to
the Executor to fetch the page header and footer, renders the page body, waits for the results of the header and footer
tasks, and then combines the header, body, and footer into the finished page. With a single‐threaded executor,
ThreadDeadlock will always deadlock. Similarly, tasks coordinating amongst themselves with a barrier could also cause
thread starvation deadlock if the pool is not big enough.
Whenever you submit to an Executor tasks that are not independent, be aware of the possibility of thread starvation
deadlock, and document any pool sizing or configuration constraints in the code or configuration file where the
Executor is configured.
In addition to any explicit bounds on the size of a thread pool, there may also be implicit limits because of constraints on
other resources. If your application uses a JDBC connection pool with ten connections and each task needs a database
connection, it is as if your thread pool only has ten threads because tasks in excess of ten will block waiting for a
connection.
Listing 8.1. Task that Deadlocks in a Single‐threaded Executor. Don't Do this.
public class ThreadDeadlock {
    ExecutorService exec = Executors.newSingleThreadExecutor();

    public class RenderPageTask implements Callable<String> {
        public String call() throws Exception {
            Future<String> header, footer;
            header = exec.submit(new LoadFileTask("header.html"));
            footer = exec.submit(new LoadFileTask("footer.html"));
            String page = renderBody();
            // Will deadlock -- task waiting for result of subtask
            return header.get() + page + footer.get();
        }
    }
}
8.1.2. Long‐running Tasks
Thread pools can have responsiveness problems if tasks can block for extended periods of time, even if deadlock is not a
possibility. A thread pool can become clogged with long‐running tasks, increasing the service time even for short tasks. If
the pool size is too small relative to the expected steady‐state number of long‐running tasks, eventually all the pool
threads will be running long‐running tasks and responsiveness will suffer.
One technique that can mitigate the ill effects of long‐running tasks is for tasks to use timed resource waits instead of
unbounded waits. Most blocking methods in the platform libraries come in both untimed and timed versions, such as
Thread.join, BlockingQueue.put, CountDownLatch.await, and Selector.select. If the wait times out, you can mark
the task as failed and abort it or requeue it for execution later. This guarantees that each task eventually makes progress
towards either successful or failed completion, freeing up threads for tasks that might complete more quickly. If a
thread pool is frequently full of blocked tasks, this may also be a sign that the pool is too small.
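A minimal sketch of a timed wait on a subtask result (the task and timeout are illustrative):

import java.util.concurrent.*;

public class TimedWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService exec = Executors.newFixedThreadPool(2);
        Future<String> result = exec.submit(() -> "page fragment");
        try {
            // Bounded wait instead of an open-ended result.get()
            String page = result.get(1, TimeUnit.SECONDS);
            System.out.println(page);
        } catch (TimeoutException e) {
            result.cancel(true);  // mark the task failed, or requeue it for later
        } catch (ExecutionException e) {
            System.err.println("task failed: " + e.getCause());
        } finally {
            exec.shutdown();
        }
    }
}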
8.2. Sizing Thread Pools
The ideal size for a thread pool depends on the types of tasks that will be submitted and the characteristics of the
deployment system. Thread pool sizes should rarely be hard‐coded; instead pool sizes should be provided by a
configuration mechanism or computed dynamically by consulting Runtime.availableProcessors.
Sizing thread pools is not an exact science, but fortunately you need only avoid the extremes of "too big" and "too
small". If a thread pool is too big, then threads compete for scarce CPU and memory resources, resulting in higher
memory usage and possible resource exhaustion. If it is too small, throughput suffers as processors go unused despite
available work.
To size a thread pool properly, you need to understand your computing environment, your resource budget, and the
nature of your tasks. How many processors does the deployment system have? How much memory? Do tasks perform
mostly computation, I/O, or some combination? Do they require a scarce resource, such as a JDBC connection? If you
have different categories of tasks with very different behaviors, consider using multiple thread pools so each can be
tuned according to its workload.
For compute‐intensive tasks, an Ncpu‐processor system usually achieves optimum utilization with a thread pool of
Ncpu + 1 threads. (Even compute‐intensive threads occasionally take a page fault or pause for some other reason, so an
"extra" runnable thread prevents CPU cycles from going unused when this happens.) For tasks that also include I/O or
other blocking operations, you want a larger pool, since not all of the threads will be schedulable at all times. In order to
size the pool properly, you must estimate the ratio of waiting time to compute time for your tasks; this estimate need
not be precise and can be obtained through profiling or instrumentation. Alternatively, the size of the thread pool can
be tuned by running the application using several different pool sizes under a benchmark load and observing the level of
CPU utilization.
Given these definitions:

Ncpu = number of CPUs
Ucpu = target CPU utilization, 0 ≤ Ucpu ≤ 1
W/C = ratio of wait time to compute time

the optimal pool size for keeping the processors at the desired utilization is:

Nthreads = Ncpu * Ucpu * (1 + W/C)
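A minimal sketch applying the formula (the utilization target and wait/compute ratio are illustrative values you would configure or measure):

int nCpu = Runtime.getRuntime().availableProcessors();
double targetUtilization = 0.8;  // Ucpu: desired CPU utilization, 0..1
double waitComputeRatio = 5.0;   // W/C: estimated via profiling

// Nthreads = Ncpu * Ucpu * (1 + W/C)
int poolSize = (int) (nCpu * targetUtilization * (1 + waitComputeRatio));
ExecutorService exec = Executors.newFixedThreadPool(poolSize);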