Java Concurrency in Practice Reading Notes (1) - Thread Safety
- Blog category: concurrency, java
Chapter 2. Thread Safety
Whether an object needs to be thread-safe depends on whether it will be accessed from multiple threads. This is a property of how the object is used in a program, not what it does.
Whenever more than one thread accesses a given state variable, and one of them might write to it, they all must coordinate their access to it using synchronization. The primary mechanism for synchronization in Java is the synchronized keyword, which provides exclusive locking, but the term "synchronization" also includes the use of volatile variables, explicit locks, and atomic variables.
If multiple threads access the same mutable state variable without appropriate synchronization, your program is broken. There are three ways to fix it (a minimal sketch of each approach follows the list):
- Don't share the state variable across threads;
- Make the state variable immutable; or
- Use synchronization whenever accessing the state variable.
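The three options can be illustrated with a small, hypothetical sketch; none of the classes below appear in the book, they simply show what each fix looks like in code.
// Hypothetical sketches of the three fixes for shared mutable state.

// 1. Don't share: the counter is a local variable, confined to the calling thread.
class ConfinedCounter {
    public int countPositives(int[] values) {
        int count = 0;                       // never visible to other threads
        for (int v : values)
            if (v > 0) ++count;
        return count;
    }
}

// 2. Make the state immutable: threads may share the object, but nobody can modify it.
final class ImmutablePoint {
    final int x, y;
    ImmutablePoint(int x, int y) { this.x = x; this.y = y; }
}

// 3. Synchronize: every access to the shared counter holds the same lock ("this").
class GuardedCounter {
    private long count = 0;                  // guarded by "this"
    public synchronized void increment() { ++count; }
    public synchronized long get() { return count; }
}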
When designing thread-safe classes, good object-oriented techniques such as encapsulation, immutability, and clear specification of invariants are your best friends.
It is always a good practice to first make your code right, and then make it fast.
A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.
A class is stateless if it has no fields and references no fields from other classes.
Stateless objects are always thread-safe, since the actions of a thread accessing a stateless object cannot affect the correctness of operations in other threads.
Listing 2.1. A Stateless Servlet.
@ThreadSafe
public class StatelessFactorizer implements Servlet {
    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        encodeIntoResponse(resp, factors);
    }
}
2.2. Atomicity
While the increment operation, ++count, may look like a single action because of its compact syntax, it is not atomic, which means that it does not execute as a single, indivisible operation. Instead, it is a shorthand for a sequence of three discrete operations: fetch the current value, add one to it, and write the new value back.
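A minimal, hypothetical demo of how that three-step read-modify-write loses updates (the class below is not from the book): two threads each increment an unsynchronized counter 100,000 times, and the final value is frequently less than 200,000 because their read-modify-write sequences interleave.
public class LostUpdateDemo {
    static int count = 0;                        // shared, not guarded by any lock

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++)
                ++count;                         // fetch, add one, write back
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count);               // often less than 200000
    }
}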
A race condition occurs when the correctness of a computation depends on the relative timing or interleaving of multiple threads by the runtime; in other words, when getting the right answer relies on lucky timing. [4] The most common type of race condition is check-then-act, where a potentially stale observation is used to make a decision on what to do next.
The term race condition is often confused with the related term data race, which arises when synchronization is not used to coordinate all access to a shared nonfinal field. You risk a data race whenever a thread writes a variable that might next be read by another thread or reads a variable that might have last been written by another thread if both threads do not use synchronization; code with data races has no useful defined semantics under the Java Memory Model. Not all race conditions are data races, and not all data races are race conditions, but they both can cause concurrent programs to fail in unpredictable ways.
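As a toy illustration of a data race (this class is not from the book), consider a writer and a reader sharing a non-volatile flag with no synchronization at all; under the Java Memory Model the reading thread has no guarantee of ever observing the write and may spin forever.
public class UnsafeFlag {
    static boolean ready = false;                // shared, non-final, unsynchronized

    public static void main(String[] args) {
        new Thread(() -> {
            while (!ready) { /* may spin forever: no happens-before edge */ }
            System.out.println("saw ready");
        }).start();
        ready = true;                            // racy write, may never become visible
    }
}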
Listing 2.3. Race Condition in Lazy Initialization. Don't Do this.
@NotThreadSafe
public class LazyInitRace {
    private ExpensiveObject instance = null;

    public ExpensiveObject getInstance() {
        if (instance == null)
            instance = new ExpensiveObject();
        return instance;
    }
}
LazyInitRace has race conditions that can undermine its correctness. Say that threads A and B execute getInstance at the same time. A sees that instance is null, and instantiates a new ExpensiveObject. B also checks if instance is null. Whether instance is null at this point depends unpredictably on timing, including the vagaries of scheduling and how long A takes to instantiate the ExpensiveObject and set the instance field. If instance is null when B examines it, the two callers to getInstance may receive two different results, even though getInstance is always supposed to return the same instance.
Operations A and B are atomic with respect to each other if, from the perspective of a thread executing A, when another thread executes B, either all of B has executed or none of it has. An atomic operation is one that is atomic with respect to all operations, including itself, that operate on the same state.
2.2.3. Compound Actions
We refer collectively to check-then-act and read-modify-write sequences as compound actions: sequences of operations that must be executed atomically in order to remain thread-safe.
Listing 2.4. Servlet that Counts Requests Using AtomicLong.
@ThreadSafe
public class CountingFactorizer implements Servlet {
    private final AtomicLong count = new AtomicLong(0);

    public long getCount() { return count.get(); }

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        count.incrementAndGet();
        encodeIntoResponse(resp, factors);
    }
}
The java.util.concurrent.atomic package contains atomic variable classes for effecting atomic state transitions on numbers and object references. By replacing the long counter with an AtomicLong, we ensure that all actions that access the counter state are atomic.
Where practical, use existing thread-safe objects, like AtomicLong, to manage your class's state. It is simpler to reason about the possible states and state transitions for existing thread-safe objects than it is for arbitrary state variables, and this makes it easier to maintain and verify thread safety.
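The same idea can be applied to the earlier LazyInitRace. The following is a sketch, not a listing from the book, and it assumes the ExpensiveObject class from Listing 2.3: an AtomicReference turns the check-then-act into a single atomic compareAndSet. A losing thread may construct an ExpensiveObject that is then discarded, but every caller observes the same published instance.
import java.util.concurrent.atomic.AtomicReference;

@ThreadSafe
public class SafeLazyInit {
    private final AtomicReference<ExpensiveObject> instance =
            new AtomicReference<ExpensiveObject>();

    public ExpensiveObject getInstance() {
        ExpensiveObject current = instance.get();
        if (current == null) {
            ExpensiveObject candidate = new ExpensiveObject();
            // Publish candidate only if instance is still null; otherwise
            // discard it and return whatever the winning thread published.
            if (instance.compareAndSet(null, candidate))
                current = candidate;
            else
                current = instance.get();
        }
        return current;
    }
}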
2.3. Locking
Listing 2.5. Servlet that Attempts to Cache its Last Result without Adequate Atomicity. Don't Do this.
@NotThreadSafe
public class UnsafeCachingFactorizer implements Servlet {
    private final AtomicReference<BigInteger> lastNumber =
            new AtomicReference<BigInteger>();
    private final AtomicReference<BigInteger[]> lastFactors =
            new AtomicReference<BigInteger[]>();

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        if (i.equals(lastNumber.get()))
            encodeIntoResponse(resp, lastFactors.get());
        else {
            BigInteger[] factors = factor(i);
            lastNumber.set(i);
            lastFactors.set(factors);
            encodeIntoResponse(resp, factors);
        }
    }
}
The definition of thread safety requires that invariants be preserved regardless of timing or interleaving of operations in multiple threads.
To preserve state consistency, update related state variables in a single atomic operation.
2.3.1. Intrinsic Locks
A synchronized block has two parts: a reference to an object that will serve as the lock, and a block of code to be guarded by that lock. A synchronized method is a shorthand for a synchronized block that spans an entire method body, and whose lock is the object on which the method is being invoked.
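A minimal sketch of that equivalence (the Counter class is hypothetical, not from the book): the synchronized method and the synchronized(this) block acquire exactly the same intrinsic lock.
public class Counter {
    private long value;                          // guarded by "this"

    public synchronized void incrementA() {      // shorthand form
        ++value;
    }

    public void incrementB() {
        synchronized (this) {                    // explicit form, same lock as above
            ++value;
        }
    }
}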
Intrinsic locks in Java act as mutexes (or mutual exclusion locks), which means that at most one thread may own the lock. When thread A attempts to acquire a lock held by thread B, A must wait, or block, until B releases it. If B never releases the lock, A waits forever.
In the context of concurrency, atomicity means the same thing as it does in transactional applications: that a group of statements appear to execute as a single, indivisible unit.
Listing 2.6. Servlet that Caches Last Result, But with Unacceptably Poor Concurrency. Don't Do this.
@ThreadSafe
public class SynchronizedFactorizer implements Servlet {
    @GuardedBy("this") private BigInteger lastNumber;
    @GuardedBy("this") private BigInteger[] lastFactors;

    public synchronized void service(ServletRequest req,
                                     ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        if (i.equals(lastNumber))
            encodeIntoResponse(resp, lastFactors);
        else {
            BigInteger[] factors = factor(i);
            lastNumber = i;
            lastFactors = factors;
            encodeIntoResponse(resp, factors);
        }
    }
}
Figure 2.1. Poor Concurrency of SynchronizedFactorizer.
The machinery of synchronization makes it easy to restore thread safety to the factoring servlet. Listing 2.6 makes the service method synchronized, so only one thread may enter service at a time. SynchronizedFactorizer is now thread-safe; however, this approach is fairly extreme, since it inhibits multiple clients from using the factoring servlet simultaneously at all, resulting in unacceptably poor responsiveness. This problem, which is a performance problem and not a thread safety problem, is addressed in Section 2.5.
2.3.2. Reentrancy
Listing 2.7. Code that would Deadlock if Intrinsic Locks were Not Reentrant.
public class Widget {
    public synchronized void doSomething() {
        ...
    }
}

public class LoggingWidget extends Widget {
    public synchronized void doSomething() {
        System.out.println(toString() + ": calling doSomething");
        super.doSomething();
    }
}
Without reentrant locks, the very natural-looking code in Listing 2.7, in which a subclass overrides a synchronized method and then calls the superclass method, would deadlock. Because the doSomething methods in Widget and LoggingWidget are both synchronized, each tries to acquire the lock on the Widget before proceeding. But if intrinsic locks were not reentrant, the call to super.doSomething would never be able to acquire the lock because it would be considered already held, and the thread would permanently stall waiting for a lock it can never acquire. Reentrancy saves us from deadlock in situations like this.
2.4. Guarding State with Locks
Serializing access to an object has nothing to do with object serialization (turning an object into a byte stream); serializing access means that threads take turns accessing the object exclusively, rather than doing so concurrently.
For each mutable state variable that may be accessed by more than one thread, all accesses to that variable must be performed with the same lock held. In this case, we say that the variable is guarded by that lock.
Every shared, mutable variable should be guarded by exactly one lock. Make it clear to maintainers which lock that is.
A common locking convention is to encapsulate all mutable state within an object and to protect it from concurrent access by synchronizing any code path that accesses mutable state using the object's intrinsic lock. This pattern is used by many thread-safe classes, such as Vector and other synchronized collection classes.
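A sketch of this convention (the PersonSet class below is hypothetical here, and Person is a placeholder type): all mutable state is private, and every code path that touches it synchronizes on the object's intrinsic lock, so clients never observe the set in an inconsistent state.
import java.util.HashSet;
import java.util.Set;

@ThreadSafe
public class PersonSet {
    @GuardedBy("this") private final Set<Person> mySet = new HashSet<Person>();

    public synchronized void addPerson(Person p) {
        mySet.add(p);
    }

    public synchronized boolean containsPerson(Person p) {
        return mySet.contains(p);
    }
}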
Code auditing tools like FindBugs can identify when a variable is frequently but not always accessed with a lock held, which may indicate a bug.
For every invariant that involves more than one variable, all the variables involved in that invariant must be guarded by the same lock.
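For example (a hypothetical sketch, not a listing from this chapter), a range class whose invariant is lower <= upper must guard both fields with the same lock, so that each check-then-act on the pair executes atomically.
@ThreadSafe
public class NumberRange {
    // Invariant: lower <= upper; both fields guarded by the same lock.
    @GuardedBy("this") private int lower = 0;
    @GuardedBy("this") private int upper = 0;

    public synchronized void setLower(int i) {
        if (i > upper)
            throw new IllegalArgumentException("lower > upper");
        lower = i;
    }

    public synchronized void setUpper(int i) {
        if (i < lower)
            throw new IllegalArgumentException("upper < lower");
        upper = i;
    }

    public synchronized boolean contains(int i) {
        return i >= lower && i <= upper;
    }
}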
2.5. Liveness and Performance
Listing 2.8. Servlet that Caches its Last Request and Result.
@ThreadSafe
public class CachedFactorizer implements Servlet {
    @GuardedBy("this") private BigInteger lastNumber;
    @GuardedBy("this") private BigInteger[] lastFactors;
    @GuardedBy("this") private long hits;
    @GuardedBy("this") private long cacheHits;

    public synchronized long getHits() { return hits; }
    public synchronized double getCacheHitRatio() {
        return (double) cacheHits / (double) hits;
    }

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = null;
        synchronized (this) {
            ++hits;
            if (i.equals(lastNumber)) {
                ++cacheHits;
                factors = lastFactors.clone();
            }
        }
        if (factors == null) {
            factors = factor(i);
            synchronized (this) {
                lastNumber = i;
                lastFactors = factors.clone();
            }
        }
        encodeIntoResponse(resp, factors);
    }
}
The restructuring of CachedFactorizer provides a balance between simplicity (synchronizing the entire method) and concurrency (synchronizing the shortest possible code paths). Acquiring and releasing a lock has some overhead, so it is undesirable to break down synchronized blocks too far (such as factoring ++hits into its own synchronized block), even if this would not compromise atomicity. CachedFactorizer holds the lock when accessing state variables and for the duration of compound actions, but releases it before executing the potentially long-running factorization operation. This preserves thread safety without unduly affecting concurrency; the code paths in each of the synchronized blocks are "short enough".
There is frequently a tension between simplicity and performance. When implementing a synchronization policy, resist the temptation to prematurely sacrifice simplicity (potentially compromising safety) for the sake of performance.
Avoid holding locks during lengthy computations or operations at risk of not completing quickly, such as network or console I/O.
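A sketch of this advice (the class and field names are hypothetical): copy the shared state inside a short synchronized block, then perform the potentially slow I/O with the lock released.
import java.io.PrintWriter;

@ThreadSafe
public class StatusReporter {
    @GuardedBy("this") private String status = "OK";

    public synchronized void setStatus(String s) {
        status = s;
    }

    public void report(PrintWriter out) {
        String snapshot;
        synchronized (this) {            // hold the lock only long enough to copy the state
            snapshot = status;
        }
        out.println(snapshot);           // potentially slow console/network I/O, lock not held
    }
}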