http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html
The "Double-Checked Locking is Broken" Declaration
Signed by: David Bacon (IBM Research), Joshua Bloch (Javasoft), Jeff Bogda, Cliff Click (Hotspot JVM project), Paul Haahr, Doug Lea, Tom May, Jan-Willem Maessen, John D. Mitchell (jGuru), Kelvin Nilsen, Bill Pugh, Emin Gun Sirer
Double-Checked Locking is widely cited and used as an efficient method for implementing lazy initialization in a multithreaded environment.
Unfortunately, it will not work reliably in a platform independent way when implemented in Java. When implemented in other languages, such as C++, it depends on the memory model of the processor, the reorderings performed by the compiler and the interaction between the compiler and the synchronization library. Since none of these are specified in a language such as C++, little can be said about the situations in which it will work. Explicit memory barriers can be used to make it work in C++, but these barriers are not available in Java.
To first explain the desired behavior, consider the following code:
// Single threaded version
class Foo {
    private Helper helper = null;
    public Helper getHelper() {
        if (helper == null)
            helper = new Helper();
        return helper;
    }
    // other functions and members...
}
If this code was used in a multithreaded context, many things could go wrong. Most obviously, two or more Helper objects could be allocated. (We'll bring up other problems later). The fix to this is simply to synchronize the getHelper() method:
// Correct multithreaded version
class Foo {
    private Helper helper = null;
    public synchronized Helper getHelper() {
        if (helper == null)
            helper = new Helper();
        return helper;
    }
    // other functions and members...
}
The code above performs synchronization every time getHelper() is called. The double-checked locking idiom tries to avoid synchronization after the helper is allocated:
// Broken multithreaded version
// "Double-Checked Locking" idiom
class Foo {
    private Helper helper = null;
    public Helper getHelper() {
        if (helper == null)
            synchronized(this) {
                if (helper == null)
                    helper = new Helper();
            }
        return helper;
    }
    // other functions and members...
}
Unfortunately, that code just does not work in the presence of either optimizing compilers or shared memory multiprocessors.
It doesn't work
There are lots of reasons it doesn't work. The first couple of reasons we'll describe are more obvious. After understanding those, you may be tempted to try to devise a way to "fix" the double-checked locking idiom. Your fixes will not work: there are more subtle reasons why your fix won't work. Understand those reasons, come up with a better fix, and it still won't work, because there are even more subtle reasons.
Lots of very smart people have spent lots of time looking at this. There is no way to make it work without requiring each thread that accesses the helper object to perform synchronization.
The first reason it doesn't work
The most obvious reason it doesn't work is that the writes that initialize the Helper object and the write to the helper field can be done, or perceived, out of order. Thus, a thread which invokes getHelper() could see a non-null reference to a helper object, but see the default values for fields of the helper object, rather than the values set in the constructor.
If the compiler inlines the call to the constructor, then the writes that initialize the object and the write to the helper field can be freely reordered if the compiler can prove that the constructor cannot throw an exception or perform synchronization.
Even if the compiler does not reorder those writes, on a multiprocessor the processor or the memory system may reorder those writes, as perceived by a thread running on another processor.
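To make the hazard concrete, here is a sketch (not part of the original declaration) of one execution order that the current memory model permits when the constructor is inlined; the step numbering is purely illustrative:
// Sketch of one legal execution order (illustrative only)
// Thread A executes helper = new Helper():
//   1. allocate memory for a Helper      (its fields hold their default values)
//   2. helper = <that memory>            (the reference is published early)
//   3. run the inlined constructor body  (the fields receive their real values)
// Thread B calls getHelper() between steps 2 and 3:
//   it sees helper != null, skips all synchronization, and reads the
//   default (unconstructed) values of the Helper's fields.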
Doug Lea has written a more detailed description of compiler-based reorderings.
A test case showing that it doesn't work
Paul Jakubik found an example of a use of double-checked locking that did not work correctly. A slightly cleaned up version of that code is available here.
When run on a system using the Symantec JIT, it doesn't work. In particular, the Symantec JIT compiles
singletons[i].reference = new Singleton();
to the following (note that the Symantec JIT uses a handle-based object allocation system).
0206106A mov eax,0F97E78h
0206106F call 01F6B210 ; allocate space for
; Singleton, return result in eax
02061074 mov dword ptr [ebp],eax ; EBP is &singletons[i].reference
; store the unconstructed object here.
02061077 mov ecx,dword ptr [eax] ; dereference the handle to
; get the raw pointer
02061079 mov dword ptr [ecx],100h ; Next 4 lines are
0206107F mov dword ptr [ecx+4],200h ; Singleton's inlined constructor
02061086 mov dword ptr [ecx+8],400h
0206108D mov dword ptr [ecx+0Ch],0F84030h
As you can see, the assignment to singletons[i].reference is performed before the constructor for Singleton is called. This is completely legal under the existing Java memory model, and also legal in C and C++ (since neither of them has a memory model).
A fix that doesn't work
Given the explanation above, a number of people have suggested the following code:
// (Still) Broken multithreaded version
// "Double-Checked Locking" idiom
class Foo {
    private Helper helper = null;
    public Helper getHelper() {
        if (helper == null) {
            Helper h;
            synchronized(this) {
                h = helper;
                if (h == null)
                    synchronized (this) {
                        h = new Helper();
                    } // release inner synchronization lock
                helper = h;
            }
        }
        return helper;
    }
    // other functions and members...
}
This code puts construction of the Helper object inside an inner synchronized block. The intuitive idea here is that there should be a memory barrier at the point where synchronization is released, and that should prevent the reordering of the initialization of the Helper object and the assignment to the field helper.
Unfortunately, that intuition is absolutely wrong. The rules for synchronization don't work that way. The rule for a monitorexit (i.e., releasing synchronization) is that actions before the monitorexit must be performed before the monitor is released. However, there is no rule which says that actions after the monitorexit may not be done before the monitor is released. It is perfectly reasonable and legal for the compiler to move the assignment helper = h; inside the synchronized block, in which case we are back where we were previously. Many processors offer instructions that perform this kind of one-way memory barrier. Changing the semantics to require releasing a lock to be a full memory barrier would have performance penalties.
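To make this concrete, the following sketch (not from the original text) shows one transformation the compiler is permitted to perform on the code above; the names match the example, but the transformed form itself is assumed for illustration:
// Sketch: a legal compiler transformation of the code above
class Foo {
    private Helper helper = null;
    public Helper getHelper() {
        if (helper == null) {
            Helper h;
            synchronized (this) {
                h = helper;
                if (h == null) {
                    synchronized (this) {
                        h = new Helper();
                        helper = h;   // assignment moved inside the synchronized block
                    }
                }
            }
        }
        return helper;
    }
    // other functions and members...
}
Once the assignment is inside the synchronized block, the unsynchronized read at the top of getHelper() faces exactly the same reordering problem as the plain double-checked idiom.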
More fixes that don't work
There is something you can do to force the writer to perform a full bidirectional memory barrier. This is gross, inefficient, and is almost guaranteed not to work once the Java Memory Model is revised. Do not use this. In the interests of science, I've put a description of this technique on a separate page. Do not use it.
However, even with a full memory barrier being performed by the thread that initializes the helper object, it still doesn't work.
The problem is that on some systems, the thread which sees a non-null value for the helper field also needs to perform memory barriers.
Why? Because processors have their own locally cached copies of memory. On some processors, unless the processor performs a cache coherence instruction (e.g., a memory barrier), reads can be performed out of stale locally cached copies, even if other processors used memory barriers to force their writes into global memory.
I've created a separate web page with a discussion of how this can actually happen on an Alpha processor.
Is it worth the trouble?
For most applications, the cost of simply making the getHelper() method synchronized is not high. You should only consider this kind of detailed optimization if you know that it is causing substantial overhead for an application.
Very often, higher-level cleverness, such as using the built-in mergesort rather than a hand-coded exchange sort (see the SPECJVM DB benchmark), will have much more impact.
Making it work for static singletons
If the singleton you are creating is static (i.e., there will only be one Helper created), as opposed to a property of another object (e.g., there will be one Helper for each Foo object), there is a simple and elegant solution.
Just define the singleton as a static field in a separate class. The semantics of Java guarantee that the field will not be initialized until the field is referenced, and that any thread which accesses the field will see all of the writes resulting from initializing that field.
class HelperSingleton {
    static Helper singleton = new Helper();
}
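If Foo itself should hand out the singleton, one common way to package the same guarantee is a nested holder class; this sketch is not from the original declaration, and the names are illustrative:
// Sketch: lazy initialization via a nested holder class
class Foo {
    private static class HelperHolder {
        // Not initialized until getHelper() first references HelperHolder;
        // class initialization makes this write visible to every thread.
        static final Helper HELPER = new Helper();
    }
    public static Helper getHelper() {
        return HelperHolder.HELPER;
    }
}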
It will work for 32-bit primitive values
Although the double-checked locking idiom cannot be used for references to objects, it can work for 32-bit primitive values (e.g., ints or floats). Note that it does not work for longs or doubles, since unsynchronized reads/writes of 64-bit primitives are not guaranteed to be atomic.
// Correct Double-Checked Locking for 32-bit primitives
class Foo {
    private int cachedHashCode = 0;
    public int hashCode() {
        int h = cachedHashCode;
        if (h == 0)
            synchronized(this) {
                if (cachedHashCode != 0) return cachedHashCode;
                h = computeHashCode();
                cachedHashCode = h;
            }
        return h;
    }
    // other functions and members...
}
In fact, assuming that the computeHashCode function always returns the same result and has no side effects (i.e., it is idempotent), you could even get rid of all of the synchronization.
// Lazy initialization 32-bit primitives
// Thread-safe if computeHashCode is idempotent
class Foo {
    private int cachedHashCode = 0;
    public int hashCode() {
        int h = cachedHashCode;
        if (h == 0) {
            h = computeHashCode();
            cachedHashCode = h;
        }
        return h;
    }
    // other functions and members...
}
Making it work with explicit memory barriers
It is possible to make the double-checked locking pattern work if you have explicit memory barrier instructions. For example, if you are programming in C++, you can use the code from Doug Schmidt et al.'s book:
// C++ implementation with explicit memory barriers
// Should work on any platform, including DEC Alphas
// From "Patterns for Concurrent and Distributed Objects",
// by Doug Schmidt
template <class TYPE, class LOCK> TYPE *
Singleton<TYPE, LOCK>::instance (void) {
    // First check
    TYPE* tmp = instance_;
    // Insert the CPU-specific memory barrier instruction
    // to synchronize the cache lines on multi-processor.
    asm ("memoryBarrier");
    if (tmp == 0) {
        // Ensure serialization (guard
        // constructor acquires lock_).
        Guard<LOCK> guard (lock_);
        // Double check.
        tmp = instance_;
        if (tmp == 0) {
            tmp = new TYPE;
            // Insert the CPU-specific memory barrier instruction
            // to synchronize the cache lines on multi-processor.
            asm ("memoryBarrier");
            instance_ = tmp;
        }
    }
    return tmp;
}
Fixing Double-Checked Locking using Thread Local Storage
Alexander Terekhov (TEREKHOV@de.ibm.com) came up with a clever suggestion for implementing double-checked locking using thread-local storage: each thread keeps a thread-local flag recording whether that thread has performed the synchronization needed to see the initialization of helper.
class Foo {
    /** If perThreadInstance.get() returns a non-null value, this thread
        has done the synchronization needed to see the initialization
        of helper. */
    private final ThreadLocal perThreadInstance = new ThreadLocal();
    private Helper helper = null;
    public Helper getHelper() {
        if (perThreadInstance.get() == null) createHelper();
        return helper;
    }
    private final void createHelper() {
        synchronized(this) {
            if (helper == null)
                helper = new Helper();
        }
        // Any non-null value would do as the argument here
        perThreadInstance.set(perThreadInstance);
    }
}
The performance of this technique depends quite a bit on which JDK implementation you have. In Sun's 1.2 implementation, ThreadLocal's were very slow. They are significantly faster in 1.3, and are expected to be faster still in 1.4. Doug Lea analyzed the performance of some techniques for implementing lazy initialization.
Under the proposed Java Memory Model
Work is underway to revise the Java Memory Model and Thread specification.
Fixing Double-Checked Locking using Volatile
The consensus proposal extends the semantics for volatile so that the system will not allow a write of a volatile to be reordered with respect to any previous read or write, and a read of a volatile cannot be reordered with respect to any following read or write. See the slides from the JavaOne BOF for more details.
With this change, the Double-Checked Locking idiom can be made to work by declaring the helper field to be volatile. This does not work under the current semantics for volatile.
// Works with acquire/release semantics for volatile
// Broken under current semantics for volatile
class Foo {
    private volatile Helper helper = null;
    public Helper getHelper() {
        if (helper == null) {
            synchronized(this) {
                if (helper == null)
                    helper = new Helper();
            }
        }
        return helper;
    }
}
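Under those same revised semantics, a further refinement (a sketch, not part of the original declaration) is to copy the volatile field into a local variable, so that the common already-initialized path performs only a single volatile read:
// Sketch: volatile double-checked locking with a local variable
// (relies on the revised semantics for volatile, like the version above)
class Foo {
    private volatile Helper helper = null;
    public Helper getHelper() {
        Helper result = helper;          // one volatile read on the fast path
        if (result == null) {
            synchronized (this) {
                result = helper;         // re-check under the lock
                if (result == null)
                    helper = result = new Helper();
            }
        }
        return result;
    }
}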
Double-Checked Locking Immutable Objects
If Helper is an immutable object, such that all of the fields of Helper are final, then double-checked locking will work without having to use volatile fields. The idea is that a reference to an immutable object (such as a String or an Integer) should behave in much the same way as an int or float; reading and writing references to immutable objects are atomic.
This is, in fact, not true under the existing semantics, and it does not hold even on some existing implementations. The details of the proposed semantics for immutable objects are tricky, and lots of fine print still has to be worked out.
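For concreteness, here is a sketch (with illustrative fields, not from the original) of what an immutable Helper would look like; every field is final and assigned in the constructor:
// Sketch of an immutable Helper (illustrative fields only)
class Helper {
    private final int data;
    Helper(int data) { this.data = data; }
    int getData()    { return data; }
}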
Descriptions of double-check idiom
Reality Check, Douglas C. Schmidt, C++ Report, SIGS, Vol. 8, No. 3, March 1996.
Double-Checked Locking: An Optimization Pattern for Efficiently Initializing and Accessing Thread-safe Objects, Douglas Schmidt and Tim Harrison. 3rd annual Pattern Languages of Program Design conference, 1996
Lazy instantiation, Philip Bishop and Nigel Warren, JavaWorld Magazine
Programming Java threads in the real world, Part 7, Allen Holub, JavaWorld Magazine, April 1999.
Java 2 Performance and Idiom Guide, Craig Larman and Rhett Guthrie, p100.
Java in Practice: Design Styles and Idioms for Effective Java, Nigel Warren and Philip Bishop, p142.
Rule 99, The Elements of Java Style, Allan Vermeulen, Scott Ambler, Greg Bumgardner, Eldon Metz, Trevor Misfeldt, Jim Shur, Patrick Thompson, SIGS Reference Library.
Global Variables in Java with the Singleton Pattern, Wiebe de Jong, Gamelan