How do you make a deep copy of an object in Java?
The java.lang.Object root superclass defines a clone() method that will, assuming the subclass implements the java.lang.Cloneable interface, return a copy of the object. While Java classes are free to override this method to do more complex kinds of cloning, the default behavior of clone() is to return a shallow copy of the object. This means that the values of all of the original object's fields are copied to the fields of the new object.

A property of shallow copies is that fields that refer to other objects will point to the same objects in both the original and the clone. For fields that contain primitive or immutable values (int, String, float, etc.), there is little chance of this causing problems. For mutable objects, however, cloning can lead to unexpected results, as the sketch below illustrates.
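A minimal sketch of the pitfall (the Event class here is hypothetical, not from the original article): the shallow clone shares its mutable Date field with the original, so a mutation through one is visible through the other.

```java
import java.util.Date;

public class Event implements Cloneable {
    private String name; // immutable: safe to share between copies
    private Date when;   // mutable: shared by a shallow copy

    public Event(String name, Date when) {
        this.name = name;
        this.when = when;
    }

    public Date getWhen() {
        return when;
    }

    @Override
    public Event clone() {
        try {
            return (Event) super.clone(); // default shallow copy
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }

    public static void main(String[] args) {
        Event original = new Event("meeting", new Date());
        Event copy = original.clone();

        // Mutating the clone's Date also changes the original,
        // because both fields point to the very same Date object.
        copy.getWhen().setTime(0L);
        System.out.println(original.getWhen()); // now the epoch, not "now"
    }
}
```

One general-purpose way to get a true deep copy is Java Object Serialization (JOS): serialize the object graph to a byte array, then deserialize a brand-new copy from it. The following utility class implements this approach: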
```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

/**
 * Utility for making deep copies (vs. clone()'s shallow copies) of
 * objects. Objects are first serialized and then deserialized. Error
 * checking is fairly minimal in this implementation. If an object is
 * encountered that cannot be serialized (or that references an object
 * that cannot be serialized) an error is printed to System.err and
 * null is returned. Depending on your specific application, it might
 * make more sense to have copy(...) re-throw the exception.
 *
 * A later version of this class includes some minor optimizations.
 */
public class UnoptimizedDeepCopy {

    /**
     * Returns a copy of the object, or null if the object cannot
     * be serialized.
     */
    public static Object copy(Object orig) {
        Object obj = null;
        try {
            // Write the object out to a byte array
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bos);
            out.writeObject(orig);
            out.flush();
            out.close();

            // Make an input stream from the byte array and read
            // a copy of the object back in.
            ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
            obj = in.readObject();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        return obj;
    }
}
```
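A brief usage sketch (the Invoice class is hypothetical; any class works as long as it, and everything it references, implements java.io.Serializable):

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Hypothetical Serializable class used only for demonstration.
class Invoice implements Serializable {
    List<String> lines = new ArrayList<String>();
}

public class CopyDemo {
    public static void main(String[] args) {
        Invoice original = new Invoice();
        original.lines.add("widget x 3");

        // The copy gets its own, fully independent list.
        Invoice copy = (Invoice) UnoptimizedDeepCopy.copy(original);
        copy.lines.add("gadget x 1");

        System.out.println(original.lines.size()); // 1
        System.out.println(copy.lines.size());     // 2
    }
}
```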
Unfortunately, this approach has some problems, too:
- It will only work when the object being copied, as well as all of the other objects referenced directly or indirectly by the object, are serializable. (In other words, they must implement java.io.Serializable.) Fortunately it is often sufficient to simply declare that a given class implements java.io.Serializable and let Java's default serialization mechanisms do their thing.
- Java Object Serialization is slow, and using it to make a deep copy requires both serializing and deserializing. There are ways to speed it up (e.g., by pre-computing serial version ids and defining custom readObject() and writeObject() methods; see the sketch after this list), but this will usually be the primary bottleneck.
- The byte array stream implementations included in the java.io package are designed to be general enough to perform reasonably well for data of different sizes and to be safe to use in a multi-threaded environment. These characteristics, however, slow down ByteArrayOutputStream and (to a lesser extent) ByteArrayInputStream.
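A sketch of those two serialization speed-ups (the Point class is hypothetical): declaring serialVersionUID explicitly spares the runtime from computing a default id reflectively, and custom readObject()/writeObject() methods read and write the fields directly instead of going through the default reflective field machinery.

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Point implements Serializable {
    // Pre-computed serial version id: avoids the reflective default.
    private static final long serialVersionUID = 1L;

    private int x;
    private int y;
    private transient int cachedHash; // derived state: never serialized

    private void writeObject(ObjectOutputStream out) throws IOException {
        // Write the two fields directly, bypassing default reflection.
        out.writeInt(x);
        out.writeInt(y);
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        x = in.readInt();
        y = in.readInt();
        cachedHash = 0; // recompute lazily after deserialization
    }
}
```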
The first two of these problems cannot be addressed in a general way. We can, however, use alternative implementations of ByteArrayOutputStream and ByteArrayInputStream that make three simple optimizations:
- ByteArrayOutputStream, by default, begins with a 32-byte array for the output. As content is written to the stream, the required size of the content is computed and, if necessary, the array is expanded to the greater of the required size or twice the current size. JOS produces output that is somewhat bloated (for example, fully qualified class names are included in uncompressed string form), so the 32-byte default starting size means that lots of small arrays are created, copied into, and thrown away as data is written. This has an easy fix: construct the array with a larger initial size.
- All of the methods of ByteArrayOutputStream that modify the contents of the byte array are synchronized. In general this is a good idea, but in this case we can be certain that only a single thread will ever be accessing the stream. Removing the synchronization will speed things up a little. ByteArrayInputStream's methods are also synchronized.
- The toByteArray() method creates and returns a copy of the stream's byte array. Again, this is usually a good idea: if you retrieve the byte array and then continue writing to the stream, the retrieved byte array should not change. For this case, however, creating another byte array and copying into it merely wastes cycles and makes extra work for the garbage collector.
An optimized implementation of ByteArrayOutputStream is shown in Figure 4.
```java
import java.io.InputStream;
import java.io.OutputStream;

/**
 * ByteArrayOutputStream implementation that doesn't synchronize methods
 * and doesn't copy the data on toByteArray().
 */
public class FastByteArrayOutputStream extends OutputStream {
    /** Buffer and size */
    protected byte[] buf = null;
    protected int size = 0;

    /** Constructs a stream with buffer capacity size 5K. */
    public FastByteArrayOutputStream() {
        this(5 * 1024);
    }

    /** Constructs a stream with the given initial size. */
    public FastByteArrayOutputStream(int initSize) {
        this.size = 0;
        this.buf = new byte[initSize];
    }

    /** Ensures that we have a large enough buffer for the given size. */
    private void verifyBufferSize(int sz) {
        if (sz > buf.length) {
            byte[] old = buf;
            // Grow to the larger of the required size or twice the
            // current capacity, then copy the existing data across.
            buf = new byte[Math.max(sz, 2 * buf.length)];
            System.arraycopy(old, 0, buf, 0, old.length);
            old = null;
        }
    }

    public int getSize() {
        return size;
    }

    /**
     * Returns the byte array containing the written data. Note that this
     * array will almost always be larger than the amount of data actually
     * written.
     */
    public byte[] getByteArray() {
        return buf;
    }

    public final void write(byte b[]) {
        verifyBufferSize(size + b.length);
        System.arraycopy(b, 0, buf, size, b.length);
        size += b.length;
    }

    public final void write(byte b[], int off, int len) {
        verifyBufferSize(size + len);
        System.arraycopy(b, off, buf, size, len);
        size += len;
    }

    public final void write(int b) {
        verifyBufferSize(size + 1);
        buf[size++] = (byte) b;
    }

    public void reset() {
        size = 0;
    }

    /** Returns a ByteArrayInputStream for reading back the written data. */
    public InputStream getInputStream() {
        return new FastByteArrayInputStream(buf, size);
    }
}
```
Figure 4. Optimized version of ByteArrayOutputStream.
The getInputStream() method returns an instance of an optimized version of ByteArrayInputStream that has unsynchronized methods. The implementation of FastByteArrayInputStream is shown in Figure 5.
```java
import java.io.InputStream;

/**
 * ByteArrayInputStream implementation that does not synchronize methods.
 */
public class FastByteArrayInputStream extends InputStream {
    /** Our byte buffer */
    protected byte[] buf = null;

    /** Number of bytes that we can read from the buffer */
    protected int count = 0;

    /** Number of bytes that have been read from the buffer */
    protected int pos = 0;

    public FastByteArrayInputStream(byte[] buf, int count) {
        this.buf = buf;
        this.count = count;
    }

    public final int available() {
        return count - pos;
    }

    public final int read() {
        return (pos < count) ? (buf[pos++] & 0xff) : -1;
    }

    public final int read(byte[] b, int off, int len) {
        if (pos >= count)
            return -1;
        if ((pos + len) > count)
            len = (count - pos);
        System.arraycopy(buf, pos, b, off, len);
        pos += len;
        return len;
    }

    public final long skip(long n) {
        if ((pos + n) > count)
            n = count - pos;
        if (n < 0)
            return 0;
        pos += n;
        return n;
    }
}
```
Figure 5. Optimized version of ByteArrayInputStream.
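A quick round-trip sketch (not from the original article) showing how the two classes pair up: getInputStream() hands back the output stream's own buffer, so no toByteArray()-style copy is ever made.

```java
import java.io.IOException;
import java.io.InputStream;

public class RoundTripDemo {
    public static void main(String[] args) throws IOException {
        FastByteArrayOutputStream out = new FastByteArrayOutputStream();
        out.write(new byte[] { 1, 2, 3 });

        // Shares the underlying buffer -- no defensive copy.
        InputStream in = out.getInputStream();
        int b;
        while ((b = in.read()) != -1) {
            System.out.println(b); // prints 1, 2, 3
        }
    }
}
```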
Figure 6 shows a version of a deep copy utility that uses these classes:
```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

/**
 * Utility for making deep copies (vs. clone()'s shallow copies) of
 * objects. Objects are first serialized and then deserialized. Error
 * checking is fairly minimal in this implementation. If an object is
 * encountered that cannot be serialized (or that references an object
 * that cannot be serialized) an error is printed to System.err and
 * null is returned. Depending on your specific application, it might
 * make more sense to have copy(...) re-throw the exception.
 */
public class DeepCopy {

    /**
     * Returns a copy of the object, or null if the object cannot
     * be serialized.
     */
    public static Object copy(Object orig) {
        Object obj = null;
        try {
            // Write the object out to a byte array
            FastByteArrayOutputStream fbos = new FastByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(fbos);
            out.writeObject(orig);
            out.flush();
            out.close();

            // Retrieve an input stream from the byte array and read
            // a copy of the object back in.
            ObjectInputStream in = new ObjectInputStream(fbos.getInputStream());
            obj = in.readObject();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (ClassNotFoundException cnfe) {
            cnfe.printStackTrace();
        }
        return obj;
    }
}
```
Figure 6. Deep-copy implementation using optimized byte array streams
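Since copy() returns Object, every call site needs a cast. A small hypothetical convenience wrapper (it assumes Java 5+ generics, which postdate the original article) centralizes the unchecked cast:

```java
public class TypedDeepCopy {
    /**
     * Hypothetical helper: delegates to DeepCopy.copy() and performs
     * the unchecked cast in one place. Like DeepCopy.copy(), it
     * returns null if the object cannot be serialized.
     */
    @SuppressWarnings("unchecked")
    public static <T> T copy(T orig) {
        return (T) DeepCopy.copy(orig);
    }
}
```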
The extent of the speed boost will depend on a number of factors in your specific application (more on this later), but the simple class shown in Figure 7 tests the optimized and unoptimized versions of the deep copy utility by repeatedly copying a large object.
```java
import java.util.Date;
import java.util.Hashtable;
import java.util.Vector;

public class SpeedTest {

    public static void main(String[] args) {

        // Make a reasonably large test object. Note that this doesn't
        // do anything useful -- it is simply intended to be large, have
        // several levels of references, and be somewhat random. We start
        // with a hashtable and add vectors to it, where each element in
        // the vector is a Date object (initialized to the current time),
        // a semi-random string, and a (circular) reference back to the
        // object itself. In this case the resulting object produces
        // a serialized representation that is approximately 700K.
        Hashtable obj = new Hashtable();
        for (int i = 0; i < 100; i++) {
            Vector v = new Vector();
            for (int j = 0; j < 100; j++) {
                v.addElement(new Object[] {
                    new Date(),
                    "A random number: " + Math.random(),
                    obj
                });
            }
            obj.put(new Integer(i), v);
        }

        int iterations = 10;

        // Make copies of the object using the unoptimized version
        // of the deep copy utility.
        long unoptimizedTime = 0L;
        for (int i = 0; i < iterations; i++) {
            long start = System.currentTimeMillis();
            Object copy = UnoptimizedDeepCopy.copy(obj);
            unoptimizedTime += (System.currentTimeMillis() - start);

            // Avoid having GC run while we are timing...
            copy = null;
            System.gc();
        }

        // Repeat with the optimized version
        long optimizedTime = 0L;
        for (int i = 0; i < iterations; i++) {
            long start = System.currentTimeMillis();
            Object copy = DeepCopy.copy(obj);
            optimizedTime += (System.currentTimeMillis() - start);

            // Avoid having GC run while we are timing...
            copy = null;
            System.gc();
        }

        System.out.println("Unoptimized time: " + unoptimizedTime);
        System.out.println("  Optimized time: " + optimizedTime);
    }
}
```
Figure 7. Testing the two deep copy implementations.
A few notes about this test:
- The object that we are copying is large. While somewhat random, it will generally have a serialized size of around 700 Kbytes.
- The most significant speed boost comes from avoiding extra copying of data in FastByteArrayOutputStream. This has several implications:
  - Using the unsynchronized FastByteArrayInputStream speeds things up a little, but the standard java.io.ByteArrayInputStream is nearly as fast.
  - Performance is mildly sensitive to the initial buffer size in FastByteArrayOutputStream, but is much more sensitive to the rate at which the buffer grows. If the objects you are copying tend to be of similar size, copying will be much faster if you initialize the buffer size and tweak the rate of growth.
- Measuring speed using elapsed time between two calls to System.currentTimeMillis() is problematic, but for single-threaded applications and testing relatively slow operations it is sufficient (see the timing sketch after this list). A number of commercial tools (such as JProfiler) will give more accurate per-method timing data.
- Testing code in a loop is also problematic, since the first few iterations will be slower until HotSpot decides to compile the code. Testing larger numbers of iterations alleviates this problem.
- Garbage collection further complicates matters, particularly in cases where lots of memory is allocated. In this example, we manually invoke the garbage collector after each copy to try to keep it from running while a copy is in progress.
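As a side note (not part of the original test): on Java 5 and later, System.nanoTime() provides a monotonic timer that is better suited to measuring elapsed intervals than System.currentTimeMillis(), which can jump if the wall clock is adjusted. A minimal sketch:

```java
public class NanoTimingSketch {
    public static void main(String[] args) {
        // Stand-in for the large test object; String is Serializable.
        Object obj = "some object to copy";

        long start = System.nanoTime();
        Object copy = DeepCopy.copy(obj);
        long elapsedMillis = (System.nanoTime() - start) / 1000000L;

        System.out.println("Copy " + (copy != null ? "succeeded" : "failed")
            + " in " + elapsedMillis + " ms");
    }
}
```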
These caveats aside, the performance difference is significant. For example, the code as shown in Figure 7 (on a 500 MHz G3 Macintosh iBook running OS X 10.3 and Java 1.4.1) reveals that the unoptimized version requires about 1.8 seconds per copy, while the optimized version only requires about 1.3 seconds. Whether or not this difference is significant will, of course, depend on the frequency with which your application does deep copies and the size of the objects being copied.
For very large objects, an extension to this approach can reduce the peak memory footprint by serializing and deserializing in parallel threads. See "Low-Memory Deep Copy Technique for Java Objects" for more information.
Source: http://javatechniques.com/blog/faster-deep-copies-of-java-objects/
Reference: http://alvinalexander.com/java/java-deep-clone-example-source-code