I recently needed to do deep cloning of my Java objects and began with the old-school approach I’ve used for years: run the object through an ObjectOutputStream and ObjectInputStream to get a “poor man’s” deep clone. However, the performance of this on large object graphs is absolutely awful, not to mention the amount of memory and CPU it takes to accomplish. Even if I could get past all of that, the fact that every single Object in the chain MUST implement Serializable just pushed me over the edge…
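For reference, that serialization round trip is only a few lines. This is a minimal sketch of the technique, not code from any particular library; the class name and the demo list are mine:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;

public class SerializationClone {
    // Classic "poor man's" deep clone: serialize the object graph to bytes,
    // then deserialize it back. Every object in the graph must be Serializable.
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T deepClone(T original)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            return (T) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ArrayList<String> original = new ArrayList<>(Arrays.asList("a", "b"));
        ArrayList<String> copy = deepClone(original);
        copy.add("c");                    // mutating the copy leaves the original alone
        System.out.println(original);     // [a, b]
        System.out.println(copy);         // [a, b, c]
    }
}
```

Simple, but as noted above it is slow, memory-hungry, and forces Serializable onto the entire object graph.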
I thought to myself, “Surely someone else realizes how dumb this is and has a better solution than having to implement Cloneable on everything”, but alas, searching online didn’t return much other than explanations of how to use Serialization to do deep cloning and a little tutorial on increasing the performance of Serialization cloning:
I found that interesting, but it really didn’t provide that great of a performance gain, and it still didn’t overcome the other issues, so I set out to make a rounder wheel. Perhaps it’s the years of Reflection I’ve got hammered into me, or just the fact that I think Reflection is awesome, but I turned to my old friend for a faster solution. It really is ironic, since so many people refuse to use Reflection for performance reasons, that I would turn to Reflection to increase performance, but let’s see how this goes. 🙂
So, I took the benchmark test that is referenced in the article above and decided to apply it to my own test using Reflection and see the performance difference. The difference is staggering. After modifying the benchmark to do better timing (System.nanoTime instead of System.currentTimeMillis and a couple other tweaks) I bumped up the iterations to 100 and got the following results:
Unoptimized time (standard Serialization): 13,518 milliseconds
Optimized time (the tutorial Serialization): 12,941 milliseconds
Reflection time (my happy little test): 139 milliseconds
Yes, I did expect the performance to increase, but roughly 100 times faster is a startling improvement. So I bumped the test up another digit to see how well it scaled:
Unoptimized time (standard Serialization): 125,754 milliseconds
Optimized time (the tutorial Serialization): 121,434 milliseconds
Reflection time (my happy little test): 1,359 milliseconds
Okay, this seems to scale at least as well as the alternative and doesn’t require the objects to be Serializable. The only problem is that substantially more code is involved to make this work well, since certain objects (arrays, Maps, etc.) have peculiarities that must be handled explicitly. To deal with this scenario I created an interface, “Cloner”, with a single method:
public Object deepClone(Object original, Map<Object, Object> cache) throws Exception;
This is relatively self-explanatory, but the “cache” map is the only really complex part. It is necessary in order to handle cyclical references (ex. ObjectA contains ObjectB, which contains a reference back to ObjectA) that would otherwise cause an endless loop until either memory is exhausted or, more commonly, a StackOverflowError is thrown. Upon instantiation of an Object I immediately put the original Object in the cache as the key and the cloned Object as the value. At the beginning of cloning each Object I check whether a cached clone already exists and, if so, reuse it.
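The cache trick can be sketched like this. This is a simplified illustration of the idea, not jCommon’s actual implementation: it assumes every class has a no-arg constructor (real cloning libraries avoid that requirement with tools like Objenesis), and it uses an IdentityHashMap so identity, rather than equals(), drives the cycle check:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.IdentityHashMap;
import java.util.Map;

public class ReflectionCloner {
    // Tiny demo type containing a cyclic reference.
    static class Node { Node next; int value; }

    // Minimal reflective deep clone. A real implementation also needs
    // special handling for arrays, Maps, Collections, etc.
    public static Object deepClone(Object original, Map<Object, Object> cache)
            throws Exception {
        if (original == null) return null;
        Class<?> type = original.getClass();
        // Immutable leaves can be shared rather than copied.
        if (original instanceof String || original instanceof Number
                || original instanceof Boolean || original instanceof Character
                || original instanceof Enum) {
            return original;
        }
        // Cycle check: if this object was already cloned, reuse that clone.
        Object cached = cache.get(original);
        if (cached != null) return cached;
        // Assumes a no-arg constructor; libraries typically use Objenesis instead.
        Constructor<?> ctor = type.getDeclaredConstructor();
        ctor.setAccessible(true);
        Object clone = ctor.newInstance();
        // Register the clone BEFORE recursing so cycles resolve to it.
        cache.put(original, clone);
        for (Class<?> c = type; c != null; c = c.getSuperclass()) {
            for (Field f : c.getDeclaredFields()) {
                if (Modifier.isStatic(f.getModifiers())) continue;
                f.setAccessible(true);
                f.set(clone, deepClone(f.get(original), cache));
            }
        }
        return clone;
    }

    public static void main(String[] args) throws Exception {
        Node a = new Node(); a.value = 1;
        Node b = new Node(); b.value = 2;
        a.next = b;
        b.next = a; // the cyclical reference described above
        Node copy = (Node) deepClone(a, new IdentityHashMap<>());
        System.out.println(copy != a && copy.next.next == copy); // true
    }
}
```

Because the half-built clone goes into the cache before its fields are copied, the recursion into ObjectB finds ObjectA’s clone already registered and wires the cycle back up instead of looping forever.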
The Cloner instances are pre-defined in the CloneUtilities Class in a static Map<Class<?>, Cloner>. This not only abstracts the implementation details of cloning for each specific case, but also allows extensibility for anyone who needs to handle special Objects in their own cloning scenarios.
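A registry along those lines might look like the following. The Cloner interface matches the one shown above, but the CloneRegistry class and the registered entries are illustrative, not jCommon’s actual code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class CloneRegistry {
    // The per-class strategy interface described in the post.
    public interface Cloner {
        Object deepClone(Object original, Map<Object, Object> cache) throws Exception;
    }

    // Special-case handlers keyed by concrete class; illustrative entries only.
    private static final Map<Class<?>, Cloner> CLONERS = new HashMap<>();
    static {
        CLONERS.put(ArrayList.class, (original, cache) -> {
            ArrayList<?> list = (ArrayList<?>) original;
            ArrayList<Object> copy = new ArrayList<>(list.size());
            cache.put(original, copy);       // register before recursing (cycles!)
            for (Object item : list) copy.add(clone(item, cache));
            return copy;
        });
        CLONERS.put(HashMap.class, (original, cache) -> {
            Map<?, ?> map = (Map<?, ?>) original;
            HashMap<Object, Object> copy = new HashMap<>();
            cache.put(original, copy);
            for (Map.Entry<?, ?> e : map.entrySet())
                copy.put(clone(e.getKey(), cache), clone(e.getValue(), cache));
            return copy;
        });
    }

    public static Object clone(Object original, Map<Object, Object> cache)
            throws Exception {
        if (original == null) return null;
        Object cached = cache.get(original);
        if (cached != null) return cached;
        Cloner cloner = CLONERS.get(original.getClass());
        if (cloner != null) return cloner.deepClone(original, cache);
        return original; // the real utility falls back to reflective field copying here
    }
}
```

Anyone with an exotic type to clone just registers another Cloner in the map; the dispatch code never changes.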
If you’re interested in seeing the final results of this, I have added it to my jCommon project so anyone can use it:
I’ve included several cloning options in the CloneUtilities Class to allow testing between different options. However, it would seem that reflection is significantly faster than any other known alternative (apart from explicitly writing Cloneable support into your Objects and providing some extra functionality for deep cloning). There is still a lot to add to this in the long run, but I believe this is a good start.
Please drop me some feedback if you have any standard Cloner implementations to add.
EDIT: This post has received a lot of traffic and the jcommon project is no longer being hosted at the referenced URL; it can be found here: jcommon on googlecode. You can also find a newer, though less complete, version of this in the xjava project: clone package
2 thoughts on “Fastest Deep Cloning”
This looks great, and is just what I was looking for.
However, I can't seem to get the code to run properly?
I get the same error running after compiling the code from both CommonJ and xJava.
The first two tests run correctly (I'm currently using the optimised serialised deep clone) but the final clone test, using reflection, throws the error:
Exception during clone (returning original): [Ljava.lang.Object;
This happens with the test object created by default in the code and on a custom object.
I'm using Java 1.6 and Eclipse.
Any help would be much appreciated!
Unfortunately this is old code, and I followed some bad coding practices when I wrote it, such as not throwing a proper exception back. If you change line 44 of ObjectCloner in xjava to say: throw new RuntimeException("Exception during clone", exc);
That should hopefully give you some more information to understand why it’s failing.