BinarySerializer and Strings in .NET 1.1
I've just verified that this problem appears to have been rectified in .NET 2.0, so it's undoubtedly "old news" to many people, but this one had us guessing for a while at work this week...
When using .NET Remoting over TCP/binary with a "large" collection of objects (6,000 of them), we found that our application was going into meltdown, taking over six minutes to return the results - yikes!
The actual population time of the custom collection from the DataReader was negligible; it was when execution hit the "return" statement to leave the application server domain that CPU utilisation shot up to 100% and the lag began. The objects being serialized had around 40 fields: a few Int32 and DateTime values, with the bulk being strings.
As a colleague wryly cracked at the time, asking the users to go Tools -> Options -> Tiers -> 2 Tiers was probably not going to be acceptable... ;-)
After much messing around with a profiler and Reflector, I eventually worked out that it was down to the way the binary serializer (the BinaryFormatter class) works. It maintains an internal object table, no doubt to ensure it serializes each object in the graph only once and to avoid cyclic references. The problem is that it uses each object's hash code as the starting position for storing it in that table. If it finds the same object instance at that position it can return it; otherwise it rehashes and tries the next "bucket", and so on until it finds a match (or a free slot for the object).
However, when your object graph contains lots of System.String field instances that all hold the same value, this design falls to bits. That is exactly what happens when you populate your objects via DataReader.GetString(), which returns a new string instance every time. Because a string's hash code is based on its contents, the number of hashing collisions grows quadratically with the number of rows in your result set.
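As a rough illustration, here is a toy model of such an identity table, written in Java (the IdentityTable class and its probe counter are my own invention, an assumption about the mechanism rather than the real BinaryFormatter internals): distinct but value-equal string instances all land on the same starting bucket and force ever-longer probe chains.

```java
// Toy model (an assumption, not the real BinaryFormatter internals) of an
// identity table that buckets objects by hashCode() and probes linearly,
// comparing by reference (==) rather than by equals().
class IdentityTable {
    private final Object[] slots;
    long probes = 0; // total probe steps taken, to expose the collision cost

    IdentityTable(int capacity) {
        slots = new Object[capacity];
    }

    // Returns true if this exact instance was already in the table.
    boolean addIfAbsent(Object o) {
        int i = Math.floorMod(o.hashCode(), slots.length);
        while (slots[i] != null) {
            probes++;
            if (slots[i] == o) return true;  // same instance: already serialized
            i = (i + 1) % slots.length;      // value-equal but distinct: keep probing
        }
        slots[i] = o;
        return false;
    }
}

public class IdentityTableDemo {
    public static void main(String[] args) {
        IdentityTable table = new IdentityTable(1 << 16);
        // 6,000 distinct String instances that are all value-equal, much as
        // DataReader.GetString() hands back a fresh instance per row.
        for (int n = 0; n < 6000; n++) {
            table.addIfAbsent(new String("ACTIVE"));
        }
        // Every instance hashes to the same bucket, so insert n costs about n
        // probes: roughly 18 million probe steps for one repeated column.
        System.out.println("probe steps: " + table.probes);
    }
}
```

With mostly unique values each insert finds a free slot almost immediately; it is the value-equal duplicates that turn the table quadratic.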
As it turns out, this is a known problem for which there is a dreaded hotfix dating from December 2004. I say "dreaded" because, like most large companies out there, we have absolutely no chance of getting a hotfix deployed, given the logistics involved. When will Microsoft EVER get their act together over their .NET service pack strategy? (More than one every three years would be a start!)
Our workaround? Well, as hinted above, what breaks the serializer is multiple string instances sharing the same value. So why not keep a cache of the string instances you retrieve via DataReader.GetString()? That way the value "xyz" is serialized only once instead of once per row: a smaller object graph and faster serialization.
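A minimal sketch of that cache, again in Java (the StringCache name and canonical() method are mine, standing in for whatever wraps DataReader.GetString() in your data layer):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the workaround: canonicalise strings as they come off the reader,
// so value-equal rows end up sharing a single instance in the object graph.
class StringCache {
    private final Map<String, String> cache = new HashMap<>();

    // Returns the first instance ever seen for this value; later duplicate
    // instances are discarded, so the graph holds one "xyz", not one per row.
    String canonical(String s) {
        return cache.computeIfAbsent(s, v -> v);
    }
}

public class StringCacheDemo {
    public static void main(String[] args) {
        StringCache cache = new StringCache();
        String a = cache.canonical(new String("xyz"));
        String b = cache.canonical(new String("xyz"));
        System.out.println(a == b); // true: only one instance survives
    }
}
```

In .NET 1.1 terms this is just a Hashtable lookup wrapped around GetString(). String.Intern would deduplicate too, but interned strings live for the life of the process, whereas a per-query cache can be thrown away with the result set.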
Sure enough, after the change our total round-trip time for those 6,000 records dropped to under a couple of seconds, with the serialization itself taking under 0.2 seconds. Nothing like that great feeling on a Friday afternoon of having spanked a problem like this... That we hadn't hit the problem before was pure luck: earlier screens either had few string columns in their objects, mostly unique string values, or small result sets.
Another solution would have been to implement ISerializable and take control ourselves, for which there are some good articles on CodeProject like this one. The cost, however, is that it introduces another maintenance point in each of your entity classes. For a large development team in the early phases of a project with a continually evolving data model, like my current one, that's bound to go subtly wrong at some point. We may yet need to resort to it - but I would prefer to hold off until we know we need those extra few drops of performance!
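For flavour, here is the shape of that take-control approach, shown as a Java analogue (custom writeObject/readObject hooks) rather than the actual .NET ISerializable API; the Customer class and its field are invented for the example:

```java
import java.io.*;

// Java analogue (not the .NET ISerializable API itself) of taking manual
// control of serialization per class: write each field explicitly so the
// default graph walker never tracks the duplicate string instances.
class Customer implements Serializable {
    private static final long serialVersionUID = 1L;

    transient String status; // excluded from default field serialization

    Customer(String status) {
        this.status = status;
    }

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeUTF(status); // writeUTF copies the chars: no instance tracking
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        status = in.readUTF().intern(); // re-canonicalise on the way back in
    }
}
```

The catch, as noted above, is the maintenance point: every field added to the class later must also be added to both hooks by hand, which is exactly what goes subtly wrong on a fast-moving data model.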
As I said at the top, this problem seems to have been rectified in .NET 2.0, going by my quick testing this morning - had we been using it at work, it would have saved many hours of frustration this week. Then again, as per my .NET 2.0 TreeView performance problem post, we would likely have had some other problems to deal with. I guess this is why we get paid the big bucks, right?
Filed in: remoting performance