1Gig-Tech (#23) – Performance, DSC, PowerShell

January 31, 2016 | 1Gig Tech

Welcome to 1Gig Tech update!

Happy New Year! The New Year edition has 16 articles on technology, news, open source, and community from the fantastic and ever-evolving technology world.

  • How to compute the Hash value of a file using #CSharp? (Kunal Chowdhury)
    When downloading a file from a remote location, we often need to know whether it was downloaded properly. There are many techniques for this; one approach is to check the hash value of the file. The Cryptography APIs of .NET can help you calculate it.
  • To base() or not to base(), that is the question (jonskeet)
    Today I’ve been reviewing the ECMA-334 C# specification, and in particular the section about class instance constructors. If a class contains no instance constructor declarations, a default instance constructor is automatically provided.
  • Performance Doesn’t Matter (Unless You Can Prove That It Does)
    The interesting thing about all of these questions is that they each have a defined, measurable answer. Almost certainly, .Any() will be faster than .Count() (for an IEnumerable, as we’ll see below). Almost certainly, in simple cases, Redis will be faster for reads than SQL Server.
  • VerbalExpressions/CSharpVerbalExpressions
    VerbalExpressions is a CSharp library that helps to construct difficult regular expressions. When first building the solution there will be external libraries that are missing since GitHub doesn’t include DLLs. The best way to get these libraries into your solution is to use NuGet.
  • Machine Learning
    Less than a year ago we decided to acquire Revolution Analytics, the leading commercial provider of software and services for R, the world’s most widely used programming language for statistical computing and predictive analytics.
  • Fixing Spaghetti: How to Work With Legacy Code
    What is Legacy Code? Legacy code is software that generates value for a business but is difficult for developers to change. The terms “code rot” and “spaghetti code” refer to legacy code that is tangled up in poor quality.
  • Gone Mobile 29: Push Notifications
    This episode covers pretty much everything there is to know about Push Notifications. From Apple’s APNS to Google’s C2DM and GCM, learn about what they are and how they work.
  • hack.summit() 2016
    You are now registered to hack.summit(). Please find your unique ticket number below. You will need it to watch the live conference. Don’t worry, we are also sending this to your e-mail right now, along with an automated reminder email 30 minutes before the event.
  • PowerShell Classes for Developers (Punit Ganshani)
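The hash-check idea from the first item can be sketched in a few lines; `Sha256OfFile` is an illustrative helper name, not an API from the linked article.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

// A minimal sketch of the hash-check idea: compute the SHA-256 of a file
// and render it as a lowercase hex string.
string Sha256OfFile(string path)
{
    using (var sha = SHA256.Create())
    using (var stream = File.OpenRead(path))
    {
        byte[] hash = sha.ComputeHash(stream);
        return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
    }
}

// Compare the result against the checksum published alongside the download:
// bool ok = Sha256OfFile("setup.msi") == publishedChecksum;
```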

You can also follow these updates on the Facebook Page, or read previous editions at 1Gig Tech.


Optimizing performance of your WCF Services

February 3, 2014 | Visual Studio, WCF

The performance optimization process starts when you are given non-functional requirements (NFRs) such as availability, concurrent users, scalability, and so on.  When NFRs are not provided, one tends to ignore them and continue building the application using Visual Studio wizards that generate code recipes, until a production issue raises questions about why a recently developed application performs poorly.  As the experts say, the earlier you detect issues in the life-cycle of a product, the cheaper they are to resolve.  So it is better to be aware of design principles and to be cautious of code that can cause a significant drop in performance.

When it comes to Windows Communication Foundation (WCF), an architect has to make several design decisions.  This article outlines some of the decisions an architect or lead has to make when designing a WCF service.  The prerequisite for this article is preliminary knowledge of WCF; you can refer to the MSDN articles on WCF.

The right Binding


Choosing the right binding is not difficult.  You can read an in-depth article on WCF bindings on MSDN if you want more information.  If we summarize the types and some of the overheads of using them, it would be:

  • Basic binding: The BasicHttpBinding is designed to expose a WCF service as a legacy ASMX web service, so that old or cross-platform clients can work with the new services hosted over the Intranet or Internet.   This binding does not enable any security by default.  The default message encoding is text/XML.
  • Web Service (WS) binding: The WSHttpBinding class uses HTTP or HTTPS for transport, and is designed to offer a variety of features such as reliability, transactions, and security over the Internet.  These features come at a cost: if a BasicHttpBinding takes 2 network calls (request and response) to complete a request, a WSHttpBinding may take over 5 network calls, which makes it slower than BasicHttpBinding.  If your application is consuming services hosted on the same machine, it is preferable to use the IPC binding instead of WSHttpBinding to achieve scalable performance.
  • Federated WS binding: The WSFederationHttpBinding binding is a specialization of the WS binding, offering support for federated security.
  • Duplex WS binding: The WSDualHttpBinding binding is similar to the WS binding, except that it also supports bidirectional communication from the service to the client.  Reliable sessions are enabled by default.
  • TCP binding: The NetTcpBinding is primarily used for cross-machine communication on the Intranet and supports a variety of features, including reliability, transactions, and security.  It is optimized for WCF-to-WCF communication – only .NET clients can communicate with .NET services using this binding – and is an ideal replacement for socket-based communication.  To achieve greater performance, try changing the following settings:
    • Raise the serviceThrottling limits (maxConcurrentCalls, maxConcurrentSessions, maxConcurrentInstances)
    • Increase the maxItemsInObjectGraph to 2147483647
    • Increase the values of listenBacklog, maxConnections, and maxBuffer
  • Peer network binding: The NetPeerTcpBinding uses peer networking as a transport. The peer network-enabled clients and services all subscribe to the same grid and broadcast messages to it.
  • IPC binding: The NetNamedPipeBinding class uses named pipes as a transport for same-machine communication. It is the most secure binding since it cannot accept calls from outside the machine, and it supports a variety of features similar to the TCP binding.  It can be used efficiently for cross-process communication.
  • MSMQ binding: The NetMsmqBinding uses MSMQ for transport and is designed to offer support for disconnected queued calls.
  • MSMQ integration binding: The MsmqIntegrationBinding converts WCF messages to and from MSMQ messages, and is designed to interoperate with legacy MSMQ clients.
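The NetTcpBinding tuning knobs mentioned above live in configuration.  A sketch with illustrative values (assumptions, not recommendations; an endpoint would reference the binding via bindingConfiguration="tunedTcp"):

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- illustrative values; tune against your own load tests -->
      <binding name="tunedTcp" listenBacklog="200" maxConnections="200"
               maxBufferSize="524288" maxReceivedMessageSize="524288" />
    </netTcpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <serviceThrottling maxConcurrentCalls="512"
                           maxConcurrentSessions="512"
                           maxConcurrentInstances="512" />
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```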

The right Encoder

Once you have decided on the binding, the first level of optimization can be done at the message level.   There are three message encoders available out of the box in the .NET Framework.

  • Text – The default encoder for the BasicHttpBinding and WSHttpBinding bindings – it uses a text-based (UTF-8 by default) XML encoding.
  • MTOM – An interoperable format (though less broadly supported than text) that allows for a more optimized transmission of binary blobs, as they don’t get base64 encoded.
  • Binary – A default encoder for NetTcpBinding and NetNamedPipeBinding bindings – it avoids base64 encoding your binary blobs, and also uses a dictionary-based algorithm to avoid data duplication. Binary supports “Session Encoders” that get smarter about data usage over the course of the session (through pattern recognition).

Having said that, the best match for you is decided based on:

  • Size of the encoded message – as it is going to be transferred over the wire.  A smaller message with not much hierarchy in its objects is best transmitted in text/XML format.
  • CPU load – while encoding the messages and also processing your operation contracts
  • Simplicity – Messages converted into binary are no longer readable by the naked eye.  If you do not need to log the messages and want faster transmission, binary is the format to go for
  • Interoperability – MTOM does not ensure 100% interoperability with non-WCF services.  If you do not require interoperability, binary is the format to go for

The binary encoder, so far, seems to be the fastest, and if you are using NetTcpBinding or NetNamedPipeBinding it will do wonders for you!    Why?  Over a period of time, “session encoders” become smarter (by using a dictionary and analysing patterns) and perform optimizations to achieve faster speeds.

Final Words – A text encoder converts binary data into Base64 format, which is an overhead (around 4-5 times the size) and can be avoided by using the binary or MTOM encoders.  If there is no binary data in the message, the MTOM encoder tends to slow down performance, as it has the overhead of converting the message into MIME format.  So try out different message encoders to check what suits your requirements!

The right Compression


Choosing the right encoder can reduce the message size by 4-5 times.  But what if the message size is still in MBs?  There are ways to compress your message and make it compact.  If your WCF services are hosted on IIS or WAS (Windows Server 2008/2012), you can opt for IIS Compression, which compresses all outgoing/incoming messages using GZip.  To enable IIS Compression, follow the steps in Scott Hanselman’s article – Enabling dynamic compression (gzip, deflate) for WCF Data Feeds, OData and other custom services in IIS7
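A rough sketch of what gzip does to a verbose response body; the payload here is illustrative, and IIS handles this transparently for you.

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Text;

// Gzip-compress a byte payload, as IIS dynamic compression would do
// to a response body before it goes over the wire.
byte[] Gzip(byte[] data)
{
    using (var output = new MemoryStream())
    {
        using (var gzip = new GZipStream(output, CompressionMode.Compress, true))
        {
            gzip.Write(data, 0, data.Length);
        }
        return output.ToArray();
    }
}

// Verbose, repetitive XML bodies compress extremely well.
byte[] body = Encoding.UTF8.GetBytes(
    string.Concat(Enumerable.Repeat("<User><Name>A</Name></User>", 500)));
byte[] packed = Gzip(body);
Console.WriteLine($"{body.Length} bytes -> {packed.Length} bytes gzipped");
```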

Data Caching


Caching any data your service depends on avoids dependency issues and gives you faster access to the same data.  There are several frameworks available to cache your data.  If your application is small (non-clustered, non-scalable, etc.) and your service is not stateless (you might want to make it stateless), you might want to consider in-memory caching; for large-scale applications, you might want to check out AppFabric, Coherence, etc.

  • In-memory Caching –  If your WCF services are hosted on IIS or WAS, you can enable ASP.NET caching by adding the AspNetCompatibilityRequirements attribute to your service and setting aspNetCompatibilityEnabled to true in the Web.config file.   If you are self-hosting, you can use the Enterprise Library Caching block.  If your application is built on .NET 4.0 or later, you can use Runtime Caching (System.Runtime.Caching)
  • AppFabric – Use AppFabric for dedicated and distributed caching to increase service performance.  This will help you overcome several problems of in-memory caching, such as sticky sessions, caching in each component/service on a server, synchronization of the cache when data changes, and the like.

When storing objects in a cache, prefer caching serializable objects.  This lets you switch caching providers at any time.
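To make the in-memory option concrete, here is a minimal, illustrative expiring cache over a ConcurrentDictionary.  It stands in for the ASP.NET / Enterprise Library / Runtime Caching blocks named above; it is a sketch, not a replacement for them.

```csharp
using System;
using System.Collections.Concurrent;

// A toy cache with absolute expiry; entries simply miss once expired.
var cache = new ConcurrentDictionary<string, (object Value, DateTime Expiry)>();

void Set(string key, object value, TimeSpan ttl)
    => cache[key] = (value, DateTime.UtcNow.Add(ttl));

object Get(string key)
    => cache.TryGetValue(key, out var entry) && entry.Expiry > DateTime.UtcNow
        ? entry.Value
        : null;

// Cache the serialized form of an object rather than a live object graph.
Set("user:42", "{ \"Name\": \"A\" }", TimeSpan.FromMinutes(5));
Console.WriteLine(Get("user:42"));
```

Caching the serialized form is what makes it cheap to later switch to a distributed provider such as AppFabric.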

Load Balance


Load balancing should not be seen merely as a means to achieve scalability.  While it definitely increases scalability, increased performance is often the driving force towards load balancing services.   There is an excellent article on MSDN on load balancing WCF services

Acceleration using GPU


There are many open-source GPU APIs available that can enhance the performance of heavy data-computation tasks or image processing.  Data-computation tasks also involve sorting, filtering and selection – operations that we do using LINQ or PLINQ.  In one of my projects, we observed that operations that took 20,000 milliseconds on an i5 processor using PLINQ took barely 900 milliseconds on the same machine using the GPU.  So leveraging the power of the GPU in your data-layer WCF services can boost the performance of those operations by more than 20 times!

Some of the APIs that I recommend are Accelerator by Microsoft and CUDA by NVIDIA.  Both of them support development in C# language.



With the right binding and encoder you can expect around a 10% increase in performance, but when bundled with data caching and GPU acceleration, performance can improve far more.  So if you are facing issues optimizing the performance of your WCF service, experiment with the above steps and get started.  Some links that may interest you are,

Let me know if you need any other information on WCF.

Why is StringBuilder faster in string concatenations?

January 15, 2014 | CSharp, Visual Studio

Almost every developer who is new to C# faces the question of which is better for string concatenation – string.Concat, + (the plus operator), string.Format or StringBuilder.  The easiest way to find an answer is to Google it and read the views of many experts.  A few of the hot links which every developer stumbles upon are

I don’t want to reiterate what’s mentioned in the above articles, so I’ll just give the gist (from: MSDN)

The performance of a concatenation operation for a String or StringBuilder object depends on how often a memory allocation occurs. A String concatenation operation always allocates memory, whereas a StringBuilder concatenation operation only allocates memory if the StringBuilder object buffer is too small to accommodate the new data. Consequently, the String class is preferable for a concatenation operation if a fixed number of String objects are concatenated. In that case, the individual concatenation operations might even be combined into a single operation by the compiler. A StringBuilder object is preferable for a concatenation operation if an arbitrary number of strings are concatenated; for example, if a loop concatenates a random number of strings of user input.

That leads to a few important conclusions:

  • String is immutable; every time we modify it (through its object or any of its methods), it internally allocates a new memory location and stores the new value there.   When we perform repeated modifications, using a string object is an overhead
  • When we have a finite number of text concatenations, we could use either of the following forms,
string finalStringUsingPlusSymbol = @"this is a new string"
          + "with a lot of words"
          + "together forming a sentence. This is used in demo"
          + "of string concatenation.";

string finalStringUsingStringConcat =
    String.Concat(new[] {
          @"this is a new string"
        , "with a lot of words"
        , "together forming a sentence. This is used in demo"
        , "of string concatenation."
    });

  • For concatenations in a loop (where count > 2), prefer a StringBuilder.  That much is a known fact; let’s see why and how it is so.
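The known fact, sketched: both approaches below build the same string, but += allocates a new string on every iteration while StringBuilder grows a single buffer.

```csharp
using System;
using System.Text;

// Concatenate the numbers 0..n-1 with the + operator:
// a brand-new string is allocated on every pass.
string ByPlus(int n)
{
    string s = string.Empty;
    for (int i = 0; i < n; i++) s += i.ToString();
    return s;
}

// The same result via StringBuilder: appends go into one growing buffer.
string ByBuilder(int n)
{
    var sb = new StringBuilder();
    for (int i = 0; i < n; i++) sb.Append(i);
    return sb.ToString();
}

Console.WriteLine(ByPlus(100) == ByBuilder(100)); // prints True
```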

Step-Into StringBuilder class


When a StringBuilder object is created, either with a default string value or with the default constructor, a char buffer (read: array) is created internally with capacity 0x10 or the length of the string passed to the constructor, whichever is greater.  This buffer has a maximum capacity of 0x7fffffff unless you specify one explicitly when constructing the StringBuilder.

If a string value was passed to the constructor, the characters of the string are copied into memory using the wstrcpy (internal) method of System.String.  Now, when you call the Append(string) method in your code, the code snippet below gets executed.

  1. if (value != null)
  2. {
  3.     char[] chunkChars = this.m_ChunkChars;
  4.     int chunkLength = this.m_ChunkLength;
  5.     int length = value.Length;
  6.     int num3 = chunkLength + length;
  7.     if (num3 < chunkChars.Length)
  8.     {
  9.         if (length <= 2)
  10.         {
  11.             if (length > 0)
  12.             {
  13.                 chunkChars[chunkLength] = value[0];
  14.             }
  15.             if (length > 1)
  16.             {
  17.                 chunkChars[chunkLength + 1] = value[1];
  18.             }
  19.         }
  20.         else
  21.         {
  22.             fixed (char* str = ((char*)value))
  23.             {
  24.                 char* smem = str;
  25.                 fixed (char* chRef = &(chunkChars[chunkLength]))
  26.                 {
  27.                     string.wstrcpy(chRef, smem, length);
  28.                 }
  29.             }
  30.         }
  31.         this.m_ChunkLength = num3;
  32.     }
  33.     else
  34.     {
  35.         this.AppendHelper(value);
  36.     }
  37. }


Look at line 9, where it checks if length <= 2 and, if so, assigns the first two characters of the string manually into the character array (the buffer).  Otherwise, as lines 22-29 show, it first fixes the location of a pointer variable (to understand this better, read about the fixed keyword) so that the GC does not relocate it, and then copies the characters of the string using wstrcpy (an internal method of System.String).  So the performance and strategy of StringBuilder primarily rely on wstrcpy.  The core code of wstrcpy uses integer pointers to copy from the source (the string passed to Append, whose location is referred to as smem) to the destination (the character buffer, referred to as dmem)

while (charCount >= 8)
{
    *((int*)dmem) = *((uint*)smem);
    *((int*)(dmem + 2)) = *((uint*)(smem + 2));
    *((int*)(dmem + 4)) = *((uint*)(smem + 4));
    *((int*)(dmem + 6)) = *((uint*)(smem + 6));
    dmem += 8;
    smem += 8;
    charCount -= 8;
}


String.Format is another StringBuilder


Yes, String.Format internally uses StringBuilder and creates a buffer of size format.Length + (args.Length * 8).

public static string Format(IFormatProvider provider, string format, params object[] args)
{
    if ((format == null) || (args == null))
    {
        throw new ArgumentNullException((format == null) ? "format" : "args");
    }
    StringBuilder sb = StringBuilderCache.Acquire(format.Length + (args.Length * 8));
    sb.AppendFormat(provider, format, args);
    return StringBuilderCache.GetStringAndRelease(sb);
}


This has two advantages over using a plain-vanilla StringBuilder.

  • It creates a bigger initial buffer than the default 0x10 size
  • It uses the StringBuilderCache class, which maintains a StringBuilder instance in a static variable.  When the Acquire method is invoked, it clears the cached value (without creating a new object) and returns that StringBuilder.  This reduces the time required to create a StringBuilder object

So for repeated concatenations my order of preference would be String.Format, then StringBuilder, then String.Concat or + (the overloaded plus operator).

Performance check


I ran a small performance check to verify this understanding.  The results when 100,000 concatenations were performed in a loop on a quad-core machine were:

Time taken using + : 93071.0034
Time taken using StringBuilder: 14.0182
Time taken using StringBuilder with Format: 24.0155
Time taken using String.Format and + : 24.0155
Time taken using StringBuilder with Format and clear: 38.0249
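The harness behind such numbers can be sketched with Stopwatch.  This is a hedged re-creation, not the author's original code; absolute figures vary by machine, so only the relative magnitudes are meaningful.

```csharp
using System;
using System.Diagnostics;
using System.Text;

// Time a single action and return elapsed milliseconds.
double TimeIt(Action action)
{
    var sw = Stopwatch.StartNew();
    action();
    sw.Stop();
    return sw.Elapsed.TotalMilliseconds;
}

int n = 10_000; // the article used 100,000 iterations

double plus = TimeIt(() =>
{
    string s = "";
    for (int i = 0; i < n; i++) s += "x"; // new allocation per pass
});

double builder = TimeIt(() =>
{
    var sb = new StringBuilder();
    for (int i = 0; i < n; i++) sb.Append("x"); // one growing buffer
    sb.ToString();
});

Console.WriteLine($"+ : {plus:F2} ms, StringBuilder: {builder:F2} ms");
```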

Method, Delegate and Event Anti-Patterns in C#

October 28, 2013 | CSharp, Visual Studio

No enterprise application exists without methods, events, and delegates, and every developer has written methods in his or her application.  When defining methods, delegates, and events, we follow standard definitions.  Beyond those definitions, there are best practices one should follow to ensure that a method does not leave dangling object references, that other methods are called in the appropriate way, that arguments are validated, and more.

This article outlines some anti-patterns in using methods, delegates, and events, and in effect highlights the best practices to follow to get the best performance from your application with a very low memory footprint.

The right disposal of objects

We have seen multiple demonstrations that implementing the IDisposable interface (in class BaseClass) and wrapping its object instance in a ‘using’ block is sufficient for a good clean-up process.  While this is true in most cases, the approach does not guarantee that derived classes (say, DerivedClass) will have the same clean-up behaviour as the base class.

To ensure that all derived classes take responsibility for cleaning up their resources, it is advisable to add a virtual method to the BaseClass that is overridden in the DerivedClass, where cleanup is done appropriately.  One such implementation would look like,

public class BaseClass : IDisposable
{
    protected virtual void Dispose(bool requiresDispose)
    {
        if (requiresDispose)
        {
            // dispose the objects
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~BaseClass()
    {
        Dispose(false);
    }
}

public class DerivedClass : BaseClass
{
    // some members here

    protected override void Dispose(bool requiresDispose)
    {
        // Dispose derived class members
        base.Dispose(requiresDispose);
    }
}

This implementation ensures that the object is not stuck in the finalizer queue when it is wrapped in a ‘using’ block, and that members of both BaseClass and DerivedClass are freed from memory.

The return value of a method can cause a leak

While most of our focus is on freeing the resources used inside a method, the return value of the method also occupies memory.   If you return an object that the caller never uses, that memory is occupied (but not used) for nothing.

Let’s see some bad code that can leave unwanted objects in memory.

public void MethodWhoseReturnValueIsNotUsed(string input)
{
    if (!string.IsNullOrEmpty(input))
    {
        // value is not used any where
        input.Replace(" ", "_");

        // another example
        new MethodAntiPatterns();
    }
}

Most of the string methods – Replace, Trim (and its variants), Remove, IndexOf, and the like – return a ‘new’ string value instead of modifying the input string.  Even if the output of these methods is not used, the CLR will create the object and store it in memory.  Another similar example is the creation of an object that is never used (see the MethodAntiPatterns object in the example).
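To see the anti-pattern and its fix side by side:

```csharp
using System;

// String methods never mutate the instance; they return a new string.
// The fix for the anti-pattern above is simply to use the return value.
string input = "a b c";

input.Replace(" ", "_");         // return value discarded; input is unchanged
Console.WriteLine(input);        // prints a b c

input = input.Replace(" ", "_"); // assign the returned string instead
Console.WriteLine(input);        // prints a_b_c
```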

Virtual methods in constructor can cause issues

The heading speaks for itself.  When calling virtual methods from the constructor of ABaseClass, you cannot guarantee that ADerivedClass has been instantiated yet: the base constructor runs first and invokes the override before the derived constructor body has executed, so the override observes fields (like init) in their default state.

public partial class ABaseClass
{
    protected bool init = false;

    public ABaseClass()
    {
        Console.WriteLine(".ctor – base");
        DoWork();
    }

    protected virtual void DoWork()
    {
        Console.WriteLine("dowork – base >> "
            + init);
    }
}

public partial class ADerivedClass : ABaseClass
{
    public ADerivedClass()
    {
        Console.WriteLine(".ctor – derived");
        init = true;
    }

    protected override void DoWork()
    {
        Console.WriteLine("dowork – derived >> "
            + init);

        base.DoWork();
    }
}


Use SecurityCritical attribute for code that requires elevated privileges

Accessing critical code from a non-critical block is not a good practice.

Mark methods and delegates that require elevated privileges with the SecurityCritical attribute, and ensure that only code with the right (elevated) privileges can call those methods or delegates.

[SecurityCritical]
public delegate void CriticalDelegate();

public class DelegateAntiPattern
{
    public void Experiment()
    {
        CriticalDelegate critical = new CriticalDelegate(CriticalMethod);

        // Should not call a non-critical method or vice-versa
        CriticalDelegate nonCritical = new CriticalDelegate(NonCriticalMethod);
    }

    // Should not be called from non-critical delegate
    [SecurityCritical]
    private void CriticalMethod() {}

    private void NonCriticalMethod() { }
}


Override GetHashCode when overriding the Equals method

When you are overriding the Equals method to do object comparisons, you would typically choose one or more (mandatory) fields to check whether two objects are the same.  So your Equals method would look like,

public class User
{
    public string Name { get; set; }
    public int Id { get; set; }

    // optional for comparison
    public string PhoneNumber { get; set; }

    public override bool Equals(object obj)
    {
        if (obj == null) return false;

        var input = obj as User;
        return input != null &&
            (input.Name == Name && input.Id == Id);
    }
}


This approach checks whether all mandatory field values are the same.  It looks fine in a demonstration, but when you are dealing with business entities it becomes an anti-pattern.  The best approach for such comparisons would be to rely on GetHashCode to find out whether the objects are the same

public override bool Equals(object obj)
{
    if (obj == null) return false;

    var input = obj as User;
    return input == this;
}

public override int GetHashCode()
{
    unchecked
    {
        // 17 and 23 are the seed and multiplier primes;
        // this algorithm is used by the C# compiler
        // for anonymous types
        int hash = 17;
        hash = hash * 23 + Name.GetHashCode();
        hash = hash * 23 + Id.GetHashCode();
        return hash;
    }
}

You can use any hashing algorithm here to compute the hash of an object.  In this case, comparisons happen between the computed hashes of objects (int values), which is faster and scales better as you add new properties to the comparison.
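The hash-combining recipe from the snippet above, extracted into a standalone function over the two mandatory fields (Name, Id); `CombineHash` is an illustrative name.

```csharp
using System;

// Seed with one prime, fold each field in with another:
// the same pattern the snippet above uses inside GetHashCode.
int CombineHash(string name, int id)
{
    unchecked
    {
        int hash = 17;
        hash = hash * 23 + (name?.GetHashCode() ?? 0);
        hash = hash * 23 + id.GetHashCode();
        return hash;
    }
}

// Equal field values always produce equal hashes...
Console.WriteLine(CombineHash("Kunal", 7) == CombineHash("Kunal", 7)); // prints True
// ...but hashes can collide, so a hash match is a fast filter,
// not a full proof of equality.
```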

Detach the events when not in use

Is it necessary to remove event handlers explicitly in C#?  Yes, if you want a lower memory footprint for your application.  Leaving events subscribed is an anti-pattern.

Let’s understand the reason by an example

public class Publisher
{
    public event EventHandler Completed;

    public void Process()
    {
        // do something
        if (Completed != null)
        {
            Completed(this, EventArgs.Empty);
        }
    }
}

public class Subscriber
{
    public void Handler(object sender, EventArgs args) { }
}

Now we will attach the Completed event of Publisher to the Handler method of Subscriber to understand the clean-up.

Publisher pub = new Publisher();
Subscriber sub = new Subscriber();
pub.Completed += sub.Handler;

// this will invoke the event
pub.Process();

// frees up the event & references
pub.Completed -= sub.Handler;

// will not invoke the event
pub.Process();

// frees up the memory
pub = null; sub = null;

After the Process method has executed, the Handler method receives the execution flow and completes its processing.  However, the event is still live, and so are its references; if you call Process again, Handler will be invoked again.  When we unsubscribe (-=) the Handler method, the event association and its references are freed, but the pub and sub objects are not freed yet.  When pub and sub are assigned null, they become eligible for collection by the GC.

If we do not unsubscribe (-=) and keep the other code as-is, the GC will check for live references to pub and sub, find a live event, and not collect these objects, causing a memory leak.  This common anti-pattern is most prevalent in UI-based solutions where UI events are attached/hooked to code-behind, view-models, or facades.
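A miniature version of the subscribe/unsubscribe lifecycle above, using a plain delegate variable so the whole flow fits in a few lines (the counter is illustrative):

```csharp
using System;

int calls = 0;
EventHandler completed = null;

void Handler(object sender, EventArgs args) => calls++;

completed += Handler;
completed?.Invoke(null, EventArgs.Empty);  // handler runs: calls == 1

completed -= Handler;                      // detach: the reference is released
completed?.Invoke(null, EventArgs.Empty);  // no subscribers, nothing runs

Console.WriteLine(calls); // prints 1
```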

Following these practices will definitely reduce your application’s footprint and make it faster.

Which one is better: JSON or XML serialization?

October 23, 2013 | CSharp, Visual Studio

One of the hot topics of discussion when building enterprise applications is whether one should use JSON- or XML-based serialization for

  • data serialization and deserialization
  • data storage
  • data transfer

To illustrate these aspects, let’s write some code that can help establish the facts.  Our code involves creating an entity, User

public class User
{
    public string Name { get; set; }
    public int Id { get; set; }
    public UserType Type { get; set; }
}

public enum UserType { Tech, Business, Support }


To test performance and other criteria we will use the standard XML serialization, and for JSON we will evaluate these parameters using two open-source frameworks: Newtonsoft.Json and ServiceStack.Text

To create dummy data, our code looks like


private Random rand = new Random();
private char[] letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToCharArray();

private List<User> GetDummyUsers(int max)
{
    var users = new List<User>(max);
    for (int i = 0; i < max; i++)
    {
        users.Add(new User { Id = i, Name = GetRandomName(), Type = UserType.Business });
    }

    return users;
}

private string GetRandomName()
{
    int maxLength = rand.Next(1, 50);
    string name = string.Empty;
    for (int i = 0; i < maxLength; i++)
    {
        name += letters[rand.Next(26)];
    }
    return name;
}

private long Compress(byte[] data)
{
    using (var output = new MemoryStream())
    {
        using (var compressor = new GZipStream(output, CompressionMode.Compress, true))
        using (var buffer = new BufferedStream(compressor, data.Length))
        {
            for (int i = 0; i < data.Length; i++)
                buffer.WriteByte(data[i]);
        }
        return output.Length;
    }
}


The serialization logic to convert List<User> to serialized string and gather the statistics is


public void Experiment()
{
    DateTime dtStart = DateTime.Now;
    List<User> users = GetDummyUsers(20000); // change the number here
    Console.WriteLine("Data generation  \t\t took mSec: "
        + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    Console.WriteLine("----");

    dtStart = DateTime.Now;
    var xml = MyXmlSerializer.Serialize<List<User>>(users, Encoding.UTF8);
    Console.WriteLine("Length (XML):      \t" + xml.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now; // reset the timer before the second run
    xml = MyXmlSerializer.Serialize<List<User>>(users, Encoding.UTF8);
    Console.WriteLine("Length (XML):      \t" + xml.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    dtStart = DateTime.Now;
    var json = JsonConvert.SerializeObject(users);
    Console.WriteLine("Length (JSON.NET): \t" + json.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    json = JsonConvert.SerializeObject(users);
    Console.WriteLine("Length (JSON.NET): \t" + json.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    var serializer2 = new JsvSerializer<List<User>>();
    dtStart = DateTime.Now;
    var json2 = serializer2.SerializeToString(users);
    Console.WriteLine("Length (JSON/ST) : \t" + json2.Length + " took mSec: "
        + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    json2 = serializer2.SerializeToString(users);
    Console.WriteLine("Length (JSON/ST) : \t" + json2.Length + " took mSec: "
        + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    Console.WriteLine("----");

    var xmlBytes = Converter.ToByte(xml);
    Console.WriteLine("Bytes (XML):     \t" + xmlBytes.Length);

    var jsonBytes = Converter.ToByte(json);
    Console.WriteLine("Bytes (JSON):    \t" + jsonBytes.Length);

    var jsonBytes2 = Converter.ToByte(json2); // was json; must use the ServiceStack string
    Console.WriteLine("Bytes (JSON/ST): \t" + jsonBytes2.Length);

    Console.WriteLine("----");

    var compressedBytes = Compress(xmlBytes);
    Console.WriteLine("Compressed Bytes (XML):     \t" + compressedBytes);

    compressedBytes = Compress(jsonBytes);
    Console.WriteLine("Compressed Bytes (JSON):    \t" + compressedBytes);

    compressedBytes = Compress(jsonBytes2);
    Console.WriteLine("Compressed Bytes (JSON/ST): \t" + compressedBytes);

    Console.WriteLine("----");

    dtStart = DateTime.Now;
    MyXmlSerializer.Deserialize<List<User>>(xml, Encoding.UTF8);
    Console.WriteLine("Deserialized (XML): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now; // reset the timer before the second run
    MyXmlSerializer.Deserialize<List<User>>(xml, Encoding.UTF8);
    Console.WriteLine("Deserialized (XML): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    dtStart = DateTime.Now;
    JsonConvert.DeserializeObject<List<User>>(json);
    Console.WriteLine("Deserialized (JSON): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    JsonConvert.DeserializeObject<List<User>>(json);
    Console.WriteLine("Deserialized (JSON): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    dtStart = DateTime.Now;
    serializer2.DeserializeFromString(json2);
    Console.WriteLine("Deserialized (JSON/ST): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now; // reset the timer before the second run
    serializer2.DeserializeFromString(json2);
    Console.WriteLine("Deserialized (JSON/ST): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
}


Running this program on a quad-core processor with 6 GB RAM produced the statistics below.




What do the statistics mean?


  • Serialization and Deserialization Performance

    For XML serialization, there is no decrease in serialization time on subsequent requests, even when the same XmlSerializer object is reused.  With JSON, the frameworks reduce the serialization time drastically on subsequent runs: JSON serialization appears to give a gain of 50-97% in serialization time.

    When dealing with deserialization, XML deserialization gives consistently better performance with both data sets (20K and 200K).  JSON deserialization seems to take more time, even when averaged.

    Every application requires both serialization and deserialization.  Considering the performance statistics, ServiceStack.Text outperforms the other two libraries.

    Winner: JSON with ServiceStack.Text, taking 91% of the time taken by XML serialization + deserialization

  • Data Storage

    Looking at the data storage aspect, an XML string definitely requires more storage space.  So if you are looking at storing the serialized string, JSON is the clear choice.

    However, when you apply GZip compression to the serialized strings generated by the XML / JSON frameworks, there is no major difference in storage size.  JSON still saves some bytes for you!  This is one reason some NoSQL databases use JSON-based storage instead of XML-based storage.  However, for quicker retrieval you need to apply some indexing mechanisms too.

    Winner: JSON without compression; with compression, a minor gain by using JSON

  • Data Transfer

    Data transfer comes into the picture when you are transferring your objects over EMS / MQ / web services.  Keeping other parameters such as network latency, availability, bandwidth, throughput, etc. constant in both cases, the amount of data transferred becomes a function of the data length and the protocol used over the network.

    For EMS / MQ – the data length, as seen in the statistics, is smaller for JSON when sent as a string and almost the same when sent as compressed bytes.

    For WebServices / WCF – data transfer depends on the protocol used.  If you are using SOAP-based services, apart from your serialized XML you will also have SOAP headers forming the payload.  But if you are using REST, you can return plain XML / JSON, and in that case a JSON string will have a smaller payload than an XML string.

    Winner: Depends on the protocol of transfer and compression technique used


Note: The performance may vary slightly with other C# libraries, but the relative differences should remain broadly the same.


Hope this article helps you choose the right protocol and technique for your application.  If you have any questions, I'll be happy to help!

Performance oriented Xml and JSON serialization in .NET

November 16, 2011 CSharp, Visual Studio, WCF

The Microsoft .NET Framework provides multiple out-of-the-box data serializers for data transformations.  The most famous one, used since .NET 1.0, is XmlSerializer, while the one that has become more famous since .NET 3.0 is DataContractSerializer.  But they are not the only two serializers the framework offers.  In this essay, let's see the different serializers the .NET Framework offers and how they differ from each other.  Here's the list:

  • XmlSerializer: The most commonly used XML serializer
  • JavaScriptSerializer: Introduced with the ASP.NET AJAX Extensions in .NET 2.0, and now marked as obsolete; primarily provides JSON serialization
  • DataContractSerializer: Introduced in .NET 3.0 with Windows Communication Foundation (WCF); this is the default serializer in WCF
  • NetDataContractSerializer: A not-too-famous serializer that includes CLR type information in the serialized XML, which DataContractSerializer does not
  • DataContractJsonSerializer: Introduced in .NET 3.5, this class is handy for generating the JSON output of an entity

Next, let's define an Employee class, implement a serializer class, and try out XmlSerializer, DataContractSerializer, NetDataContractSerializer, and DataContractJsonSerializer with examples.

Step 1 – Defining the Employee class

The structure of the Employee class is different for different serializers.  Note that XmlSerializer requires a parameterless (default) constructor, while the other serializers do not.  Serializers based on DataContractSerializer require the DataContract and DataMember attributes on the class and its members, while XmlSerializer requires either a native type or a complex class implementing ISerializable.

Employee Class
// Note: the two Employee definitions below would live in separate projects/namespaces

///<summary>
/// Employee class for all other serializers
///</summary>
[DataContract]
public class Employee
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public int EmployeeId { get; set; }

    ///<summary>
    /// Note: Default constructor is not mandatory
    ///</summary>
    public Employee(string name, int employeeId)
    {
        this.Name = name;
        this.EmployeeId = employeeId;
    }
}

///<summary>
/// Employee class for XmlSerializer
///</summary>
public class Employee
{
    public string Name { get; set; }
    public int EmployeeId { get; set; }

    ///<summary>
    /// Parameter-less constructor is mandatory
    ///</summary>
    public Employee() { }

    public Employee(string name, int employeeId)
    {
        this.Name = name;
        this.EmployeeId = employeeId;
    }
}

Step 2 – Defining the Serialization Factory

To define the serialization factory, we will define an enum SerializerType and a factory class SerializerFactory, and add a reference to System.Runtime.Serialization using the "Add Reference" option.

public enum SerializerType
{
    ///<summary>
    /// XmlSerializer
    ///</summary>
    Xml,
    ///<summary>
    /// DataContractJsonSerializer
    ///</summary>
    JSON,
    ///<summary>
    /// DataContractSerializer
    ///</summary>
    WCF,
    ///<summary>
    /// NetDataContractSerializer
    ///</summary>
    CLR
}

The factory class could be plain-vanilla object creation based on the enum (SerializerType) value; however, creating a serializer object is expensive.  Hence, we would like to cache it in memory for re-use, so the factory class has been optimized for better performance using a Dictionary of serializers.

Serialization Factory
public static class SerializerFactory
{
    private static Dictionary<Type, Dictionary<SerializerType, object>> _knownObjects;

    static SerializerFactory()
    {
        _knownObjects = new Dictionary<Type, Dictionary<SerializerType, object>>();
    }

    internal static ISerializer<T1> Create<T1>(SerializerType serializerType)
    {
        Type type = typeof(T1);
        if (_knownObjects.ContainsKey(type))
        {
            if (_knownObjects[type].ContainsKey(serializerType))
                return (ISerializer<T1>)_knownObjects[type][serializerType];
        }

        ISerializer<T1> returnValue = null;
        switch (serializerType)
        {
            case SerializerType.Xml:
                returnValue = new XmlSerializer<T1>();
                break;
            case SerializerType.JSON:
                returnValue = new JsonSerializer<T1>();
                break;
            case SerializerType.WCF:
                returnValue = new WcfSerializer<T1>();
                break;
            case SerializerType.CLR:
                returnValue = new ClrSerializer<T1>();
                break;
            default:
                throw new NotSupportedException("Unknown serializer type");
        }

        if (_knownObjects.ContainsKey(type) == false)
            _knownObjects.Add(type, new Dictionary<SerializerType, object>());
        _knownObjects[type].Add(serializerType, returnValue);
        return returnValue;
    }
}


Step 3 – The Main Program (consuming application)

Our main program should be able to serialize the Employee class, or a list of Employee objects, as shown below:

Main Program
class Program
{
    static void Main(string[] args)
    {
        List<Employee> employees = new List<Employee>()
        {
            new Employee("Tim", 1392902),
            new Employee("Shawn", 156902),
        };
        ISerializer<List<Employee>> xmlSerializer = SerializerFactory.Create<List<Employee>>(SerializerType.Xml);
        string xml = xmlSerializer.Serialize(employees);
        ISerializer<List<Employee>> jsonSerializer = SerializerFactory.Create<List<Employee>>(SerializerType.JSON);
        string json = jsonSerializer.Serialize(employees);
        ISerializer<List<Employee>> clrSerializer = SerializerFactory.Create<List<Employee>>(SerializerType.CLR);
        string clr = clrSerializer.Serialize(employees);
        ISerializer<List<Employee>> wcfSerializer = SerializerFactory.Create<List<Employee>>(SerializerType.WCF);
        string wcf = wcfSerializer.Serialize(employees);
        Console.ReadKey();
    }
}


Step 4 – The Serializer implementations

To keep this essay short and easy to comprehend, only two implementations are shown here: the XML and JSON serializers.  The other two are included in the source code.

Implementing XmlSerializer

As mentioned earlier, the XML serializer requires a default constructor, without which the program will throw a runtime exception.

Implementation: XmlSerializer
public class XmlSerializer<T> : ISerializer<T>
{
    System.Xml.Serialization.XmlSerializer _xmlSerializer =
        new System.Xml.Serialization.XmlSerializer(typeof(T));

    public string Serialize(T value)
    {
        using (MemoryStream memoryStream = new MemoryStream())
        {
            XmlTextWriter xmlTextWriter = new XmlTextWriter(memoryStream, Encoding.UTF8);
            _xmlSerializer.Serialize(xmlTextWriter, value);
            return memoryStream.ToArray().ToStringValue();
        }
    }

    public T Deserialize(string value)
    {
        // No XmlTextWriter is needed here; we read straight from the stream
        using (MemoryStream memoryStream = new MemoryStream(value.ToByteArray()))
        {
            return (T)_xmlSerializer.Deserialize(memoryStream);
        }
    }
}


Implementing JSON Serializer

A JSON serializer is very handy, especially when dealing with REST services, JavaScript, or cross-platform messaging applications.  In recent times JSON has gained wider adoption, given how easy the serialized output is to understand and how clean it is.

Implementation: JSON Serializer
public class JsonSerializer<T> : ISerializer<T>
{
    DataContractJsonSerializer _jsonSerializer = new DataContractJsonSerializer(typeof(T));

    public string Serialize(T value)
    {
        using (MemoryStream ms = new MemoryStream())
        {
            _jsonSerializer.WriteObject(ms, value);
            return ms.ToArray().ToStringValue();
        }
    }

    public T Deserialize(string value)
    {
        using (MemoryStream ms = new MemoryStream(value.ToByteArray()))
        {
            return (T)_jsonSerializer.ReadObject(ms);
        }
    }
}


Step 5 – Comparing the serialization results


<?xml version="1.0" encoding="utf-8"?>
<ArrayOfEmployee xmlns:xsi="" xmlns:xsd="">

A default schema/namespace is added in the root node, and the collection is named ArrayOfEmployee.  The output is always valid XML.





There is no schema added to the serialized string, and the string is cleaner and more readable.  Items are grouped by braces { } and the collection is enclosed within square brackets [ ].
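The JSON sample output itself is not reproduced above; for the two employees defined in the main program, DataContractJsonSerializer would emit something along these lines (an illustrative sketch, not captured program output; the exact member order may differ):

```json
[{"EmployeeId":1392902,"Name":"Tim"},{"EmployeeId":156902,"Name":"Shawn"}]
```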


<ArrayOfEmployee xmlns="" xmlns:i="">

A default schema/namespace defined by Microsoft is added in the root node, and the collection is named ArrayOfEmployee.  The output is always valid XML.


<ArrayOfEmployee z:Id="1" z:Type="System.Collections.Generic.List`1[[Serializers.Employee, Serializers, Version=, Culture=neutral, PublicKeyToken=null]]" z:Assembly="0" xmlns="" xmlns:i="" xmlns:z="">
<_items z:Id="2" z:Size="4">
<Employee z:Id="3">
<Name z:Id="4">Tim</Name>
<Employee z:Id="5">
<Name z:Id="6">Shawn</Name>
<Employee i:nil="true"/>
<Employee i:nil="true"/>

A default schema/namespace defined by Microsoft is added in the root node, and the collection is named ArrayOfEmployee.  The output is always valid XML; however, the XML nodes also carry CLR metadata such as Type, Size, Version, Id, etc.

[Updated] Performance benchmarks

I modified the example to add 200K employees to the collection to benchmark the performance.  The first serialization took more time, as the serializer object was not yet cached, but subsequent runs showed a 17-44% improvement in performance.

XmlSerializer (1): Time to execute 1142.0654 mSec
XmlSerializer (2): Time to execute 635.0364 mSec

DataContractJsonSerializer (1): Time to execute 847.0484 mSec
DataContractJsonSerializer (2): Time to execute 611.0349 mSec
CLR (1): Time to execute 2179.1246 mSec
CLR (2): Time to execute 1914.1095 mSec
DataContractSerializer (1): Time to execute 539.0308 mSec
DataContractSerializer (2): Time to execute 413.0236 mSec
What is worth noticing is that DataContractSerializer is the fastest serializer, followed by DataContractJsonSerializer and XmlSerializer.  Unless absolutely required, NetDataContractSerializer should not be used.
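As a side note on methodology, such timings can also be taken with the higher-resolution Stopwatch class instead of DateTime.Now.  The sketch below is illustrative, not the benchmark actually used; the SerializerTimer name and the List<int> payload are invented for the example:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Runtime.Serialization;

// Illustrative timing harness: times two consecutive serializations so the
// effect of reusing (caching) the serializer object shows up in the second run.
public static class SerializerTimer
{
    public static double TimeSerialization<T>(XmlObjectSerializer serializer, T value)
    {
        var sw = Stopwatch.StartNew();          // more precise than DateTime.Now deltas
        using (var ms = new MemoryStream())
            serializer.WriteObject(ms, value);  // serialize to an in-memory buffer
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }

    public static void Main()
    {
        var data = new List<int>();
        for (int i = 0; i < 100000; i++) data.Add(i);

        var serializer = new DataContractSerializer(typeof(List<int>));
        Console.WriteLine("Run 1: " + TimeSerialization(serializer, data) + " mSec");
        Console.WriteLine("Run 2: " + TimeSerialization(serializer, data) + " mSec");
    }
}
```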

I hope this essay helps in understanding serializers better!

Download the source code [] from SkyDrive

Resolved: WordPress hosting on GoDaddy Windows Server

October 8, 2011 WordPress

Two months ago I transferred my domain to GoDaddy from a local web hosting provider I had been with for the last 7 years.  The transfer process was very quick, and I was very happy to see a good console panel and 24×7 support, until I noticed in Google Analytics that my site visitors had dropped by 50%.  This was really sad, and I had no idea how to get that number back.

I started visiting my website and, my God, it took more than a minute to load, and many times it never loaded at all.  When I filled in the contact form, it never sent me an email!  I started googling and found many links, each contradicting the other.  So finally I installed a few plugins:

  • Installed WP Total Cache plugin (to speed up Go-Daddy)
  • Installed Fix Rss Feeds (since RSS feeds were not working)
  • Installed Configure SMTP (since mails stopped working)

And I slept in peace thinking my site would work now!  After a week, I visited my site again and it took less than a minute.  But it was still slower than before (with my local hosting provider).  I started following up with GoDaddy Support.  The support team is very supportive and has a great deal of patience, but somehow they don't have technical expertise in WordPress.  My website was hosted on Windows 4GH hosting at GoDaddy, and they suggested I migrate to Linux.  Now, PHP works great on Windows, and I needed Windows hosting to be able to host other services, so switching to Linux was not an option!  On top of this, the Windows server in their Singapore data center, on which my website was hosted, was down!  I had more than 3 days of website downtime and was not able to find a way out.

I dug into the WordPress code and understood the internals.  Now, I am not a PHP expert, but having worked in many languages over the last 15 years I can understand PHP well enough to find issues.  The observations that got me started are:

  • Each web request to a page had many re-routes.  First, the caching block redirected it to a static HTML page (generally placed in the \wp-content\cache folder).

    URL 1: was translated to
    URL 2:

    Which then routed to internal WordPress cache engine

    URL 2: was translated to
    URL 3:

    Now your actual URL 1 becomes URL 3 which WordPress will never find for you in the database and you will get a 404 page.

    Solution: Disable the WP Total Cache plugin and all other caching plugins.  With newer versions of WordPress they are not required.

    • Once you have disabled Cache plugins, recycle the App Pool
    • Delete the folder \wp-content\cache
    • Delete the plugin folder from \wp-content\plugins
    • Edit wp-config.php in the blog root and ensure that WP_CACHE is disabled:
      define('WP_CACHE', false);
  • Activate any other theme and then re-activate the original theme.  This is just to ensure that the site settings are altered and the caching is overridden.
  • Disable and uninstall YARP – Yet Another Related Post Plugin – from WordPress.  It takes time to load the results related to a post.
  • Install plugin WP-Optimize to optimize the database
    • Delete the entries that have cache
    • Delete the entries that have YARP settings
  • Install and activate plugin Clean UP to clean up the database – post versions
With these settings, your website will be much faster even on GoDaddy.  My site now takes ~11 seconds to load.  I use the following additional tools to check the speed of my website:
  • Web Page Test – – This website runs a test on your website and tells you how many requests were generated to visit your website, and how much time each request took.  In a way, it also tells you how much time JavaScript, images, and plugins took.  Much of this you can get using Fiddler2 as well.
  • Browser Compatibility – – This website checks the compatibility of your website across 76+ browsers
So when you are done with the steps mentioned above, you will find your site gradually picking up speed and you will start enjoying the experience.  I wish GoDaddy support had more technical knowledge of WordPress; it would have saved my time.
I hope this helps many who are struggling with this!

CInject – Code Injection with Runtime intelligence

October 4, 2011 CSharp, Open Source, Visual Studio

CInject adds more value to your existing applications by injecting runtime intelligence.  You can use the injectors provided with CInject, or define your own injector.

This article highlights some cases where you can directly use CInject:

  • The existing application has no or very little logging
  • The application is performing slower than expected
  • You don't know what gets passed as arguments to a few methods

Using Injectors with CInject

September 28, 2011 CSharp, Open Source, Visual Studio, Winform

If you don't know what CInject is, I would recommend you read this article and get the latest version of CInject from the CodePlex website.

There are a few injectors that ship with CInject:

  • LogInjector – Allows injecting logging into any .NET assembly/executable, even if you do not have the code of the assembly/executable.  LogInjector is built on .NET 4.0 Client Profile and uses log4net for logging
  • PerformanceInjector – Allows logging the time required to execute injected methods in the target assemblies
  • ObjectValueInjector – Allows getting any property of the arguments passed to the injected method

You can get started using them directly, without writing a single line of code.


Quick Guide to using CInject

So, to get started, all you need to do is:

  • Download the latest stable version of CInject and unzip it to, let's say, C:\CInject
  • Locate the assemblies (DLL) / executables (EXE) that need to be injected, along with their configuration files
  • Locate your injector assembly (DLL) and its configuration files
  • Open the downloaded CInject application [C:\CInject\Cinject.exe]
  • Choosing the Target Assembly
    • If you are targeting a single assembly (to be injected), click on Assembly > Load Assembly (F3).  As soon as you select the assembly, CInject will try to create a tree of classes and methods in the assembly.
    • If you are targeting multiple assemblies (to be injected), click on Assembly > Select Directory (F2).  CInject will browse through DLLs and EXEs in the directory and create a tree of classes and methods in each assembly or executable
  • Choosing the Injector Assembly
    • Auto Load – If you have copied the injector assemblies into CInject directory [here, C:\CInject], they will be automatically loaded in CInject
    • Manually Load – Click on Assembly > Load Injector (F4).  This will load the injectors
  • Select all the classes/methods (in the blue panel) that you would want to inject. Then, select the LogInjector (in the yellow panel)
  • You can click on ‘Add selected injectors to selected methods in target assembly’.  This will add some entries in the grid (Assembly, Method, Injector).  However, this does not mean that the assembly has been injected yet.  If by mistake you have selected a wrong injector or method, you can remove them by selecting the row in the grid and clicking ‘Remove selected row’
  • Select Inject > Run (F5) to proceed with injecting the target assemblies
  • You will be prompted when the injection has been completed.  With CInject 1.4, all the required files for each injector will be automatically copied to the folder of target assembly
  • You can now execute the target assembly. This is the new assembly with selected injectors

LogInjector Configuration

You can alter the logging configuration by editing LogInject.log4net.xml, based on the Apache log4net configuration.

Currently, LogInjector supports DEBUG, INFO and ERROR modes

  • Debug: Prints additional information, such as calls to the destructor of the method and the type of each parameter to each method invoked
  • Info: Just logs the name of the method invoked
  • Error: Logs exceptions if LogInjector fails at any point

Apart from the method information, it logs Date-Time, Thread Id, and Assembly.  You can configure these details in the XML file accompanying the CInject executables.
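For illustration, a typical log4net appender section of the sort such an XML file contains looks like this (a generic log4net sketch, not the exact contents of LogInject.log4net.xml; the file name cinject.log is made up):

```xml
<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="cinject.log" />
  <layout type="log4net.Layout.PatternLayout">
    <!-- %date, %thread and %logger map to the Date-Time, Thread Id
         and assembly/class details mentioned above -->
    <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
  </layout>
</appender>
```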

If you want to use any other version of .NET, you will have to rebuild the code in Visual Studio.

PerformanceInjector Configuration

PerformanceInjector also uses log4net to log the performance of the methods.  The configuration can be specified in the loginject.log4net.xml file.

PerformanceInjector supports only INFO level of logging.

ObjectValueInjector Configuration

ObjectValueInjector uses ObjectSearch.xml to define which property needs to be retrieved in the arguments to the injected methods.

If the defined property (in the configuration) does not exist in the argument, this injector does not throw an exception.  If the property does exist, the injector will try to get its value.  If the property value is NULL, it will log <null>

ObjectValueInjector uses DEBUG level of logging

CInject – Quick Guide to get started with building your injectors

September 28, 2011 CSharp, Open Source, Visual Studio

If you don't know what CInject is, I would recommend you read this article and download the latest version of CInject from the CodePlex website.

Creating a Basic Injector

Once you have the latest executable, you can follow these steps to create your own injector.

Create a new Visual Studio Class Library Project (C# or VB.NET) and add a Reference to CInject.Injections.dll.  You will find this assembly with the CInject application


Add a class called ‘MyInjector’ that implements the ICInject interface.  This interface is part of the CInject.Injections reference.

The MyInjector class would look like


using CInject.Injections.Interfaces;

namespace InjectorLib
{
    public class MyInjector : ICInject
    {
        public void OnComplete()
        {
            // This is called before exit of the injected method
        }

        public void OnInvoke(CInject.Injections.Library.CInjection injection)
        {
            // Called at the entry of the injected method

            // To get the arguments passed to the injected method
            var arguments = injection.Arguments;

            // To get the value of property Text in any argument
            var propertyValues = injection.GetPropertyValue("Text");
        }

        public void Dispose()
        {
            // dispose here if you want to
        }
    }
}
Compile the Visual Studio project to create a DLL named MyInjector.dll, and follow the steps mentioned in Using Injectors in CInject.

Adding Configuration to your Injector

Adding configuration to an injector is similar to adding a configuration to any other project.  You can choose from one of the following ways

  1. Rely on the configuration being added to the configuration file (app.config / web.config) of the target assembly / executable
  2. Add an independent configuration file (say, myconfig.xml) that needs to be copied in the target assembly/executable folder
  3. Add an independent configuration file and retrieve it from a Network Shared drive

I would not recommend option 1, as it may cause conflicting configurations.  To use option 2, you can use a very handy feature of CInject.

Let's say that your configuration file is myinjector-configuration.xml and it is added to the Visual Studio project.

You need to set the configuration file's ‘Copy to Output Directory’ property to ‘Copy always’ and decorate your injector class with the DependentFiles attribute.

using CInject.Injections.Attributes;
using CInject.Injections.Interfaces;

namespace InjectorLib
{
    [DependentFiles("myinjector-configuration.xml")]
    public class MyInjector : ICInject
    {
        #region Code

        // Other code here

        #endregion
    }
}

When CInject applies this injector to any assembly, it ensures that myinjector-configuration.xml is copied to the root folder of the target assembly / executable.

You can access this configuration file in code using standard .NET libraries
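For instance, one way to read such a file with the standard System.Xml.Linq API might look like this (a hypothetical sketch; the logLevel element and the InjectorConfig helper are invented for the example and are not part of CInject):

```csharp
using System;
using System.IO;
using System.Xml.Linq;

// Hypothetical helper: reads one element's value from the injector's
// configuration file, which CInject copies next to the target assembly.
public static class InjectorConfig
{
    public static string ReadSetting(string path, string elementName)
    {
        XDocument doc = XDocument.Load(path);           // load the XML file
        return (string)doc.Root.Element(elementName);   // null if the element is absent
    }

    public static void Main()
    {
        // Create a sample file so the example is self-contained
        File.WriteAllText("myinjector-configuration.xml",
            "<configuration><logLevel>DEBUG</logLevel></configuration>");
        Console.WriteLine(ReadSetting("myinjector-configuration.xml", "logLevel"));
    }
}
```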


What if I have some dependent / referenced assemblies as well?

Dependent / referenced assemblies in your injector work the same way as configuration files.

You just have to mention them in the DependentFiles attribute and ensure that they are in the same directory as the injector.




