
Cross Origin Resource Sharing with WCF JSON REST Services

February 3, 2015 CSharp, WCF

My KonfDB platform provides a reliable way of configuration management as a service for cross-platform multi-tenant applications. When we talk about cross-platform capabilities, one way to support clients built with native technologies is by way of REST services. WCF allows us to host a service and expose multiple endpoints using different protocols, so when KonfDB was in the design phase, I chose WCF as the tech stack to support multiple endpoints and protocols.

I had written an article on REST services with Windows Phone, which should be a good starting point to understand WCF REST services. Now, when you want this service to be accessible from different platforms – web, mobile, or across domains (in particular, Ajax requests) – we need to design a few interceptors and behaviours that allow Cross Origin Resource Sharing (CORS).

For this post, I will use the code from my own KonfDB platform, so those interested can visit the GitHub repository and explore further.

First, how CORS works

 

CORS works by the server sending specific instructions to the browser, which the browser respects. These instructions are "additional" HTTP headers, and their use depends on the HTTP method – GET or POST with specific MIME types. When we have an HTTP POST with a non-simple MIME type, the browser needs to "preflight" the request. Preflighting means that the browser first sends an HTTP OPTIONS request; upon approval from the server, the browser then sends the actual HTTP request.
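As an illustration, a preflight exchange looks roughly like this (the URL and header values are representative, matching the headers we configure later in this post):

OPTIONS /CommandService/Execute HTTP/1.1
Host: localhost:8882
Origin: http://www.mydomain.com
Access-Control-Request-Method: GET
Access-Control-Request-Headers: Content-Type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST,GET,PUT,DELETE,OPTIONS
Access-Control-Allow-Headers: X-Requested-With,Content-Type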

So in a nutshell, we need some provision to handle these additional HTTP headers. In this post, we will see how we can change a RESTful service to support CORS.

REST Service Interface

 

A typical non-REST service interface defines methods and decorates them with the OperationContract attribute. A REST service requires an additional attribute – WebGet or WebInvoke. So in the example below, to support Cross Origin Resource Sharing (CORS), we decorate the method with the WebInvoke attribute and set its Method="*"

 
[ServiceContract(Namespace = ServiceConstants.Schema, Name = "ICommandService")]
public interface ICommandService : IService
{
        [OperationContract(Name = "Execute")]
        [WebInvoke(Method = "*", ResponseFormat = WebMessageFormat.Json,
            BodyStyle = WebMessageBodyStyle.Bare,
            UriTemplate = "/Execute?cmd={command}&token={token}")]
        ServiceCommandOutput ExecuteCommand(string command, string token);
}

RESTful Behaviour and Endpoint

 

In KonfDB, the WCF service is hosted in a Windows Service container. To provide consistent behaviour across bindings and for future extensibility, I have derived bindings from the native bindings available in the .NET Framework. So my REST binding looks like,

 

    public class RestBinding : WebHttpBinding
    {
        public RestBinding()
        {
            this.Namespace = ServiceConstants.Schema;
            this.Name = ServiceConstants.ServiceName;
            this.CrossDomainScriptAccessEnabled = true;
        }
    }

 

The important point to note is that CrossDomainScriptAccessEnabled is set to true. This is essential for the WCF service to work with CORS – and yes, it is safe!

Defining a CORS Message Inspector and Header

As mentioned earlier in the post, we need a mechanism to intercept the request and add additional HTTP headers to tell the browser that the service does support CORS. Since this functionality is required at an endpoint level, we will define an endpoint behaviour for it. The code for EnableCorsEndpointBehavior looks like,

 
public class EnableCorsEndpointBehavior : BehaviorExtensionElement, IEndpointBehavior
{
    public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { }

    public void ApplyClientBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.ClientRuntime clientRuntime) { }

    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher)
    {
        var requiredHeaders = new Dictionary<string, string>();

        requiredHeaders.Add("Access-Control-Allow-Origin", "*");
        requiredHeaders.Add("Access-Control-Allow-Methods", "POST,GET,PUT,DELETE,OPTIONS");
        requiredHeaders.Add("Access-Control-Allow-Headers", "X-Requested-With,Content-Type");

        var inspector = new CustomHeaderMessageInspector(requiredHeaders);
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(inspector);
    }

    public void Validate(ServiceEndpoint endpoint) { }

    public override Type BehaviorType
    {
        get { return typeof(EnableCorsEndpointBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new EnableCorsEndpointBehavior();
    }
}

 

A few important points to note:

  • First, the headers: Access-Control-Allow-Origin is set to * and Access-Control-Allow-Methods includes OPTIONS. If you want to allow requests only from a particular domain, you can change the value of Access-Control-Allow-Origin to http://www.mydomain.com and it will work correctly.
  • Second, we have passed these additional headers to a message inspector using another class, CustomHeaderMessageInspector.

The CustomHeaderMessageInspector class, which acts as a dispatch message inspector, adds these headers to the reply so that the client is aware of CORS. The class looks like,

internal class CustomHeaderMessageInspector : IDispatchMessageInspector
{
    private readonly Dictionary<string, string> _requiredHeaders;

    public CustomHeaderMessageInspector(Dictionary<string, string> headers)
    {
        _requiredHeaders = headers ?? new Dictionary<string, string>();
    }

    public object AfterReceiveRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel, System.ServiceModel.InstanceContext instanceContext)
    {
        return null;
    }

    public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
    {
        var httpHeader = reply.Properties["httpResponse"] as HttpResponseMessageProperty;
        foreach (var item in _requiredHeaders)
        {
            httpHeader.Headers.Add(item.Key, item.Value);
        }
    }
}

 

The last bit is adding this behaviour to the endpoint. Since the service is self-hosted and there is no WCF configuration file, the code looks like

             
            var serviceEndpoint = host.AddServiceEndpoint(typeof (T), binding.WcfBinding, endpointAddress);
            serviceEndpoint.Behaviors.Add(new WebHttpBehavior());
            serviceEndpoint.Behaviors.Add(new FaultingWebHttpBehavior());
            serviceEndpoint.Behaviors.Add(new EnableCorsEndpointBehavior());
            return serviceEndpoint;

Hosting and Testing this service

Using the usual ServiceHost, you can host this service and it should run perfectly. As a minimal self-hosting sketch (the CommandService implementation class, base address and port below are illustrative, not the exact KonfDB code):
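var host = new ServiceHost(typeof(CommandService),
    new Uri("http://localhost:8882/CommandService"));

var endpoint = host.AddServiceEndpoint(typeof(ICommandService),
    new RestBinding(), string.Empty);
endpoint.Behaviors.Add(new WebHttpBehavior());
endpoint.Behaviors.Add(new EnableCorsEndpointBehavior());

host.Open();
Console.WriteLine("Service running. Press any key to exit.");
Console.ReadKey();
host.Close();

To test the service, you can write some jQuery code: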

            $('#btnGet').click(function () {
                var requestUrl = 'http://localhost:8882/CommandService/Execute?cmd=someCommand&token=alpha';
                var token = null;
                $.ajax({
                    url: requestUrl,
                    type: "GET",
                    contentType: "application/json; charset=utf-8",
                    success: function (data) {
                        var outputData = $.parseJSON(data.Data);
                        token = outputData.Token;

                        ExecuteOtherRequests(token);
                    },
                    error: function (e) {
                        alert('error:' + JSON.stringify(e));
                    }
                });
            });

If you test this in Chrome with the Developer Tools (F12) open, you can observe the network interaction.

For a single Ajax request, as expected, there is an HTTP OPTIONS request followed by an HTTP GET request. If CrossDomainScriptAccessEnabled is not set to true in RestBinding, the preflight fails with an HTTP 403 error.

If we look into the headers of the first request, we see that our WCF service (via CustomHeaderMessageInspector) has added the additional headers back into the response.

Since the browser got an HTTP status code of 200, it initiated the second (actual) request – the HTTP GET.

You can view the source code in KonfDB GitHub repository.

Why is StringBuilder faster in string concatenations?

January 15, 2014 CSharp, Visual Studio

Almost every developer who is new to C# development faces the question of which is better for string concatenation – string.Concat, + (the plus operator), string.Format or StringBuilder. The easiest way to find an answer is to search online and read the views of many experts; there are a few popular links that every developer stumbles upon.

I don't want to repeat what's mentioned in those articles, so I'll just give the gist (from MSDN):

The performance of a concatenation operation for a String or StringBuilder object depends on how often a memory allocation occurs. A String concatenation operation always allocates memory, whereas a StringBuilder concatenation operation only allocates memory if the StringBuilder object buffer is too small to accommodate the new data. Consequently, the String class is preferable for a concatenation operation if a fixed number of String objects are concatenated. In that case, the individual concatenation operations might even be combined into a single operation by the compiler. A StringBuilder object is preferable for a concatenation operation if an arbitrary number of strings are concatenated; for example, if a loop concatenates a random number of strings of user input.

That leads to a few important conclusions:

  • String is immutable – every time we modify it (through any of its methods), a new memory location is allocated internally and the new value is stored there. When we perform repeated modifications, using a string object is an overhead.
  • When we have a finite number of text concatenations, we could use either of the following forms,
string finalStringUsingPlusSymbol = @"this is a new string"
          + "with a lot of words"
          + "together forming a sentence. This is used in demo"
          + "of string concatenation.";

string finalStringUsingStringConcat =
    String.Concat(new[] {
          @"this is a new string"
        , "with a lot of words"
        , "together forming a sentence. This is used in demo"
        , "of string concatenation."
    });

  • For concatenations in a loop (where the count > 2), prefer a StringBuilder. That much is a known fact; let's see why and how it is so.

Step-Into StringBuilder class

 

When an object of StringBuilder is created, either with a default string value or with the default constructor, a char buffer (read: array) is created internally with a capacity of 0x10 (16) or the length of the string passed in the constructor, whichever is greater. This buffer has a maximum capacity of 0x7fffffff unless you specify it explicitly while constructing the StringBuilder.

If a string value has been assigned in the constructor, it copies the characters of the string into memory using the wstrcpy (internal) method of System.String. Now when you call Append(string) in your code, the code snippet below gets executed.

  1. if (value != null)
  2. {
  3.     char[] chunkChars = this.m_ChunkChars;
  4.     int chunkLength = this.m_ChunkLength;
  5.     int length = value.Length;
  6.     int num3 = chunkLength + length;
  7.     if (num3 < chunkChars.Length)
  8.     {
  9.         if (length <= 2)
  10.         {
  11.             if (length > 0)
  12.             {
  13.                 chunkChars[chunkLength] = value[0];
  14.             }
  15.             if (length > 1)
  16.             {
  17.                 chunkChars[chunkLength + 1] = value[1];
  18.             }
  19.         }
  20.         else
  21.         {
  22.             fixed (char* str = ((char*)value))
  23.             {
  24.                 char* smem = str;
  25.                 fixed (char* chRef = &(chunkChars[chunkLength]))
  26.                 {
  27.                     string.wstrcpy(chRef, smem, length);
  28.                 }
  29.             }
  30.         }
  31.         this.m_ChunkLength = num3;
  32.     }
  33.     else
  34.     {
  35.         this.AppendHelper(value);
  36.     }
  37. }

 

Check Line 9, where it tests whether length <= 2 and, if so, assigns the first two characters of the string manually into the character array (the buffer). Otherwise, as lines 22-29 show, it first fixes the location of a pointer variable (to understand this better, read about the fixed keyword) so that the GC does not relocate it, and then copies the characters of the string using wstrcpy (an internal method of System.String). So the performance and strategy of StringBuilder primarily rely on the method wstrcpy. The core code of wstrcpy uses integer pointers to copy from the source (the string passed to Append, whose location is referred to as smem) to the destination (the character buffer, referred to as dmem)

while (charCount >= 8)
{
    *((int*)dmem) = *((uint*)smem);
    *((int*)(dmem + 2)) = *((uint*)(smem + 2));
    *((int*)(dmem + 4)) = *((uint*)(smem + 4));
    *((int*)(dmem + 6)) = *((uint*)(smem + 6));
    dmem += 8;
    smem += 8;
    charCount -= 8;
}

 


String.Format is another StringBuilder

 

Yes, String.Format internally uses StringBuilder and creates a buffer of size format.Length + (args.Length * 8).

public static string Format(IFormatProvider provider, string format, params object[] args)
{
    if ((format == null) || (args == null))
    {
        throw new ArgumentNullException((format == null) ? "format" : "args");
    }
    StringBuilder sb = StringBuilderCache.Acquire(format.Length + (args.Length * 8));
    sb.AppendFormat(provider, format, args);
    return StringBuilderCache.GetStringAndRelease(sb);
}

 

This has two advantages over using a plain-vanilla StringBuilder.

  • It creates a buffer of a bigger initial size than the default 0x10
  • It uses the StringBuilderCache class, which maintains a StringBuilder instance as a static variable. When the Acquire method is invoked, it clears the cached value (but does not create a new StringBuilder) and returns the cached instance. This reduces the time required to create a StringBuilder object

So for repeated concatenations, my order of preference would be String.Format, then StringBuilder, then String.Concat or + (the overloaded plus operator).


Performance check

 

I did a small performance check to verify our understanding. The results when 100,000 concatenations were performed in a loop on a quad-processor machine were (times in milliseconds):

Time taken using + : 93071.0034
Time taken using StringBuilder: 14.0182
Time taken using StringBuilder with Format: 24.0155
Time taken using String.Format and + : 24.0155
Time taken using StringBuilder with Format and clear: 38.0249
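For reference, here is a minimal sketch of this kind of measurement using Stopwatch (the loop count and inputs are assumptions, not the exact benchmark code):

var sw = System.Diagnostics.Stopwatch.StartNew();
string plus = string.Empty;
for (int i = 0; i < 100000; i++)
    plus += "x";                          // allocates a new string on every iteration
Console.WriteLine("+ operator    : " + sw.Elapsed.TotalMilliseconds);

sw.Restart();
var sb = new System.Text.StringBuilder();
for (int i = 0; i < 100000; i++)
    sb.Append("x");                       // writes into the internal char buffer
string fromBuilder = sb.ToString();
Console.WriteLine("StringBuilder : " + sw.Elapsed.TotalMilliseconds);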

Using Directives and Namespace in C#

January 14, 2014 CSharp, Visual Studio

UsingDirectivesMustBePlacedWithinNamespace is one of the StyleCop rules, and I was trying to figure out why StyleCop recommends having using directives defined within the namespace.

Most of the books and articles (including mine) we have read do not really place using directives within the namespace. So is it really a good practice? If yes, why? Let's check that in this article.

To illustrate the difference, I will use a very simple example with 2 classes – Program (the default one when a console application is created) and Environment (with the same name as System.Environment).

  1. using System;
  2. namespace MyNS.Application
  3. {
  4.     internal class Program
  5.     {
  6.         private static void Main(string[] args)
  7.         {
  8.             Console.WriteLine(Environment.GetName());
  9.             Console.WriteLine(Math.Round(2.2, 0));
  10.         }
  11.     }
  12. }
  13.  
  14. namespace MyNS
  15. {
  16.     public class Environment
  17.     {
  18.         public static string GetName()
  19.         {
  20.             return "MyEnvironment";
  21.         }
  22.     }
  23. }

In the above example, when I refer to Environment on Line 8, I am referring to my own class MyNS.Environment and not to System.Environment. This is because the using System directive is defined outside the namespaces (MyNS and MyNS.Application). In other words,

When the using directive is defined outside the namespace, the compiler first searches for the class (here, Environment) within the namespaces. If it does not find the class locally (as for the Math class on Line 9), it tries to find the class in the references

So what happens if I move the using directive inside the namespace (MyNS.Application)? Will it break the above code? Yes, because it will give precedence to System.Environment over MyNS.Environment, and it will not find the method GetName on System.Environment

namespace MyNS.Application
{
    using System;
    internal class Program
    {
        private static void Main(string[] args)
        {
            Console.WriteLine(Environment.GetName());
            Console.WriteLine(Math.Round(2.2, 0));
        }
    }
}

So if the StyleCop rule is applied, it can break our code wherever we have used class names that also exist in the System namespaces (or in referenced assemblies). It would then require fixing by creating aliases like

namespace MyNS.Application
{
    using System;
    // Create an alias here
    using Environment = MyNS.Environment;
    internal class Program
    {
        private static void Main(string[] args)
        {
            Console.WriteLine(Environment.GetName());
            Console.WriteLine(Math.Round(2.2, 0));
        }
    }
}

This ensures that your code is StyleCop compliant and also refers to your own classes instead of System-defined classes or those in the referenced assemblies.

Upgrading to .NET 4.5.x for Enterprise Applications

January 4, 2014 ASP.NET, CSharp, Visual Studio

Microsoft launched .NET 4.5 in 2012 and later released an upgrade, .NET 4.5.1, in December 2013. So it has been over a year since .NET 4.5 was launched. That means .NET 4.5.x has been tested thoroughly (by the general public and developers), bugs have been identified and fixed, and hence it should be an easy upgrade for enterprise applications.

Upgrading to .NET 4.5.x may be very tempting to developers for several reasons (many of them IDE specific); however, it's important to test the waters before jumping in. In this article, let's see some of the areas you would want to consider before upgrading your application suite to .NET 4.5.x.

Environment Accessibility and Support

The first thing to check before planning any development or upgrade activity is whether your production (or 'live') environment supports installation of the .NET 4.5.x framework. Some of the checks that you should do before committing to the upgrade are –

  • Certification and Packaging of the .NET 4.5.x framework – Organizations (especially those concerned about security) tend to test and certify application suites before re-packaging them (setup files) so that applications/frameworks can be installed on desktops / servers in an automated way. This does not mean that the original packaging (setup) was not done efficiently; the re-packaging is done to standardize the installation process.
  • Mechanism to promote / install framework upgrades – Considering that product life cycles have shortened, it is important to know how much time the packaging team would require to re-package updates and promote them.

If your application is an n-tier web application (ASP.NET WebForms / MVC), you need to check your hardware capabilities

  • Supported Physical / Virtual infrastructure – The .NET frameworks themselves do not require more than 512 MB of RAM.

    But for server applications, .NET 4.5.x requires Windows Server 2012 as the operating system. Windows Server 2012, for smooth operation, requires 2 GB RAM (recommended: 8 GB) with a 1.3 GHz single 64-bit core processor. So there is a dependency on the availability of Windows Server 2012 packaging in your organization.

    .NET 4.5 runs on a Windows Server 2008 variant and requires less hardware to run your application efficiently. If you are not using any of these server operating systems, or are not on a 64-bit architecture, you will not be able to leverage the capabilities of .NET 4.5.x and will have to restrict your upgrades to .NET 4.0

If your application is a desktop application, there are several constraints to upgrading to a newer framework because you have many production machines (each desktop). Unless you have a strategy to push .NET 4.5.x to all desktops, you shouldn't think of upgrading your framework.

Understand the upgrade

No, I am not referring to understanding the features of .NET 4.5.x – I'll talk about those in the next part of this article. Here I am referring to how .NET 4.5.x fits into Microsoft's release schema. Until now, every .NET framework release created a new folder in %windir%\Microsoft.NET\Framework

With .NET 4.5.x, that is not the case. When you install .NET 4.0, the installer creates a v4.0 folder. When you install .NET 4.5.x, it updates the assemblies in that v4.0 folder. So in a way, .NET 4.5.x is not just an upgrade to .NET 4.0 but a replacement (with enhancements). Unless you actually look at the registry keys, you would not know whether .NET 4.5.x is installed on your machine. Refer to the MSDN article for more details on this.

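If you want to detect the installed version programmatically, the documented approach is to read the Release value under the NDP registry key. A minimal sketch (the release numbers are the values documented on MSDN):

using Microsoft.Win32;

using (var key = Registry.LocalMachine.OpenSubKey(
    @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"))
{
    object release = (key != null) ? key.GetValue("Release") : null;
    // 378389 = .NET 4.5; 378675 / 378758 = .NET 4.5.1 (per MSDN)
    if (release != null && (int)release >= 378389)
        Console.WriteLine(".NET 4.5 or later is installed");
    else
        Console.WriteLine(".NET 4.5 is not installed");
}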

What does this mean for a developer?

Simple. You cannot have disparate installations on the developer's machine and your deployment machines (system integration, QA, user acceptance, parallel production, production, DR, etc.). Let's understand why.

Let's assume a developer has .NET 4.5.1 installed on his machine with VS 2013 as the IDE. The system integration environment has also been upgraded, but the other environments have not been upgraded yet. Let's say your code still targets .NET 4.0 and you trigger a build on your development machine for a production release. Your build will use references to the new assemblies installed in the v4.0 folder (mentioned above). When this code gets packaged (on your machine) and is deployed to a non-upgraded environment, it may give you runtime errors as it does not find the .NET 4.5.x assemblies in the GAC.

So it’s necessary to understand the impact of this upgrade. 

Process Automation – Continuous Integration

Well, we understood the impact of the upgrade in the previous section. The solution to the above problem lies in creating an isolated environment that resembles the non-upgraded environments: the build automation environment. There are two ways to resolve this without a DLL hell –

  1. Keep the build environment on .NET 4.0 so that you trigger builds using the plain-vanilla .NET 4.0 framework instead of .NET 4.5.x assemblies
  2. If you still upgrade this environment to .NET 4.5.x, you will need to alter the MSBuild targets to use

    C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0
    instead of
    %windir%\Microsoft.NET\Framework\v4.0

    This ensures that the output of your build automation works for .NET 4.0 / 4.5.x. A sketch of such an override is shown below.
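A sketch of such an override in a project or targets file – FrameworkPathOverride is the standard MSBuild property for redirecting reference-assembly lookup, and the exact path may differ on your machine:

<PropertyGroup>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <FrameworkPathOverride>C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0</FrameworkPathOverride>
</PropertyGroup>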

Application level breaking changes

Not many, but there are a few breaking changes. To see this for yourself, you can compare the file size of System.Runtime.Serialization.dll in two folders. The Reference Assemblies (in the Program Files folder) are the original framework assemblies, and the Framework folder (in the Windows directory) holds the replaced assemblies if .NET 4.5.x is installed.

C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\
and
C:\Windows\Microsoft.net\Framework\v4.0.30319\

Why this radical change in the way .NET works? Prior to .NET 4.0, reference assemblies were a direct copy of the GAC assemblies. That meant with every minor upgrade you could unknowingly compile against a newer assembly, and the program would fail on an unpatched machine. To resolve this, from .NET 4.0 the Reference Assemblies (in the Program Files folder) act as redirecting assemblies and contain only metadata. They do not have IL code, as that lives in the base .NET framework (mscorlib).

So in a nutshell, .NET 4.5 does not have its own independent CLR. If you have different .NET frameworks installed on different machines using the same CLR, you will have to test your application against these breaking changes.

Refer to the MSDN articles on .NET 4.5 breaking changes and .NET 4.5.x breaking changes. One problem often faced is a serialization exception

Common Language Runtime detected an invalid program.

The easiest technique to fix this issue without changing any code is to add the following block to the configuration

<configuration>
  <system.xml.serialization>
    <xmlSerializer useLegacySerializerGeneration="true"/>
  </system.xml.serialization>
</configuration>

 

What happens to the .NET version on IIS-hosted websites?


When installing the Web Server Role (IIS), you need to explicitly select the versions of ASP.NET you want to support.


By default, the wizard will create Application Pools for the selected versions.


Please note the .NET framework version for ASP.NET v4.5 is v4.0.

Hope this helps you make an informed decision about the upgrade and its impact.

Microsoft Most Valuable Professional Award

January 2, 2014 Accomplishments, Personal

I received an award notification email from Microsoft on 1st Jan 2014 – the perfect beginning of 2014 for me!

My MVP profile can be visited at – http://mvp.microsoft.com/en-us/mvp/Punit%20Ganshani-5000572


Method, Delegate and Event Anti-Patterns in C#

October 28, 2013 CSharp, Visual Studio

No enterprise application exists without methods, events and delegates, and every developer has written methods in his/her application. While defining these methods/delegates/events, we follow standard definitions. Beyond those definitions, there are best practices one should follow to ensure that a method does not leave dangling object references, that other methods are called in an appropriate way, that arguments are validated, and more.

This article outlines some of the anti-patterns around methods, delegates and events and, in effect, highlights the best practices to follow to get the best performance from your application with a very low memory footprint.

The right disposal of objects

We have seen multiple demonstrations that implementing the IDisposable interface (in class BaseClass) and wrapping its object instance in a 'using' block is sufficient for a good clean-up process. While this is true in most cases, this approach does not guarantee that derived classes (say, DerivedClass) will have the same clean-up behaviour as the base class.

To ensure that all derived classes take responsibility for cleaning up their resources, it is advisable to add a virtual method in the BaseClass that is overridden in the DerivedClass, where cleanup is done appropriately. One such implementation would look like,

public class BaseClass : IDisposable
{
    protected virtual void Dispose(bool requiresDispose)
    {
        if (requiresDispose)
        {
            // dispose the objects
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~BaseClass()
    {
        Dispose(false);
    }
}

public class DerivedClass : BaseClass
{
    // some members here

    protected override void Dispose(bool requiresDispose)
    {
        // Dispose derived class members
        base.Dispose(requiresDispose);
    }
}

This implementation ensures that the object is not stuck in the finalizer queue when it is wrapped in a 'using' block, and that members of both BaseClass and DerivedClass are freed from memory.

The return value of a method can cause a leak

While most of our focus is on freeing the resources used inside a method, the return value of the method also occupies memory space. If you are returning an object, the memory occupied (but never used) can be large.

Let's see some bad code that can leave unwanted objects in memory.

public void MethodWhoseReturnValueIsNotUsed(string input)
{
    if (!string.IsNullOrEmpty(input))
    {
        // value is not used anywhere
        input.Replace(" ", "_");

        // another example
        new MethodAntiPatterns();
    }
}

Most string methods – Replace, Trim (and its variants), Remove, Substring and the like – return a 'new' string value instead of mutating the input string. Even if the output of these methods is not used, the CLR still allocates the result in memory. Another similar example is the creation of an object that is never used (the MethodAntiPatterns object in the example).
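The fix is simply to consume the returned value, for example:

// assign the result; the original string is left unchanged
string replaced = input.Replace(" ", "_");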

Virtual methods in constructor can cause issues

The heading speaks for itself. When calling virtual methods from the constructor of ABaseClass, you cannot guarantee that ADerivedClass has finished initializing.

public partial class ABaseClass
{
    protected bool init = false;

    public ABaseClass()
    {
        Console.WriteLine(".ctor – base");
        DoWork();
    }

    protected virtual void DoWork()
    {
        Console.WriteLine("dowork – base >> "
            + init);
    }
}

public partial class ADerivedClass : ABaseClass
{
    public ADerivedClass()
    {
        Console.WriteLine(".ctor – derived");
        init = true;
    }

    protected override void DoWork()
    {
        Console.WriteLine("dowork – derived >> "
            + init);

        base.DoWork();
    }
}
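To see the issue, construct a derived object. The base constructor runs first and dispatches the virtual call to the derived override before the derived constructor body has executed, so init is still false at that point:

new ADerivedClass();

// output:
// .ctor – base
// dowork – derived >> False
// dowork – base >> False
// .ctor – derived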

 

Use SecurityCritical attribute for code that requires elevated privileges

Accessing critical code from a non-critical block is not a good practice.

Mark methods and delegates that require elevated privileges with the SecurityCritical attribute, and ensure that only the right code (with elevated privileges) can call those methods or delegates

[SecurityCritical]
public delegate void CriticalDelegate();

public class DelegateAntiPattern
{
    public void Experiment()
    {
        CriticalDelegate critical = new CriticalDelegate(CriticalMethod);

        // Should not call a non-critical method or vice-versa
        CriticalDelegate nonCritical = new CriticalDelegate(NonCriticalMethod);
    }

    // Should not be called from a non-critical delegate
    [SecurityCritical]
    private void CriticalMethod() { }

    private void NonCriticalMethod() { }
}

 

Override GetHashCode when overriding the Equals method

When you override the Equals method to do object comparisons, you typically choose one or more (mandatory) fields to check whether 2 objects are the same. So your Equals method would look like,

public class User
{
    public string Name { get; set; }
    public int Id { get; set; }

    // optional for comparison
    public string PhoneNumber { get; set; }

    public override bool Equals(object obj)
    {
        if (obj == null) return false;

        var input = obj as User;
        return input != null &&
            (input.Name == Name && input.Id == Id);
    }
}

 

Now this approach checks whether all mandatory field values are the same. That looks fine in a demonstration, but when you are dealing with business entities this method becomes an anti-pattern. The best approach for such comparisons would be to rely on GetHashCode to find out whether the objects are the same

public override bool Equals(object obj)
{
    if (obj == null) return false;

    var input = obj as User;
    return input == this;
}

public override int GetHashCode()
{
    unchecked
    {
        // 17 and 23 are primes used to combine hashes;
        // the C# compiler uses a similar algorithm
        // for anonymous types
        int hash = 17;
        hash = hash * 23 + Name.GetHashCode();
        hash = hash * 23 + Id.GetHashCode();
        return hash;
    }
}

You can use any hashing algorithm here to compute the hash of an object. In this case, comparisons happen between computed hashes of objects (int values), which is more accurate, faster and more scalable as you add new properties to the comparison.

Detach the events when not in use

Is it necessary to remove event handlers explicitly in C#? Yes, if you are looking for a lower memory footprint in your application. Leaving events subscribed is an anti-pattern.

Let's understand the reason with an example

public class Publisher
{
    public event EventHandler Completed;

    public void Process()
    {
        // do something
        if (Completed != null)
        {
            Completed(this, EventArgs.Empty);
        }
    }
}

public class Subscriber
{
    public void Handler(object sender, EventArgs args) { }
}

Now we will attach the Completed event of Publisher to the Handler method of Subscriber to understand the clean-up.

Publisher pub = new Publisher();
Subscriber sub = new Subscriber();
pub.Completed += sub.Handler;

// this will invoke the event
pub.Process();

// frees up the event & references
pub.Completed -= sub.Handler;

// will not invoke the event
pub.Process();

// frees up the memory
pub = null; sub = null;

After the Process method has executed, the Handler method has received the execution flow and completed its processing. However, the event is still wired up, and so are its references – if you call Process again, the Handler method will be invoked. When we unsubscribe (-=) the Handler method, the event association and its references are freed from memory, but the pub and sub objects are not freed yet. When pub and sub are assigned null, they are marked for collection by the GC.

If we do not unsubscribe (-=) and keep the other code as-is, the GC will check for live references to pub and sub and will find a live event. It will not collect these objects, and they will cause a memory leak. This anti-pattern is most prevalent in UI-based solutions where UI events are attached/hooked to code-behind / view-models / facades.

Following these practices will definitely reduce your application's footprint and make it faster.

Which one is better: JSON vs. XML serialization?

October 23, 2013 CSharp, Visual Studio

One of the hot topics of discussion in building enterprise applications is whether one should use JSON or XML based serialization for

  • data serialization and deserialization
  • data storage
  • data transfer

To illustrate these aspects, let's write some code to help establish the facts. Our code involves creating an entity, User

public class User
{
    public string Name { get; set; }
    public int Id { get; set; }
    public UserType Type { get; set; }
}

public enum UserType { Tech, Business, Support }
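For intuition, a single User instance serializes roughly as follows (representative output – the exact XML varies with serializer settings, and Json.NET writes enums as numbers by default):

XML (element body only; the declaration and namespace attributes add further overhead):
<User><Name>ABC</Name><Id>1</Id><Type>Business</Type></User>

JSON (Json.NET):
{"Name":"ABC","Id":1,"Type":1}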

 

To test performance and other criteria, we will use the standard XML serialization, and for JSON we will evaluate these parameters using 2 open-source frameworks: Newtonsoft.Json and ServiceStack.Text

To create dummy data, our code looks like

 

private Random rand = new Random();
private char[] letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToCharArray();

private List<User> GetDummyUsers(int max)
{
    var users = new List<User>(max);
    for (int i = 0; i < max; i++)
    {
        users.Add(new User { Id = i, Name = GetRandomName(), Type = UserType.Business });
    }

    return users;
}

private string GetRandomName()
{
    int maxLength = rand.Next(1, 50);
    string name = string.Empty;
    for (int i = 0; i < maxLength; i++)
    {
        name += letters[rand.Next(26)];
    }
    return name;
}

private long Compress(byte[] data)
{
    using (var output = new MemoryStream())
    {
        using (var compressor = new GZipStream(output, CompressionMode.Compress, true))
        using (var buffer = new BufferedStream(compressor, data.Length))
        {
            for (int i = 0; i < data.Length; i++)
                buffer.WriteByte(data[i]);
        }
        return output.Length;
    }
}

 

The serialization logic to convert a List<User> to a serialized string and gather the statistics is

 

public void Experiment()
{
    DateTime dtStart = DateTime.Now;
    List<User> users = GetDummyUsers(20000); // change the number here
    Console.WriteLine("Data generation  \t\t took mSec: "
        + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    Console.WriteLine("----");

    dtStart = DateTime.Now;
    var xml = MyXmlSerializer.Serialize<List<User>>(users, Encoding.UTF8);
    Console.WriteLine("Length (XML):      \t" + xml.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    xml = MyXmlSerializer.Serialize<List<User>>(users, Encoding.UTF8);
    Console.WriteLine("Length (XML):      \t" + xml.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    dtStart = DateTime.Now;
    var json = JsonConvert.SerializeObject(users);
    Console.WriteLine("Length (JSON.NET): \t" + json.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    json = JsonConvert.SerializeObject(users);
    Console.WriteLine("Length (JSON.NET): \t" + json.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    var serializer2 = new JsvSerializer<List<User>>();
    dtStart = DateTime.Now;
    var json2 = serializer2.SerializeToString(users);
    Console.WriteLine("Length (JSON/ST) : \t" + json2.Length + " took mSec: "
        + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    json2 = serializer2.SerializeToString(users);
    Console.WriteLine("Length (JSON/ST) : \t" + json2.Length + " took mSec: "
        + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    Console.WriteLine("----");

    var xmlBytes = Converter.ToByte(xml);
    Console.WriteLine("Bytes (XML):     \t" + xmlBytes.Length);

    var jsonBytes = Converter.ToByte(json);
    Console.WriteLine("Bytes (JSON):    \t" + jsonBytes.Length);

    var jsonBytes2 = Converter.ToByte(json2); // note: json2, not json
    Console.WriteLine("Bytes (JSON/ST): \t" + jsonBytes2.Length);

    Console.WriteLine("----");

    var compressedBytes = Compress(xmlBytes);
    Console.WriteLine("Compressed Bytes (XML):     \t" + compressedBytes);

    compressedBytes = Compress(jsonBytes);
    Console.WriteLine("Compressed Bytes (JSON):    \t" + compressedBytes);

    compressedBytes = Compress(jsonBytes2);
    Console.WriteLine("Compressed Bytes (JSON/ST): \t" + compressedBytes);

    Console.WriteLine("----");

    dtStart = DateTime.Now;
    MyXmlSerializer.Deserialize<List<User>>(xml, Encoding.UTF8);
    Console.WriteLine("Deserialized (XML): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    MyXmlSerializer.Deserialize<List<User>>(xml, Encoding.UTF8);
    Console.WriteLine("Deserialized (XML): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    dtStart = DateTime.Now;
    JsonConvert.DeserializeObject<List<User>>(json);
    Console.WriteLine("Deserialized (JSON): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    JsonConvert.DeserializeObject<List<User>>(json);
    Console.WriteLine("Deserialized (JSON): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    dtStart = DateTime.Now;
    serializer2.DeserializeFromString(json2);
    Console.WriteLine("Deserialized (JSON/ST): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    serializer2.DeserializeFromString(json2);
    Console.WriteLine("Deserialized (JSON/ST): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
}

Statistics

I ran this program on a quad-core processor with 6 GB RAM; the sections below discuss the statistics gathered.

 


 

What do the statistics mean?

 

  • Serialization and Deserialization Performance

    For XML serialization, there is no decrease in serialization time on subsequent requests, even when the same XmlSerializer object is reused. With JSON, we see that the frameworks reduce the serialization time drastically on subsequent calls. JSON serialization appears to give us a gain of 50-97% in serialization time.

    When dealing with deserialization, XML deserialization consistently performs better with both data sets (20K and 200K). JSON deserialization seems to take more time even when averaged.

    Every application requires both serialization and deserialization. Considering the performance statistics, it looks like ServiceStack.Text outperforms the other two libraries.

    Winner: JSON with ServiceStack.Text, taking 91% of the time taken by XML serialization + deserialization

  • Data Storage

    Looking at the data storage aspect, an XML-based string definitely requires more storage space. So if you are storing strings, JSON is the clear choice.

    However, when you apply GZip compression to the serialized strings generated by the XML / JSON frameworks, there is no major difference in storage size – though JSON still saves some bytes for you! This is one reason some NoSQL databases use JSON-based storage instead of XML-based storage. For quicker retrieval, however, you need to apply some indexing mechanism too.

    Winner: JSON without compression; with compression, a minor gain from using JSON

  • Data Transfer

    Data transfer comes into the picture when you are transferring your objects over EMS / MQ / web services. Keeping other parameters such as network latency, availability, bandwidth and throughput constant in both cases, the amount of data transferred becomes a function of the data length and the protocol used over the network.

    For EMS / MQ – the data length, as shown in the statistics, is less with JSON when sent as a string, and almost the same when sent as compressed bytes.

    For WebServices / WCF – data transfer depends on the protocol used. If you are using SOAP-based services, apart from your serialized XML you will also have SOAP headers forming part of the payload. But if you are using REST, you can return plain XML / JSON, and in that case a JSON string will have a smaller payload than an XML string.

    Winner: depends on the protocol of transfer and the compression technique used

 

Note: The performance may vary slightly when using other C# libraries, but the relative differences should hold.

 

Hope this article helps you choose the right protocol and technique for your application. If you have any questions, I'll be happy to help you!

WCF NetTcp Port Sharing on Windows 8+

October 21, 2013 CSharp, Visual Studio, WCF

In Windows 8, when hosting WCF in a managed application running under normal user privileges with a NetTcp binding, you might get an exception

Verify that the current user is granted access in the appropriate allowAccounts section of SMSvcHost.exe.config

This is due to enhanced security in Windows 7 and later operating systems.
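For context, the exception typically surfaces when opening a port-sharing-enabled NetTcp endpoint from a normal user account, as in this sketch (the service and contract names are hypothetical):

var binding = new NetTcpBinding();
binding.PortSharingEnabled = true;   // requires the Net.Tcp Port Sharing Service

var host = new ServiceHost(typeof(MyService),
    new Uri("net.tcp://localhost:808/MyService"));
host.AddServiceEndpoint(typeof(IMyService), binding, string.Empty);
host.Open();   // fails here if the user lacks port sharing access rights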

There are 3 ways to get around this exception

Run as Administrator

Please note that this problem occurs only when you are running the WCF service under a user account. If you run the service as Local System, Network Service, Local Service or Administrator, there will be no issues at all.

When running the application as an Administrator, the NetTcp Port Sharing service assumes that you are authorized and lets the service run and share data on the TCP layer.

However, if you do not have admin rights you can take the next approach

Downgrade your NET.TCP Port Sharing service

This problem occurs because .NET 4.0 upgraded the Net.Tcp Port Sharing service. So if you have installed .NET 4.0, you will face this issue.

The easiest way is to change the registry configuration to use the v3.0 version:

Key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\NetTcpPortSharing

New Value:
C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMSvcHost.exe

If you compare the SMSvcHost.exe.config files in v3.0 and v4.0.30319, you will not find any configuration change that would stop port sharing – so I believe it is the way SMSvcHost itself works that has changed.

Grant port sharing rights to yourself

This is a lengthy process, but if you follow these steps accurately you can get rid of this error

  • Visit http://technet.microsoft.com/en-us/sysinternals/bb897417 and download PsTools. We are interested in PsGetSid, which gives you the unique security identifier (SID) for a user or a group. If you are targeting a single user, you want the SID of that user; otherwise, you can use a group containing all your target users. Run PsGetSid <username> to get your SID
  • Open the SMSvcHost.exe.config (of .NET 4.0 version, usually in C:\Windows\Microsoft.NET\Framework\v4.0.30319 folder)
  • You will require Admin rights to edit this config file. The best way is to open a Command Prompt (Run As Administrator) and then type

    notepad C:\Windows\Microsoft.NET\Framework\v4.0.30319\SMSvcHost.exe.config

  • The configuration file has a section called system.serviceModel.activation and a sub-section net.tcp. This section has the security identifiers of LocalSystem, LocalService, NetworkService and Administrators. We need to add your SID to this configuration file
  • Without changing anything else, add the following line in the configuration file (just next to the LocalSystem account)

<add securityIdentifier="your-SID-that-starts-with-S"/>

Restart the Net.Tcp Port Sharing service and you should be good to go.

What is the difference between System.String and string?

October 15, 2013 CSharp, Visual Studio

One of the questions that a lot of developers ask is – is there any difference between string and System.String, and which should be used?

 

Short Answer

There is no difference between the two.  You can use either of them in your code.

 

Explanation

 

System.String is a class (reference type) defined in mscorlib in the namespace System. In other words, System.String is a type in the CLR.

string is a keyword in C#

 

Before we understand the difference, let us understand the terms BCL and FCL.

The BCL (Base Class Library) is the part of the Common Language Infrastructure (CLI) available to languages like C#, A#, Boo, Cobra, F#, IronRuby, IronPython and other CLI languages. It includes common functionality such as file read/write (IO) and database/XML interactions. The BCL was first implemented in Microsoft .NET in the form of mscorlib.dll

The FCL (Framework Class Library) is the standard Microsoft .NET-specific library containing reusable classes/assets like System, System.CodeDom, System.Collections, System.Diagnostics, System.Globalization, System.IO, System.Resources and System.Text

Now in C#, the keyword string directly maps to System.String (a BCL type). Similarly, int maps directly to System.Int32.

Here int is mapped to an integer type that is 32-bit. In another language, the keyword int could instead map to a 64-bit integer type.

So the fact that using string and System.String in C# makes no difference is well established.
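A quick demonstration that the two names compile to exactly the same type:

string a = "hello";
System.String b = "hello";

Console.WriteLine(a.GetType() == b.GetType());        // True
Console.WriteLine(typeof(string) == typeof(String));  // True – both are System.String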

 

Is it better to still use string instead of System.String?

 

There is no universally agreed answer to this.

But, in my view, even though both string and System.String mean the same and there is no difference in application performance, it is better to use string, because string is the C# language keyword.

Also, the C# language specification states,

As a matter of style, use of the keyword is favored over use of the complete system type name

Following this practice ensures that your code consistently uses keywords wherever possible, rather than mixing keyword and full type-name forms.

HTTP/S WCF: Commonly faced access issues and solutions

September 5, 2013 CSharp, Visual Studio, WCF

When running WCF services on Windows 7+ (actually, Vista too), if you write simple code on the service side to open the service host, most users experience this issue

HTTP could not register URL http://+:8010/. Your process does not have access rights to this namespace (see http://go.microsoft.com/fwlink/?LinkId=70353 for details).

When you visit the link in the above error, you do not necessarily get the information required to solve this issue quickly. So let's look at the solution to this commonly faced problem. HTTP/S services are usually hosted on IIS or self-hosted in an application. When registering such a service on an operating system with enhanced security (Win7+, Win 2008/2012), you need to perform some administration work, as described below. Some of you may also hit other issues, like endpoint not found or SSL certificate errors.

This article explains what is required to fix these issues

 

Namespace registration

Namespace registration grants access rights for a specific URL to a specified group/user on a domain or computer. This one-time activity ensures that only authorized users can open endpoints on a computer/server – which is definitely more secure.

How to authorize your service/user account:

Local user account:

netsh http add urlacl url=http://+:8010/ user=ComputerName\Username

Domain user account:

netsh http add urlacl url=http://+:8010/ user=DomainName\Username

Built-in Network Service account

netsh http add urlacl url=http://+:8010/ user="NT AUTHORITY\NETWORK SERVICE"

Most likely this will directly solve your problem ("HTTP could not register URL"), but I would advise going through the other steps on a production machine (actually, any restricted environment) to ensure there are no hiccups.

 

Firewall Exception

Most organizations have firewall restrictions on HTTP communication on production machines. In that case, you have to ensure that the port on which you are communicating is added to the firewall exception list.

 

SSL Certificate Store

 

HTTPS services rely on certificate exchange between server and client (in the case of mutual authentication) to authenticate the request and also to encrypt the request data. These certificates are stored in a certificate store, and it is important to bind the certificate to the port.

You can use the following command to bind an SSL certificate to a port (8010)

httpcfg set ssl -i 0.0.0.0:8010 -h thumbprint-of-certificate-in-certificate-store

The thumbprint of a certificate can be retrieved by viewing the certificate properties in Certificate Management Console (mmc)
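On newer operating systems where httpcfg is not available, the equivalent netsh command is (the appid can be any GUID that identifies your application):

netsh http add sslcert ipport=0.0.0.0:8010 certhash=thumbprint-of-certificate appid={00112233-4455-6677-8899-AABBCCDDEEFF}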

 

Changing these settings should help you resolve all service access issues
