
Application Design: Going Stateless on Azure

May 20, 2015 Azure

Disclaimer:
I am glad to say that I authored this exclusive article for the Microsoft Press Blog and MVP Award Program Blog, where it was first published on 4th May, 2015.
This article is available on my website for archival purposes.

 

The components of a cloud application are distributed and deployed across multiple cloud resources (virtual machines) to benefit from an elastic, demand-driven environment. One of the most important factors in this elastic cloud is the ability to add or remove application components and resources as and when required to fulfil scalability needs.

However, when components are removed, the internal state or information they hold may be lost.

That’s when the application needs to move its internal state from an in-memory store to a persistent data store, so that scalability and reliability are assured even when components are removed or fail.  In this article, we will understand ‘being stateless’ and will explore strategies like database-driven state management and cache-driven state management.

 

Being stateless

 

Statelessness refers to the fact that no data is preserved in the application memory itself between multiple runs of a strategy (i.e. an action). When the same strategy is executed multiple times, no data from one run is carried over to another. Statelessness allows our system to execute the first run of the strategy on one resource (say X) in the cloud, the second on another available resource (say Y, or even on X again), and so on.

This doesn’t mean that applications should not have any state. It merely means that the actions should be designed to be stateless and should be provided with the necessary context to build up the state.

If our application has a series of such actions (say A1, A2, A3…) to be performed, each action (say A1) receives context information (say C1), executes, and builds up the context (say C2) for the next action (say A2). However, action A2 should not depend directly on action A1; it should be executable independently using the context C2 available to it.

How can we make our application stateless?

 

The conventional approach to making applications stateless is to push the state of the web/service tier out of the application tier to somewhere else – either into configuration or into a persistent store. As shown in the diagram below, the user request is routed through the App Tier, which can refer to the configuration to decide on the persistent store (such as a database) in which to keep the state. Finally, an application utility service (preferably isolated from the application tier) can perform state management.

 

 

The App Utility Service (in the above diagram) takes on the onus of state management. It requires the execution context from the App Tier so that it can trigger either a data-driven state machine or an event-driven state machine. An example state machine for a bug management system would have four states, as shown below.

 

To achieve this statelessness in the application, there are several strategies for pushing the application state out of the application tier. Let’s consider a few of them.

 

Database-driven State Management

 

Taking the same bug management system as an example, we can derive the state using simple data structures stored in database tables.

Current State | Event     | Action       | Next State
------------- | --------- | ------------ | ----------
START         | NewBug    | OpenNew      | Bug Opened
Bug Opened    | Assigned  | AssignForFix | Fix Needed
Bug Opened    | Not A Bug | MarkClosed   | Bug Closed
Fix Needed    | Resolved  | MarkResolved | Bug Fixed
Fix Needed    | ReOpened  | AssignForFix | Fix Needed
Bug Fixed     | Tested    | MarkClosed   | Bug Closed
Bug Fixed     | ReOpened  | MarkOpen     | Fix Needed
Bug Closed    | –         | –            | END

 

The above structure defines only the finite states that a bug resolution can visit. Each action needs to be context-aware (i.e. it needs minimal bug information and, sometimes, the state from which it was invoked) so that it can independently process the bug and identify the next state (especially when multiple next states are possible).
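
To make this concrete, a minimal sketch of a data-driven transition lookup is shown below; the StateTransition and BugStateMachine types are illustrative placeholders, and the transition rows would be loaded from the table above (stored in SQL Database or Table Storage).

using System.Collections.Generic;
using System.Linq;

public class StateTransition
{
    public string CurrentState { get; set; }
    public string Event { get; set; }
    public string Action { get; set; }
    public string NextState { get; set; }
}

public class BugStateMachine
{
    private readonly IList<StateTransition> _transitions;

    // Transitions are loaded once from the persistent store (SQL Database / Table Storage)
    public BugStateMachine(IList<StateTransition> transitions)
    {
        _transitions = transitions;
    }

    // Given the persisted current state and an incoming event, returns the matching
    // transition (the action to execute and the next state to persist), or null.
    public StateTransition Fire(string currentState, string bugEvent)
    {
        return _transitions.FirstOrDefault(t =>
            t.CurrentState == currentState && t.Event == bugEvent);
    }
}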

When we look at database-driven state management on Azure, we can leverage one of these out-of-the-box solutions:

  • Azure SQL Database: The best choice when we want to work with relational and structured data using relations, indexes, constraints, etc. It is a managed Microsoft SQL Server database offering hosted on Azure.
  • Azure Storage Tables: Works great when we want to work with structured data without relationships, possibly at larger volumes. Better performance at lower cost is often observed with Storage Tables, especially for data without relationships. Further reading on this topic – SQL Azure and Microsoft Azure Table Storage by Joseph Fultz.
  • DocumentDB: DocumentDB, a NoSQL database, positions itself as a solution for storing unstructured (schema-free) data with rich query capabilities at low latency. Unlike other document-based NoSQL databases, it allows creation of stored procedures and querying with SQL syntax.

Depending on our tech stack, size of the state and the expected number of state retrievals, we can choose one of the above solutions.

While moving state management to a database works for most scenarios, there are times when these reads and writes to the database may slow down the performance of our application. Considering that state is transient data, and most of it does not need to be persisted across two sessions of the user, there is a need for a cache system that serves state objects at low latency.

 

Cache-driven State Management

 

Persisting state data in a cache store is another excellent option available to developers.  Web developers have been storing state data (like user preferences, shopping carts, etc.) in cache stores ever since ASP.NET was introduced.  By default, ASP.NET stores state in the memory of the hosting application pool.  In-memory state storage is not reliable in the cloud for the following reasons:

  • The frequency at which the ASP.NET worker process recycles is beyond the control of the application, and a recycle wipes out the in-memory cache
  • With a load balancer in the cloud, there isn’t any guarantee that the host that processed the first request will also receive the second one, so the in-memory information on multiple servers may go out of sync

This typical in-memory state management is referred to as ‘In-Role’ cache when the application is hosted on the Azure platform.

The alternative to in-memory state management is out-of-process management, where state is kept either by a separate service or in SQL Server – something we discussed in the previous section.  This mechanism assures resiliency at the cost of performance.  For every request to be processed, there is an additional network call to retrieve state information before the request is processed, and another network call to store the new state.

The need of the hour is a high-performance, in-memory, distributed caching service that can leverage Azure infrastructure to act as a low-latency state store – such as Azure Redis Cache.

Based on the tenancy of the application, we can have a single-node or multi-node (primary/secondary) Redis Cache to store data types such as lists, hashed sets, sorted sets and bitmaps.


Azure Redis Cache supports master-slave replication with very fast, non-blocking first synchronization and auto-reconnection on a network split. So, when we choose multiple nodes for Redis Cache, we ensure that our application state is not held on a single server; it gets replicated to the slave nodes in near real time. It also promises to promote a slave node automatically when the master node goes offline.
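
A minimal sketch of reading and writing such transient state against Azure Redis Cache with the StackExchange.Redis client is shown below; the connection string, key prefix and expiry are illustrative assumptions.

using System;
using StackExchange.Redis;

public class RedisStateStore
{
    // Hypothetical connection string; in a real application this comes from configuration
    private static readonly ConnectionMultiplexer Connection = ConnectionMultiplexer.Connect(
        "<cache-name>.redis.cache.windows.net,ssl=true,password=<access-key>");

    public void SaveState(string bugId, string stateJson)
    {
        IDatabase cache = Connection.GetDatabase();
        // Keep the state only for the duration of a user session (assumed to be 30 minutes here)
        cache.StringSet("bug:state:" + bugId, stateJson, TimeSpan.FromMinutes(30));
    }

    public string LoadState(string bugId)
    {
        IDatabase cache = Connection.GetDatabase();
        return cache.StringGet("bug:state:" + bugId);
    }
}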

 

Fault tolerance with State Management Strategies

With both database-driven and cache-driven state management, we also need to handle temporary service interruptions – possibly caused by network connections, layers of load balancers in the cloud, or backbone services that these solutions use. To give a seamless experience to our end users, our application design should cater for these transient failures.

Handling database transient errors

Using the Transient Fault Handling Application Block with plain ADO.NET, we can define a policy that retries execution of a database command, and a wait period between tries, to provide a reliable connection to the database. Or, if our application uses Entity Framework 6 or later, we can include SqlAzureExecutionStrategy, an execution strategy that configures the policy to retry 3 times with an exponential wait between tries.
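
For Entity Framework 6, a minimal sketch of wiring up SqlAzureExecutionStrategy through a code-based configuration looks like the following; the retry count and maximum delay are illustrative values. EF discovers such a DbConfiguration automatically when it lives in the same assembly as the DbContext.

using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        // Retry up to 3 times with an exponentially increasing delay, capped at 5 seconds
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(3, TimeSpan.FromSeconds(5)));
    }
}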

Every retry consumes computation power and slows down the application. So, we should define a policy – a circuit breaker – that prevents the service from being throttled by repeatedly processing failed requests. There is no one-size-fits-all solution for breaking the retries.

There are two ways to implement a circuit breaker for state management –

  • Fallback or fail silent – If there is a fallback mechanism that completes the requested functionality without state management, the application should attempt to execute it. For example, when the database is not available, the application can fall back on the cache object. If no fallback is available, our application can fail silent (i.e. return an empty state for a request).
  • Fail fast – Return an error to the user to avoid flooding the retry service, along with a friendly response to try again later.

Handling cache transient errors

Azure Redis Cache internally uses a ConnectionMultiplexer that automatically reconnects to the Redis cache after a disconnection or network glitch. However, StackExchange.Redis does not retry the get and set commands. To overcome this limitation, we can use a library such as Polly that provides policies like Retry, Retry Forever, Wait and Retry, and Circuit Breaker in a fluent manner.
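
A minimal sketch of wrapping a cache read with a Polly wait-and-retry policy is shown below; the exception types handled and the back-off intervals are illustrative assumptions.

using System;
using Polly;
using StackExchange.Redis;

public class ResilientCacheReader
{
    private readonly IDatabase _cache;

    public ResilientCacheReader(IDatabase cache)
    {
        _cache = cache;
    }

    public string GetState(string key)
    {
        // Retry transient Redis failures up to 3 times, waiting 200 ms, 400 ms and 800 ms
        var retryPolicy = Policy
            .Handle<RedisConnectionException>()
            .Or<TimeoutException>()
            .WaitAndRetry(3, attempt => TimeSpan.FromMilliseconds(200 * Math.Pow(2, attempt - 1)));

        return retryPolicy.Execute(() => (string)_cache.StringGet(key));
    }
}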

The take-away!

The key take-away is to design applications with the understanding that cloud infrastructure is elastic, and that our applications should be designed to leverage its benefits without compromising stability and user experience. It is, hence, of utmost importance to think about application state storage, its access mechanisms, exception handling and dynamic demand.

First published on 4th May, 2015 on Microsoft Press Blog and MVP Award Program Blog

Optimizing performance of your WCF Services

February 3, 2014 Visual Studio, WCF

The performance optimization process starts when you are given non-functional requirements (NFRs) such as availability, concurrent users, scalability and so on.  If you are not given NFRs, it is easy to ignore them and continue building the application using Visual Studio wizards that generate code recipes – until a production issue raises questions about why a recently developed application is poorly designed.  As the experts say, the earlier you detect issues in the life cycle of a product, the cheaper they are to resolve.  So it is better to be aware of design principles and be cautious of code that can cause a significant drop in performance.

When it comes to Windows Communication Foundation (WCF), an architect has to take several design decisions.  This article outlines some of the design decisions an architect/lead has to take when designing a WCF service.  The prerequisite for this article is preliminary knowledge of WCF; you can refer to the MSDN articles on WCF.

The right Binding

 

Choosing the right binding is not difficult.  You can read an in-depth article on WCF bindings on MSDN if you want more information.  If we have to summarize the types and some of the overheads of using them, it would be:

  • Basic binding: The BasicHttpBinding is designed to expose a WCF service as a legacy ASMX web service, so that old clients or cross-platform clients can work with the new services hosted over the intranet or Internet.   This binding, by default, does not enable any security.  The default message encoding is text/XML.
  • Web Service (WS) binding: The WSHttpBinding class uses HTTP or HTTPS for transport, and is designed to offer a variety of features such as reliability, transactions, and security over the Internet.  This means that if a BasicHttpBinding takes 2 network calls (request and response) to complete a request, a WSHttpBinding may take over 5 network calls, which makes it slower than BasicHttpBinding.  If your application is consuming services hosted on the same machine, it is preferable to use the IPC binding instead of WSHttpBinding to achieve scalable performance.
  • Federated WS binding: The WSFederationHttpBinding binding is a specialization of the WS binding, offering support for federated security.
  • Duplex WS binding: The WSDualHttpBinding binding is similar to the WS binding except that it also supports bidirectional communication from the service to the client.  Reliable sessions are enabled by default.
  • TCP binding: The NetTcpBinding is primarily used for cross-machine communication on the intranet and supports a variety of features, including reliability, transactions, and security, and is optimized for WCF-to-WCF communication – only .NET clients can communicate with .NET services using this binding. It is an ideal replacement for socket-based communication.  To achieve greater performance, try changing the following settings (a programmatic sketch of these settings follows this list)
    • Set the value of serviceThrottling to highest
    • Increase the maxItemsInObjectGraph to 2147483647
    • Increase the values of listenBacklog, maxConnections, and maxBuffer
  • Peer network binding: The NetPeerTcpBinding uses peer networking as a transport. The peer network-enabled clients and services all subscribe to the same grid and broadcast messages to it.
  • IPC binding: The NetNamedPipeBinding class uses named pipes as a transport for same-machine communication. It is the most secure binding since it cannot accept calls from outside the machine, and it supports a variety of features similar to the TCP binding.  It can be used efficiently for cross-process communication on the same machine.
  • MSMQ binding: The NetMsmqBinding uses MSMQ for transport and is designed to offer support for disconnected queued calls.
  • MSMQ integration binding: The MsmqIntegrationBinding converts WCF messages to and from MSMQ messages, and is designed to interoperate with legacy MSMQ clients.
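
A minimal, self-hosted sketch of the NetTcpBinding settings mentioned above (expressed programmatically rather than in configuration) is shown below; the contract, address and limit values are illustrative assumptions.

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IPingService
{
    [OperationContract]
    string Ping(string message);
}

public class PingService : IPingService
{
    public string Ping(string message) { return "pong: " + message; }
}

public static class TcpHostSample
{
    public static void Main()
    {
        var binding = new NetTcpBinding
        {
            ListenBacklog = 200,              // listenBacklog
            MaxConnections = 200,             // maxConnections
            MaxBufferSize = 1048576,          // maxBuffer (1 MB)
            MaxReceivedMessageSize = 1048576
        };

        var host = new ServiceHost(typeof(PingService), new Uri("net.tcp://localhost:9000"));
        host.AddServiceEndpoint(typeof(IPingService), binding, "ping");

        // serviceThrottling: raise the conservative defaults
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 512,
            MaxConcurrentSessions = 512,
            MaxConcurrentInstances = 1024
        });

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}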

The right Encoder

Once you have decided on the binding you are going to use, the first level of optimization can be done at the message level.   There are three message encoders available out of the box in the .NET Framework.

  • Text – The default encoder for the BasicHttpBinding and WSHttpBinding bindings – it uses a text-based (UTF-8 by default) XML encoding
  • MTOM – An interoperable format (though less broadly supported than text) that allows for a more optimized transmission of binary blobs, as they don’t get Base64 encoded.
  • Binary – The default encoder for the NetTcpBinding and NetNamedPipeBinding bindings – it avoids Base64 encoding your binary blobs, and also uses a dictionary-based algorithm to avoid data duplication. Binary supports “session encoders” that get smarter about data usage over the course of the session (through pattern recognition).

Having said that, the best match for you is decided based on:

  • Size of the encoded message – as it is going to be transferred over the wire.  A small message without a deep object hierarchy is best transmitted in text/XML format.
  • CPU load – incurred while encoding the messages and also while processing your operation contracts
  • Simplicity – Messages converted into binary are no longer readable by the naked eye.  If you do not need to log the messages and want faster transmission, binary is the format to go for
  • Interoperability – MTOM does not ensure 100% interoperability with non-WCF services.  If you do not require interoperability, binary is the format to go for

The binary encoder, so far, seems to be the fastest encoder, and if you are using NetTcpBinding or NetNamedPipeBinding a binary encoder will do wonders for you!    Why?  Over a period of time, “session encoders” become smarter (by using a dictionary and analysing patterns) and perform optimizations to achieve faster speeds.

Final words – A text encoder converts binary into Base64 format, which is an overhead (around 4-5 times) and can be avoided by using the binary or MTOM encoders.  If there is no binary data in the message, the MTOM encoder tends to slow down performance as it has the overhead of converting the message into MIME format.  So try out different message encoders to check what suits your requirement!
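
For the case where both client and service are WCF but you still need HTTP as the transport, one possible approach is to pair the binary encoder with the HTTP transport via a CustomBinding, as in the sketch below (the factory class is illustrative):

using System.ServiceModel.Channels;

public static class BindingFactory
{
    public static Binding CreateBinaryOverHttpBinding()
    {
        // Encoding element first, transport element last
        return new CustomBinding(
            new BinaryMessageEncodingBindingElement(),
            new HttpTransportBindingElement());
    }
}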

The right Compression

 

Choosing the right encoder can reduce the message size by 4-5 times.  But what if the message size is still in megabytes?  There are ways to compress your message and make it compact.  If your WCF services are hosted on IIS or WAS (Windows Server 2008/2012), you can opt for IIS compression.  IIS compression lets you compress all outgoing/incoming messages using GZip.  To enable it, follow the steps in Scott Hanselman’s article – Enabling dynamic compression (gzip, deflate) for WCF Data Feeds, OData and other custom services in IIS7

Data Caching

 

Caching any data on which your service depends avoids dependency issues and gives you faster access to the same data.  There are several frameworks available to cache your data.  If your application is smaller (non-clustered, non-scalable, etc.) and your service is not stateless (you might want to make it stateless), you might consider in-memory caching; however, for large-scale applications you might want to check out AppFabric, Coherence, etc.

  • In-memory caching – If your WCF services are hosted in IIS or WAS, you can enable ASP.NET caching by adding the AspNetCompatibilityRequirements attribute to your service and setting aspNetCompatibilityEnabled to true in Web.config to use the ASP.NET caching block.   If your application is self-hosted, you can use the Enterprise Library Caching block.  If your application is built on .NET 4.0 or later, you can use runtime caching (System.Runtime.Caching) – a sketch follows below.
  • AppFabric – Use AppFabric for dedicated and distributed caching to increase service performance.  This helps you overcome several problems of in-memory caching, such as sticky sessions, caching in each component/service on a server, synchronization of the cache when any data changes, and the like.

When storing objects in cache, prefer caching serializable objects.  It can help you to switch caching providers at any time.
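
A minimal sketch of the .NET 4.0 runtime cache (System.Runtime.Caching) used inside a service-layer class is shown below; the cache key, expiry and the LoadRatesFromDatabase stand-in are illustrative assumptions.

using System;
using System.Runtime.Caching;

public class RateProvider
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public decimal[] GetRates()
    {
        var rates = Cache.Get("fx-rates") as decimal[];
        if (rates == null)
        {
            rates = LoadRatesFromDatabase();   // the expensive call worth caching
            Cache.Set("fx-rates", rates,
                new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5) });
        }
        return rates;
    }

    private decimal[] LoadRatesFromDatabase()
    {
        // Stand-in for a real data access call
        return new[] { 1.0m, 0.85m, 110.2m };
    }
}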

Load Balance

 

Load balancing should not just be seen as a means to achieve scalability.  While it definitely increases scalability, many times increased performance is the driving force towards load balancing services.   There is an excellent article on MSDN on load balancing WCF services.

Accelerated using GPU

 

There are many open-source GPU APIs available that can enhance the performance of heavy data-computation or image-processing tasks.  Data computation tasks also involve data sorting, filtering and selection – operations that we usually do using LINQ or PLINQ.  In one of my projects, we observed that operations that took 20,000 milliseconds on an i5 processor using PLINQ barely took 900 milliseconds on the same machine using the GPU.  So leveraging the power of the GPU in your data-layer WCF services can boost the performance of such operations by more than 20 times.

Some of the APIs that I recommend are Accelerator by Microsoft and CUDA by NVIDIA.  Both of them support development in C# language.

Conclusion

 

With the right binding and encoder you can expect around a 10% increase in performance, but when bundled with data caching and GPU acceleration, performance can improve by well over 110%.  So if you are facing issues optimizing the performance of your WCF service, experiment with the above steps and get started.

Let me know if you need any other information on WCF

Method, Delegate and Event Anti-Patterns in C#

October 28, 2013 CSharp, Visual Studio

No enterprise application exists without methods, events and delegates, and every developer writes methods in his/her application.  While defining these methods/delegates/events, we follow standard definitions.  Beyond these definitions, there are some best practices that one should follow to ensure that a method does not leave dangling object references, that other methods are called in an appropriate way, that arguments are validated, and many more.

This article outlines some of the anti-patterns around methods, delegates and events, and in effect highlights the best practices to follow to get the best performance from your application and keep a very low memory footprint.

The right disposal of objects

We have seen multiple demonstrations that implementing the IDisposable interface (in class BaseClass) and wrapping its object instance in a ‘using’ block is sufficient for a good clean-up process.  While this is true in most cases, this approach does not guarantee that derived classes (let’s say DerivedClass) will have the same clean-up behaviour as the base class.

To ensure that all derived classes take responsibility for cleaning up their resources, it is advisable to add a protected virtual method to the BaseClass that is overridden in the DerivedClass, where cleanup is done appropriately.  One such implementation would look like,

  1. public class BaseClass : IDisposable
  2. {
  3.     protected virtual void Dispose(bool requiresDispose)
  4.     {
  5.         if (requiresDispose)
  6.         {
  7.             // dispose the objects
  8.         }
  9.     }
  10.  
  11.     public void Dispose()
  12.     {
  13.         Dispose(true);
  14.         GC.SuppressFinalize(this);
  15.     }
  16.  
  17.     ~BaseClass()
  18.     {
  19.         Dispose(false);
  20.     }
  21. }
  22.  
  23. public class DerivedClass: BaseClass
  24. {
  25.     // some members here    
  26.  
  27.     protected override void Dispose(bool requiresDispose)
  28.     {
  29.         // Dispose derived class members
  30.         base.Dispose(requiresDispose);
  31.     }
  32. }

This implementation assures that the object is not stuck in the finalizer queue when it is wrapped in a ‘using’ block, and that the members of both BaseClass and DerivedClass are freed from memory.

The return value of a method can cause a leak

While most of our focus is on freeing the resources used inside a method, the return value of the method also occupies memory space.   If you are returning an object, the memory space occupied (but never used) can be large.

Let’s see a piece of bad code that can leave unwanted objects in memory.

  1. public void MethodWhoseReturnValueIsNotUsed(string input)
  2. {
  3.     if (!string.IsNullOrEmpty(input))
  4.     {
  5.         // value is not used any where
  6.         input.Replace(" ", "_");
  7.  
  8.         // another example
  9.         new MethodAntiPatterns();
  10.     }
  11. }

Most of the string methods like Replace, Trim (and its variants), Remove, IndexOf and the like return a ‘new’ string value instead of manipulating the ‘input’ string.  Even if the output of these methods is not used, the CLR will create the value and store it in memory.  Another similar example is the creation of an object that is never used (ref: the MethodAntiPatterns object in the example).

Virtual methods in constructor can cause issues

The heading speaks for itself.  When calling virtual methods from the constructor of ABaseClass, you cannot guarantee that ADerivedClass has been fully initialized.

  1. public partial class ABaseClass
  2. {
  3.     protected bool init = false;
  4.     public ABaseClass()
  5.     {
  6.         Console.WriteLine(".ctor – base");
  7.         DoWork();
  8.     }
  9.  
  10.     protected virtual void DoWork()
  11.     {
  12.         Console.WriteLine("dowork – base >> "
  13.             + init);
  14.     }
  15. }
  16.  
  17. public partial class ADerivedClass: ABaseClass
  18. {
  19.     public ADerivedClass()
  20.     {
  21.         Console.WriteLine(".ctor – derived");
  22.         init = true;
  23.     }
  24.  
  25.     protected override void DoWork()
  26.     {
  27.         Console.WriteLine("dowork – derived >> "
  28.             + init);
  29.             
  30.         base.DoWork();
  31.     }
  32. }
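
For instance, constructing a new ADerivedClass() with the classes above produces the following console output – the overridden DoWork runs before the derived constructor body, so init is still false inside the override:

    .ctor – base
    dowork – derived >> False
    dowork – base >> False
    .ctor – derived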

 

Use SecurityCritical attribute for code that requires elevated privileges

Accessing of critical code from a non-critical block is not a good practice.

Mark methods and delegates that require elevated privileges with the SecurityCritical attribute, and ensure that only the right code (with elevated privileges) can call those methods or delegates.

  1. [SecurityCritical]
  2. public delegate void CriticalDelegate();
  3.  
  4. public class DelegateAntiPattern
  5. {
  6.     public void Experiment()
  7.     {
  8.         CriticalDelegate critical  = new CriticalDelegate(CriticalMethod);
  9.  
  10.         // Should not call a non-critical method or vice-versa
  11.         CriticalDelegate nonCritical = new CriticalDelegate(NonCriticalMethod);
  12.     }
  13.  
  14.     // Should not be called from non-critical delegate
  15.     [SecurityCritical]
  16.     private void CriticalMethod() {}
  17.         
  18.     private void NonCriticalMethod() { }
  19. }

 

Override GetHashCode when overriding the Equals method

When you are overriding the Equals method to do object comparisons, you would typically choose one or more (mandatory) fields to check whether two objects are the same.  So your Equals method would look like,

  1. public class User
  2. {
  3.     public string Name { get; set; }
  4.     public int Id { get; set; }
  5.  
  6.     //optional for comparison
  7.     public string PhoneNumber { get; set; }
  8.  
  9.     public override bool Equals(object obj)
  10.     {
  11.         if (obj == null) return false;
  12.  
  13.         var input = obj as User;
  14.         return input != null &&
  15.             (input.Name == Name && input.Id == Id);
  16.     }
  17. }

 

Now this approach checks whether all mandatory field values are the same.  This looks good in a demonstration example, but when you are dealing with business entities this method becomes an anti-pattern.  The better approach for such comparisons is to rely on GetHashCode to find out whether the object references are the same.

  1. public override bool Equals(object obj)
  2. {
  3.     if (obj == null) return false;
  4.  
  5.     var input = obj as User;
  6.     return input == this;
  7. }
  8.  
  9. public override int GetHashCode()
  10. {
  11.     unchecked
  12.     {
  13.         // 17 and 23 are combinations for XOR
  14.         // this algorithm is used in C# compiler
  15.         // for anonymous types
  16.         int hash = 17;
  17.         hash = hash * 23 + Name.GetHashCode();
  18.         hash = hash * 23 + Id.GetHashCode();
  19.         return hash;
  20.     }
  21. }

You can use any hashing algorithm here to compute the hash of an object.  In this case, comparisons happen between the computed hashes of objects (int values), which is faster and scales better as you add new properties to the comparison.

Detach the events when not in use

Is it necessary to remove event handlers explicitly in C#?  Yes, if you are looking for a lower memory footprint for your application.  Leaving events subscribed is an anti-pattern.

Let’s understand the reason with an example.

  1. public class Publisher
  2. {
  3.     public event EventHandler Completed;
  4.     public void Process()
  5.     {
  6.         // do something
  7.         if (Completed != null)
  8.         {
  9.             Completed(this, EventArgs.Empty);
  10.         }
  11.     }
  12. }
  13.  
  14. public class Subscriber
  15. {
  16.     public void Handler(object sender, EventArgs args) { }
  17. }

Now we will attach the Completed event of Publisher to the Handler method of Subscriber to understand the clean-up.

  1. Publisher pub = new Publisher();
  2. Subscriber sub = new Subscriber();
  3. pub.Completed += sub.Handler;
  4.  
  5. // this will invoke the event
  6. pub.Process();
  7.             
  8. // frees up the event & references
  9. pub.Completed -= sub.Handler;
  10.  
  11. // will not invoke the event
  12. pub.Process();
  13.  
  14. // frees up the memory
  15. pub = null; sub = null;

After the Process method has executed, the Handler method receives the execution flow and completes its processing.  However, the event is still wired up and so are its references.  If you call the Process method again, the Handler method will be invoked.  When we unsubscribe (-=) the Handler method, the event association and its references are freed from memory, but the objects pub and sub are not freed yet.  When pub and sub are assigned null, they become eligible for collection by the GC.

If we do not unsubscribe (-=) and keep the other code as-is, the publisher keeps a reference to the subscriber through the event delegate.  As long as pub is reachable, the GC will find a live reference to sub and will not collect it, which causes a memory leak.  This common anti-pattern is most prevalent in UI-based solutions where UI events are attached/hooked to code-behind, view-models or facades.

Following these practices will definitely reduce your application’s footprint and make it faster.

Using Claims-Identity with SimpleMembership in ASP.NET MVC

May 20, 2013 CSharp, Visual Studio

About a year ago, I had a chance to work on ASP.NET MVC and claims-based identities for an enterprise application.   Claims-based identity, though introduced a decade ago, got more focus in the .NET world only after Microsoft introduced Windows Identity Foundation (WIF).  With WIF/claims, achieving loose coupling between authentication models (forms, windows, etc.) and claim management has become fairly simple and robust.  This article provides the easiest way to implement a claims identity for an Internet application that uses SimpleMembershipProvider.  However, this example can be used for enterprise intranet applications as well.

This article assumes that you are aware of the concept of claims.  If you wish to revisit the fundamentals of claims, you can refer to An Introduction to Claims on MSDN.

For this article, I will be using ASP.NET MVC 4 with the Razor engine on .NET 4.5; however, you can choose to implement this solution on an older version of ASP.NET MVC or .NET.   With .NET 4.5, Windows Identity Foundation has moved from being a separate framework to being a part of the .NET Framework.   So if your web application is built on an older version of .NET, you will have to install WIF separately and reference its assemblies.

So once you have created your Internet ASP.NET MVC 4 application with the Razor engine, Visual Studio 2012 will automatically create the default folders (Controllers, Views, Content, etc.).

Application Configuration

The first step is to alter web.config to reference WIF:

<configSections>
  <section name="system.identityModel" type="System.IdentityModel.Configuration.SystemIdentityModelSection, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
  <section name="system.identityModel.services" type="System.IdentityModel.Services.Configuration.SystemIdentityModelServicesSection, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
  </configSections>


For this example, let’s keep things simple and assume that your application does not interact with third-party applications and hence does not require any federation.  If your application requires claims identity using federation, the configuration below would be a little more complex.  The important setting here is “requireSsl”, which is set to false.

<system.identityModel.services>
  <federationConfiguration>
      <cookieHandler requireSsl="false" persistentSessionLifetime="2"/>
  </federationConfiguration>
</system.identityModel.services>


The next step in the configuration is to add an HTTP module that uses the claims identity for session management.  This SessionAuthenticationModule will later be used to manage sessions and to create/update the claims-identity cookie.

<system.webServer>
  <modules runAllManagedModulesForAllRequests="true">
    <add name="SessionAuthenticationModule"
          type="System.IdentityModel.Services.SessionAuthenticationModule, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </modules>
  ....
  </system.webServer>

Any change required on authentication settings?  For this example, we will use Forms Authentication with SimpleMembership to keep the article short and aligned to Claims-identity only.   So there is no change required in the configuration file for authentication.

Note: You can use Windows Authentication, SSO, or any other authentication mode that your application requirements demand.  Configuration may change based on the type of authentication you are using.

 

Authentication Controller

 

Generally, authentication involves actions such as Login and LogOff.  If you are using FormsAuthentication, you will also require a Register action to allow users to register themselves.  I’ll keep the view part to a minimum and will use the same views that are created for a new ASP.NET MVC 4 application with the Razor engine.  To create the claims identity, I prefer to encapsulate the data in a class called UserNonSensitiveData.  You can extend this class to add user roles, permissions, information, etc.

    [Serializable]
    public class UserNonSensitiveData
    {
        public int UserId { get; set; }
        public string Username { get; set; }
        public string Email { get; set; }

        public UserNonSensitiveData(int userId, string userName, string email)
        {
            this.UserId = userId;
            this.Username = userName;
            this.Email = email;
        }

        public UserNonSensitiveData()  { }
    }


For the Register action, we will first create an account using SimpleMembership (the WebSecurity class) and then log the user in.  Once the user is authenticated, we can retrieve the user information into a UserNonSensitiveData object and create a session authentication cookie using the SessionAuthenticationModule defined in web.config.

        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public virtual ActionResult Register(RegisterModel model)
        {
            if (ModelState.IsValid)
            {
                try
                {
                    WebSecurity.CreateUserAndAccount(model.UserName, model.Password, new { });
                    if (WebSecurity.Login(model.UserName, model.Password))
                    {
                        int userId = WebSecurity.GetUserId(model.UserName);
                        var nonSensitiveCookieData = new UserNonSensitiveData(userId, model.UserName, model.UserName);
                        FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(GetSecurityToken(nonSensitiveCookieData));
                    }
                    return RedirectToAction("Index", "Home");
                }
                catch (MembershipCreateUserException exception)
                {
                    AddError("", ErrorCodeToString(exception.StatusCode));
                }
                catch (Exception e)
                {
                    AddError("", e.Message);
                }
            }

            // If we got this far, something failed, redisplay form
            return View(model);
        }

This code refers to a method GetSecurityToken, which converts a UserNonSensitiveData object into a SessionSecurityToken.  This token will be used to update the session.  Here, NameIdentifier plays a very important role as it acts as the “key” to identify the user.  If you have any default roles to assign to a user who has registered on the website, you can add them just like we have added the email address.

  1. protected static SessionSecurityToken GetSecurityToken(UserNonSensitiveData nonSensitiveData)
  2. {
  3.     var claims = new List<Claim>
  4.                 {
  5.                     new Claim(ClaimTypes.NameIdentifier, nonSensitiveData.UserId.ToString(CultureInfo.InvariantCulture)),
  6.                     new Claim(ClaimTypes.Name, nonSensitiveData.Username),
  7.                     new Claim(ClaimTypes.Email, nonSensitiveData.Email)
  8.                 };
  9.  
  10.     var identity = new ClaimsIdentity(claims, "Forms");
  11.     var principal = new ClaimsPrincipal(identity);
  12.  
  13.     return new SessionSecurityToken(principal, TimeSpan.FromDays(2));
  14. }

Now, the next important user action is the Login action.  The Login action is essentially a subset of the Register action.

        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public virtual ActionResult Login(LoginModel model, string returnUrl)
        {
            if (ModelState.IsValid && WebSecurity.Login(model.UserName, model.Password))
            {
                var userId = WebSecurity.GetUserId(model.UserName);
                var nonSensitiveCookieData = new UserNonSensitiveData(userId, model.UserName, model.UserName);
                FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(GetSecurityToken(nonSensitiveCookieData));

                return RedirectToLocal(returnUrl);
            }

            // If we got this far, something failed, redisplay form
            AddError("", "The user name or password provided is incorrect.");
            return View(model);
        }

 

The LogOff action is just two lines of code to ensure that the cookies are cleared.

        [HttpPost]
        [ValidateAntiForgeryToken]
        public virtual ActionResult LogOff()
        {
            // For Claims-Cookie
            FederatedAuthentication.SessionAuthenticationModule.SignOut();
            WebSecurity.Logout(); // for SimpleMembership
            return RedirectToAction("Index", "Home");
        }


AntiForgeryToken Compatibility

Since we have changed the token to a claims token, our application must be aware of how the claims have to be identified.  This requires adding one line to the Application_Start method in Global.asax.cs

// To ensure that claims authentication works with AntiForgeryToken
AntiForgeryConfig.UniqueClaimTypeIdentifier = ClaimTypes.NameIdentifier;

 

Execution

 

Running this code will take you to the Login page (/Account/Login).  When the user successfully registers or logs in, a new token cookie is created and the user is redirected to the home page.  This cookie has a lifetime which can be set in code and in the web.config file.

Once this cookie is set, you can use ClaimsPrincipal to get the value of any assigned claim, as shown below.  If the cookie has expired, you will not get any claims.

        protected Claim GetClaim(string type)
        {
            return ClaimsPrincipal.Current.Claims.FirstOrDefault(c => c.Type.ToString(CultureInfo.InvariantCulture) == type);
        }
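
For instance (an illustrative usage, assuming the GetClaim helper above lives in a base controller), the user id and email can be read from the claims as:

        // Claim values come from the session cookie written at login/registration
        Claim idClaim = GetClaim(ClaimTypes.NameIdentifier);
        int userId = idClaim != null ? int.Parse(idClaim.Value, CultureInfo.InvariantCulture) : 0;

        Claim emailClaim = GetClaim(ClaimTypes.Email);
        string email = emailClaim != null ? emailClaim.Value : string.Empty;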

This is one of the ways to use a claims identity and SimpleMembershipProvider with ASP.NET MVC.

Extending this application

 

You can replace the SimpleMembershipProvider with any other provider (like SqlProvider, etc.) that suits your application requirements.  If your application uses SSO, you can wire up the claims after you have received the SSO token.  You can also serialize the SSO token and store it in the claims token if you need to re-use it.

All these changes require minimal code and are totally isolated from the Claims-identity management code.

Understanding Mock and frameworks – Part 4 of N

January 25, 2013 CSharp, Open Source, Unit Testing, Visual Studio

I am glad that this series has been liked by a large audience and even got featured in a Channel9 video, and that motivates me to continue this series further.

This series on mocks and mocking frameworks takes you through the Rhino Mocks, Moq and NSubstitute frameworks to implement mocking and unit testing in your application.  This series of posts is full of examples, code and illustrations to make it easier to follow.  If you have not followed this series through, I would recommend you read the following articles

  1. Understanding Mock and frameworks – Part 1 of N – Understanding the need of TDD, Mock and getting your application Unit Test ready
  2. Understanding Mock and frameworks – Part 2 of N – Understanding Mock Stages & Frameworks – Rhino Mocks, NSubstitute and Moq
  3. Understanding Mock and frameworks – Part 3 of N -  Understanding how to Mock Methods calls and their Output – Rhino Mocks, NSubstitute and Moq

This part of the series will focus on exception management and subscribing to events.

Exception Management

 

Going back to the second post where we defined our Controller and service interface, the method GetObject throws an ArgumentNullException when a NULL value is passed to it.  Exception management with mocks allows you to change the exception type and, hence, let the test exit gracefully.

Using Rhino Mocks

Using the Stub method of the mocked service, you can override the exception type.  So instead of getting an ArgumentNullException, we will change the exception to InvalidDataException.  But do not forget to add an ExpectedException attribute to the test method (line 2 in the example below) to ensure a graceful exit.

  1. [TestMethod]
  2. [ExpectedException(typeof(InvalidDataException))]
  3. public void T05_TestExceptions()
  4. {
  5.     var service = MockRepository.GenerateMock<IService>();
  6.     // we are expecting that service will throw NULL exception, when null is passed
  7.     service.Stub(x => x.GetObject(null)).Throw(new InvalidDataException());
  8.     
  9.     Assert.IsNull(service.GetObject(null)); // throws an exception
  10. }

Using NSubstitute

NSubstitute does not expose a separate/special function like Throw.  It treats exceptions as part of the return value of the mocked method, so you configure the Returns callback to throw the exception, as shown below.

  1. [TestMethod]
  2. [ExpectedException(typeof(InvalidDataException))]
  3. public void T05_TestExceptions()
  4. {
  5.     var service = Substitute.For<IService>();
  6.     // we are expecting that service will throw NULL exception, when null is passed
  7.     service.GetObject(null).Returns(args => { throw new InvalidDataException(); });
  8.     
  9.     Assert.IsNull(service.GetObject(null)); // throws an exception
  10. }

Using Moq

Moq exposes a Throws method, just like Rhino Mocks.

  1. [TestMethod]
  2. [ExpectedException(typeof(InvalidDataException))]
  3. public void T05_TestExceptions()
  4. {
  5.     var service = new Mock<IService>();
  6.     // we are expecting that service will throw NULL exception, when null is passed
  7.     service.Setup(x => x.GetObject(null)).Throws(new InvalidDataException());
  8.     
  9.     Assert.IsNull(service.Object.GetObject(null)); // throws an exception
  10. }

Event Subscriptions

The next topic of interest in this series is managing event subscriptions with mock frameworks.  The target here is to check whether an event handler was called when some methods were invoked on the mocked object.

Using Rhino Mocks

The code here is a little more complex than all the previous examples.  We are adding a ServiceObject to the controller, which raises an event.  Unlike real-world examples, this event is hooked to the DataChanged method of the service.  This means the controller will raise an event and a service method will be called dynamically.

  1. [TestMethod]
  2. public void T06_TestSubscribingEvents()
  3. {
  4.     var service = MockRepository.GenerateMock<IService>();
  5.     var controller = new Controller(service);
  6.  
  7.     controller.CEvent += service.DataChanged;
  8.     ServiceObject input = new ServiceObject("n1", Guid.NewGuid());
  9.     controller.Add(input);
  10.  
  11.     service.AssertWasCalled(x => x.DataChanged(Arg<Controller>.Is.Equal(controller),
  12.         Arg<ChangedEventArgs>.Matches(m=> m.Action == Action.Add &&
  13.                                               m.Data == input)));
  14. }

Let’s go into the detail of the AssertWasCalled statement – it takes a method handler in the lambda expression.  Some of the other methods of interest are Arg<T>, Is.Equal and Matches.  The method Arg<T> creates a fake argument object of a particular type, the Is.Equal() method assigns a value to it, and the Matches method acts as a filter on the values of the input parameters to the original ‘DataChanged’ method.

Using NSubstitute

NSubstitute has a different set of methods for event subscription.  The output of the Received() method exposes all valid method calls available, and Arg.Is<T>() creates a fake argument object for the mocked service.

  1. [TestMethod]
  2. public void T06_TestSubscribingEvents()
  3. {
  4.     var service = Substitute.For<IService>();
  5.     var controller = new Controller(service);
  6.  
  7.     controller.CEvent += service.DataChanged;
  8.     ServiceObject input = new ServiceObject("n1", Guid.NewGuid());
  9.     controller.Add(input);
  10.  
  11.     service.Received().DataChanged(controller,
  12.       Arg.Is<ChangedEventArgs>(m => m.Action == Action.Add && m.Data == input));
  13. }

Using Moq

Apart from having different method names from NSubstitute, Moq does exactly the same as the other two frameworks

  1. [TestMethod]
  2. public void T06_TestSubscribingEvents()
  3. {
  4.     var service = new Mock<IService>();
  5.     var controller = new Controller(service.Object);
  6.  
  7.     controller.CEvent += service.Object.DataChanged;
  8.     ServiceObject input = new ServiceObject("n1", Guid.NewGuid());
  9.     controller.Add(input);
  10.  
  11.     service.Verify(x => x.DataChanged(controller,
  12.         It.Is<ChangedEventArgs>(m => m.Action == Action.Add && m.Data == input)));
  13. }

The Code for Download

Understanding RhinoMocks, NSubstitute, Moq by Punit Ganshani

So that concludes how to use Rhino Mocks, NSubstitute and Moq for Exception Handling and Event Subscriptions.  Should you have any questions, feel free to comment on this post.

Understanding Mock and frameworks – Part 3 of N

January 22, 2013 CSharp, Open Source, Unit Testing, Visual Studio

This series on mock frameworks takes you through the Rhino Mocks, Moq and NSubstitute frameworks to implement mocking and unit testing in your application.  This series of posts is full of examples, code and illustrations to make it easier to follow.  If you have not followed this series through, I would recommend you read the following articles

  1. Understanding Mock and frameworks – Part 1 of N – Understanding the need of TDD, Mock and getting your application Unit Test ready
  2. Understanding Mock and frameworks – Part 2 of N – Understanding Mock Stages & Frameworks – Rhino Mocks, NSubstitute and Moq

In Part 1, we understood how to refactor our application so that it can be unit tested with mock frameworks, and in Part 2, we understood the three stages of mocking – Arrange, Act and Assert – with an example.  Now, as understood in Part 2, ‘Arrange’ is the stage where the implementation in the three frameworks differs.  So in this post, we will focus on ‘Arrange’ and explore various types of mocking.

Mocking Method Calls

In the examples below, we will mock the method call GetCount of the service so that it returns the value 10.  We will clear the controller and then check whether the GetCount method was actually called.  If this example were run without mock frameworks, the outputs would have been different.  Please note that in the examples below all the mock methods are applied on the service; however, you can apply them on the controller as well.

Using Rhino Mocks

Rhino Mocks uses the methods AssertWasCalled and AssertWasNotCalled to check whether the ‘actual’ method was invoked.

  1. [TestMethod]
  2. public void T03_TestMethodCall()
  3. {
  4.     var service = MockRepository.GenerateMock<IService>();
  5.     var controller = new Controller(service); // injection
  6.     service.Stub(x => x.GetCount()).Return(10);
  7.     controller.Clear();
  8.     service.AssertWasNotCalled(x => x.GetCount());
  9.     service.AssertWasCalled(x => x.Clear());
  10. }

Using NSubstitute

NSubstitute uses the DidNotReceive and Received methods to check whether the ‘actual’ methods got a call or not.  There are multiple similar methods available, such as DidNotReceiveWithAnyArgs, ClearReceivedCalls, and ReceivedCalls.  Each of these can be used to manipulate or inspect the call stack.

  1. [TestMethod]
  2. public void T03_TestMethodCall()
  3. {
  4.     var service = Substitute.For<IService>();
  5.     var controller = new Controller(service);
  6.     service.GetCount().Returns(10);
  7.     controller.Clear();
  8.     service.DidNotReceive().GetCount();
  9.     service.Received().Clear();
  10. }

Using Moq

Moq standardizes the verification of method calls with a single method, Verify.  The Verify method has many overloads, allowing you to even evaluate an expression.

  1. [TestMethod]
  2. public void T03_TestMethodCall()
  3. {
  4.     var service = new Mock<IService>();
  5.     var controller = new Controller(service.Object);
  6.     service.Setup(x => x.GetCount()).Returns(0);
  7.     controller.Clear();
  8.     service.Verify(x => x.GetCount(), Times.Never());
  9.     service.Verify(x => x.Clear());
  10. }

Mocking Method Calls and their Output

The next example focuses on a more realistic scenario, where we will mock a method that takes input parameters and returns an output value.  The service will be mocked and we will test the functionality of our controller.

Going back to Part 2 of our series, if a NULL input is passed to the Add method of the controller, it returns false.  For any non-NULL input, it calls the Add method of the service and returns the value returned by the service.  So we will pass two different objects to the controller and return two different outputs based on our mock.  When we pass a ServiceObject with name ‘m1’ we will mock it to return the value ‘false’, while a ServiceObject with name ‘m2’ should return the value ‘true’.

Using Rhino Mocks

What’s important to note at this moment is that we altered the behaviour, and hence used the Stub method on the mocked service and then defined the expected result with the Return method.

  1. [TestMethod]
  2. public void T04_TestConditions()
  3. {
  4.     var service = MockRepository.GenerateMock<IService>();
  5.     var controller = new Controller(service); // injection
  6.     ServiceObject oneThatReturnsFalse = new ServiceObject("m1", Guid.NewGuid());
  7.     ServiceObject oneThatReturnsTrue = new ServiceObject("m2", Guid.NewGuid());
  8.     service.Stub(x => x.Add(oneThatReturnsFalse)).Return(false);
  9.     service.Stub(x => x.Add(oneThatReturnsTrue)).Return(true);
  10.     Assert.AreEqual(false, controller.Add(oneThatReturnsFalse));
  11.     Assert.AreEqual(true, controller.Add(oneThatReturnsTrue));
  12. }

Using NSubstitute

NSubstitute does not require a method like Stub.  When you configure a method on the mocked service object, it automatically assumes that the framework has to alter the behaviour of that method.  Just like Rhino Mocks, the Returns method defines the expected mocked output of that method.

  1. [TestMethod]
  2. public void T04_TestConditions()
  3. {
  4.     var service = Substitute.For<IService>();
  5.     var controller = new Controller(service);
  6.     ServiceObject oneThatReturnsFalse = new ServiceObject("m1", Guid.NewGuid());
  7.     ServiceObject oneThatReturnsTrue = new ServiceObject("m2", Guid.NewGuid());
  8.     service.Add(oneThatReturnsFalse).Returns(false);
  9.     service.Add(oneThatReturnsTrue).Returns(true);
  10.     Assert.AreEqual(false, controller.Add(oneThatReturnsFalse));
  11.     Assert.AreEqual(true, controller.Add(oneThatReturnsTrue));
  12. }

Using Moq

Moq, like Rhino Mocks, requires a special method call, Setup, followed by another call to the Returns method.

  1. [TestMethod]
  2. public void T04_TestConditions()
  3. {
  4.     var service = new Mock<IService>();
  5.     var controller = new Controller(service.Object);
  6.     ServiceObject oneThatReturnsFalse = new ServiceObject("m1", Guid.NewGuid());
  7.     ServiceObject oneThatReturnsTrue = new ServiceObject("m2", Guid.NewGuid());
  8.     service.Setup(x => x.Add(oneThatReturnsFalse)).Returns(false);
  9.     service.Setup(x => x.Add(oneThatReturnsTrue)).Returns(true);
  10.     Assert.AreEqual(false, controller.Add(oneThatReturnsFalse));
  11.     Assert.AreEqual(true, controller.Add(oneThatReturnsTrue));
  12. }

Strict Mock vs Normal Mock

 

Strict mocks create brittle tests.   A strict mock is a mock that will throw an exception if you try to use any method that has not explicitly been set up.  A dynamic (or loose, normal) mock will not throw an exception if you try to use a method that is not set up; it will simply return a default value from the method and keep going.  The concept can be applied to any mocking framework that supports these two types of mocking.

Let’s see an example with Rhino Mocks.  We will alter our controller class to add a method Find that invokes two methods of our IService interface.

  1. public partial class Controller
  2. {
  3.     public ServiceObject Find(string name)
  4.     {
  5.         if (string.IsNullOrEmpty(name))
  6.             throw new ArgumentNullException(name);
  7.         if (_service.GetCount() > 0)
  8.         {
  9.             /* Can have more business logic here */
  10.             return _service.GetObject(name);
  11.         }
  12.         else
  13.         {
  14.             return null;
  15.         }
  16.     }
  17. }

Now, with a strict mock on the method GetObject and no mock on the method GetCount, we should expect an exception to be thrown.  However, when using a dynamic/loose mock, GetCount will return its default value, which is zero (0), and hence the Find method will return NULL.

  1. [TestMethod]
  2. public void T10_TestStrictMock()
  3. {
  4.     // data
  5.     string name = "PunitG";
  6.     ServiceObject output = new ServiceObject(name, Guid.NewGuid());
  7.     // arrange
  8.     var service = MockRepository.GenerateMock<IService>();
  9.     var controller = new Controller(service);
  10.     var strictService = MockRepository.GenerateStrictMock<IService>();
  11.     var controllerForStrictMock = new Controller(strictService);
  12.     // act
  13.     service.Stub(x => x.GetObject(name)).Return(output);
  14.     // assert
  15.     Assert.AreEqual(null, controller.Find(name));
  16.     Assert.AreEqual(output, controllerForStrictMock.Find(name)); // exception expected
  17. }

When you execute this Test, the first assert will pass and the second assert will throw an exception.  Overall the test will fail.

The Code for Download

Understanding RhinoMocks, NSubstitute, Moq by Punit Ganshani

Hope this helps. In the next post, we will explore how to mock Exceptions and Events

Understanding Mock and frameworks – Part 2 of N

January 10, 2013 CSharp, Open Source, Unit Testing, Visual Studio

Continuing from where we left off in the previous post – Understanding Mock and frameworks Part 1 – this post will take a very simple example and illustrate how to use mock frameworks.  I was mailed by one of our readers asking if I could define TDD – Test Driven Development – in my own words.

Test-driven development is a methodology that states that test cases should drive the design decisions and development.  This also means that TDD advocates knowing the expected behaviour of our application (or a part of it) before we write the code.  It also enforces incremental design and allows us to write just enough code to pass our tests. That means no redundant code (YAGNI – You ain’t gonna need it) ships with our product, and our code is fairly simple (KISS – Keep it simple, stupid!) to understand. Sounds cool, doesn’t it?

So you start writing tests first, design your application gradually, and then write the implementation so that each test passes.  I prefer TDD over Test-After-Development (TAD), where you first design your application and write the code.  Once your code is complete (or partially complete), you write the required test cases. Theoretically, a TAD system would appear to produce as high-quality a product as TDD would.  But in my opinion, TAD involves taking design decisions up front – right from the start – rather than taking incremental baby steps.  This also means that when time to delivery is short and critical, there are chances of not having 100% code coverage with the TAD approach. Also, a developer can fall victim to his own ego: “I don’t write bad code that needs to be tested”.

That’s the way I see Test Driven Development (TDD) and Test-After-Development (TAD). So let’s get back to our topic of interest – an example to get started with Unit Testing with Mocks.

The service (or data source) interface

Assume that you are building an application that interfaces with just one service.  We have refactored our application to follow SOLID principles such as SRP, ISP and DIP, and to use DI.  The service implements an interface IService.   This service interface could be the interface of your internal service, or of an external service whose behaviour and availability cannot be controlled by you.

At this moment, we are not really interested in the implementation of IService and we will restrict our scope to the interface and the consumer (i.e. our application).

public interface IService
    {
        string ServiceName { get; set; }

        bool Add(ServiceObject input);
        bool Remove(ServiceObject input);
        void Clear();
        int GetCount();
        ServiceObject GetObject(string name);

        void DataChanged(object sender, ChangedEventArgs args);
    }

    public class ServiceObject
    {
        public string Name { get; set; }
        public Guid Ref { get; set; }
        public DateTime Created { get; set; }

        public ServiceObject(string name, Guid reference)
        {
            Name = name;
            Ref = reference;
            Created = DateTime.Now;
        }
    }

    public enum Action
    {
        Add,
        Remove,
        Clear
    }
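The IService interface above refers to a ChangedEventArgs type that is not part of this listing. A minimal sketch of what it could look like (the actual type in the downloadable code may differ) is:

public class ChangedEventArgs : EventArgs
{
    // The action that was performed and the item it was performed on
    public Action Action { get; private set; }
    public ServiceObject Item { get; private set; }

    public ChangedEventArgs(Action action, ServiceObject item)
    {
        Action = action;
        Item = item;
    }
}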

The Controller (or our application)

 

The consuming class (let's call it Controller) isolates itself from the implementation of the service object and lets the caller inject the appropriate service object.

public class Controller
    {
        private IService _service;       

        public Controller(IService service)
        {
            _service = service;
        }
    }

In reality, the Controller class could be just a facade, or it could contain business logic, data manipulation and a lot of other complex logic.  For this example, it is kept simple and restricted to service calls only.   So our controller class looks like this:

public class Controller
{
    private IService _service;

    /// <summary>
    /// To demonstrate events in Mock
    /// </summary>
    public event EventHandler<ChangedEventArgs> CEvent;

    public Controller(IService service)
    {
        _service = service;
    }

    private void RaiseEvent(Action action, ServiceObject input)
    {
        /* Can have more business logic here */
        if (CEvent != null)
            CEvent(this, new ChangedEventArgs(action, input));
    }

    /// <summary>
    /// To demonstrate methods with parameters and return value
    /// in Mock
    /// </summary>
    public bool Add(ServiceObject input)
    {
        if (input == null)
            return false;

        /* Can have more business logic here */

        _service.Add(input);
        RaiseEvent(Action.Add, input);

        return true;
    }

    /// <summary>
    /// To demonstrate simple method calls in Mock
    /// </summary>
    public void Clear()
    {
        _service.Clear();
    }
}

 

So to create an object of our controller, we need to pass an instance of a class that implements IService. As said earlier, we will not implement the interface IService, to ensure that we do not have a functional service (to create the scenario of an unavailable service) for testing.  So let's get started with mocking the interface IService.

 

Mocking – Step Arrange

All our tests will follow a three-step process:

  • Arrange – This step involves creating a service/interface mock (using a Mock framework) and creating all required objects for that service/interface.  This step also includes faking the object behaviour.
  • Act – This step involves calling a method of the service, or performing any business functionality
  • Assert – This step usually asserts whether the expected result is obtained from the previous step (‘Act’) or not

Step Arrange: With Rhino Mocks

The Rhino Mocks framework maintains a repository of mocks.   The MockRepository can be seen to follow the Factory design pattern to generate mocks using generics, objects of a Type, or stubs.  One of the ways to generate a mock is:

// Arrange
var service = MockRepository.GenerateMock<IService>();
var serviceStub = MockRepository.GenerateStub<IService>();

There is a minor difference between a mock and a stub. A discussion on StackOverflow explains it very well, and Martin Fowler has explained it brilliantly in his post Mocks Aren't Stubs. To keep it short and simple:

Mock objects are used to define expectations, i.e. "in this scenario I expect method A() to be called with such and such parameters". Mocks record and verify such expectations. Stubs, on the other hand, have a different purpose: they do not record or verify expectations, but rather allow us to "replace" the behaviour or state of the "fake" object in order to exercise a test scenario.

If you want to verify the behaviour of the code under test, you will use a mock with the appropriate expectation, and verify that. If you want just to pass a value that may need to act in a certain way, but isn’t the focus of this test, you will use a stub.
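To make the distinction concrete, here is a small Rhino Mocks sketch using the IService and Controller types from this series: the stub only supplies data, while the mock is used to verify that an interaction actually happened.

// Stub: only supplies canned data; nothing is verified on it
var stubbedService = MockRepository.GenerateStub<IService>();
stubbedService.Stub(x => x.GetCount()).Return(5);

// Mock: used to verify behaviour - here, that Clear() was actually called
var mockedService = MockRepository.GenerateMock<IService>();
var controller = new Controller(mockedService);
controller.Clear();
mockedService.AssertWasCalled(x => x.Clear());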

Once the object has been created, we need to mock the service behaviour on this fake service object.  Let’s see how to fake a property

// Arrange
var service = MockRepository.GenerateMock<IService>();
service.Stub(x => x.ServiceName).Return("DataService");

Similarly, we can mock methods.  We will see mocking different types in detail later.
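For instance, a method with a return value can be stubbed in much the same way as a property (a small sketch; the values used are arbitrary):

// Arrange - stub methods on the fake service
var service = MockRepository.GenerateMock<IService>();
service.Stub(x => x.GetCount()).Return(5);
service.Stub(x => x.Add(Arg<ServiceObject>.Is.Anything)).Return(true);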

 

Step Arrange: With NSubstitute

NSubstitute makes it simpler to read, interpret and implement.

// Arrange
var service = Substitute.For<IService>();

Mocking a property in NSubstitute is a lot easier to remember:

// Arrange
var service = Substitute.For<IService>();
service.ServiceName.Returns("DataService");
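Mocking methods follows the same Returns pattern; a small sketch (arbitrary values):

// Arrange - stub methods on the substitute
var service = Substitute.For<IService>();
service.GetCount().Returns(5);
service.GetObject("PunitG").Returns(new ServiceObject("PunitG", Guid.NewGuid()));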

Step Arrange: With Moq

Moq, like Rhino Mocks, also has a MockRepository.  You can choose to create a mock as part of a MockRepository or to let it stand alone.  Both approaches are shown below.

// Arrange
var service = new Mock<IService>();

MockRepository repository = new MockRepository(MockBehavior.Default);
var serviceInRepository = repository.Create<IService>();

Mocking a property is somewhat similar to Rhino Mocks:

// Arrange
var service = new Mock<IService>();
service.Setup(x => x.ServiceName).Returns("DataService");
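Methods are mocked with the same Setup/Returns pair; a small sketch (arbitrary values):

// Arrange - set up methods on the Moq mock
var service = new Mock<IService>();
service.Setup(x => x.GetCount()).Returns(5);
service.Setup(x => x.GetObject(It.IsAny<string>()))
       .Returns(new ServiceObject("PunitG", Guid.NewGuid()));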

Rhino Mocks and Moq both have two different types of behaviours – Strict and Loose (the default).  We will look into what each of them means in the next article.

Mocking – Step Act, Assert

Once we have the fake service and have defined the expected behaviour, we can move on to the next step.  This step involves calling a method of the service, or performing some business functionality that uses its output.  So our examples will vary from reading something off the mocked service object to calling its methods.

Step Act and Assert: With Rhino Mocks and NSubstitute

As the output of the first step, both Rhino Mocks and NSubstitute give you an object of IService directly (named service).  So you can use this object as a substitute for the actual service.  The object service is a fake object whose ServiceName is expected to be "DataService".  Just to remind you: in reality, neither the implementation of the service nor a real object of the service exists.  We are dealing with fake objects throughout all our examples.

// Act
var actual = service.ServiceName;

// Assert
Assert.AreEqual("DataService", actual);

This test will pass when executed.

Step Act and Assert: With Moq

Moq, unlike the other two frameworks, does not give you the fake service object directly.  It provides the fake service object through one of its properties, called 'Object'.  Everything else remains the same.

// Act
var actual = service.Object.ServiceName;

// Assert
Assert.AreEqual("DataService", actual);

If you put together the code, it forms our first Unit Test w/Mock frameworks.
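Put together with Rhino Mocks, our first unit test could look roughly like this (a sketch; the test name is illustrative):

[TestMethod]
public void T01_ServiceName_ReturnsMockedValue()
{
    // arrange
    var service = MockRepository.GenerateMock<IService>();
    service.Stub(x => x.ServiceName).Return("DataService");

    // act
    var actual = service.ServiceName;

    // assert
    Assert.AreEqual("DataService", actual);
}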

The Code for Download

Understanding RhinoMocks, NSubstitute, Moq by Punit Ganshani

In the next article, we will look into creating Mocks for methods using these frameworks.

Understanding Mock and frameworks – Part 1 of N

December 31, 2012 CSharp, Open Source, Unit Testing, Visual Studio , , , , , ,

We have several posts on the Internet that focus on Test Driven Development and its benefits.  There is a group of SCRUM Masters around who emphasize how Test Driven Development (TDD) is beneficial and can change the way you write and test code.  I completely agree that Agile and TDD can help you manage your teams, code, product quality and communication in a better way.  But this is not yet another post on Test Driven Development, or Unit Testing.  This is one of a series of articles on test mocking.  How is it different from others?  This series assumes that you have no knowledge of mocking and that your application is not unit-test ready, so it guides you through the concept of mocking, preparing your application, using mocking frameworks, comparing them and ensuring that your code is 'Well-Tested and Covered'.

So let’s get started!

Understanding Mock – What, Why?

Now, if you are following the discipline of TDD and are building a large-scale enterprise application that interacts with services, databases, or other data sources, writing data-driven unit tests becomes a challenge.  Test setup (for each test) becomes more complicated and challenging than the actual unit test.  This is where the concept of a Mock comes into the picture.

Mock – to deceive, delude, or disappoint.

As the English definition of the word Mock suggests, we want to deceive the unit test with substituted data rather than retrieving a data set from the actual service or database.  This also means that we free ourselves from various other concerns such as service availability, environment setup, data masking, and data manipulation and conversion.  It is like telling the unit test:

Do not worry about the data source.  Stop worrying about the actual data and use the mocked up data.  Test the logic!

OK, this sets a premise on why mocking is required!   Do you need an alternate definition to understand this even better?

A mocking framework fakes objects to replace any dependencies you have and thereby allows you to tell them (the mocked code) to behave as you want.  So even if the service does not exist, if you have asked the mocking framework to mock it, the mocking framework will provide a fake service when requested by the consuming code.

Refactoring application to be Unit Test ready

 

Let's assume that you were told one fine day that your application, which for simplicity is a typical 3-tier web application, needs to be unit tested.   So you start with the first step of analysing the most critical part that constitutes 80% of your business functionality.  Now there are two possibilities – either your application has been designed following the SOLID Principles, or it is a legacy application built without considering these design principles.   The second possibility requires more effort on your side, so let's assume that your application is not designed correctly and you want to unit test it with mocking.

Single Responsibility Principle – Isolate your data sources

First, try isolating all the data sources into repositories.  This means that if your application is reading configuration from an XML file, business data from a SQL database, real-time feeds from web services, and the like, then check whether the principles of isolation and single responsibility are applied correctly.  In other words, your application should not have a single class responsible for reading data from multiple sources, manipulating it, doing some calculations, or storing it.

Your application should have one class per data source (a DAL class) that is responsible for retrieving/storing data in that data source.  If there is a need to do calculations or manipulations on one or more data sources, then there should be another class that has that responsibility.
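In code, that separation could look roughly like this (a sketch; the class names are illustrative):

// One repository per data source, each with a single responsibility
public class XmlConfigurationRepository
{
    public string ReadSetting(string key)
    {
        /* read the setting from an XML file */
        return null;
    }
}

public class SqlBugRepository
{
    public ServiceObject GetBug(string name)
    {
        /* read the bug record from a SQL database */
        return null;
    }
}

// Calculations and manipulation live in a separate class that uses the repositories
public class BugReportService
{
    private readonly SqlBugRepository _bugs;

    public BugReportService(SqlBugRepository bugs)
    {
        _bugs = bugs;
    }
}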

 

Interface Segregation Principle – Use interfaces instead of objects

Now that your data sources are isolated, each with the single responsibility of interacting with one-and-only-one data source, your next step is to ensure that the calling objects do not reference these classes by their concrete types.

In simple words code like this,

ConfigurationStore store = new ConfigurationStore();
store.RefreshConfiguration();

Needs to be refactored into something like,

IConfigurationStore store = new XmlConfigurationStore();
store.RefreshConfiguration();

Please note the two changes in the above code: one, instead of declaring the variable as the class ConfigurationStore, we now declare it as the interface IConfigurationStore; and two, we have given the class a more meaningful name, XmlConfigurationStore.  This is also referred to as ISP.
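The corresponding interface and class might look like this (a sketch; the real types could expose more members):

public interface IConfigurationStore
{
    void RefreshConfiguration();
}

public class XmlConfigurationStore : IConfigurationStore
{
    public void RefreshConfiguration()
    {
        /* read and reload configuration from an XML file */
    }
}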

 

Dependency Inversion and Injection – Refactoring the construction of data source

Now that you have isolation between the classes representing data sources and have interfaces defining their contracts, the next step is to ensure that there is no hard dependency on the implementation of these classes.  In other words, we don't want our consumer class to depend on the implementation of the class (representing the data source); instead we want it to depend on the definition of the interface.

Higher level modules (the consuming classes responsible for any data manipulation, representation, etc.) should not depend on low level modules (the data source/DAL objects); rather, they should depend on a layer of abstraction (like an interface).

This is known as DIP – the Dependency Inversion Principle.  There are three ways to implement dependency injection – through the constructor while constructing the higher level module, through properties by assigning the lower level module object to a property, or explicitly through methods.  We will not go into the details of implementation and would advise you to go through the MSDN article on Dependency Injection.
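A small sketch of the three injection styles, using the IConfigurationStore interface from the previous section (the ReportGenerator class is purely illustrative):

public class ReportGenerator
{
    private IConfigurationStore _store;

    // 1. Constructor injection - dependency supplied when the object is built
    public ReportGenerator(IConfigurationStore store)
    {
        _store = store;
    }

    // 2. Property injection - dependency assigned after construction
    public IConfigurationStore Store
    {
        get { return _store; }
        set { _store = value; }
    }

    // 3. Method injection - dependency passed explicitly to the method that needs it
    public void Generate(IConfigurationStore store)
    {
        store.RefreshConfiguration();
        /* generate the report using the refreshed configuration */
    }
}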

So we now have cleaner code that can be referred to as unit-test-ready code.

Unit Test and Mock – How?

Do we need any special infrastructure? Any third-party frameworks required?

To get started with unit testing, all you need is Visual Studio 2010/2012.  Yes, that’s enough for basic testing. We build our unit tests using any one of NUnit, xUnit or MSTest and run them using appropriate tools.

But since you are keen at implementing Mocks, you will require a well tested and proven Mocking Framework.  Three frameworks that I’ve used are

  • Rhino Mocks – This is unarguably the most adopted and extensive mocking framework, with lots of features.  Some developers find it difficult to adopt considering the wide range of functionality available.  However, after this series of articles, mocking with it should not be difficult.
  • NSubstitute – Implementing mocking is made extremely easy using this framework
  • Moq – This is a great framework when you are developing a Silverlight application

So, get ready to download one of the above frameworks to get your infrastructure ready.  If you are still confused about which one, then

I would recommend staying connected to this post as in the subsequent articles in the series we will see the differences in the implementation of mocking using these 3 frameworks!  We will look more into the ‘How’ aspects of implementing these frameworks!
