
What is the difference between System.String and string?

October 15, 2013 CSharp, Visual Studio

One of the questions that a lot of developers ask is: is there any difference between string and System.String, and which should be used?

 

Short Answer

There is no difference between the two.  You can use either of them in your code.

 

Explanation

 

System.String is a class (reference type) defined in mscorlib in the namespace System.  In other words, System.String is a type in the CLR.

string is a keyword in C#

 

Before we understand the difference, let us understand BCL and FCL terms.

The BCL (Base Class Library) is the core class library of the Common Language Infrastructure (CLI), available to languages like C#, A#, Boo, Cobra, F#, IronRuby, IronPython and other CLI languages.  It includes common functionality such as file read/write (IO) and database/XML interactions.  The BCL was first implemented in Microsoft .NET in the form of mscorlib.dll.

The FCL (Framework Class Library) is the Microsoft .NET-specific library containing reusable classes and namespaces like System, System.CodeDom, System.Collections, System.Diagnostics, System.Globalization, System.IO, System.Resources and System.Text.

Now, in C# the keyword string maps directly to System.String (the BCL/FCL type).  Similarly, int maps directly to System.Int32.

Here int is mapped to a 32-bit integer type.  But another language could, in principle, map its int keyword to a 64-bit integer type.

So it is well established that using string or System.String in C# makes no difference.
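
A quick illustration (a minimal console snippet, not part of the original post) shows the two identifiers being mixed freely, since they name exactly the same CLR type:

    using System;

    class StringAliasDemo
    {
        static void Main()
        {
            string first = "Hello";
            System.String second = "World";

            // Both identifiers name the same CLR type, so the variables
            // can be combined and compared interchangeably.
            System.String combined = first + " " + second;

            Console.WriteLine(combined);                                  // Hello World
            Console.WriteLine(first.GetType() == typeof(System.String));  // True
            Console.WriteLine(typeof(string) == typeof(System.String));   // True
        }
    }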

 

Is it better to still use string instead of System.String?

 

There is no universally agreed answer to this.

But, in my opinion, even though string and System.String mean the same thing and make no difference to application performance, it is better to use string, because string is a C#-specific keyword.

Also, the C# language specification states,

As a matter of style, use of the keyword is favored over use of the complete system type name

Following this practice ensures that your code consistently uses the language keywords wherever possible, rather than mixing keywords with BCL/FCL type names.

Using Claims-Identity with SimpleMembership in ASP.NET MVC

May 20, 2013 CSharp, Visual Studio

About a year ago, I had a chance to work on ASP.NET MVC and claims-based identities for an enterprise application.  Claims-based identity, though introduced a decade ago, has received more focus in the .NET world only after Microsoft introduced Windows Identity Foundation (WIF).  With WIF/claims, achieving loose coupling between authentication models (forms, windows, etc.) and claim management has become fairly simple and robust.  This article provides the easiest way to implement claims identity for an Internet application that uses SimpleMembershipProvider.  However, this example can be used for intranet applications in enterprises as well.

This article assumes that you are aware of the concept of claims.  If you wish to revisit the fundamentals of claims, you can refer to An Introduction to Claims on MSDN.

For this article, I will be using ASP.NET MVC 4 with the Razor engine on the .NET 4.5 framework; however, you can implement this solution on older versions of ASP.NET MVC or .NET.  With .NET 4.5, Windows Identity Foundation has moved from being a separate framework to being part of the .NET Framework.  So if your web application is built on an older version of .NET, you will have to install WIF separately and reference its assemblies.

So once you have created your ASP.NET MVC 4 Internet application with the Razor engine, Visual Studio 2012 will automatically create the default folders (Controllers, Views, Content, etc.).

Application Configuration

The first step is to alter web.config to reference WIF:

<configSections>
  <section name="system.identityModel" type="System.IdentityModel.Configuration.SystemIdentityModelSection, System.IdentityModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
  <section name="system.identityModel.services" type="System.IdentityModel.Services.Configuration.SystemIdentityModelServicesSection, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
</configSections>


For this example, let’s keep things simple and assume that your application does not interact with third-party applications and hence does not require any federation.  If your application requires claims identity using federation, then the configuration below would be a little more complex.  The important setting here is “requireSsl”, which is set to false.

<system.identityModel.services>
  <federationConfiguration>
      <cookieHandler requireSsl="false" persistentSessionLifetime="2"/>
  </federationConfiguration>
</system.identityModel.services>


The next step in the configuration is to add an HTTP module that uses claims identity for session management.  This SessionAuthenticationModule will later be used to manage sessions and to create/update claims-identity cookies.

<system.webServer>
  <modules runAllManagedModulesForAllRequests="true">
    <add name="SessionAuthenticationModule"
          type="System.IdentityModel.Services.SessionAuthenticationModule, System.IdentityModel.Services, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </modules>
  ....
  </system.webServer>

Any change required on authentication settings?  For this example, we will use Forms Authentication with SimpleMembership to keep the article short and aligned to Claims-identity only.   So there is no change required in the configuration file for authentication.

Note: You can use Windows Authentication, SSO, or any other authentication mode that your application requirements demand.  Configuration may change based on the type of authentication you are using.

 

Authentication Controller

 

Generally, authentication involves actions such as Login and LogOff.  If you are using FormsAuthentication, you will also require a Register action to allow users to register themselves.  I’ll keep the view part to a minimum and use the same views that are created when a new ASP.NET MVC 4 application is created with the Razor engine.  To create the claims identity, I prefer to encapsulate the data in a class called UserNonSensitiveData.  You can extend this class to add user roles, permissions, information, etc.

    [Serializable]
    public class UserNonSensitiveData
    {
        public int UserId { get; set; }
        public string Username { get; set; }
        public string Email { get; set; }

        public UserNonSensitiveData(int userId, string userName, string email)
        {
            this.UserId = userId;
            this.Username = userName;
            this.Email = email;
        }

        public UserNonSensitiveData()  { }
    }


For the Register action, we will first create an account using SimpleMembership (the WebSecurity class) and log the user in.  Once the user is authenticated, we can capture the user information in a UserNonSensitiveData object and create a session authentication cookie using the SessionAuthenticationModule defined in web.config.

        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public virtual ActionResult Register(RegisterModel model)
        {
            if (ModelState.IsValid)
            {
                try
                {
                    WebSecurity.CreateUserAndAccount(model.UserName, model.Password, new { });
                    if (WebSecurity.Login(model.UserName, model.Password))
                    {
                        int userId = WebSecurity.GetUserId(model.UserName);
                        var nonSensitiveCookieData = new UserNonSensitiveData(userId, model.UserName, model.UserName);
                        FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(GetSecurityToken(nonSensitiveCookieData));
                    }
                    return RedirectToAction("Index", "Home");
                }
                catch (MembershipCreateUserException exception)
                {
                    AddError("", ErrorCodeToString(exception.StatusCode));
                }
                catch (Exception e)
                {
                    AddError("", e.Message);
                }
            }

            // If we got this far, something failed, redisplay form
            return View(model);
        }

This code refers to a method GetSecurityToken, which converts a UserNonSensitiveData object to a SessionSecurityToken.  This token will be used to update the session.  Here, the NameIdentifier claim plays a very important role as it acts as a “key” to identify the user.  If you have any default roles to be assigned to a user who has registered on the website, you can add them just like we have added the email address.

        protected static SessionSecurityToken GetSecurityToken(UserNonSensitiveData nonSensitiveData)
        {
            var claims = new List<Claim>
                {
                    new Claim(ClaimTypes.NameIdentifier, nonSensitiveData.UserId.ToString(CultureInfo.InvariantCulture)),
                    new Claim(ClaimTypes.Name, nonSensitiveData.Username),
                    new Claim(ClaimTypes.Email, nonSensitiveData.Email)
                };

            var identity = new ClaimsIdentity(claims, "Forms");
            var principal = new ClaimsPrincipal(identity);

            return new SessionSecurityToken(principal, TimeSpan.FromDays(2));
        }

Now, the next important user action is the Login action, which is essentially a subset of the Register action.

        [HttpPost]
        [AllowAnonymous]
        [ValidateAntiForgeryToken]
        public virtual ActionResult Login(LoginModel model, string returnUrl)
        {
            if (ModelState.IsValid && WebSecurity.Login(model.UserName, model.Password))
            {
                var userId = WebSecurity.GetUserId(model.UserName);
                var nonSensitiveCookieData = new UserNonSensitiveData(userId, model.UserName, model.UserName);
                FederatedAuthentication.SessionAuthenticationModule.WriteSessionTokenToCookie(GetSecurityToken(nonSensitiveCookieData));

                return RedirectToLocal(returnUrl);
            }

            // If we got this far, something failed, redisplay form
            AddError("", "The user name or password provided is incorrect.");
            return View(model);
        }

 

The LogOff action is just two lines of code to ensure that the cookies are cleared:

        [HttpPost]
        [ValidateAntiForgeryToken]
        public virtual ActionResult LogOff()
        {
            // For Claims-Cookie
            FederatedAuthentication.SessionAuthenticationModule.SignOut();
            WebSecurity.Logout(); // for SimpleMembership
            return RedirectToAction("Index", "Home");
        }


AntiForgeryToken Compatibility

Since we have switched to a claims token, the anti-forgery machinery must know which claim uniquely identifies the user.  This requires adding one line to the Application_Start method in Global.asax.cs:

// To ensure that claims authentication works with AntiForgeryToken
AntiForgeryConfig.UniqueClaimTypeIdentifier = ClaimTypes.NameIdentifier;

 

Execution

 

Running this code will take you to the Login page (/Account/Login).  When the user successfully registers or logs in, a new token cookie is created and the user is redirected to the home page.  This cookie has a lifetime that can be set in code and in the web.config file.

Once this cookie is set, you can use ClaimsPrincipal to get the value of any assigned claim as shown below.  If the cookie has expired, you will not get any claims.

        protected Claim GetClaim(string type)
        {
            return ClaimsPrincipal.Current.Claims.FirstOrDefault(c => c.Type.ToString(CultureInfo.InvariantCulture) == type);
        }
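
For example, an action could use this helper to read the logged-in user’s name claim.  The Dashboard action below is a hypothetical sketch (it is not part of the original code) and assumes it lives in a controller that has access to the GetClaim helper above:

        public ActionResult Dashboard()
        {
            // Reads the Name claim written during Login/Register; the claim
            // is null when the session cookie has expired or was never issued.
            Claim nameClaim = GetClaim(ClaimTypes.Name);
            ViewBag.DisplayName = (nameClaim != null) ? nameClaim.Value : "Guest";

            return View();
        }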

This is one of the ways of using claims identity and SimpleMembershipProvider with ASP.NET MVC.

Extending this application

 

You can replace SimpleMembershipProvider with any other provider (like a SQL provider, etc.) that suits your application requirements.  If your application uses SSO, you can wire up the claims after you have received the SSO token.  You can also serialize the SSO token and store it in the claims token if you need to re-use it.

All these changes require minimal code and are totally isolated from the Claims-identity management code.

Complete guide to dynamic keyword in C#

April 10, 2013 CSharp, Visual Studio

The dynamic keyword, a new addition to the C# 4.0 language, moves the type binding of a variable from compile time to runtime.  This means that, apart from the runtime resolving the variable’s type dynamically, the compiler also skips type checking of that variable during the compilation process.

This is a paradigm shift from what has been followed since the days of Pascal, C and C++, and this article focuses on understanding how dynamic works internally and the best practices to follow when using the dynamic keyword.

To understand this, let’s consider a code snippet and analyse it.

        private static void Main(string[] args)
        {
            Program program = new Program();

            /* Simple Examples
             * Will work on any data type 
             * on which + can be applied
             */
            Console.WriteLine(program.Add(2, 3));
            Console.WriteLine(program.Add(2.0d, 3.0d));
            Console.WriteLine(program.Add("Punit", "G"));

            /* Will work on any data type 
             * on which = can be applied */
            Console.WriteLine(program.Equals(2, 3));
            Console.WriteLine(program.Equals("Punit", "G"));
            //Console.WriteLine(program.Add(program, program));
        }
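
The Program class’s Add and Equals methods are not listed in the post.  Based on the description that follows (Add takes two dynamic parameters and returns a dynamic value), a minimal sketch could look like this; note that Equals could equally be the inherited object.Equals, which produces the same output here:

        public dynamic Add(dynamic operand1, dynamic operand2)
        {
            // Binding happens at runtime, so this works for any
            // operands that support the + operator.
            return operand1 + operand2;
        }

        // Hides the static object.Equals(object, object); resolved at
        // runtime for any operands that support the == operator.
        public new dynamic Equals(dynamic operand1, dynamic operand2)
        {
            return operand1 == operand2;
        }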

In the above snippet, the Add method takes in 2 dynamic parameters and returns a dynamic value.  This means you can typically pass any data type (that supports a + operation) and expect it to work as normal.  When you pass in a data type that does not support a + operation, the program will throw a RuntimeBinderException

The output of the above program, as expected, is:

5

5

PunitG

False

False

<RuntimeBinderException>

At compile time, the compiler converts usages of the dynamic keyword into call sites built from classes in the Microsoft.CSharp.RuntimeBinder and System.Runtime.CompilerServices namespaces.  The task of these classes is to invoke the DLR at runtime and perform the binding.

 


 

So the compiled code essentially creates a CallSite object.  This CallSite is a runtime-binding handler that handles the dynamic operation and allows access to the object’s properties and methods; this access is done using reflection.  The CallSite object is part of the DLR, and the DLR interprets these calls as ‘dynamic invocations’.  If the DLR does not know the type of the object, it works it out.  After discovering the type of the object, the next step is to check whether it is a special object (like IronRuby, IronPython, DCOM, COM, etc.) or a C# object.

The DLR then takes these objects and passes them through:

  • Metadata analysers – detect the type of the object
  • Semantic analysers – check whether the intended operations (method calls, properties) can be performed on the object.  If any mismatch is found in these two steps, a runtime exception is thrown.
  • Emitter – builds an expression tree and emits it.  This expression tree is sent back to the DLR to build an object-cache dictionary.  The DLR also compiles this expression tree to emit IL.

On the second call, the object-cache dictionary is used to skip re-creating the expression tree for the same operation, and the cached result is reused.  Once the IL has been emitted, execution of a dynamic type is the same as for all other types.
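
To make the call-site mechanism concrete, the sketch below approximates what the compiler generates for a single dynamic addition.  It is an illustration, not the exact compiler output; the class and field names are made up, and it requires a reference to Microsoft.CSharp.dll:

    using System;
    using System.Linq.Expressions;
    using System.Runtime.CompilerServices;
    using Microsoft.CSharp.RuntimeBinder;

    // Roughly what the compiler emits for:  dynamic result = a + b;
    static class DynamicAddSite
    {
        // The call site is cached in a static field so that the binding
        // (and the compiled rule) is reused on subsequent calls.
        private static CallSite<Func<CallSite, object, object, object>> _site;

        public static object Add(object a, object b)
        {
            if (_site == null)
            {
                _site = CallSite<Func<CallSite, object, object, object>>.Create(
                    Binder.BinaryOperation(
                        CSharpBinderFlags.None,
                        ExpressionType.Add,
                        typeof(DynamicAddSite),
                        new[]
                        {
                            CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null),
                            CSharpArgumentInfo.Create(CSharpArgumentInfoFlags.None, null)
                        }));
            }

            // The DLR resolves '+' for the runtime types of a and b here.
            return _site.Target(_site, a, b);
        }
    }

    // Usage:  Console.WriteLine(DynamicAddSite.Add(2, 3));      // 5
    //         Console.WriteLine(DynamicAddSite.Add("Pu", "G")); // PuG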

 

Creating new dynamic objects

 

The reason languages such as IronRuby and IronPython exist today is that .NET allows you to create your own dynamic types.  You can write your own interpreters in C# or VB.NET and let the community use them as they wish.  Let’s see this in an example where we create a new SampleDynamicObject and use its properties and methods.

            dynamic sample = new SampleDynamicObject();

            // TryGetMember will be invoked for sample.Name.  The override
            // returns true and supplies "Punit", so no exception is thrown
            // even though SampleDynamicObject has no real Name property.
            Console.WriteLine(sample.Name);

            // TryInvokeMember will be invoked for PrintDetails().  It returns
            // false (the call is not handled in the override below), so the
            // DLR throws a RuntimeBinderException at runtime.
            sample.PrintDetails();

    public class SampleDynamicObject : System.Dynamic.DynamicObject
    {
        public override bool TryInvokeMember(System.Dynamic.InvokeMemberBinder binder,
                object[] args, out object result)
        {
            return base.TryInvokeMember(binder, args, out result);
        }

        public override bool TryGetMember(System.Dynamic.GetMemberBinder binder,
                out object result)
        {
            // Property:= Name
            if (binder.Name == "Name")
            {
                result = "Punit";
                return true;
            }
            else
            {
                result = 0;
                return false;
            }
        }
    }

TryInvokeMember is called when any method is called on the dynamic object, while TryGetMember is called when a getter of a property of the dynamic object is called.  Similarly there are other overridable methods that can be implemented.

 

dynamic vs. object

Another aspect of understanding dynamic is to understand how dynamic types are different from System.Object

One variable is typed as object by the compiler and all instance members will be verified as valid by the compiler. The other variable is typed as dynamic and all instance members will be ignored by the compiler and called by the DLR at execution time.

  • The dynamic type can be considered a special static type whose members the compiler does not check during compilation, which is not the case with System.Object
  • Any operation on an object variable requires type casting, which adds a performance hit.  Similarly, declaring a variable as dynamic also involves some extra binding logic at runtime.  This extra logic is also referred to as duck typing
  • An object can be converted to a dynamic type implicitly.  An implicit conversion can be dynamically applied to an expression of type dynamic (see the sketch below)
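
A minimal sketch of the difference (a console snippet, not from the original post):

        static void Main()
        {
            object asObject = "hello";
            dynamic asDynamic = "hello";

            // Compile-time error: 'object' contains no definition for 'ToUpper'.
            // Console.WriteLine(asObject.ToUpper());

            // With object, an explicit cast is needed before member access.
            Console.WriteLine(((string)asObject).ToUpper());   // HELLO

            // With dynamic, the call compiles as-is and the member is
            // resolved by the DLR at runtime.
            Console.WriteLine(asDynamic.ToUpper());            // HELLO

            // A typo still compiles, but throws a RuntimeBinderException at runtime.
            // Console.WriteLine(asDynamic.ToUpperCase());
        }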

 

Limitations of dynamic types

 

The dynamic keyword restricts a lot of functionality because the type of the object is not known at compile time.  Some of the known limitations are:

  • Inability to use LINQ, extension methods and lambda expressions directly on dynamic values (see the sketch after this list)
  • Inability to check whether type conversions are done correctly
  • Polymorphism cannot be fully supported
  • C# language constructs such as the using block cannot be applied to dynamic types
  • Design principles like Inversion of Control and Dependency Injection are difficult to implement
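
As an illustration of the first limitation, here is a minimal sketch (the class name and collection are made up): extension methods are not found through dynamic binding, and a lambda expression cannot be passed to a dynamically dispatched call.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class DynamicLinqLimitation
    {
        static void Main()
        {
            dynamic numbers = new List<int> { 1, 2, 3 };

            // Compile-time error: a lambda expression cannot be used as an
            // argument to a dynamically dispatched operation.
            // var filtered = numbers.Where(n => n > 1);

            // Compiles, but fails at runtime with a RuntimeBinderException,
            // because extension methods are not considered during dynamic binding.
            // Console.WriteLine(numbers.First());

            // Workaround: cast back to a statically known type before using LINQ.
            IEnumerable<int> typed = numbers;
            Console.WriteLine(typed.Count(n => n > 1));   // 2
        }
    }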

 

Where best to use dynamic types

 

Some of the areas where, I believe, dynamic types can be used are

  • To leverage runtime type of generic parameters
  • To receive anonymous types
  • To create custom domain objects for data-driven objects
  • To create a cross-language translator (like IronRuby, IronPython, etc) that leverages capabilities of .NET CLR

 

I hope this helps in understanding the dynamic keyword and how it differs from the System.Object type.

Understanding Mock and frameworks – Part 4 of N

January 25, 2013 CSharp, Open Source, Unit Testing, Visual Studio

I am glad that this series has been liked by a large audience and was even featured in a Channel9 video, which motivates me to continue the series.

This series on mocks and frameworks takes you through the Rhino Mocks, Moq and NSubstitute frameworks to implement mocking and unit testing in your application.  The series is full of examples, code and illustrations to make it easier to follow.  If you have not followed this series through, I would recommend reading the following articles:

  1. Understanding Mock and frameworks – Part 1 of N – Understanding the need of TDD, Mock and getting your application Unit Test ready
  2. Understanding Mock and frameworks – Part 2 of N – Understanding Mock Stages & Frameworks – Rhino Mocks, NSubstitute and Moq
  3. Understanding Mock and frameworks – Part 3 of N – Understanding how to Mock Method calls and their Output – Rhino Mocks, NSubstitute and Moq

    This part of the series will focus on exception management and subscribing to events.

    Exception Management

     

    Going back to the second post where we defined our Controller and Service interface, the method GetObject throws an ArgumentNullException when a NULL value is passed to it.  Exception management with mocks allows you to change the exception type thrown by the fake service, so the test can expect it and exit peacefully.

    Using Rhino Mocks

    Using the Stub method of the mocked service, you can override the exception type.  So instead of getting an ArgumentNullException, we will change the exception to InvalidDataException.  But do not forget to add an ExpectedException attribute to the test method (line 2 in the example below); it ensures a peaceful exit.

    1. [TestMethod]
    2. [ExpectedException(typeof(InvalidDataException))]
    3. public void T05_TestExceptions()
    4. {
    5.     var service = MockRepository.GenerateMock<IService>();
    6.     // we are expecting that service will throw NULL exception, when null is passed
    7.     service.Stub(x => x.GetObject(null)).Throw(new InvalidDataException());
    8.     
    9.     Assert.IsNull(service.GetObject(null)); // throws an exception
    10. }

    Using NSubstitute

    NSubstitute does not expose a separate/special function like Throw.  Instead, you throw the exception from the callback supplied to Returns on the mocked method, as shown below.

    1. [TestMethod]
    2. [ExpectedException(typeof(InvalidDataException))]
    3. public void T05_TestExceptions()
    4. {
    5.     var service = Substitute.For<IService>();
    6.     // we are expecting that service will throw NULL exception, when null is passed
    7.     service.GetObject(null).Returns(args => { throw new InvalidDataException(); });
    8.     
    9.     Assert.IsNull(service.GetObject(null)); // throws an exception
    10. }

    Using Moq

    Moq exposes a Throws method, just like Rhino Mocks:

    1. [TestMethod]
    2. [ExpectedException(typeof(InvalidDataException))]
    3. public void T05_TestExceptions()
    4. {
    5.     var service = new Mock<IService>();
    6.     // we are expecting that service will throw NULL exception, when null is passed
    7.     service.Setup(x => x.GetObject(null)).Throws(new InvalidDataException());
    8.     
    9.     Assert.IsNull(service.Object.GetObject(null)); // throws an exception
    10. }

    Event Subscriptions

    The next topic of interest in this series is managing event subscriptions with mock frameworks.  The goal here is to check whether an event handler was invoked when certain methods were called on the mocked object.

    Using Rhino Mocks

    The code here is a little more complex than the previous examples.  We are adding a ServiceObject to the controller, which raises an event.  Unlike real-world examples, this event is hooked to the DataChanged method of the service.  This means the controller will raise an event and a service method will be called dynamically.

    1. [TestMethod]
    2. public void T06_TestSubscribingEvents()
    3. {
    4.     var service = MockRepository.GenerateMock<IService>();
    5.     var controller = new Controller(service);
    6.  
    7.     controller.CEvent += service.DataChanged;
    8.     ServiceObject input = new ServiceObject("n1", Guid.NewGuid());
    9.     controller.Add(input);
    10.  
    11.     service.AssertWasCalled(x => x.DataChanged(Arg<Controller>.Is.Equal(controller),
    12.         Arg<ChangedEventArgs>.Matches(m=> m.Action == Action.Add &&
    13.                                               m.Data == input)));
    14. }

    Let’s go into the detail of the AssertWasCalled statement – it takes a method handler in the lambda expression.  Some of the other methods of interest are Arg<T>, Is.Equal and Matches.  Arg<T> creates an argument constraint for a parameter of a particular type, the Is.Equal() method assigns the expected value to it, and the Matches method acts like a filter on the values of the input parameters passed to the original DataChanged method.

    Using NSubstitute

    NSubstitute has a different set of methods for this check.  The Received() method exposes the calls made on the substitute so they can be asserted, and Arg.Is<T>() creates an argument matcher for the expected values.

    1. [TestMethod]
    2. public void T06_TestSubscribingEvents()
    3. {
    4.     var service = Substitute.For<IService>();
    5.     var controller = new Controller(service);
    6.  
    7.     controller.CEvent += service.DataChanged;
    8.     ServiceObject input = new ServiceObject("n1", Guid.NewGuid());
    9.     controller.Add(input);
    10.  
    11.     service.Received().DataChanged(controller,
    12.       Arg.Is<ChangedEventArgs>(m => m.Action == Action.Add && m.Data == input));
    13. }

    Using Moq

    Apart from having different method names from NSubstitute, Moq does exactly the same as the other two frameworks:

    1. [TestMethod]
    2. public void T06_TestSubscribingEvents()
    3. {
    4.     var service = new Mock<IService>();
    5.     var controller = new Controller(service.Object);
    6.  
    7.     controller.CEvent += service.Object.DataChanged;
    8.     ServiceObject input = new ServiceObject("n1", Guid.NewGuid());
    9.     controller.Add(input);
    10.  
    11.     service.Verify(x => x.DataChanged(controller,
    12.         It.Is<ChangedEventArgs>(m => m.Action == Action.Add && m.Data == input)));
    13. }

    The Code for Download

    Understanding RhinoMocks, NSubstitute, Moq by Punit Ganshani

    So that concludes how to use Rhino Mocks, NSubstitute and Moq for Exception Handling and Event Subscriptions.  Should you have any questions, feel free to comment on this post.

    Understanding Mock and frameworks – Part 3 of N

    January 22, 2013 CSharp, Open Source, Unit Testing, Visual Studio

    This series on mock frameworks takes you through the Rhino Mocks, Moq and NSubstitute frameworks to implement mocking and unit testing in your application.  The series is full of examples, code and illustrations to make it easier to follow.  If you have not followed this series through, I would recommend reading the following articles:

    1. Understanding Mock and frameworks – Part 1 of N – Understanding the need of TDD, Mock and getting your application Unit Test ready
    2. Understanding Mock and frameworks – Part 2 of N – Understanding Mock Stages & Frameworks – Rhino Mocks, NSubstitute and Moq

    In Part 1, we understood how to refactor our application so that it can be unit tested with mock frameworks, and in Part 2 we walked through the three stages of mocking – Arrange, Act and Assert – with an example.  As seen in Part 2, ‘Arrange’ is the stage where the implementation differs across the three frameworks.  So in this post, we will focus on ‘Arrange’ and explore various types of mocking.

    Mocking Method Calls

    In the examples below, we will mock the GetCount method of the service so that it returns the value 10.  We will clear the controller and then check whether the GetCount method was actually called.  If this example were run without mock frameworks, the outputs would have been different.  Please note that in the examples below all the mock methods are applied on the service; however, you can apply them on the controller as well.

    Using Rhino Mocks

    Rhino Mocks uses methods AssertWasCalled and AssertWasNotCalled to check if the ‘actual’ method was invoked.

    1. [TestMethod]
    2. public void T03_TestMethodCall()
    3. {
    4.     var service = MockRepository.GenerateMock<IService>();
    5.     var controller = new Controller(service); // injection
    6.     service.Stub(x => x.GetCount()).Return(10);
    7.     controller.Clear();
    8.     service.AssertWasNotCalled(x => x.GetCount());
    9.     service.AssertWasCalled(x => x.Clear());
    10. }

    Using NSubstitute

    NSubstitute uses the DidNotReceive and Received methods to check whether the ‘actual’ methods were called or not.  There are multiple similar methods available, such as DidNotReceiveWithAnyArgs, ClearReceivedCalls, and ReceivedCalls.  Each of these can be used to manipulate or inspect the calls received by the substitute.

    1. [TestMethod]
    2. public void T03_TestMethodCall()
    3. {
    4.     var service = Substitute.For<IService>();
    5.     var controller = new Controller(service);
    6.     service.GetCount().Returns(10);
    7.     controller.Clear();
    8.     service.DidNotReceive().GetCount();
    9.     service.Received().Clear();
    10. }

    Using Moq

    Moq standardizes the verification of method calls with a single method, Verify.  The Verify method has many overloads, even allowing you to evaluate an expression.

    1. [TestMethod]
    2. public void T03_TestMethodCall()
    3. {
    4.     var service = new Mock<IService>();
    5.     var controller = new Controller(service.Object);
    6.     service.Setup(x => x.GetCount()).Returns(10);
    7.     controller.Clear();
    8.     service.Verify(x => x.GetCount(), Times.Never());
    9.     service.Verify(x => x.Clear());
    10. }

    Mocking Method Calls and their Output

    The next example focuses on a more real time scenario, where we will mock a method that takes input parameters and returns an output parameter.  The service will be mocked and we will test the functionality of our controller.

    Going back to Part 2 of our series, if a NULL input is passed to the Add method of the controller, it returns false.  For any non-NULL input, it calls the Add method of the service and returns the value returned by the service.  So we will pass two different objects to the controller and return two different outputs based on our mock: when we pass the ServiceObject named ‘m1’ we will mock the service to return ‘false’, while the ServiceObject named ‘m2’ should return ‘true’.

    Using Rhino Mocks

    What’s important to note at this moment is that we altered the behaviour, and hence used the Stub method on the mocked service, and then defined the expected result with the Return method.

    1. [TestMethod]
    2. public void T04_TestConditions()
    3. {
    4.     var service = MockRepository.GenerateMock<IService>();
    5.     var controller = new Controller(service); // injection
    6.     ServiceObject oneThatReturnsFalse = new ServiceObject("m1", Guid.NewGuid());
    7.     ServiceObject oneThatReturnsTrue = new ServiceObject("m2", Guid.NewGuid());
    8.     service.Stub(x => x.Add(oneThatReturnsFalse)).Return(false);
    9.     service.Stub(x => x.Add(oneThatReturnsTrue)).Return(true);
    10.     Assert.AreEqual(false, controller.Add(oneThatReturnsFalse));
    11.     Assert.AreEqual(true, controller.Add(oneThatReturnsTrue));
    12. }

    Using NSubstitute

    NSubstitute does not require any method like Stub.  When you configure a method on the mocked service object, the framework automatically assumes it has to alter the behaviour of that method.  Just like Rhino Mocks’ Return, the Returns method defines the expected mocked output of that method.

    1. [TestMethod]
    2. public void T04_TestConditions()
    3. {
    4.     var service = Substitute.For<IService>();
    5.     var controller = new Controller(service);
    6.     ServiceObject oneThatReturnsFalse = new ServiceObject("m1", Guid.NewGuid());
    7.     ServiceObject oneThatReturnsTrue = new ServiceObject("m2", Guid.NewGuid());
    8.     service.Add(oneThatReturnsFalse).Returns(false);
    9.     service.Add(oneThatReturnsTrue).Returns(true);
    10.     Assert.AreEqual(false, controller.Add(oneThatReturnsFalse));
    11.     Assert.AreEqual(true, controller.Add(oneThatReturnsTrue));
    12. }

    Using Moq

    Moq, like Rhino Mocks, requires a call to a special method, Setup, followed by a call to the Returns method:

    1. [TestMethod]
    2. public void T04_TestConditions()
    3. {
    4.     var service = new Mock<IService>();
    5.     var controller = new Controller(service.Object);
    6.     ServiceObject oneThatReturnsFalse = new ServiceObject("m1", Guid.NewGuid());
    7.     ServiceObject oneThatReturnsTrue = new ServiceObject("m2", Guid.NewGuid());
    8.     service.Setup(x => x.Add(oneThatReturnsFalse)).Returns(false);
    9.     service.Setup(x => x.Add(oneThatReturnsTrue)).Returns(true);
    10.     Assert.AreEqual(false, controller.Add(oneThatReturnsFalse));
    11.     Assert.AreEqual(true, controller.Add(oneThatReturnsTrue));
    12. }

    Strict Mock vs Normal Mock

     

    Strict mocks create brittle tests.  A strict mock is a mock that will throw an exception if you try to use any method that has not explicitly been set up.  A dynamic (or loose, normal) mock will not throw an exception if you try to use a method that is not set up; it will simply return a default value from the method and keep going.  The concept applies to any mocking framework that supports these two types of mocking.

    Let’s see an example with Rhino Mocks.  We will alter our controller class to add a method Find that invokes two methods of our IService interface

    1. public partial class Controller
    2. {
    3.     public ServiceObject Find(string name)
    4.     {
    5.         if (string.IsNullOrEmpty(name))
    6.             throw new ArgumentNullException(name);
    7.         if (_service.GetCount() > 0)
    8.         {
    9.             /* Can have more business logic here */
    10.             return _service.GetObject(name);
    11.         }
    12.         else
    13.         {
    14.             return null;
    15.         }
    16.     }
    17. }

    Now, with a strict mock, calling a method that has not been set up (here, GetCount) should cause an exception to be thrown.  However, when using a dynamic/loose mock, GetCount will return its default value, which is zero (0), and hence the Find method will return NULL.

    1. [TestMethod]
    2. public void T10_TestStrictMock()
    3. {
    4.     // data
    5.     string name = "PunitG";
    6.     ServiceObject output = new ServiceObject(name, Guid.NewGuid());
    7.     // arrange
    8.     var service = MockRepository.GenerateMock<IService>();
    9.     var controller = new Controller(service);
    10.     var strictService = MockRepository.GenerateStrictMock<IService>();
    11.     var controllerForStrictMock = new Controller(strictService);
    12.     // act
    13.     service.Stub(x => x.GetObject(name)).Return(output);
    14.     // assert
    15.     Assert.AreEqual(null, controller.Find(name));
    16.     Assert.AreEqual(output, controllerForStrictMock.Find(name)); // exception expected
    17. }

    When you execute this Test, the first assert will pass and the second assert will throw an exception.  Overall the test will fail.

    The Code for Download

    Understanding RhinoMocks, NSubstitute, Moq by Punit Ganshani

    Hope this helps. In the next post, we will explore how to mock Exceptions and Events

    Understanding Mock and frameworks – Part 2 of N

    January 10, 2013 CSharp, Open Source, Unit Testing, Visual Studio

    Continuing from where we left off in the previous post – Understanding Mock and frameworks Part 1 – this post takes a very simple example and illustrates how to implement mock frameworks.  One of our readers mailed me asking if I could define TDD – Test Driven Development – in my own words.

    Test driven development is a methodology that states that test cases should drive the design decisions and development.  This also means that TDD advocates knowing the expected behaviour of our application (or a part of it) before we write the code.  It also enforces incremental design and allows us to write just enough code to pass our tests.  That means no redundant code (YAGNI – You ain’t gonna need it) ships with our product, and our code is fairly simple to understand (KISS – Keep it simple, stupid!).  Sounds cool, doesn’t it?

    So you start writing tests first, design your application gradually, and then start writing the implementation so that each test passes.  I prefer TDD over Test-After-Development (TAD), where you first design your application and write the code.  Once your code is complete (or partially complete), you write the required test cases.  Theoretically, a TAD system would appear to produce as high-quality a product as TDD would.  But in my opinion, TAD involves taking design decisions up front – right from the start – rather than taking incremental baby steps.  This also means that when time to delivery is short and critical, there is a chance of not achieving 100% code coverage with the TAD approach.  Also, a developer can fall victim to his own ego: “I don’t write bad code that needs to be tested.”

    That’s the way I see Test Driven Development (TDD) and Test-After-Development (TAD). So let’s get back to our topic of interest – an example to get started with Unit Testing with Mocks.

    The service (or data source) interface

    Assume that you are building an application that interfaces with just one service.  We have refactored our application to follow SOLID principles such as SRP, ISP, DIP and DI.  The service implements an interface, IService.  This service interface could be the interface of an internal service, or of an external service whose behaviour and availability you cannot control.

    At this moment, we are not really interested in the implementation of IService and we will restrict our scope to the interface and the consumer (i.e. our application).

    public interface IService
        {
            string ServiceName { get; set; }
    
            bool Add(ServiceObject input);
            bool Remove(ServiceObject input);
            void Clear();
            int GetCount();
            ServiceObject GetObject(string name);
    
            void DataChanged(object sender, ChangedEventArgs args);
        }
    
        public class ServiceObject
        {
            public string Name { get; set; }
            public Guid Ref { get; set; }
            public DateTime Created { get; set; }
    
            public ServiceObject(string name, Guid reference)
            {
                Name = name;
                Ref = reference;
                Created = DateTime.Now;
            }
        }
    
        public enum Action
        {
            Add,
            Remove,
            Clear
        }

    The Controller (or our application)

     

    The consuming class (let’s call it Controller) isolates itself from the implementation of the service object and lets the caller inject the appropriate service object.

    public class Controller
        {
            private IService _service;       
    
            public Controller(IService service)
            {
                _service = service;
            }
        }

    In reality, the Controller class could be just a facade or can contain business logic, or data manipulation and a lot of other complex logic.  For this example, it is kept simple and restricted to only service calls.   So our controller class looks like,

    1. public class Controller
    2. {
    3.     private IService _service;
    4.  
    5.     /// <summary>
    6.     /// To demonstrate events in Mock
    7.     /// </summary>
    8.     public event EventHandler<ChangedEventArgs> CEvent;
    9.  
    10.     public Controller(IService service)
    11.     {
    12.         _service = service;
    13.     }    
    14.     
    15.     private void RaiseEvent(Action action,
    16.                     ServiceObject input)
    17.     {
    18.         /* Can have more business logic here */
    19.         if (CEvent != null)
    20.             CEvent(this, new ChangedEventArgs(action, input));
    21.     }
    22.  
    23.     /// <summary>
    24.     /// To demonstrate methods with parameters and return value
    25.     /// in Mock
    26.     /// </summary>
    27.     public bool Add(ServiceObject input)
    28.     {
    29.         if (input == null)
    30.             return false;
    31.  
    32.         /* Can have more business logic here */
    33.  
    34.         _service.Add(input);
    35.         RaiseEvent(Action.Add, input);
    36.  
    37.         return true;
    38.     }
    39.  
    40.     /// <summary>
    41.     /// To demonstrate simple method calls in Mock
    42.     /// </summary>
    43.     public void Clear()
    44.     {
    45.         _service.Clear();
    46.     }
    47. }

     

    So, to create an object of our controller, we need to pass an instance of a class that implements IService.  As said earlier, we will not implement the IService interface, to ensure that we do not have a functional service (to create the scenario of an unavailable service) for testing.  So let’s get started with mocking the IService interface.

     

    Mocking – Step Arrange

    All our tests will follow three step process

    • Arrange – This step involves creating a service/interface mock (using a Mock framework) and creating all required objects for that service/interface.  This step also includes faking the object behaviour.
    • Act – This step involves calling a method of the service, or performing any business functionality
    • Assert – This step usually asserts whether the expected result is obtained from the previous step (‘Act’) or not

    Step Arrange: With Rhino Mocks

    The Rhino Mocks framework maintains a repository of mocks.  The MockRepository can be seen as following a Factory design pattern, generating mocks using generics, objects of a Type, or stubs.  One of the ways to generate a mock is:

    1. // Arrange
    2. var service = MockRepository.GenerateMock<IService>();
    3. var serviceStub = MockRepository.GenerateStub<IService>();

    There is a minor difference between a mock and a stub.  A discussion on StackOverflow explains it very well, and Martin Fowler has explained it brilliantly in his post Mocks Aren’t Stubs.  To keep it short and simple:

    Mock objects are used to define expectations i.e: In this scenario I expect method A() to be called with such and such parameters. Mocks record and verify such expectations. Stubs, on the other hand have a different purpose: they do not record or verify expectations, but rather allow us to “replace” the behaviour, state of the “fake”object in order to utilize a test scenario.

    If you want to verify the behaviour of the code under test, you will use a mock with the appropriate expectation, and verify that. If you want just to pass a value that may need to act in a certain way, but isn’t the focus of this test, you will use a stub.
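
    To make the distinction concrete, here is a small sketch using Rhino Mocks’ AAA syntax and the IService interface from above (the test itself is illustrative and not part of the original post):

        [TestMethod]
        public void StubVersusMock()
        {
            // Stub: only replaces behaviour; nothing about the call is verified.
            var stub = MockRepository.GenerateStub<IService>();
            stub.Stub(x => x.GetCount()).Return(10);
            Assert.AreEqual(10, stub.GetCount());

            // Mock: records calls so that expectations can be verified afterwards.
            var mock = MockRepository.GenerateMock<IService>();
            mock.Expect(x => x.GetCount()).Return(10);

            mock.GetCount();                  // act

            mock.VerifyAllExpectations();     // fails if GetCount() was never called
        }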

    Once the object has been created, we need to mock the service behaviour on this fake service object.  Let’s see how to fake a property

    1. // Arrange
    2. var service = MockRepository.GenerateMock<IService>();            
    3. service.Stub(x => x.ServiceName).Return("DataService");

    Similarly, we can mock methods.  We will see mocking different types in detail later.

     

    Step Arrange: With NSubstitute

    NSubstitute makes it simpler to read, interpret and implement.

    1. // Arrange
    2. var service = Substitute.For<IService>();

    Mocking a property in NSubstitute is a lot easier to remember:

    1. // Arrange
    2. var service = Substitute.For<IService>();            
    3. service.ServiceName.Returns("DataService");

    Step Arrange: With Moq

    Moq, like Rhino Mocks, also has a MockRepository.  You can choose a mock to be part of a MockRepository or let it stand alone.  Both approaches are shown below.

    1.           // Arrange
    2.             var service = new Mock<IService>();
    3.  
    4.           MockRepository repository = new MockRepository(MockBehavior.Default);
    5.           var serviceInRepository = repository.Create<IService>();

    Mocking a property is somewhat similar to Rhino Mocks:

    1.           // Arrange
    2.             var service = new Mock<IService>();
    3.           service.Setup(x => x.ServiceName).Returns("DataService");

    Rhino Mocks and Moq both have two different types of behaviour – Strict and Loose (the default).  We will look into what each of them means in the next article.

    Mocking – Step Act, Assert

    Once we have the fake service and have defined the expected behaviour, we can implement the next step.  This step involves calling a method of the service, or performing some business functionality, or using its output to do some work.  So our examples will vary from setting something on the mocked service object to calling its methods.

    Step Act and Assert: With Rhino Mocks and NSubstitute

    As the output of the first step, both Rhino Mocks and NSubstitute give you an object of IService directly (named service).  So you can use this object directly as a substitute for the actual service.  The service object is a fake whose ServiceName is expected to be "DataService".  Just to remind you: neither the implementation of the service nor a real object of the service actually exists.  We are dealing with fake objects throughout all our examples.

    1. // Act
    2. var actual = service.ServiceName;
    3.  
    4. // Assert
    5. Assert.AreEqual("DataService", actual);

    This test on execution will give a PASS

    Step Act and Assert: With Moq

    Moq, unlike the other two frameworks, does not directly give you a fake service object.  It provides the fake service object through one of its properties, called ‘Object’.  Everything else remains the same.

    1. // Act
    2. var actual = service.Object.ServiceName;
    3.  
    4. // Assert
    5. Assert.AreEqual("DataService", actual);

    If you put the code together, it forms our first unit test with mock frameworks.

    The Code for Download

    Understanding RhinoMocks, NSubstitute, Moq by Punit Ganshani

    In the next article, we will look into creating Mocks for methods using these frameworks.

    Understanding Mock and frameworks – Part 1 of N

    December 31, 2012 CSharp, Open Source, Unit Testing, Visual Studio

    There are several posts on the Internet that focus on Test Driven Development and its benefits.  There is a group of SCRUM Masters who emphasize how Test Driven Development (TDD) is beneficial and can change the way you write and test code.  I completely agree that Agile and TDD can help you manage your teams, code, product quality and communication in a better way.  But this is not yet another post on Test Driven Development or unit testing.  This is an article series on test mocking.  How is it different from others?  This series assumes that you have no knowledge of mocking and that your application is not unit-test ready, so it guides you through the concept of mocking, preparing your application, using mocking frameworks, comparing them and ensuring that your code is ‘well-tested and covered’.

    So let’s get started!

    Understanding Mock – What, Why?

    Now, if you are following the discipline of TDD and are building a large-scale enterprise application that interacts with services, databases, or other data sources, writing data-driven unit tests becomes a challenge.  The test setup (for each test) becomes more complicated and challenging than the actual unit test.  This is where the concept of a mock comes into the picture.

    Mock – to deceive, delude, or disappoint.

    As the English definition of the word mock suggests, we want to deceive the unit test with substituted data rather than retrieving a data set from the actual service or database.  This also means that we free ourselves from various other concerns such as service availability, environment setup, data masking, data manipulation and conversion.  It is like telling the unit test:

    Do not worry about the data source.  Stop worrying about the actual data and use the mocked up data.  Test the logic!

    OK, this sets a premise on why mocking is required!   Do you need an alternate definition to understand this even better?

    A mocking framework fakes objects to replace any dependencies you have, thereby allowing you to tell them (the mocked code) to behave as you want.  So even if the service does not exist, if you have asked the mocking framework to mock it, the framework will provide a fake service when requested by the consuming code.

    Refactoring application to be Unit Test ready

     

    Let’s assume that you were told one fine day that your application (for simplicity, a typical 3-tier web application) needs to be unit tested.  So you start with the first step of analysing the most critical part that constitutes 80% of your business functionality.  Now there are two possibilities – either your application has been designed following the SOLID principles, or it is a legacy application built without considering these design principles.  The second possibility requires more effort on your side, so let’s assume that your application is not designed correctly and you want to unit test it with mocking.

    Single Responsibility Principle – Isolate your data sources

    First, try isolating all the data sources into repositories.  This means that if your application is reading configuration from an XML file, business data from a SQL database, real-time feeds from web services, and the like, check whether the principle of isolation and single responsibility is applied correctly.  Your application should not have a class that is responsible for reading data from multiple sources, manipulating it, doing some calculations, or storing it.

    Your application should have one class per data source (a DAL class) that is responsible for retrieving/storing data in that data source.  If there is a need to do calculations or manipulations on one or more data sources, then another class should have that responsibility.  A sketch of this separation is shown below.
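
    A minimal sketch of this separation (the Order types and repository names are illustrative, not from a real application):

        public class Order
        {
            public int Id { get; set; }
            public decimal Total { get; set; }
        }

        // One repository per data source, with the single responsibility
        // of reading/writing that data source.
        public interface IOrderRepository
        {
            Order GetOrder(int orderId);
            void Save(Order order);
        }

        public class SqlOrderRepository : IOrderRepository
        {
            public Order GetOrder(int orderId) { /* ADO.NET / ORM call */ return null; }
            public void Save(Order order) { /* ADO.NET / ORM call */ }
        }

        // Calculations and manipulations live in a separate class that
        // consumes the repository instead of talking to SQL directly.
        public class OrderPricingService
        {
            private readonly IOrderRepository _orders;

            public OrderPricingService(IOrderRepository orders)
            {
                _orders = orders;
            }

            public decimal GetTotal(int orderId)
            {
                return _orders.GetOrder(orderId).Total;
            }
        }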

     

    Interface Segregation Principle – Use interfaces instead of objects

    Now that your data sources are isolated, each with the single responsibility of interacting with one and only one data source, the next step is to ensure that the calling code does not reference the concrete class (that represents the data source) directly.

    In simple words code like this,

    1. ConfigurationStore store = new ConfigurationStore();
    2. store.RefreshConfiguration();

    Needs to be refactored into something like,

    1. IConfigurationStore store = new XmlConfigurationStore();
    2. store.RefreshConfiguration();

    Please note the two changes in the above code: one, instead of creating a variable of the class ConfigurationStore we now declare it as the interface IConfigurationStore; and two, we have given the class a more meaningful name, XmlConfigurationStore.  This is also referred to as ISP.

     

    Dependency Inversion and Injection – Refactoring the construction of data source

    Now that the classes representing data sources are isolated and interfaces define their contracts, the next step is to ensure that there is no hard dependency on the implementation of these classes.  In other words, we don’t want our consumer class to depend on the implementation of the class (representing the data source); instead, we want it to depend on the interface.

    A higher level module (the consuming classes responsible for data manipulation, representation, etc.) should not depend on a low level module (the data source/DAL object); rather, it should depend on a layer of abstraction (like an interface)

    This is known as DIP – the Dependency Inversion Principle.  There are three ways to inject the dependency – through the constructor while constructing the higher level module, through properties by assigning the lower level module objects to a property, or explicitly through methods.  We will not go into the details of the implementation and would advise you to go through the MSDN article on Dependency Injection; a brief sketch of the three styles follows.
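
    A minimal sketch of the three styles, reusing the IConfigurationStore interface from the earlier snippet (the ReportGenerator class is hypothetical):

        public class ReportGenerator
        {
            private IConfigurationStore _store;

            // 1. Constructor injection: the dependency is supplied when the
            //    higher level module is constructed.
            public ReportGenerator(IConfigurationStore store)
            {
                _store = store;
            }

            // 2. Property injection: the dependency is assigned to a property.
            public IConfigurationStore Store
            {
                get { return _store; }
                set { _store = value; }
            }

            // 3. Method injection: the dependency is passed explicitly to a method.
            public void Refresh(IConfigurationStore store)
            {
                store.RefreshConfiguration();
            }
        }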

    So we now have cleaner code that can be referred to as unit-test-ready code.

    Unit Test and Mock – How?

    Do we need any special infrastructure? Any third-party frameworks required?

    To get started with unit testing, all you need is Visual Studio 2010/2012.  Yes, that’s enough for basic testing. We build our unit tests using any one of NUnit, xUnit or MSTest and run them using appropriate tools.

    But since you are keen on implementing mocks, you will require a well-tested and proven mocking framework.  Three frameworks that I’ve used are:

    • Rhino Mocks – This is unarguably the most adopted and extensive mocking framework, with lots of features.  Some developers find it difficult to adopt, considering the wide range of functionality available.  However, after this series of articles, mocking with it should not be difficult.
    • NSubstitute – Implementing mocking is made extremely easy using this framework
    • Moq – This is a great framework when you are developing a Silverlight application

    So, get ready to download one of the above frameworks to get your infrastructure ready.  If you are still confused about which one to pick, then

    I would recommend staying connected to this post as in the subsequent articles in the series we will see the differences in the implementation of mocking using these 3 frameworks!  We will look more into the ‘How’ aspects of implementing these frameworks!
