
1Gig-Tech (#40) – PowerShell, IoT, .NET, ESP8266, Deep Learning

September 4, 2016 | 1Gig Tech

Welcome to 1Gig Tech update!

In today’s edition, there are 11 articles on technology, news, open source, and community from the fantastic and ever-evolving technology world.

  • Engineering the Future of .NET (Sam Basu)
    While there is a huge army of engineers at Microsoft who work on .NET and C#, the following folks are arguably the most influential in bringing you the future of .NET and .NET Tooling.
  • Download free Windows Server 2016 eBook, White paper, PDF, etc. (AnandK)
Microsoft and its partners have made available for download a bunch of resources for Windows Server 2016 that can help you get the best out of this server operating system. Windows Server 2016 is the next version of Microsoft’s server operating system, being developed in line with Windows 10.
  • Writing a bot for IP Messaging in Node.js (Dominik Kundel)
It seems like bots are the new hot thing that every chat supports. They usually augment conversations or perform tasks for the user. We will add a simple bot to an existing IP messaging chat that returns a GIF whenever we ask for it.
  • What is Deep Learning?
    Deep learning is an emerging topic in artificial intelligence (AI). A subcategory of machine learning, deep learning deals with the use of neural networks to improve things like speech recognition, computer vision, and natural language processing.
  • Unit Testing .NET Core (Ricardo Peres)
    With the recent arrival of .NET Core, some things that we were used to having are no longer available. This includes unit tests – plus Visual Studio integration – and mocking frameworks. Fortunately, they are now becoming available, even if, in some cases, in pre-release form.
  • AWS vs Azure vs Google Cloud Platform – Internet of Things
Choosing the right cloud platform provider can be a daunting task. Take the big three, AWS, Azure, and Google Cloud Platform; each offers a huge number of products and services, but understanding how they enable your specific needs is not easy.
  • .NET Rocks! vNext
Carl Franklin is Executive Vice President of App vNext, a software development firm focused on the latest methodologies and technologies. Carl is a 20+ year veteran of the software industry, and co-host and founder of .NET Rocks!.

You can also follow these updates on the Facebook Page or read previous editions at 1Gig Tech.

Thanks

1Gig-Tech (#38) – IoT, VR, Azure, .NET 4.6.2

August 10, 2016 | 1Gig Tech

Welcome to 1Gig Tech update!

In today’s edition, there are 9 articles on technology, news, open source, and community from the fantastic and ever-evolving technology world.

  • IoT-VR with Unity,Intel Edison (Grasshopper.iics, Abhishek Nandy)
In this tutorial we will cover a different perspective on how we can use Unity for a virtual experience. We will work on an animation system to give complete life to our character, and add a different dimension by bringing in MQTT and Intel Edison to feel the magic.
  • AWS vs Azure vs Google Cloud Platform – Storage & Content Delivery
In this series, we’re comparing cloud services from AWS, Azure and Google Cloud Platform. A full breakdown and comparison of cloud providers and their services are available in this handy poster. With any cloud deployment it is important to match the right storage solution to each workload.
  • Announcing .NET Framework 4.6.2 (Stacey Haffner)
    Today we are excited to announce the availability of the .NET Framework 4.6.2! Many of the changes are based on your feedback, including those submitted on UserVoice and Connect. Thanks for your continued help and engagement! You can see the full set of changes in the .NET Framework 4.6.
  • .NET Standard Library Support for Xamarin (James Montemagno)
    Today, we are extremely pleased to release support for .NET Standard Libraries for all Xamarin applications. This includes creating and consuming local .NET Standard Libraries, but also adding .NET Standard Libraries from NuGet directly to your Xamarin apps.
  • Cloud Adoption: A Deep Dive into the Swiss Cheese Model
In a three-part series describing how Hymans Robertson adopted Microsoft Azure, guest blogger Barry Smart describes their risk and mitigation analysis process and explains how you can use the same process to understand the risk of your own cloud journey.
  • Entity Framework Core 1.1 Plans (Rowan Miller)
Now that Entity Framework Core (EF Core) 1.0 is released, our team is beginning work on bug fixes and new features for the 1.1 release. Keep in mind that it’s early days for this release; we’re sharing our plans in order to be open, but there is a high chance things will evolve as we go.

You can also follow these updates on the Facebook Page or read previous editions at 1Gig Tech.

Thanks

1Gig-Tech (#22) – TrackJS, Microservices, PowerShell, .NET CLI

December 27, 2015 | 1Gig Tech

In the last edition for 2015, there are 11 articles on technology, news, open source, and community from the fantastic and ever-evolving technology world.

Happy holidays!

  • Exploring the new .NET (Scott Hanselman)
I’ve never much liked the whole “dnvm”, “dnu” and “dnx” command line stuff in the new ASP.NET 5 beta bits. There are reasons for each to exist, and they have been important steps, both organizationally and as aids to the learning process.
  • PowerShell Classes for Developers
Classes in PowerShell have long been a feature, and creating objects of these classes isn’t new. From the classic way of creating objects of .NET classes (like the MailMessage in Example 1 below) to defining a custom .NET class (in Example 2 below), we have seen PowerShell extend .
  • Azure WebJobs are awesome and you should start using them right now!
    These real world experiences with Azure are now available in the Pluralsight course “Modernizing Your Websites with Azure Platform as a Service” No really, they’re totally awesome! I used Azure WebJobs in the very early days and whilst they served a purpose, I wasn’t blown away with them at the
  • Create a database, as easily as a spreadsheet
A query UI anyone can use: filter, sort, group and report with ease. Even non-technical teammates can use it. Save any query as a view to get back to anytime or to share with the team.
  • What you need to know about Bootstrap 4 (Ezequiel Bruni)
    Bootstrap is beloved by many. Well, if not “beloved”, then it is at least appreciated for what it is: a giant framework with almost everything you could need for building a site or web app interface. Bootstrap is changing, though. That’s right, version four is in alpha release.
  • Data Science and Machine Learning Essentials
    Learn key concepts of data science and machine learning with examples on how to build a cloud data science solution with R, Python and Azure Machine Learning from the Cortana Analytics Suite.
  • A Review of JavaScript Error Monitoring Services (Raymond Camden)
    If you’re like me, then you’ve been diligent about writing the best JavaScript code you can. You lint. You write tests (both for the code and the UI). You check out your site in multiple different browsers, locales, time zones, and dimensions. You do a good job. Rock on, you.
  • Data Sketches – Yahoo! (YAHOO)
    In the analysis of big data there are often problem queries that don’t scale because they require huge compute resources to generate exact results, or don’t parallelize well. Examples include count distinct, quantiles, most frequent items, joins, matrix computations, and graph analysis.
  • TrackJS
Minified JavaScript code is hard to debug. With TrackJS, simply drag-and-drop your sourcemap onto a stacktrace and we’ll automatically un-minify the source code.

You can also follow these updates on the Facebook Page or read previous editions at 1Gig Tech.

We will resume again in Feb 2016 (yes, not in Jan due to some other commitments).

Thanks

1Gig-Tech (#20) – CodeInject, Books, LiveWriter, .NET 4.6.1

December 13, 2015 | 1Gig Tech

Welcome to 1Gig Tech update!

In today’s edition, there are 12 articles on technology, news, open source, and community from the fantastic and ever-evolving technology world.

  • The .NET Journey: Recapping the last year (Heath Stewart)
Having just completed Connect(); // 2015, we thought we’d take a moment to review everything that’s happened with .NET over the last year, between last year’s and this year’s Connect();. And what a year it’s been! We’ve seen significant developments in the .
  • Optimizing Xamarin.Forms Apps for Maximum Performance
    We know performance matters when it comes to mobile apps. With Xamarin, your iOS and Android apps are fully native apps taking advantage of each and every optimization the platform has to offer. It’s no different if you’re building native mobile apps with Xamarin.
  • Free Red Book: Readings in Database Systems, 5th Edition (Todd Hoff)
    Editors Peter Bailis, Joseph M. Hellerstein, and Michael Stonebraker curated the papers and wrote pithy introductions. Unfortunately, links to the papers are not included, but a kindly wizard, Nindalf, gathered all the referenced papers together and put them in one place. What’s in it?
  • TFVC and Git repositories in the same team project (Heath Stewart)
    Many teams are transitioning from TFVC to Git for version control and want to keep their work items, build definitions, and other data in their team project. Now with TFS Update 1 or Team Services, you can add Git repositories to your existing team project created with TFVC.
  • Programming Sucks
    Every friend I have with a job that involves picking up something heavier than a laptop more than twice a week eventually finds a way to slip something like this into conversation: “Bro,1[1] you don’t work hard. I just worked a 4700-hour week digging a tunnel under Mordor with a screwdriver.”
  • .NET Framework 4.6.1 is now available! (Heath Stewart)
    Today we are announcing the availability of .NET Framework 4.6.1. You can download this release now. The .NET Framework 4.6.1 can be installed on Windows 10, Windows 8.1, Windows 8, Windows 7 and the corresponding server platforms.

You can also follow these updates on the Facebook Page or read previous editions at 1Gig Tech.

Thanks

5 steps to targeting multiple .NET frameworks

June 21, 2015 | CSharp, Visual Studio

When designing an API or library, we aim for maximum coverage of available .NET frameworks so that the maximum number of clients can adopt our APIs.  The key challenge in such scenarios is keeping the code clean while efficiently managing multiple versions of code, NuGet packages and builds.

This article outlines a quick and easy way to manage a single code base that targets multiple .NET framework versions.  I’ve used the same concept in KonfDB.

Step 1 – Visual Studio Project Configuration

 

First, we need to use Visual Studio to create multiple build configurations.  I prefer 2 configurations per .NET version, like

  • .NET 4.0 — DebugNET40, ReleaseNET40
  • .NET 4.5 — DebugNET45 and ReleaseNET45

When adding these configurations, clone them from Debug and Release and make sure you have selected ‘Create New Project Configurations’.


This will modify your solution (.sln) file and Project (.csproj) files.

If certain projects do not support both versions, you can uncheck them before clicking the Close button.   This is usually done when your solution has 2 parts – an API and a Server – and you want the API to target multiple frameworks while the Server code runs on a particular version of .NET.
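For reference, the new configurations show up in the solution file roughly like this (an illustrative fragment; the per-project mappings are omitted):

GlobalSection(SolutionConfigurationPlatforms) = preSolution
    DebugNET40|Any CPU = DebugNET40|Any CPU
    DebugNET45|Any CPU = DebugNET45|Any CPU
    ReleaseNET40|Any CPU = ReleaseNET40|Any CPU
    ReleaseNET45|Any CPU = ReleaseNET45|Any CPU
EndGlobalSection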

Step 2 – Framework Targeting in Projects

 

There are 2 types of changes required in the Project (.csproj) files to manage multiple .NET versions.

Every project has a default configuration, usually the lowest or base configuration, defined by XML properties like

<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<TargetFrameworkVersion>v4.0</TargetFrameworkVersion>

 

Change this to

<Configuration Condition=" '$(Configuration)' == '' ">DebugNET40</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<TargetFrameworkVersion>v4.0</TargetFrameworkVersion>

 

Make sure that all the projects in the solution have the same default Configuration and TargetFrameworkVersion.

When we added multiple configurations to our solution, one PropertyGroup per configuration was added to our Project (.csproj) files.  It appears something like,

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'DebugNET40|AnyCPU' ">
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>bin\</OutputPath>
<DefineConstants>DEBUG;TRACE</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
</PropertyGroup>

 

We need to add/modify 3 lines in each of these PropertyGroup tags to change the OutputPath, TargetFrameworkVersion and DefineConstants.

For .NET 4.0:

<OutputPath>bin\$(Configuration)\$(TargetFrameworkVersion)\</OutputPath>
<TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
<DefineConstants>DEBUG;TRACE;NET40</DefineConstants>

 

For .NET 4.5:

<OutputPath>bin\$(Configuration)\$(TargetFrameworkVersion)\</OutputPath>
<TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
<DefineConstants>DEBUG;TRACE;NET45</DefineConstants>

 

We will use these settings later in the article.

Step 3 – References Targeting in Projects

 

Our dependent libraries may have different versions for different versions of .NET. A classic example is the Newtonsoft JSON library, which ships different builds for .NET 4.0 and .NET 4.5. So we may require framework-dependent references – be they standard references or NuGet references.

When using standard references, we can organize our libraries in framework-specific folders and alter the project configuration to look like,

<Reference Include="Some.Assembly">
<HintPath>..\Libraries\$(TargetFrameworkVersion)\Some.Assembly.dll</HintPath>
</Reference>

 

To reference Nuget packages, we can add conditions to the references as shown below

<ItemGroup>
<Reference Include="Newtonsoft.Json, Version=6.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL"
Condition="'$(TargetFrameworkVersion)' == 'v4.5'">
<HintPath>..\..\..\packages\Newtonsoft.Json.6.0.8\lib\net45\Newtonsoft.Json.dll</HintPath>
</Reference>
<Reference Include="Newtonsoft.Json, Version=6.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL"
Condition="'$(TargetFrameworkVersion)' == 'v4.0'">
<HintPath>..\..\..\packages\Newtonsoft.Json.6.0.8\lib\net40\Newtonsoft.Json.dll</HintPath>
</Reference>
</ItemGroup>

 

When we now do a batch build in Visual Studio, the solution should compile without errors.

Step 4 – Managing Clean Code with multiple frameworks

 

There are 2 ways to manage our code with different versions of .NET.

Bridging the gap of .NET 4.5.x in .NET 4.0

 

Let’s assume we are creating an archival process that zips the log files and deletes them after zipping. If we build this functionality with the .NET 4.5 framework, we can use the ZipArchive class (in System.IO.Compression), but there is no such class in .NET 4.0. In such cases, we should go for interface-driven programming and define 2 implementations – one for .NET 4.0 and one for .NET 4.5, as sketched below.
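A minimal sketch of the interface-driven split (names are illustrative; the .NET 4.0 implementation would rely on a third-party compression library, since that framework has no built-in one):

public interface ILogFileMaintenance
{
    void ZipAndDeleteLogs(string logDirectory, string zipPath);
}

// LogFileMaintenance45.cs – compiled only when targeting .NET 4.5
public class LogFileMaintenance : ILogFileMaintenance
{
    public void ZipAndDeleteLogs(string logDirectory, string zipPath)
    {
        // ZipFile/ZipArchive are available from .NET 4.5 onwards
        System.IO.Compression.ZipFile.CreateFromDirectory(logDirectory, zipPath);
        foreach (var file in System.IO.Directory.GetFiles(logDirectory, "*.log"))
            System.IO.File.Delete(file);
    }
}

// LogFileMaintenance40.cs – same class name, same interface, compiled only
// when targeting .NET 4.0, with the zipping done by a third-party library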

These 2 implementations cannot co-exist in the solution as they would cause compilation errors. To avoid this, we edit the Project (.csproj) file to include each file conditionally:

<Compile Include="LogFileMaintenance40.cs" Condition=" '$(TargetFrameworkVersion)' == 'v4.0' " />
<Compile Include="LogFileMaintenance45.cs" Condition=" '$(TargetFrameworkVersion)' == 'v4.5' " />

 

Both files can have the same class name because, for a given configuration, only one of them is compiled.

The unclean way

 

The unclean way is to use the DefineConstants to differentiate between framework versions. Earlier, in the project configuration, we changed DefineConstants to include NET40 and NET45. We can use these constants as pre-processor directives to include framework-specific code:

#if NET40
    …
#endif
#if NET45
    …
#endif

 

This methodology should be adopted only if the difference in functionality is minor, as such code is very difficult to debug.
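As a concrete illustration of the directives (the .NET 4.0 branch calls a hypothetical third-party helper):

public static class ArchiveHelper
{
    public static void ZipLogs(string folder, string zipPath)
    {
#if NET45
        // built-in compression, available from .NET 4.5
        System.IO.Compression.ZipFile.CreateFromDirectory(folder, zipPath);
#endif
#if NET40
        // hypothetical third-party helper for .NET 4.0
        ThirdPartyZip.CreateFromDirectory(folder, zipPath);
#endif
    }
}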

Step 5 – Build without Visual Studio

 

While Visual Studio allows us to trigger builds for any configuration by manually selecting it from the dropdown, we can also create a batch file that builds our solution against different .NET frameworks. This batch file can be used with any build system like TFS, Jenkins, TeamCity, etc.

REM Build Solution
SET CONFIGURATION=%1
set PATH_SOURCE_SLN="%cd%\OurSolution.sln"
if [%1]==[] (
SET CONFIGURATION=DebugNET40
)
MSBuild %PATH_SOURCE_SLN% /p:Configuration=%CONFIGURATION%
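Assuming the file is saved as Build.bat (the name is arbitrary), it can be invoked with or without a configuration:

REM build a specific configuration
Build.bat ReleaseNET45

REM no argument – falls back to DebugNET40
Build.bat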


This 5-step process allows us to develop our solution targeting multiple .NET frameworks and to narrow the implementation down to a particular .NET framework at build time.

 

 


 

Shell Script to setup .NET on Linux

December 5, 2014 ASP.NET vNext, Mono , , , , ,

If you are scared by the tedious process of setting up Mono, KVM and KRE on Linux, or are not aware of how to get started with it, here is your lifesaver – a single shell script that can do the magic for you.  You don’t need to know anything special to run it.  All you need is access to a Linux VM.  Log in to the VM using any client (like PuTTY) and run the script.

 

#!/bin/bash

clear

PREFIX=$1
VERSION=$2

if [ -z "$PREFIX" ]; then
  PREFIX="/usr/local/"
fi

if [ -z "$VERSION" ]; then
  VERSION="3.10.0"
fi


sudo apt-get install make
sudo apt-get install git autoconf libtool automake build-essential mono-devel gettext zip unzip
sudo apt-get install bash zsh curl

sudo mkdir $PREFIX
sudo chown -R `whoami` $PREFIX

PATH=$PREFIX/bin:$PATH
wget http://download.mono-project.com/sources/mono/mono-$VERSION.tar.bz2
tar -xjvf mono-$VERSION.tar.bz2
cd mono-$VERSION
./autogen.sh --prefix=$PREFIX
make
make install

sudo certmgr -ssl -m https://go.microsoft.com
sudo certmgr -ssl -m https://nugetgallery.blob.core.windows.net
sudo certmgr -ssl -m https://nuget.org
sudo certmgr -ssl -m https://www.myget.org/F/aspnetvnext/

mozroots --import --sync

wget http://dist.libuv.org/dist/v1.0.0-rc1/libuv-v1.0.0-rc1.tar.gz 
tar -xvf libuv-v1.0.0-rc1.tar.gz
cd libuv-v1.0.0-rc1/
./gyp_uv.py -f make -Duv_library=shared_library
make -C out
sudo cp out/Debug/lib.target/libuv.so /usr/lib/libuv.so.1.0.0-rc1
sudo ln -s libuv.so.1.0.0-rc1 /usr/lib/libuv.so.1

curl -sSL https://raw.githubusercontent.com/aspnet/Home/master/kvminstall.sh | sh && source ~/.kre/kvm/kvm.sh

source ~/.kre/kvm/kvm.sh

kvm upgrade

mono --version


All you have to do is save this file as, say, SetupDotNetOnLinux.sh and execute it on your Linux VM.  It may prompt you for Y/N at times; keep answering Y every time.

If you need help executing this script, a short way I use is

chmod u+x SetupDotNetOnLinux.sh
ls -l SetupDotNetOnLinux.sh
sh SetupDotNetOnLinux.sh

There are several other ways of executing scripts on Linux.  A simple article that can be helpful is Creating and running a script.

Update – 6 Dec:  Added the script for libuv.  Thanks to David Fowler for pointing it out.
The script is also available on GitHub.  Any suggestions to improve it?  Please fork the script 🙂

Upgrading to .NET 4.5.x for Enterprise Applications

January 4, 2014 | ASP.NET, CSharp, Visual Studio

Microsoft launched .NET 4.5 in 2012 and later added an upgrade to .NET 4.5.1 in Dec 2013.  So, considering the timelines, it has been over a year since .NET 4.5 was launched.  That means .NET 4.5.x has been tested thoroughly (by the general public/developers), bugs have been identified and fixed, and hence it should be an easy upgrade for enterprise applications.

Upgrading to .NET 4.5.x may be very tempting to developers for several reasons (many of them IDE-specific); however, it’s important to test the waters before jumping in.  In this article, let’s see some of the areas you would want to consider before upgrading your application suite to .NET 4.5.x.

Environment Accessibility and Support

The first thing to check before planning any development or upgrade activity is whether your production (or ‘live’) environment supports installation of the .NET 4.5.x framework.  Some of the checks you should do before committing to the upgrade are –

  • Certification and packaging of the .NET 4.5.x framework – Organizations (especially those concerned about security) tend to test and certify application suites before re-packaging them (setup files) so that applications/frameworks can be installed on desktops / servers in an automated way.  This does not mean that the original packaging (setup) was not done efficiently; the re-packaging is done to ensure standardization of the installation process.
  • Mechanism to promote / install framework upgrades – Considering that product life cycles have shortened, it is important to know how much time the packaging team would require to re-package updates and promote them.

If your application is an n-tier web application (ASP.NET WebForms / MVC), you need to check your hardware capabilities

  • Supported physical / virtual infrastructure – .NET frameworks themselves do not require more than 512 MB of RAM.

    But for server applications, .NET 4.5.x requires Windows Server 2012 as the operating system.  Windows Server 2012, for its smooth operation, requires 2 GB of RAM (recommended: 8 GB) with a 1.3 GHz single 64-bit core processor.  So there is a dependency on the availability of Windows Server 2012 packaging in your organization.

    .NET 4.5 needs a Windows Server 2008 variant and requires less hardware to run your application efficiently.  If you are not using any of these server operating systems, or are not on a 64-bit architecture, you will not be able to leverage the capabilities of .NET 4.5.x and will have to restrict your upgrades to .NET 4.0.

If your application is a desktop application, there are several constraints to upgrading to a newer framework, as you have many production machines (every desktop).  Unless you have a strategy to push .NET 4.5.x to all desktops, you shouldn’t think of upgrading your framework.

Understand the upgrade

No, I am not referring to understanding the features of .NET 4.5.x – I’ll talk about those in the next part of this article.  Here I am referring to how .NET 4.5.x fits into Microsoft’s release scheme.  Until now, every .NET framework release created a new folder in %windir%\Microsoft.NET\Framework.

That is not the case with .NET 4.5.x.  When you install .NET 4.0, the installer creates the v4.0 folder.  When you install .NET 4.5.x, it updates the assemblies in the v4.0 folder.  So, in a way, .NET 4.5.x is not just an upgrade to .NET 4.0 but a replacement (with enhancements).  Unless you actually look at the registry keys, you will not know whether .NET 4.5.x is installed on your machine.  Refer to the MSDN article for more details.


What does this mean for a developer?

Simple: you cannot have disparate installations on the developer’s machine and your deployment machines (system integration, QA, user acceptance, parallel production, production, DR, etc.).  Let’s understand why.

Let’s assume a developer has .NET 4.5.1 installed on his machine with VS 2013 as the IDE.  The system integration environment has also been upgraded, but the other environments have not.  Let’s say your code still targets .NET 4.0 and you trigger a build on your development machine for a production release.  Your build will use the new assemblies installed in the v4.0 folder (mentioned above).  When this code gets packaged (on your machine) and is deployed to a non-upgraded environment, it may give you runtime errors, as it does not find the .NET 4.5.x assemblies in the GAC.

So it’s necessary to understand the impact of this upgrade. 

Process Automation – Continuous Integration

Well, we understood the impact of the upgrade in the previous section.  The solution to the above problem lies in creating an isolated environment that resembles the non-upgraded environments: the build automation environment.  There are two ways to set this up without creating a DLL hell –

  1. Keep the build environment on .NET 4.0 so that you can trigger builds using the plain-vanilla .NET 4.0 framework instead of the .NET 4.5.x assemblies
  2. If you still upgrade this environment to .NET 4.5.x, you will need to alter the MSBuild targets to use

    C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0
    instead of
    %windir%\Microsoft.NET\Framework\v4.0

    This would ensure that the output of your build automation works for .NET 4.0 / 4.5.x; a sketch of such an invocation follows this list.
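For example, a build invocation could redirect framework assembly resolution to the reference assemblies (a sketch; FrameworkPathOverride is a standard MSBuild property, but verify the exact path on your build server):

MSBuild OurSolution.sln /p:Configuration=ReleaseNET40 /p:FrameworkPathOverride="C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0"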

Application level breaking changes

Not many, but there are a few breaking changes.  To validate this, you can compare the file size of System.Runtime.Serialization.dll in two folders.  The Reference Assemblies (in the Program Files folder) are the original framework assemblies, and the Framework folder (in the Windows directory) contains the replaced assemblies if .NET 4.5.x is installed.

C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\
and
C:\Windows\Microsoft.NET\Framework\v4.0.30319\
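A quick way to compare the two from code (a small sketch using only System.IO):

var refAsm = new System.IO.FileInfo(
    @"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0\System.Runtime.Serialization.dll");
var gacAsm = new System.IO.FileInfo(
    @"C:\Windows\Microsoft.NET\Framework\v4.0.30319\System.Runtime.Serialization.dll");

Console.WriteLine("Reference assembly: {0:N0} bytes", refAsm.Length);
Console.WriteLine("Framework assembly: {0:N0} bytes", gacAsm.Length);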

Why this radical change in the way .NET works?  Prior to .NET 4.0, reference assemblies were a direct copy of the GAC assemblies.  That meant that with every minor upgrade you could unknowingly use a newer assembly, and the program would then fail on an unpatched machine.  To resolve this, from .NET 4.0 onwards the Reference Assemblies (in the Program Files folder) act as redirecting assemblies and contain only metadata.  They do not contain IL code, as it has moved into the base .NET framework (mscorlib).

So, in a nutshell, .NET 4.5 does not have its own independent CLR.  If you have different .NET frameworks installed on different machines using the same CLR, you will have to test your application against these breaking changes.

Refer to the MSDN articles on .NET 4.5 breaking changes and .NET 4.5.x breaking changes.  One problem often faced is a serialization exception:

Common Language Runtime detected an invalid program.

The easiest technique to fix this issue without changing any code is to add the following block to the configuration file:

<configuration>
  <system.xml.serialization>
    <xmlSerializer useLegacySerializerGeneration="true"/>
  </system.xml.serialization>
</configuration>

 

What happens to the .NET version on IIS-hosted websites?


When installing the Web Server role (IIS), you need to explicitly select the versions of ASP.NET you want to support.

[Screenshot: ASP.NET version selection in the Web Server (IIS) role wizard]

By default, the wizard will create the following Application Pools.

[Screenshot: default Application Pools created by the wizard]

Note that the .NET Framework version listed for ASP.NET v4.5 is v4.0.

Hope this helps you take an informed decision about the upgrade and its impact.

Method, Delegate and Event Anti-Patterns in C#

October 28, 2013 | CSharp, Visual Studio

No enterprise application exists without methods, events and delegates, and every developer has written methods in his/her application.  While defining these methods/delegates/events, we follow standard definitions.  Beyond those definitions, there are best practices one should follow to ensure that a method does not leave dangling object references, that other methods are called in an appropriate way, that arguments are validated, and more.

This article outlines some anti-patterns around methods, delegates and events and, in effect, highlights the best practices to follow to get the best performance out of your application with a very low memory footprint.

The right disposal of objects

We have seen multiple demonstrations that implementing the IDisposable interface (in class BaseClass) and wrapping its object instance in a ‘using’ block is sufficient for a good clean-up process.  While this is true in most cases, this approach does not guarantee that derived classes (let’s say DerivedClass) will have the same clean-up behaviour as the base class.

To ensure that all derived classes take responsibility for cleaning up their resources, it is advisable to add a virtual method in BaseClass that is overridden in DerivedClass, where cleanup is done appropriately.  One such implementation would look like:

public class BaseClass : IDisposable
{
    protected virtual void Dispose(bool requiresDispose)
    {
        if (requiresDispose)
        {
            // dispose the objects
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    ~BaseClass()
    {
        Dispose(false);
    }
}

public class DerivedClass : BaseClass
{
    // some members here

    protected override void Dispose(bool requiresDispose)
    {
        // dispose derived class members, then let the base clean up
        base.Dispose(requiresDispose);
    }
}

This implementation ensures that the object is not stuck in the finalizer queue when it is wrapped in a ‘using’ block, and that members of both BaseClass and DerivedClass are freed from memory.

The return value of a method can cause a leak

While most of our focus is on freeing the resources used inside a method, the return value of the method also occupies memory.   If you are returning an object, the memory occupied (but never used) can be large.

Let’s see a piece of code that can leave unwanted objects in memory.

public void MethodWhoseReturnValueIsNotUsed(string input)
{
    if (!string.IsNullOrEmpty(input))
    {
        // the returned string is not used anywhere
        input.Replace(" ", "_");

        // another example: an object that is created but never used
        new MethodAntiPatterns();
    }
}

Most of the string methods like Replace, Trim (and its variants), Remove, IndexOf and the like return a ‘new’ string value instead of manipulating the ‘input’ string.  Even if the output of these methods is not used, the CLR still allocates the result in memory.  Another similar example is the creation of an object that is never used (the MethodAntiPatterns object in the example).

Virtual methods in constructor can cause issues

The heading speaks for itself.  When calling virtual methods from the constructor of ABaseClass, you cannot guarantee that ADerivedClass has been initialized: the derived override runs before the derived constructor body executes.

public partial class ABaseClass
{
    protected bool init = false;

    public ABaseClass()
    {
        Console.WriteLine(".ctor – base");
        DoWork();
    }

    protected virtual void DoWork()
    {
        Console.WriteLine("dowork – base >> " + init);
    }
}

public partial class ADerivedClass : ABaseClass
{
    public ADerivedClass()
    {
        Console.WriteLine(".ctor – derived");
        init = true;
    }

    protected override void DoWork()
    {
        Console.WriteLine("dowork – derived >> " + init);

        base.DoWork();
    }
}
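For instance, new ADerivedClass() prints the following, showing that the derived override runs while init is still false:

.ctor – base
dowork – derived >> False
dowork – base >> False
.ctor – derived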

 

Use SecurityCritical attribute for code that requires elevated privileges

Accessing critical code from a non-critical block is not a good practice.

Mark methods and delegates that require elevated privileges with the SecurityCritical attribute, and ensure that only code with the right (elevated) privileges can call those methods or delegates:

[SecurityCritical]
public delegate void CriticalDelegate();

public class DelegateAntiPattern
{
    public void Experiment()
    {
        CriticalDelegate critical = new CriticalDelegate(CriticalMethod);

        // should not point to a non-critical method, or vice-versa
        CriticalDelegate nonCritical = new CriticalDelegate(NonCriticalMethod);
    }

    // should not be called from a non-critical delegate
    [SecurityCritical]
    private void CriticalMethod() { }

    private void NonCriticalMethod() { }
}

 

Override GetHashCode when overriding the Equals method

When you override the Equals method to do object comparisons, you typically choose one or more (mandatory) fields to check whether 2 objects are the same.  So your Equals method would look like:

public class User
{
    public string Name { get; set; }
    public int Id { get; set; }

    // optional for comparison
    public string PhoneNumber { get; set; }

    public override bool Equals(object obj)
    {
        if (obj == null) return false;

        var input = obj as User;
        return input != null &&
            (input.Name == Name && input.Id == Id);
    }
}

 

Now, this approach checks whether all the mandatory field values are the same.  This looks fine in a demonstration, but when you are dealing with business entities this method becomes an anti-pattern.  The best approach for such comparisons is to rely on GetHashCode to find out whether the object references are the same:

public override bool Equals(object obj)
{
    if (obj == null) return false;

    var input = obj as User;
    return input == this;
}

public override int GetHashCode()
{
    unchecked
    {
        // 17 and 23 are primes used to combine the hashes;
        // the C# compiler uses a similar multiply-and-add
        // algorithm for anonymous types
        int hash = 17;
        hash = hash * 23 + Name.GetHashCode();
        hash = hash * 23 + Id.GetHashCode();
        return hash;
    }
}

You can use any hashing algorithm here to compute the hash of an object.  In this case, comparisons happen between the computed hashes of the objects (int values), which will be more accurate, faster and more scalable as you add new properties to the comparison.

Detach the events when not in use

Is it necessary to remove an event handler explicitly in C#?  Yes, if you want a lower memory footprint for your application.  Leaving events subscribed is an anti-pattern.

Let’s understand the reason with an example.

public class Publisher
{
    public event EventHandler Completed;

    public void Process()
    {
        // do something
        if (Completed != null)
        {
            Completed(this, EventArgs.Empty);
        }
    }
}

public class Subscriber
{
    public void Handler(object sender, EventArgs args) { }
}

Now we will attach the Completed event of Publisher to the Handler method of Subscriber to understand the clean-up.

Publisher pub = new Publisher();
Subscriber sub = new Subscriber();
pub.Completed += sub.Handler;

// this will invoke the event
pub.Process();

// frees up the event association & references
pub.Completed -= sub.Handler;

// will not invoke the event
pub.Process();

// makes the objects eligible for collection
pub = null; sub = null;

After the Process method has executed, the Handler method receives the execution flow and completes its processing.  However, the event is still wired up, and so are its references.  If you call Process again, the Handler method will be invoked again.  When we unsubscribe (-=) the Handler method, the event association and its references are freed, but the pub and sub objects are not freed yet.  When pub and sub are assigned null, they become eligible for collection by the GC.

If we do not unsubscribe (-=) and keep the rest of the code as-is, the publisher’s event keeps a live reference to the subscriber: as long as pub is reachable, sub cannot be collected, and the objects linger in memory.  This common anti-pattern is most prevalent in UI-based solutions where UI events are attached/hooked to code-behind / view-models / facades, and the long-lived UI element keeps every subscriber alive.
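A common way to make the detach deterministic is to pair subscribe and unsubscribe in IDisposable (a minimal sketch; the wrapper class is illustrative, not from the original example):

public class SubscriptionOwner : IDisposable
{
    private readonly Publisher _publisher;
    private readonly Subscriber _subscriber;

    public SubscriptionOwner(Publisher publisher, Subscriber subscriber)
    {
        _publisher = publisher;
        _subscriber = subscriber;
        _publisher.Completed += _subscriber.Handler;  // attach on construction
    }

    public void Dispose()
    {
        _publisher.Completed -= _subscriber.Handler;  // detach deterministically
    }
}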

Following these practices will definitely reduce your application’s footprint and make it faster.

Which one is better: JSON vs. XML serialization?

October 23, 2013 | CSharp, Visual Studio

One of the hot topics of discussion when building enterprise applications is whether to use JSON- or XML-based serialization for

  • data serialization and deserialization
  • data storage
  • data transfer

To illustrate these aspects, let’s write some code that can help establish the facts.  Our code involves creating an entity, User:

public class User
{
    public string Name { get; set; }
    public int Id { get; set; }
    public UserType Type { get; set; }
}

public enum UserType { Tech, Business, Support }

 

To test performance and the other criteria, we will use the standard XML serialization, and for JSON we will evaluate these parameters using 2 open-source frameworks: Newtonsoft.Json and ServiceStack.Text.
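MyXmlSerializer and Converter, used in the code below, are the author’s helpers and are not shown in the post; minimal implementations consistent with how they are called might look like:

public static class MyXmlSerializer
{
    public static string Serialize<T>(T instance, Encoding encoding)
    {
        var serializer = new System.Xml.Serialization.XmlSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            serializer.Serialize(stream, instance);
            return encoding.GetString(stream.ToArray());
        }
    }

    public static T Deserialize<T>(string xml, Encoding encoding)
    {
        var serializer = new System.Xml.Serialization.XmlSerializer(typeof(T));
        using (var stream = new MemoryStream(encoding.GetBytes(xml)))
        {
            return (T)serializer.Deserialize(stream);
        }
    }
}

public static class Converter
{
    public static byte[] ToByte(string value)
    {
        return Encoding.UTF8.GetBytes(value);
    }
}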

To create dummy data, our code looks like

 

private Random rand = new Random();
private char[] letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToCharArray();

private List<User> GetDummyUsers(int max)
{
    var users = new List<User>(max);
    for (int i = 0; i < max; i++)
    {
        users.Add(new User { Id = i, Name = GetRandomName(), Type = UserType.Business });
    }

    return users;
}

private string GetRandomName()
{
    int maxLength = rand.Next(1, 50);
    string name = string.Empty;
    for (int i = 0; i < maxLength; i++)
    {
        name += letters[rand.Next(26)];
    }
    return name;
}

private long Compress(byte[] data)
{
    using (var output = new MemoryStream())
    {
        using (var compressor = new GZipStream(output, CompressionMode.Compress, true))
        using (var buffer = new BufferedStream(compressor, data.Length))
        {
            for (int i = 0; i < data.Length; i++)
                buffer.WriteByte(data[i]);
        }
        return output.Length;
    }
}

 

The serialization logic to convert List<User> to a serialized string and gather the statistics is:

 

public void Experiment()
{
    DateTime dtStart = DateTime.Now;
    List<User> users = GetDummyUsers(20000); // change the number here
    Console.WriteLine("Data generation  \t\t took mSec: "
        + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    Console.WriteLine("----");

    // each serializer runs twice to observe warm-up effects;
    // the timer is reset before every run
    dtStart = DateTime.Now;
    var xml = MyXmlSerializer.Serialize<List<User>>(users, Encoding.UTF8);
    Console.WriteLine("Length (XML):      \t" + xml.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    xml = MyXmlSerializer.Serialize<List<User>>(users, Encoding.UTF8);
    Console.WriteLine("Length (XML):      \t" + xml.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    dtStart = DateTime.Now;
    var json = JsonConvert.SerializeObject(users);
    Console.WriteLine("Length (JSON.NET): \t" + json.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    json = JsonConvert.SerializeObject(users);
    Console.WriteLine("Length (JSON.NET): \t" + json.Length + " took mSec: "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    var serializer2 = new JsvSerializer<List<User>>();
    dtStart = DateTime.Now;
    var json2 = serializer2.SerializeToString(users);
    Console.WriteLine("Length (JSON/ST) : \t" + json2.Length + " took mSec: "
        + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    json2 = serializer2.SerializeToString(users);
    Console.WriteLine("Length (JSON/ST) : \t" + json2.Length + " took mSec: "
        + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    Console.WriteLine("----");

    var xmlBytes = Converter.ToByte(xml);
    Console.WriteLine("Bytes (XML):     \t" + xmlBytes.Length);

    var jsonBytes = Converter.ToByte(json);
    Console.WriteLine("Bytes (JSON):    \t" + jsonBytes.Length);

    var jsonBytes2 = Converter.ToByte(json2);
    Console.WriteLine("Bytes (JSON/ST): \t" + jsonBytes2.Length);

    Console.WriteLine("----");

    var compressedBytes = Compress(xmlBytes);
    Console.WriteLine("Compressed Bytes (XML):     \t" + compressedBytes);

    compressedBytes = Compress(jsonBytes);
    Console.WriteLine("Compressed Bytes (JSON):    \t" + compressedBytes);

    compressedBytes = Compress(jsonBytes2);
    Console.WriteLine("Compressed Bytes (JSON/ST): \t" + compressedBytes);

    Console.WriteLine("----");

    dtStart = DateTime.Now;
    MyXmlSerializer.Deserialize<List<User>>(xml, Encoding.UTF8);
    Console.WriteLine("Deserialized (XML): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    MyXmlSerializer.Deserialize<List<User>>(xml, Encoding.UTF8);
    Console.WriteLine("Deserialized (XML): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    dtStart = DateTime.Now;
    JsonConvert.DeserializeObject<List<User>>(json);
    Console.WriteLine("Deserialized (JSON): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    JsonConvert.DeserializeObject<List<User>>(json);
    Console.WriteLine("Deserialized (JSON): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);

    dtStart = DateTime.Now;
    serializer2.DeserializeFromString(json2);
    Console.WriteLine("Deserialized (JSON/ST): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
    dtStart = DateTime.Now;
    serializer2.DeserializeFromString(json2);
    Console.WriteLine("Deserialized (JSON/ST): \t took mSec "
            + DateTime.Now.Subtract(dtStart).TotalMilliseconds);
}

Statistics

On running this program on a quad-core processor with 6 GB RAM, the statistics look as below.

 

[Table: serialization and deserialization statistics for the 20K and 200K user data sets]

 

What do the statistics mean?

 

  • Serialization and Deserialization Performance

    For XML serialization, there is no decrease in serialization time on subsequent requests, even when reusing the same XmlSerializer object.  With JSON, we see that the frameworks reduce serialization time drastically on subsequent calls.  JSON serialization appears to give us a gain of 50-97% in serialization time.

    When dealing with deserialization, XML deserialization performs better consistently with both data sets (20K and 200K).  JSON deserialization seems to take more time, even when averaged.

    Every application requires both serialization and deserialization.  Considering the performance statistics, ServiceStack.Text outperforms the other two libraries.

    Winner: JSON with ServiceStack.Text, taking 91% of the time taken by XML serialization + deserialization

  • Data Storage

    Looking at the data storage aspect, an XML string definitely requires more storage space.  So if you are storing the serialized string as-is, JSON is the clear choice.

    However, when you apply GZip compression to the serialized strings generated by the XML / JSON frameworks, there is no major difference in storage size.  JSON still saves some bytes for you!  This is one reason some NoSQL databases use JSON-based storage instead of XML-based storage.  However, for quicker retrieval you need to apply some indexing mechanism too.

    Winner: JSON without compression; With compression, minor gain by using JSON

  • Data Transfer

    Data transfer comes into the picture when you are transferring your objects over EMS / MQ / web services.  Keeping other parameters such as network latency, availability, bandwidth and throughput constant in both cases, the amount of data transferred becomes a function of the data length and the protocol used over the network.

    For EMS / MQ – the data length, as in the statistics, is smaller for JSON when sent as a string and almost the same when sent as compressed bytes.

    For WebServices / WCF – data transfer depends on the protocol used.  If you are using SOAP-based services, apart from your serialized XML you will also have SOAP headers in the payload.  But if you are using REST, you can return plain XML / JSON, and in that case a JSON string will have a smaller payload than an XML string.

    Winner: Depends on the protocol of transfer and compression technique used

 

Note: The performance may vary slightly with other C# libraries, but the relative differences should hold.

 

Hope this article helps you choose the right protocol and technique for your application.  If you have any questions, I’ll be happy to help!

Complete guide to dynamic keyword in C#

April 10, 2013 | CSharp, Visual Studio

The dynamic keyword, a new addition to the Microsoft .NET C# 4.0 language, changes the type binding of a variable from compile time to runtime.  This also means that apart from the CLR interpreting the variable type dynamically at runtime, the compiler has to ignore type checks on the variable during the compilation process.

This is a paradigm shift from what has been followed since the days of Pascal, C and C++, and this article focuses on understanding how dynamic works internally and the best practices to follow when using the dynamic keyword.

To understand this, let’s consider a code snippet and analyse it.

private static void Main(string[] args)
{
    Program program = new Program();

    /* Simple examples –
     * will work on any data type
     * on which + can be applied
     */
    Console.WriteLine(program.Add(2, 3));
    Console.WriteLine(program.Add(2.0d, 3.0d));
    Console.WriteLine(program.Add("Punit", "G"));

    /* Will work on any data type
     * on which == can be applied */
    Console.WriteLine(program.Equals(2, 3));
    Console.WriteLine(program.Equals("Punit", "G"));
    //Console.WriteLine(program.Add(program, program));
}

// the methods under test: both take dynamic parameters and return a dynamic value
private dynamic Add(dynamic op1, dynamic op2)
{
    return op1 + op2;
}

// 'new' hides object's static Equals(object, object)
private new dynamic Equals(dynamic op1, dynamic op2)
{
    return op1 == op2;
}

In the above snippet, the Add method takes in 2 dynamic parameters and returns a dynamic value.  This means you can pass any data type that supports the + operation and expect it to work as normal.  When you pass in a data type that does not support the + operation (like the commented-out last line), the program will throw a RuntimeBinderException.

The output of the above program, as expected, is:

5

5

PunitG

False

False

<RuntimeBinderException>

At compile time, the compiler converts the dynamic keyword into classes from the Microsoft.CSharp.RuntimeBinder and System.Runtime.CompilerServices namespaces.  The task of these classes is to invoke the DLR at runtime and enable the following conversion:

 

[Figure: a dynamic member access is rewritten into a CallSite-based invocation that is bound at runtime]

 

So the runtime essentially creates a CallSite object.  This CallSite is a runtime-binding handler that creates a dynamic object and allows access to its properties and methods; this access is done using reflection.  The CallSite object is a part of the DLR engine, and the DLR interprets these calls as ‘dynamic invocations’.  If the DLR does not know the type of the object, it takes the effort to find out.  After discovering the type of the object, it checks whether it is a special object (like IronRuby, IronPython, DCOM, COM, etc.) or a plain C# object.

The CLR takes these DLR objects and passes them through:

  • Metadata analyser – detects the type of the objects
  • Semantic analyser – checks whether the intended operations (method calls, property accesses) can be performed on the object.  If any mismatch is found in these 2 steps, a runtime exception is thrown.
  • Emitter – builds an expression tree and emits it.  This expression tree is sent back to the DLR to build an object-cache dictionary.  The DLR also calls Compile on the expression tree to emit IL.

On a second call, the object-cache dictionary is used to skip re-creating the expression tree for the same object, and the IL is emitted directly.   Once the IL is emitted, execution of a dynamic type is the same as for all other types.

 

Creating new dynamic objects

 

The reason languages such as IronRuby and IronPython exist today is that .NET allows you to create your own dynamic types.  You can write your own interpreters in C# or VB.NET and let the community use them as they wish.  Let’s see this in an example where we create a new SampleDynamicObject and use its properties and methods.

dynamic sample = new SampleDynamicObject();

// TryGetMember will be invoked for sample.Name.
// Since it returns true, no exception is thrown
// even though there is no real property called Name
Console.WriteLine(sample.Name);

// TryInvokeMember will be invoked for PrintDetails().
// The base implementation returns false for an unhandled
// member, so this call fails with a RuntimeBinderException
sample.PrintDetails();

public class SampleDynamicObject : System.Dynamic.DynamicObject
{
    public override bool TryInvokeMember(System.Dynamic.InvokeMemberBinder binder,
            object[] args, out object result)
    {
        return base.TryInvokeMember(binder, args, out result);
    }

    public override bool TryGetMember(System.Dynamic.GetMemberBinder binder,
            out object result)
    {
        // property := Name
        if (binder.Name == "Name")
        {
            result = "Punit";
            return true;
        }
        else
        {
            result = 0;
            return false;
        }
    }
}

TryInvokeMember is called when any method is invoked on the dynamic object, while TryGetMember is called when a property getter of the dynamic object is accessed.  Similarly, there are other overridable methods that can be implemented.

 

dynamic vs. object

Another aspect of understanding dynamic is to understand how dynamic types differ from System.Object.

One variable is typed as object by the compiler and all instance members will be verified as valid by the compiler. The other variable is typed as dynamic and all instance members will be ignored by the compiler and called by the DLR at execution time.

  • A dynamic type can be considered a special static type that the compiler ignores during compilation, which is not the case with System.Object
  • Any operation on object requires type-casting, which adds a hit to performance.  Similarly, declaring a variable dynamic also involves some extra logic for interpretation; this extra logic is also referred to as duck typing
  • An object can be converted to a dynamic type implicitly, and an implicit conversion can be dynamically applied to an expression of type dynamic, as the short example below shows
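For instance (a minimal sketch; the commented lines would either fail to compile or fail at runtime):

object o = "hello";
dynamic d = "hello";

// compile-time error: object has no Length member; a cast is required
// int a = o.Length;
int a = ((string)o).Length;

// compiles fine; the member is resolved by the DLR at execution time
int b = d.Length;

// compiles fine, but throws RuntimeBinderException at runtime
// int c = d.NoSuchProperty;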

 

Limitations of dynamic types

 

The dynamic keyword restricts a lot of functionality because the type of the object is not pre-defined.  Some of the known limitations are

  • Inability to use LINQ, extension methods and lambda expressions directly on dynamic types
  • Inability to check whether type conversions are done correctly
  • Polymorphism cannot be fully supported
  • C# language constructs such as the using block cannot be applied to dynamic types
  • Design principles like Inversion of Control and Dependency Injection are difficult to implement

 

Where best to use dynamic types

 

Some of the areas where, I believe, dynamic types can be used are

  • To leverage runtime type of generic parameters
  • To receive anonymous types
  • To create custom domain objects for data-driven objects
  • To create a cross-language translator (like IronRuby, IronPython, etc) that leverages capabilities of .NET CLR

 

I hope this helps you understand the dynamic keyword and how it differs from the System.Object type.
