5 steps to targeting multiple .NET frameworks

June 21, 2015 | CSharp, Visual Studio

When designing an API or library, we aim for maximum coverage of the available .NET framework versions so that the largest possible number of clients can adopt our APIs.  The key challenge in such scenarios is keeping the code clean while efficiently managing multiple versions of code, NuGet packages and builds.

This article will outline a quick and easy way to manage a single code base that targets multiple .NET framework versions.  I've used the same approach in KonfDB.

Step 1 – Visual Studio Project Configuration


First, we need to use Visual Studio to create multiple build configurations.  I prefer 2 configurations per .NET version, like

  • .NET 4.0 — DebugNET40, ReleaseNET40
  • .NET 4.5 — DebugNET45 and ReleaseNET45

When adding these configurations, clone them from Debug and Release and make sure you have selected 'Create New Project Configurations'.

This will modify your solution (.sln) file and Project (.csproj) files.

If certain projects do not support both versions, you can uncheck them before clicking the Close button.   This is usually done when your solution has 2 parts – an API and a Server – and you want the API to target multiple frameworks while the Server code runs on a particular version of .NET.

Step 2 – Framework Targeting in Projects


There are 2 types of changes required in the Project (.csproj) files to manage multiple .NET versions.

Every project has a default configuration.  This is usually the lowest or base configuration, defined by XML properties like

<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>


Change this to

<Configuration Condition=" '$(Configuration)' == '' ">DebugNET40</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>


Make sure that all the projects in the solution have the same default Configuration and TargetFrameworkVersion.

When we added multiple configurations to our solution, one PropertyGroup per configuration was added to our Project (.csproj) files.  It appears something like,

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'DebugNET40|AnyCPU' ">


We need to add or modify 3 lines in each of these PropertyGroup tags to change the OutputPath, TargetFrameworkVersion and DefineConstants.

For .NET 4.0:



For .NET 4.5:



We will use these settings later in the article.
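A minimal sketch of these three properties for the two debug configurations (the output folder names and the NET40/NET45 constants follow the text's convention but are assumptions):

```xml
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'DebugNET40|AnyCPU' ">
  <OutputPath>bin\DebugNET40\</OutputPath>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <DefineConstants>DEBUG;TRACE;NET40</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'DebugNET45|AnyCPU' ">
  <OutputPath>bin\DebugNET45\</OutputPath>
  <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
  <DefineConstants>DEBUG;TRACE;NET45</DefineConstants>
</PropertyGroup>
```

The ReleaseNET40 and ReleaseNET45 groups get the same treatment with their own output folders.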

Step 3 – References Targeting in Projects


Our dependent libraries may have different builds for different versions of .NET. A classic example is the Newtonsoft.Json library, which ships different assemblies for .NET 4.0 and .NET 4.5. So we may require framework-dependent references – be it standard references or NuGet references.

When we are using standard references, we can organize our libraries in framework-specific folders and alter the project configuration to look like,

<Reference Include="Some.Assembly" />
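One way to sketch such a framework-specific reference, assuming the assemblies are kept in libs\net40 and libs\net45 folders (the folder layout is an assumption):

```xml
<Reference Include="Some.Assembly">
  <!-- pick the assembly that matches the framework being built -->
  <HintPath Condition=" '$(TargetFrameworkVersion)' == 'v4.0' ">..\libs\net40\Some.Assembly.dll</HintPath>
  <HintPath Condition=" '$(TargetFrameworkVersion)' == 'v4.5' ">..\libs\net45\Some.Assembly.dll</HintPath>
</Reference>
```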


To reference NuGet packages, we can add conditions to the references as shown below

<Reference Include="Newtonsoft.Json, Version=, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL"
           Condition="'$(TargetFrameworkVersion)' == 'v4.5'" />
<Reference Include="Newtonsoft.Json, Version=, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL"
           Condition="'$(TargetFrameworkVersion)' == 'v4.0'" />
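Each conditional reference then points at the matching package folder through its HintPath; a sketch (the package path under `packages\` is an assumption):

```xml
<Reference Include="Newtonsoft.Json" Condition="'$(TargetFrameworkVersion)' == 'v4.5'">
  <HintPath>..\packages\Newtonsoft.Json\lib\net45\Newtonsoft.Json.dll</HintPath>
</Reference>
<Reference Include="Newtonsoft.Json" Condition="'$(TargetFrameworkVersion)' == 'v4.0'">
  <HintPath>..\packages\Newtonsoft.Json\lib\net40\Newtonsoft.Json.dll</HintPath>
</Reference>
```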


When we now do a batch build in Visual Studio, the solution should compile without errors.

Step 4 – Managing Clean Code with multiple frameworks


There are 2 ways to manage our code with different versions of .NET.

Bridging the gap of .NET 4.5.x in .NET 4.0


Let’s assume we are creating an archival process where we want to zip the log files and delete them after zipping. If we build this functionality on .NET 4.5, we can use the ZipArchive class (in System.IO.Compression), but there is no such class in .NET 4.0. In such cases, we should go for interface-driven programming and define 2 implementations – one for .NET 4.0 and one for .NET 4.5.

These 2 implementations cannot co-exist in the same build as they would cause compilation errors. To avoid this, we edit the Project (.csproj) file to include each file conditionally:

<Compile Include="LogFileMaintenance40.cs" Condition=" '$(TargetFrameworkVersion)' == 'v4.0' " />
<Compile Include="LogFileMaintenance45.cs" Condition=" '$(TargetFrameworkVersion)' == 'v4.5' " />


Both files can define the same class name because, at any given time, only one of them is compiled.
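For instance, the .NET 4.5 file can use the built-in ZipFile API, while the .NET 4.0 twin implements the same interface with a third-party zip library. A sketch (the interface and method names are assumptions, not from the original post):

```csharp
// LogFileMaintenance45.cs -- included only when TargetFrameworkVersion is v4.5
using System.IO;
using System.IO.Compression;   // needs a reference to System.IO.Compression.FileSystem

// Shared contract; LogFileMaintenance40.cs would implement the same
// interface using a third-party zip library for .NET 4.0
public interface ILogFileMaintenance
{
    void ArchiveLogs(string logDirectory, string zipPath);
}

public class LogFileMaintenance : ILogFileMaintenance
{
    public void ArchiveLogs(string logDirectory, string zipPath)
    {
        ZipFile.CreateFromDirectory(logDirectory, zipPath);   // built into .NET 4.5
        foreach (var file in Directory.GetFiles(logDirectory, "*.log"))
            File.Delete(file);                                // remove logs once zipped
    }
}
```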

The unclean way


The unclean way is where we use the DefineConstants to differentiate between the framework versions. Earlier in the project configuration, we changed the DefineConstants to have NET40 and NET45. We can use these DefineConstants as pre-processor directives to include framework specific code like,

#if NET40
// .NET 4.0-specific code
#elif NET45
// .NET 4.5-specific code
#endif


This methodology should be adopted only when the differences between the frameworks are minor, as such conditional code is very difficult to debug.

Step 5 – Build without Visual Studio


While Visual Studio allows us to trigger a build for any configuration by selecting it from the dropdown, we can also create a batch file to build our solution for different .NET frameworks. This batch file can be used with any build system like TFS, Jenkins, TeamCity, etc.

REM Build Solution
set PATH_SOURCE_SLN="%cd%\OurSolution.sln"
if [%1]==[] (set CONFIGURATION=ReleaseNET40) else (set CONFIGURATION=%1)
msbuild %PATH_SOURCE_SLN% /t:Rebuild /p:Configuration=%CONFIGURATION%

This 5-step process allows us to develop our solution targeting multiple .NET frameworks and to narrow the build down to a particular .NET framework when required.




Application Design: Going Stateless on Azure

May 20, 2015 | Azure

I am glad to say that I authored this exclusive article for the Microsoft Press Blog and MVP Award Program Blog; it was first published on 4th May, 2015.
This article is available on my website for archival purposes.


The components of a cloud application are distributed and deployed across multiple cloud resources (virtual machines) to benefit from the elastic, demand-driven environment. One of the most important factors in this elastic cloud is the ability to add or remove application components and resources as and when required to fulfil scalability needs.

However, when components are removed, any internal state they hold may be lost.

That’s when an application needs to move its internal state from an in-memory store to a persistent data store, so that scalability and reliability are assured both when components are scaled down and when failures occur.  In this article, we will understand ‘being stateless’ and will explore strategies like database-driven state management and cache-driven state management.


Being stateless


Statelessness refers to the fact that no data is preserved in the application memory itself between multiple runs of the strategy (i.e. action). When same strategy is executed multiple times, no data from a run of strategy is carried over to another. Statelessness allows our system to execute the first run of the strategy on a resource (say X) in cloud, the second one on another available resource (say Y, or even on X) in cloud and so on.

This doesn’t mean that applications should not have any state. It merely means that the actions should be designed to be stateless and should be provided with the necessary context to build up the state.

If our application has a series of such actions (say A1, A2, A3…) to be performed, each action (say A1) receives context information (say C1), executes the action and builds up the context (say C2) for next action (say A2). However, Action A2 should not necessarily depend on Action A1 and should be able to be executed independently using context C2 available to it.
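The chain of actions above can be sketched as a small interface where each action consumes a context and produces the next one (a sketch; all names are illustrative, not from the article):

```csharp
using System.Collections.Generic;

// Context is just a bag of values handed from one action to the next
public class Context : Dictionary<string, object> { }

public interface IAction
{
    // Receives the context it needs (C1) and builds the context (C2) for the
    // next action; the action itself keeps no state between runs.
    Context Execute(Context input);
}

public class ValidateOrder : IAction
{
    public Context Execute(Context input)
    {
        var next = new Context();
        next["OrderId"] = input["OrderId"];   // carry forward only what A2 needs
        next["Validated"] = true;
        return next;
    }
}
```

Because each action is self-contained, any available resource in the cloud can pick up the next action given only its context.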

How can we make our application stateless?


The conventional approach to stateless applications is to push state out of the web/service application tier to somewhere else – either configuration or a persistent store. As shown in the diagram below, the user request is routed through the App Tier, which can refer to configuration to decide the persistent store (like a database) in which to keep the state. Finally, an application utility service (preferably isolated from the application tier) can perform state management.



The App Utility Service (in the above diagram) takes the onus of state management. It requires the execution context from the App Tier so that it can trigger either a data-driven or an event-driven state machine. An example state machine for a bug management system would have 4 states as shown below


To achieve this statelessness in the application, there are several strategies to push the application state out of the application tier. Let’s consider a few of them.


Database-driven State Management


Taking the same bug management system as an example, we can derive the state using simple data structures stored in database tables.

Current State → Next State

  • Bug Opened → Fix Needed, Not A Bug
  • Fix Needed → Bug Fixed, Bug Closed
  • Bug Fixed → Bug Closed
  • Bug Closed → Fix Needed



The above structure only defines the finite states that a bug resolution can visit. Each action needs to be context-aware (i.e. minimal bug information and sometimes the state from which the action was invoked) so that it can independently process the bug and identify the next state (especially when multiple end-states are possible).
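A minimal sketch of such a data-driven transition table (the transition set mirrors the bug workflow above; the shape and names are illustrative):

```csharp
using System;
using System.Collections.Generic;

public static class BugStateMachine
{
    // Each current state maps to the finite set of states it may move to.
    // In a real system this table would be loaded from the database.
    static readonly Dictionary<string, string[]> Transitions =
        new Dictionary<string, string[]>
        {
            { "Bug Opened", new[] { "Fix Needed", "Not A Bug" } },
            { "Fix Needed", new[] { "Bug Fixed", "Bug Closed" } },
            { "Bug Fixed",  new[] { "Bug Closed" } },
            { "Bug Closed", new[] { "Fix Needed" } }   // reopened
        };

    public static bool CanMove(string current, string next)
    {
        string[] allowed;
        return Transitions.TryGetValue(current, out allowed)
            && Array.IndexOf(allowed, next) >= 0;
    }
}
```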

When we look at database-driven state management on Azure, we can leverage one of these out-of-the-box solutions

  • Azure SQL Database: the best choice when we want to work with relational, structured data using relations, indexes, constraints, etc. It is a complete MS-SQL database suite hosted on Azure.

  • Azure Storage Tables: works great when we want to work with structured data without relationships, possibly with larger volumes. Better performance at lower cost is often observed with Storage Tables, especially for data without relationships. Further reading on this topic – SQL Azure and Microsoft Azure Table Storage by Joseph Fultz
  • DocumentDB: DocumentDB, a NoSQL database, pitches itself as a solution for storing unstructured (schema-free) data with rich query capabilities at blazing speeds. Unlike other document-based NoSQL databases, it allows creation of stored procedures and querying with SQL statements.

Depending on our tech stack, size of the state and the expected number of state retrievals, we can choose one of the above solutions.

While moving state management to the database works for most scenarios, there are times when these database read-writes may slow down the performance of our application. Considering that state is transient data, most of which need not persist across two user sessions, there is a need for a cache system that serves state objects at low latency.


Cache driven state management


Persisting state data in a cache store is also an excellent option available to developers.  Web developers have been storing state data (like user preferences, shopping carts, etc.) in cache stores ever since ASP.NET was introduced.  By default, ASP.NET stores state in the memory of the hosting application pool.  In-memory state storage is unreliable for the following reasons:

  • The frequency at which the ASP.NET worker process recycles is beyond the application’s control, and a recycle can wipe out the in-memory cache

  • With a load balancer in the cloud, there isn’t any guarantee that the host that processed the first request will also receive the second one, so the in-memory information on multiple servers may go out of sync

The typical in-memory state management is referred to as ‘in-role’ cache when the application is hosted on the Azure platform.

Other alternatives to in-memory state management are out-of-proc management where state is managed either by a separate service or in SQL server – something that we discussed in the last section.  This mechanism assures resiliency at the cost of performance.  For every request to be processed, there will be additional network calls to retrieve state information before the request is processed, and another network call to store the new state.

The need of the hour is to have a high-performance, in-memory or distributed caching service that can leverage Azure infrastructure to act as a low-latency state store – like, Azure Redis Cache.

Based on the tenancy of the application, we can have a single node or multiple nodes (primary/secondary) of Redis Cache to store data types such as lists, hashed sets, sorted sets and bitmaps.

Azure Redis Cache supports master-slave replication with very fast non-blocking first synchronization and auto-reconnection on net split. So when we choose multiple nodes for Redis cache management, we ensure that our application state is not managed on a single server; it is replicated to the slave nodes in real time. It also promises to bring up a slave node automatically when the master node goes offline.


Fault tolerance with State Management Strategies

With both database-driven state management and cache-driven state management, we also need to handle temporary service interruptions – possibly due to network connections, layers of load balancers in the cloud, or some backbone service that these solutions use. To give a seamless experience to our end users, our application design should handle these transient failures gracefully.

Handling database transient errors

Using the Transient Fault Handling Application Block with plain-vanilla ADO.NET, we can define a policy that retries execution of a database command, with a wait period between tries, to provide a reliable connection to the database. Or, if our application uses Entity Framework 6, we can include SqlAzureExecutionStrategy, an execution strategy that configures the policy to retry 3 times with an exponential wait between tries.
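In EF6 this can be enabled through a code-based configuration class; a sketch (the retry count and maximum delay are illustrative):

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 discovers DbConfiguration subclasses in the same assembly as the DbContext
public class AppDbConfiguration : DbConfiguration
{
    public AppDbConfiguration()
    {
        // Retry up to 3 times with an exponential back-off capped at 5 seconds
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(3, TimeSpan.FromSeconds(5)));
    }
}
```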

Every retry consumes computation power and slows down the application. So we should define a policy – a circuit breaker – that stops retrying when the service is clearly unavailable rather than flooding it with failed requests. There is no one-size-fits-all solution to breaking the retries.

There are 2 ways to implement a circuit breaker for state management –

  • Fallback or fail silent – If there is a fallback mechanism to complete the requested functionality without state management, the application should attempt it. For example, when the database is not available, the application can fall back on a cache object. If no fallback is available, our application can fail silent (i.e. return a void state for the request).
  • Fail fast – Return an error to the user immediately, with a friendly response to try later, to avoid flooding the retry service.

Handling cache transient errors

Azure Redis Cache internally uses a ConnectionMultiplexer that automatically reconnects to Redis should there be a disconnection or an Internet glitch. However, StackExchange.Redis does not retry the get and set commands. To overcome this limitation, we can use a library such as Polly that provides policies like Retry, Retry Forever, Wait and Retry, and Circuit Breaker in a fluent manner.
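A sketch of wrapping a StackExchange.Redis read in a Polly wait-and-retry policy (the exception choices, retry counts and delays are illustrative):

```csharp
using System;
using Polly;
using StackExchange.Redis;

public class ResilientCache
{
    readonly IDatabase _cache;

    // Retry transient Redis failures 3 times with a growing delay
    readonly Policy _retry = Policy
        .Handle<RedisConnectionException>()
        .Or<TimeoutException>()
        .WaitAndRetry(3, attempt => TimeSpan.FromMilliseconds(200 * attempt));

    public ResilientCache(IDatabase cache)
    {
        _cache = cache;
    }

    public string Get(string key)
    {
        return _retry.Execute(() => (string)_cache.StringGet(key));
    }
}
```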

The take-away!

The key take-away is to design applications considering that cloud infrastructure is elastic, and that our applications should leverage its benefits without compromising stability or user experience. It is hence of utmost importance to think about application information storage, its access mechanisms, exception handling and dynamic demand.

First published on 4th May, 2015 on Microsoft Press Blog and MVP Award Program Blog

DevOps: Continuous Delivery using Visual Studio and Azure (with ASP.NET MVC)

May 17, 2015 | Sessions

Interested in Cloud application development? Join Microsoft MVPs for upcoming live and recorded webcasts featuring the most current hot topics around developing for Microsoft Cloud technologies. You don’t need experience with the Microsoft platform—this edition of the MVP Community Camp offers introductory seminars.

Book the dates: 25-29th May, 2015

Along with other South-East Asia MVPs, I will be presenting a 30-minute session on

Continuous Delivery using Visual Studio and Azure (with ASP.NET MVC)

The session will focus on configuring VSO, developing applications using VS2015, running continuous integration and auto-deploying the code to the Azure platform. We will use ASP.NET MVC for this session, and the approach can be extended to any application using C#, Java, NodeJS or even PHP.


So make sure you tune-in to the session. Don’t forget to register for this FREE webcast on http://aka.ms/mvpcomcamp4th

5 steps to create Ubuntu Hyper-V Virtual Machine

May 3, 2015 | ASP.NET vNext, Azure

For quite some time now, I’ve been trying .NET 2015 on Azure Virtual Machines – Windows Server and Ubuntu and have been trying my hands at Shell Scripts. I’ve also been trying IoT using Linux on Raspberry Pi, Arduino and Intel Galileo Gen 2 boards.

To avoid running out of Azure credits, this time I thought of creating a Hyper-V based virtual machine of Ubuntu on my laptop that could run in parallel with Windows. This article will outline 5 basic steps to create an Ubuntu VM on your laptop that connects to the Internet. Once I have set up Ubuntu, I can use this VM to explore more of ASP.NET vNext.

Step 1: Enable Hyper-V on your Windows 8.1 / 10 laptop


Ensure that hardware virtualization support is turned on in the BIOS settings, then save the settings and reboot the machine.

At the Start Screen, type ‘turn windows features on or off’ and select that item. Select and enable Hyper-V.

If Hyper-V was not previously enabled, reboot the machine once more to apply the change.


Step 2: Create a Virtual Switch for your Wireless Network


In Hyper-V Manager, select ‘Virtual Switch Manager’ in the Action pane. Ensure that you have at least one Virtual Switch that enables ‘External’ connectivity

Step 3: Download Ubuntu ISO image and Create New VHDX


Download latest image of Ubuntu ISO image – Server or Desktop from http://www.ubuntu.com/download and store it in local disk.

Open Hyper-V Manager, and select “New > Virtual Machine”. In the wizard, provide a friendly name like “Ubuntu VM” and select “Generation 2”. Assign at least 512 MB of memory and check the box “Use Dynamic Memory for Virtual Machine.”

In Configure Networking step, select the same Virtual Switch that has external network connectivity (configured in step 2)

In Connect Virtual Hard Disk, ensure that you have allocated at least 10GB of disk space. In the Installation Options, select the option “Install the operating system from bootable image file” and select the ISO file downloaded from Ubuntu.com and click Finish.

Step 4: Disable Secure Boot


In Hyper-V Manager, select the “Ubuntu VM” and click on Settings in the Action pane and uncheck ‘Enable Secure Boot’

Step 5: Start Ubuntu VM


In Hyper V Manager, right click on “Ubuntu VM” and click on Start and then on Connect. This will start Ubuntu on Hyper V.

Select Install Ubuntu and press ENTER and wait for some time.

Once this wizard completes, you will have a working version of Ubuntu on your machine, running in parallel with Windows 8.1 / 10

Getting Started with IaaS and Open Source on Azure

April 24, 2015 | Azure

As developers, we often spend time with our favourite developer tools, design patterns and deployment practices, and we also brag about DevOps.  When it comes to developing for the cloud, knowing development practices isn’t sufficient.  For green-field projects, we can definitely adopt the PaaS model and leverage the best of the cloud world. However, when we want to leverage the cloud for existing applications (with little or negligible code change), knowledge of IaaS is essential.

Three fundamental courses on MVA are key to understanding and exploring IaaS

  • Fundamentals of IaaS
    As the name suggests, it takes a dive into managing servers on Azure and some of the management practices

These courses provide excellent insight into how infrastructure can best be managed on Azure!

Global Azure Bootcamp 2015 Singapore Chapter

March 9, 2015 | Sessions

Azure Bootcamp is back again in Singapore on 25th April. This time with special tracks for Developers and IT-Pros! One full day of deep dive sessions on Azure for Developers and IT Pro’s delivered by the experts to get you started on complex topics like Media Streaming, Mobile Services, ALM, PowerShell and IoT

Developer Track:
Explore and learn IoT, Premium Media Services, Azure Websites, Storage, ALM and Mobile Services from experts

IT Pro/Data Track:
Explore and learn PowerShell Desired State Configuration, ExpressRoute, PowerBI and Azure Machine Learning from experts

Special Sessions:
Go beyond Azure and explore Office 365 and Skype4Business, or have a one-on-one session with Microsoft MVPs about anything you want to!

Visit http://globalazurebootcampsg.azurewebsites.net/ today to know more about the event.

What am I doing for/at Azure Bootcamp?


As one of the co-organizers, I designed and developed the website http://globalazurebootcampsg.azurewebsites.net/ and so partly helped set up the digital footprint of the event.

I also helped set up websites for 12 cities/countries.

I will also be presenting on following topic

Connected Sensor, Data and Analytics – IoT and Azure with C#
25 April 2015 – 10:00am – 11:00am
Microsoft Singapore
One Marina Boulevard
Singapore 018989

The session will include code demos, with and without .NET.

I’ll also be available, in all probabilities, in the “Ask the Expert” sessions. So we can connect and have a chat over a cup of coffee at Global Azure Bootcamp Singapore on 25th April!

You can RSVP for session on Meetup:

Global Azure Bootcamp in Singapore

Saturday, Apr 25, 2015, 8:30 AM

Microsoft Singapore, One Marina Boulevard
Level 22, Meeting Room Singapore, SG


Microphone detection in Arduino / Galileo (IoT) using VC++

February 25, 2015 | Intel Galileo, IoT

After setting up Intel Galileo in our last post, let’s get going with the first sensor: the microphone. I had to refresh some of the basics I learnt during my bachelor studies (yes, I did my undergraduate engineering in Automation, and I’ve played with different microprocessors, controllers and sensors). So this post is about voice detection using a microphone sensor, pulsing an LED when the voice crosses a threshold.

Basics first, the wiring


You need a Galileo board and an Arduino compatible shield that can help you wire your sensors in a clean way. So with the shield, your board will look like

Now you need 2 different Grove sensors for this. Ideally, you can use sensors of any brand with any IoT device. All you need to remember is that every sensor will have at least the first 2 of these pins:

  • Voltage – Often abbreviated as V or VCC
  • Ground – Often abbreviated as GND
  • Data Pins – Often abbreviated as Dx (where x is a number)
  • Not connected Pins – Often abbreviated as NC

A point to remember is that you always have to connect V/VCC with another V/VCC and GND with another GND on any board. If you connect otherwise, your circuit will not be complete (and current will not flow).

When you are using an “Analog” sensor that will provide you some data, you will have a pin that says OUT. This OUT pin will have a voltage signal that will represent the signal captured by your sensor. This may not make perfect sense at first go. So let us go a bit deeper. There are 2 types of sensors – Analog ones that provide signals back in Voltage form and Digital ones that provide signals in bit/byte form. A weighing scale uses a sensor that can be analog or digital.

Any signal measured in analog format will require some calibration i.e. a conversion mechanism to digital or the other way.

Microphone Sensor and LED kit


A microphone sensor has 4 pins – VCC, GND, NC and OUT. You will get the voltage as sensor signal in the OUT pin

A LED sensor kit has 4 pins as well – VCC, GND, NC, SIG. You can set 5V on the SIG pin to light up the LED and can set 0V to SIG pin to light it off

So essentially what we are planning to do is feed the OUT signal of the microphone into the SIG pin of the LED kit. Ideally, you do not need a powerful processor like Galileo for such trivial work; you can do this with a few electronics fundamentals. But considering that you want to build something more sophisticated and this is the first step, let’s go through the rest of the tutorial.

Setting up the sensor and the kit


I’ve setup Microphone sensor on A0 (as INPUT) and LED sensor kit on D3 (as OUTPUT) of the shield. You can use any other ports of your choice. Next is opening up VS 2013 and creating a new project of type Visual C++ > Windows for IoT

And in the main.cpp, you can paste the below code

#include "stdafx.h"
#include "arduino.h"

#define MICROPHONE A0
#define LED D3
#define THRESHOLD_VALUE 450

void pins_init()
{
	pinMode(MICROPHONE, INPUT);
	pinMode(LED, OUTPUT);
}

void turnOnLED()
{
	digitalWrite(LED, HIGH);
}

void turnOffLED()
{
	digitalWrite(LED, LOW);
}

int _tmain(int argc, _TCHAR* argv[])
{
	return RunArduinoSketch();
}

void setup()
{
	pins_init();
}

void loop()
{
	int sensorValue = analogRead(MICROPHONE);

	if (sensorValue > THRESHOLD_VALUE)
	{
		Log("OK, got something worth listening\n");
		turnOnLED();
		delay(2000); // keep the LED on for 2 seconds
		turnOffLED();
	}
}


Understanding the Code


THRESHOLD_VALUE in the code above is a digital value for the sound threshold. A microphone captures an analog signal (0-5V), which is provided to your Galileo in the form of a digital value (0-1024). This means 0V maps to 0 in digital and 5V maps to 1024. To eliminate environmental sounds, I prefer the threshold to be at least 33%, i.e. 2V. So the digital value of 450 converts to 2.19V (= 450 * 5 / 1024). At my place, I found that environmental sounds were contributing a value of around 291 (i.e. 1.42V).

The next important bits are the port definitions,

pinMode(MICROPHONE, INPUT);
pinMode(LED, OUTPUT);

Here, we have directed that we will take input from A0 and output the data to D3. Now let’s understand the core of our program – the loop function

We are reading the analog value of microphone sensor using below code which converts the analog value into digital number

int sensorValue = analogRead(MICROPHONE);

When this value goes beyond the defined threshold, you want to send a 5v to LED (by sending a HIGH bit) using code

digitalWrite(LED, HIGH);

When you play some loud music, you will see the LED light up for 2 seconds (delay = 2000 ms) and then turn off.

When you run/execute this project from Visual Studio using Remote Debugger, VS will deploy this code to your Galileo device. You will be prompted for your Galileo user name and password.

You can say something aloud or play some video on YouTube to test this functionality.

This code is also available on GitHub at: https://github.com/punitganshani/ganshani/tree/master/Samples/IntelGalileo/GroveMic


Universal Application and Xamarin Development – Session

February 23, 2015 | Sessions

As part of the March Singapore .NET Meetup, you can expect to learn about some hot topics like client-server code, ASP.NET MVC, Windows Universal application development and WPF. The detailed agenda is:

  • Secure Client-Server code by Lawrence Hughes
  • Basic ASP.NET MVC for beginners by Dawa Law
  • Windows Universal App Development by Punit Ganshani
  • Smart auto complete in WPF, by Riza
  • How to use Azure SQL Database using .NET by Riza

I will be speaking on developing Universal Applications and will cover 3 platforms: Windows 8.1, Windows Phone 8.1, and Android using Xamarin.

Location:     Microsoft Singapore
Date and Time:    3 March, 7:00 PM to 9:00 PM

You can RSVP on the Meetup Event Site

Getting Started with Windows on Intel Galileo (IoT)

February 22, 2015 | Intel Galileo, Windows

At the //Build 2014 conference, Microsoft demonstrated a version of Windows running on the Intel Galileo board. It was not the first time Microsoft had showcased Windows running on smaller devices. There have been around 8 different versions of Windows Embedded that have run on POS terminals since the Windows 3.1 release, and most retail POS terminals, arcade games, set-top boxes and ATMs across the globe, even today, run on Windows Embedded. What’s more, the later versions also allowed running applications developed using .NET 3.5.

So what has changed with Intel Galileo? It’s the scale, licensing and availability.

Microsoft has shipped a pared-down version of Windows Embedded on Galileo (and coming soon is a version for Raspberry Pi 2) to reach out to DIYers, hardware makers and developers like you and me. Tons of opportunities lie wide open in front of us to create applications that can gather real-time analog data using sensors and transmit it for analysis.

So let’s get started with configuring our Intel Galileo V2

Prerequisites, first!

Let’s start with our checklist of hardware and software you would need to get started


  • Intel Galileo V2 Board (with 12v power supply)
  • microSD card with adapter – Minimum 16GB, Class 10
  • Ethernet/LAN cable
  • USB cable
  • *Laptop with USB port and Ethernet port
  • Internet connection


*If you have an Ultrabook (like I do) and don’t have an Ethernet port on your laptop, you will need a router that has an unused Ethernet port.


The Intel Galileo V2 Board

The Intel Galileo V2 board comes packed in a static-resistant bag with a wide range of plugs for the power adapter.

The board looks like one shown above,

  1. USB port to connect to PC
  2. Ethernet port to connect to PC or router
  3. 12 volt power supply
  4. microSD slot to load WIM
  5. Additional USB port

Once your board has been initialized, you can see 2 LEDs light up as shown below

Note that I have not yet inserted the microSD card and Ethernet cable in their slots on the Galileo board.

Associating Galileo to COM port on your laptop

On your Start Menu, as an administrator, type Device Manager. Navigate to Other devices > Gadget Serial v2.4 and Update Driver Software

Select the folder ‘C:\arduino-1.5.3\hardware\arduino\x86\tools‘ to browse the drivers

This will associate a COM port (serial port) for Galileo under the Ports section in Device Manager


Loading Windows Image for Embedded devices to microSD


Connect your microSD card to your PC using a microSD adapter. Once the microSD card has been detected, format it using FAT32 (not NTFS). Let’s assume the microSD has a drive letter E:

Open Command Prompt as an administrator and navigate to the directory where you downloaded all the files from Microsoft Connect website and execute following command,

apply-bootmedia.cmd -destination {WhateverYourSDCardDriveLetterIS} -image {latest WIM image} -hostname mygalileo -password admin

In my case, it was

apply-bootmedia.cmd -destination E: -image 9600.16384.x86fre.winblue_rtm_iotbuild.141114-1440_galileo_v2.wim -hostname mygalileo -password admin

The process will take some time and the imaging output will look like,

Once this is done, you can view the contents of microSD card in Windows Explorer.

An interesting point to note is that Windows OS takes less than 1GB of disk space.

You can now eject the microSD card from laptop and insert it in Galileo. You can also connect the Ethernet cable to it. The setup should appear like,

Run the MSI you downloaded from Microsoft Connect website and boot up Galileo. The boot process will take around 2 minutes and then the Galileo Watcher will detect your device

You can right click on your Galileo in Galileo Watcher and Telnet to your device. Or you can open Web Browser by right clicking on your device. My device IP is, so I can view the memory consumed by Windows by browsing the URL

You can view the contents of microSD card by connecting to \\\c$. The username should be administrator and password should be admin, unless you have changed it when applying the image

Shutting down Galileo


Well, it’s always advisable to shut down Windows safely, and the same holds for Galileo. You can type the standard shutdown command over telnet:

shutdown /s /t 0

And that’s how you can setup Galileo to run Windows.

Azure Mobile Services – Session

February 16, 2015 | Sessions

The UG leads of both Azure and SGDotNet have got together to consolidate our events, as the target audiences predominantly overlap. As a starter, a meetup focusing on the most trending topics – IoT and Cloud – is organized on 26th Feb. There will be 2 sessions:

  • Programming Windows on IoT Devices by Neng Giin Yap
  • Azure Mobile Services with Cross-Platform Mobile Development by Punit Ganshani

I will broadly cover the basics of Azure Mobile Services and the architecture. As a demo, we will also cover how to create Mobile Services on Azure and consume them from mobile devices using Windows Phone, or Android using Xamarin or PhoneGap.

Location: Microsoft Singapore
Date and Time: 26 Feb, 2015 7:00 PM Onwards

You can RSVP for the session on Meetup Event Board

