Optimizing performance of your WCF Services
The performance optimization process starts when you are given non-functional requirements (NFRs) such as availability, concurrent users, scalability and so on. When NFRs are not provided, one tends to ignore them and continue building the application using Visual Studio wizards that generate code recipes, until a production issue raises questions about why a recently developed application is so poorly designed. As the experts say, the earlier you detect issues in a product's life-cycle, the cheaper they are to resolve. So it is better to be aware of design principles and wary of code that can cause a significant drop in performance.
When it comes to Windows Communication Foundation (WCF), an architect has to make several design decisions. This article outlines some of the decisions an architect or lead has to make when designing a WCF service. The prerequisite for this article is preliminary knowledge of WCF; you can refer to the MSDN articles on WCF.
The right Binding
Choosing the right binding is not difficult. You can read an in-depth article on WCF bindings on MSDN if you want more information. Summarizing the types and some of the overheads of using them:
- Basic binding: The BasicHttpBinding is designed to expose a WCF service as a legacy ASMX web service, so that old clients or cross-platform clients can work with the new services hosted over an intranet or the Internet. This binding, by default, does not enable any security. The default message encoding is text/XML.
- Web Service (WS) binding: The WSHttpBinding class uses HTTP or HTTPS for transport, and is designed to offer a variety of features such as reliability, transactions, and security over the Internet. This means that if a BasicHttpBinding takes 2 network calls (request & response) to complete a request, a WSHttpBinding may take over 5 network calls, which makes it slower than BasicHttpBinding. If your application consumes services hosted on the same machine, it is preferable to use the IPC binding instead of WSHttpBinding to achieve scalable performance.
- Federated WS binding: The WSFederationHttpBinding binding is a specialization of the WS binding, offering support for federated security.
- Duplex WS binding: The WSDualHttpBinding binding is similar to the WS binding, except that it also supports bidirectional communication from the service to the client. Reliable sessions are enabled by default.
- TCP binding: The NetTcpBinding is primarily used for cross-machine communication on an intranet. It supports a variety of features, including reliability, transactions, and security, and is optimized for WCF-to-WCF communication – only .NET clients can communicate with .NET services using this binding. It is an ideal replacement for socket-based communication. To achieve greater performance, try changing the following settings:
- Increase the serviceThrottling limits (maxConcurrentCalls, maxConcurrentSessions, maxConcurrentInstances)
- Increase maxItemsInObjectGraph to 2147483647 (Int32.MaxValue)
- Increase the values of listenBacklog, maxConnections, and maxBufferSize
- Peer network binding: The NetPeerTcpBinding uses peer networking as a transport. The peer network-enabled clients and services all subscribe to the same grid and broadcast messages to it.
- IPC binding: The NetNamedPipeBinding class uses named pipes as a transport for same-machine communication. It is the most secure binding, since it cannot accept calls from outside the machine, and it supports a variety of features similar to the TCP binding. It can be used efficiently for cross-process communication on the same machine.
- MSMQ binding: The NetMsmqBinding uses MSMQ for transport and is designed to offer support for disconnected queued calls.
- MSMQ integration binding: The MsmqIntegrationBinding converts WCF messages to and from MSMQ messages, and is designed to interoperate with legacy MSMQ clients.
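As a rough sketch, the TCP-binding tuning suggestions above translate into configuration like the following. The binding name and the specific limits are illustrative only; tune them against your own load tests rather than copying them verbatim.

```xml
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <!-- "tcpTuned" is an illustrative binding name -->
      <binding name="tcpTuned"
               listenBacklog="200"
               maxConnections="200"
               maxBufferSize="1048576"
               maxReceivedMessageSize="1048576" />
    </netTcpBinding>
  </bindings>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- raise the throttling limits; the defaults are much lower -->
        <serviceThrottling maxConcurrentCalls="512"
                           maxConcurrentSessions="512"
                           maxConcurrentInstances="512" />
        <!-- allow arbitrarily large object graphs to be serialized -->
        <dataContractSerializer maxItemsInObjectGraph="2147483647" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```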
The right Encoder
Once you have decided on the binding you are going to use, the first level of optimization can be done at the message level. There are three message encoders available out of the box in the .NET Framework.
- Text – the default encoder for the BasicHttpBinding and WSHttpBinding bindings – it uses a text-based (UTF-8 by default) XML encoding.
- MTOM – an interoperable format (though less broadly supported than text) that allows more optimized transmission of binary blobs, as they do not get Base64-encoded.
- Binary – the default encoder for the NetTcpBinding and NetNamedPipeBinding bindings – it avoids Base64-encoding your binary blobs and also uses a dictionary-based algorithm to avoid data duplication. Binary supports “session encoders” that get smarter about data usage over the course of the session (through pattern recognition).
Having said that, the best match for you depends on:
- Size of the encoded message – it is going to be transferred over the wire. A small message without much object hierarchy is transmitted best in text/XML format.
- CPU load – incurred while encoding the messages and also while processing your operation contracts.
- Simplicity – messages converted into binary are no longer readable to the naked eye. If you do not need to log the messages and want faster transmission, binary is the format to go for.
- Interoperability – MTOM does not ensure 100% interoperability with non-WCF services. If you do not require interoperability, binary is the format to go for.
The binary encoder, so far, seems to be the fastest, and if you are using NetTcpBinding or NetNamedPipeBinding it will do wonders for you! Why? Over the course of a session, “session encoders” become smarter (by using a dictionary and analyzing patterns) and perform optimizations to achieve faster speeds.
Final words – a text encoder converts binary data into Base64 format, an overhead (around 4–5 times the size) that can be avoided by using the binary or MTOM encoders. If there is no binary data in the message, the MTOM encoder actually slows things down, as it has the overhead of converting the message into MIME format. So try out different message encoders to check what suits your requirements!
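To experiment with the encoders, the configuration below is a minimal sketch. MTOM can be switched on directly on the standard HTTP binding, while binary encoding over HTTP requires a custom binding; the binding names here are illustrative.

```xml
<system.serviceModel>
  <bindings>
    <!-- MTOM on a standard HTTP binding -->
    <basicHttpBinding>
      <binding name="httpMtom" messageEncoding="Mtom" />
    </basicHttpBinding>
    <!-- binary encoding over HTTP needs a custom binding -->
    <customBinding>
      <binding name="httpBinary">
        <binaryMessageEncoding />
        <httpTransport />
      </binding>
    </customBinding>
  </bindings>
</system.serviceModel>
```

Note that the binary-over-HTTP variant is only consumable by WCF clients, so use it where interoperability is not a concern.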
The right Compression
Choosing the right encoder can reduce the message size by 4–5 times. But what if the message size is still in megabytes? There are ways to compress your message and make it compact. If your WCF services are hosted in IIS or WAS (Windows Server 2008/2012), you can opt for IIS compression, which applies GZip compression to the messages flowing through IIS. To enable IIS compression, follow the steps in Scott Hanselman's article – Enabling dynamic compression (gzip, deflate) for WCF Data Feeds, OData and other custom services in IIS7.
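The gist of that setup, as a sketch, is to turn on dynamic compression and register the SOAP content types with IIS (this assumes the Dynamic Content Compression module is installed; the exact MIME-type list depends on your services):

```xml
<!-- applicationHost.config (IIS 7+) -->
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  <httpCompression>
    <dynamicTypes>
      <!-- register the WCF/SOAP content types for gzip/deflate -->
      <add mimeType="text/xml" enabled="true" />
      <add mimeType="application/soap+xml" enabled="true" />
      <add mimeType="application/soap+xml; charset=utf-8" enabled="true" />
    </dynamicTypes>
  </httpCompression>
</system.webServer>
```

Remember that clients must send an Accept-Encoding header (and be able to decompress the response) for this to pay off.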
The right Caching
Caching any data on which your service depends avoids dependency issues and gives you faster access to the same data. Several frameworks are available to cache your data. If your application is small (non-clustered, non-scalable, etc.) and your service is not stateless (you might want to make it stateless), you might consider in-memory caching; for large-scale applications, check out AppFabric, Coherence, etc.
- In-memory caching – if your WCF services are hosted in IIS or WAS, you can use ASP.NET caching by adding the AspNetCompatibilityRequirements attribute to your service and setting aspNetCompatibilityEnabled to true in the Web.config file. For self-hosted applications, you can use the Enterprise Library Caching block. If your application is built on .NET 4.0 or later, you can use runtime caching (System.Runtime.Caching).
- AppFabric – use AppFabric for dedicated and distributed caching to increase service performance. This helps you overcome several problems of in-memory caching, such as sticky sessions, a separate cache in each component/service on a server, synchronization of the cache when data changes, and the like.
When storing objects in the cache, prefer serializable objects; this lets you switch caching providers at any time.
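For the IIS/WAS-hosted case, enabling ASP.NET compatibility is a one-line configuration change (a minimal sketch; the attribute on the service class is the AspNetCompatibilityRequirements attribute mentioned above):

```xml
<!-- Web.config: opt the service into the ASP.NET pipeline so
     the ASP.NET cache (HttpContext.Current.Cache) is available -->
<system.serviceModel>
  <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
</system.serviceModel>
```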
The right Load balancing
Load balancing should not be seen merely as a means to achieve scalability. While it definitely increases scalability, increased performance is often the driving force for load-balancing services. There is an excellent article on MSDN on load balancing WCF services.
Accelerating using the GPU
There are many open-source GPU APIs available that can speed up heavy data-computation or image-processing tasks. Data-computation tasks also involve sorting, filtering, and selection – operations that we typically do using LINQ or PLINQ. In one of my projects, we observed that operations that took 20,000 milliseconds with PLINQ on an i5 processor barely took 900 milliseconds on the same machine using the GPU. So leveraging the power of the GPU in your data-layer WCF services can speed up such operations by more than 20 times.
Some of the APIs that I recommend are Accelerator from Microsoft and CUDA from NVIDIA; both support development in C#.
With the right binding and encoder you can expect around a 10% increase in performance, but bundled with data caching and GPU acceleration, performance can improve by 110% or more. So if you are facing issues optimizing the performance of your WCF service, experiment with the above steps and get started. Some links that may interest you:
- High Performance WCF Services : netTcpBinding
- Programming massively with CUDA (requires iTunes)
- Throttling parameters of WCF
Let me know if you need any other information on WCF.