
Writing Highly Maintainable WCF Services

When it comes to writing maintainable software, there is no alternative to the five core principles of object-oriented design. When software is based on these principles, everything becomes significantly easier, and writing a highly maintainable WCF web service on top of it can be done in a matter of minutes.

The code supporting this article can be found at solidservices.codeplex.com.

Most of my clients have maintainability issues with their software. Almost always, these problems are caused by improper software design. Incorrect design can have many causes, such as bad requirements analysis or high pressure, and bad design tends to trigger more bad design and even bigger maintainability nightmares. When looking closely at such designs, I often see violations of the five basic principles of object-oriented design: the SOLID principles. For me, there is no alternative: writing maintainable software starts with the SOLID principles.

Just as bad design triggers more bad design, good design can trigger more good design. For instance, once you have correctly applied the SOLID principles to your software, it becomes much easier to write (web) services that are highly maintainable. In my last few articles (here and here) I described a way of modeling important parts of a software system that increases maintainability (by simply following the SOLID principles). By modeling both business operations (commands) and business queries as messages, and hiding the behavior for processing these objects behind proper abstractions, maintainability and flexibility increase dramatically.
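For readers who skipped those earlier articles: the abstractions this post builds on boil down to two tiny generic interfaces plus a marker interface for queries. The sketch below is roughly what they look like; the exact definitions and the reasoning behind them are in the referenced posts.

public interface ICommandHandler<TCommand>
{
    // Processes a single command message.
    void Handle(TCommand command);
}

// Marker interface that ties a query message to the type of data it returns.
public interface IQuery<TResult>
{
}

public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
    // Processes a single query message and returns its result.
    TResult Handle(TQuery query);
}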

Since those command and query objects are simple data containers, serializing them is easy. Being able to serialize those messages has a few clear advantages. We could for instance serialize them to a log file, which gives us a complete overview of what happened at what time and by whom. It’s a functional transaction log. Since both a command and a query contain all the data that is needed to correctly execute the operation (except perhaps some context information such as the current user), we could replay this information during a stress test or use it to debug a problem. By serializing commands to a (transactional) queue (such as MSMQ), we can even let commands run in parallel on background services. This can improve reliability and scalability of a system.
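Here's a minimal sketch of what such serialization could look like. The MessageSerializer helper and its name are my own example (it is not part of the demo solution); it uses the DataContractSerializer, the same serializer WCF uses for plain POCOs.

using System.Runtime.Serialization;
using System.Text;
using System.Xml;

// Example helper that turns any command or query message into XML, ready to
// be appended to a functional transaction log or pushed onto a queue.
public static class MessageSerializer
{
    public static string ToXml(object message)
    {
        var serializer = new DataContractSerializer(message.GetType());

        var builder = new StringBuilder();

        using (var writer = XmlWriter.Create(builder))
        {
            serializer.WriteObject(writer, message);
        }

        return builder.ToString();
    }
}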

Another advantage of being able to serialize those messages is that they can be sent over the wire to a web service. Those messages can be used as the data contract of the web service, and the web service can be built as a thin layer on top of them. With the right constructs and configuration, we can build this web service in such a way that it hardly ever needs to change. In this article I will show you how to do this with a WCF service based on the patterns described in my previous articles (so please read them if you haven't).

WCF has a few interesting features, which make it an extremely convenient layer on top of a model based on commands and queries. For instance, WCF allows a service class to dynamically specify which types of messages the service can handle using the ServiceKnownTypeAttribute. This allows us to write the service once and never change it again. Another feature is the possibility to let the client and service share the same assembly. Of course this is only possible when the client is a .NET application as well, but this saves you from having lots of generated code on the client. This works best when the client and web service are part of the same Visual Studio solution.

This next code is all it takes to make a web service that can handle any arbitrary command that's available in your application:

// The ServiceContract attribute is required for WCF to expose the operation below.
[ServiceContract]
[ServiceKnownType("GetKnownTypes")]
public class CommandService
{
    [OperationContract]
    public void Execute(dynamic command)
    {
        Type commandHandlerType = typeof(ICommandHandler<>)
            .MakeGenericType(command.GetType());
 
        dynamic commandHandler = Bootstrapper.GetInstance(commandHandlerType);
 
        commandHandler.Handle(command);
    }
 
    public static IEnumerable<Type> GetKnownTypes(
        ICustomAttributeProvider provider)
    {
        var commandAssembly = typeof(ICommandHandler<>).Assembly;
 
        var commandTypes =
            from type in commandAssembly.GetExportedTypes()
            where type.Name.EndsWith("Command")
            select type;
 
        return commandTypes.ToArray();
    }
}

This service has just one operation, decorated with the OperationContractAttribute, and it can process any command. Since WCF needs to know which messages it must accept (to be able to generate a WSDL, for instance), the service is decorated with the ServiceKnownTypeAttribute. This attribute points at the public GetKnownTypes method, which is part of the service and simply queries the metadata of the assembly containing all commands. The method uses convention over configuration: it expects every type in that assembly whose name ends with "Command" to be a command message. However, other ways to retrieve the applicable command types (such as defining them through a common ICommand interface or marking commands with attributes) will do just fine.
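For instance, an interface-based convention could look like the sketch below. Note that the ICommand marker interface is an assumption here; the code in this article doesn't define one.

// Alternative convention: select all public, concrete types that implement a
// (hypothetical) ICommand marker interface, instead of relying on the type name.
var commandTypes =
    from type in commandAssembly.GetExportedTypes()
    where typeof(ICommand).IsAssignableFrom(type)
    where !type.IsAbstract
    select type;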

Since the service’s Execute method accepts any possible command, it uses reflection to build the corresponding ICommandHandler<TCommand> interface for the supplied command. It requests this handler type from the Composition Root and uses a bit of reflection again to execute that command. The performance impact of the reflection is negligible, since the WCF pipeline (with all its deserialization and verification) obviously has much more overhead (but if needed, performance can be improved by caching the types).
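Such a cache could be as simple as a ConcurrentDictionary inside the service. The following is just a sketch of how Execute could be adjusted (the cache field is my own addition and requires System.Collections.Concurrent); it is not part of the service shown above.

    // Caches the mapping from command type to the closed ICommandHandler<TCommand>
    // type, so the reflection cost is paid only once per command type.
    private static readonly ConcurrentDictionary<Type, Type> handlerTypes =
        new ConcurrentDictionary<Type, Type>();

    [OperationContract]
    public void Execute(dynamic command)
    {
        Type commandType = command.GetType();

        Type commandHandlerType = handlerTypes.GetOrAdd(
            commandType,
            type => typeof(ICommandHandler<>).MakeGenericType(type));

        dynamic commandHandler = Bootstrapper.GetInstance(commandHandlerType);

        commandHandler.Handle(command);
    }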

The Composition Root is the part of the application where services are tied together and object graphs are composed. Here is what this Composition Root might look like:

using System.Linq;
using System.Reflection;
using System.Web.Compilation;

using SimpleInjector;
using SimpleInjector.Extensions;

public static class Bootstrapper
{
    private static Container container;
 
    public static void Bootstrap()
    {
        container = new Container();
 
        var assemblies = BuildManager.GetReferencedAssemblies().Cast<Assembly>();

        container.RegisterManyForOpenGeneric(typeof(ICommandHandler<>), assemblies);
 
        container.RegisterManyForOpenGeneric(typeof(IQueryHandler<,>), assemblies);
 
        container.Verify();
    }
 
    public static object GetInstance(Type serviceType)
    {
        return container.GetInstance(serviceType);
    }
}

Not surprisingly, I use Simple Injector to bootstrap the application, since Simple Injector makes batch-registering generic types and generic decorators embarrassingly easy. However, any decent DI container will allow you to do this in one way or another. The first call to the RegisterManyForOpenGeneric method iterates through all application assemblies and registers every concrete ICommandHandler<TCommand> implementation it finds; the second call does the same for IQueryHandler<TQuery, TResult> implementations. This of course is just a simple example.
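To give an idea of how cross-cutting concerns get wired up, the Bootstrap method could be extended with decorator registrations like the ones below. The decorator class names are examples (sketches of them appear later in this article); they are not types that ship with Simple Injector.

        // Wraps every ICommandHandler<TCommand> implementation; the decorator
        // registered last becomes the outermost wrapper.
        container.RegisterDecorator(
            typeof(ICommandHandler<>),
            typeof(ValidationCommandHandlerDecorator<>));

        container.RegisterDecorator(
            typeof(ICommandHandler<>),
            typeof(AuditTrailingCommandHandlerDecorator<>));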

The Bootstrap method is called during application startup. For a WCF service this will be the Application_Start event in the Global.asax:

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        Bootstrapper.Bootstrap();
    }
}

With these three pieces in place we have a working WCF service that can accept command messages from a client. If you haven’t already, you can start defining commands just like the following:

public class MoveCustomerCommand
{
    public int CustomerId { get; set; }
 
    public Address NewAddress { get; set; }
}

Notice how this type lacks any WCF DataContractAttribute and DataMemberAttribute declarations. When working with DTOs, WCF allows you to skip these attributes, which simply means that WCF will serialize the complete instance, and that is exactly what we want. Not only does this remove noise from our code, it also keeps our commands simple POCOs, free from any technology-specific attributes, which is always a good thing.

I must admit that this whole design can seem a bit overwhelming, and not very appealing at first, but as I explained in my previous blog posts, this model starts to shine once you start applying decorators to those handlers, and it can drastically lower maintenance when your application starts to grow. In my post about commands I gave a small list of cross-cutting concerns that are easy to implement as decorators, such as validation, audit trailing, and queuing. Besides these, when running a WCF service, it can be really useful to have a mechanism that prevents messages from being replayed (both to avoid accidental duplicates and to hinder attacks). Adding such a feature as a decorator would be pretty easy.
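To make that concrete, here's a minimal sketch of an audit-trailing decorator. The ILogger abstraction is hypothetical, and MessageSerializer is the example helper shown earlier; your own infrastructure will differ.

// Wraps any command handler and writes the serialized command to the audit
// trail before the command is executed.
public class AuditTrailingCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;
    private readonly ILogger logger;

    public AuditTrailingCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratee, ILogger logger)
    {
        this.decoratee = decoratee;
        this.logger = logger;
    }

    public void Handle(TCommand command)
    {
        this.logger.Log(MessageSerializer.ToXml(command));

        this.decoratee.Handle(command);
    }
}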

Commands are of course just one half of the story; queries are the other half. Let's cut to the chase. Here's the service that can execute queries:

[ServiceContract]
[ServiceKnownType("GetKnownTypes")]
public class QueryService
{
    [OperationContract]
    public object Execute(dynamic query)
    {
        Type queryType = query.GetType();
        Type resultType = GetQueryResultType(queryType);
        Type queryHandlerType = typeof(IQueryHandler<,>)
            .MakeGenericType(queryType, resultType);
 
        dynamic queryHandler = Bootstrapper.GetInstance(queryHandlerType);
 
        return queryHandler.Handle(query);
    }
 
    public static IEnumerable<Type> GetKnownTypes(
        ICustomAttributeProvider provider)
    {
        var contractAssembly = typeof(IQuery<>).Assembly;
 
        var queryTypes = (
            from type in contractAssembly.GetExportedTypes()
            where TypeIsQueryType(type)
            select type)
            .ToList();
 
        var resultTypes =
            from queryType in queryTypes
            select GetQueryResultType(queryType);
 
        return queryTypes.Union(resultTypes).ToArray();
    }

    private static bool TypeIsQueryType(Type type)
    {
        return GetQueryInterface(type) != null;
    }

    private static Type GetQueryResultType(Type queryType)
    {
        return GetQueryInterface(queryType).GetGenericArguments()[0];
    }
 
    private static Type GetQueryInterface(Type type)
    {
        return (
            from interfaceType in type.GetInterfaces()
            where interfaceType.IsGenericType
            where typeof(IQuery<>).IsAssignableFrom(
                interfaceType.GetGenericTypeDefinition())
            select interfaceType)
            .SingleOrDefault();
    }
}

The structure of this QueryService is similar to that of the CommandService. However, since queries return a value, a bit more wiring must be done. There is one catch when executing queries, though. Since the command service doesn't return any data when processing commands, clients can easily let Visual Studio generate the service contract for them. Query objects, however, implement an interface that describes the data they return, for instance:

public class GetUnshippedOrdersForCurrentCustomerQuery : IQuery<OrderInfo[]>
{
    public int PageIndex { get; set; }
 
    public int PageSize { get; set; }
}

WCF, however, doesn't communicate this interface through its WSDL definition, so this part of the contract is lost. This problem can be solved by sharing the assembly that contains the query objects between the client and the service. Sharing an assembly between client and server is done by enabling the "Reuse types in specified referenced assemblies" option on the Advanced tab when adding the web service reference using Visual Studio's "Add Service Reference" wizard:

[Screenshot: Service Reference Settings dialog]

Unfortunately, it is not always possible to reuse the same assembly, especially when dealing with non-.NET clients. Those clients will either need to cast the returned object to the correct type manually, or will have to write some infrastructural code that adds compile-time checking again (such as writing or generating partial classes that add this interface back to the generated code). This of course only holds for clients written in statically typed languages; with a dynamic language, you'll have a different set of problems :-).

Since this shared assembly functions as the service's contract, not sharing that assembly means losing information about the contract. WCF does not have the ability (at least not that I know of) to express in the WSDL which data comes back from the service for which input. However, not all is lost. Since this information is available in the metadata, documentation can be generated from it. That could be as simple as shipping the XML documentation file the C# compiler generates, or Sandcastle documentation based on that XML file. This makes the client developers' work easier. The web service could even expose an extra method that returns a list of the names of all queries with their corresponding return types. That would make it pretty easy for the developers of the client to generate the proper code for their environment, adding type safety and compile-time support again (although this highly depends on the possibilities of the platform, runtime, and language being used).
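For .NET clients that do share the contract assembly, a tiny wrapper around the generated proxy is enough to restore the compile-time relation between a query and its result. The sketch below is an example; the QueryServiceClient name depends on how the service reference was generated.

// Example client-side wrapper that restores type safety on top of the
// generated proxy, whose Execute operation accepts and returns plain objects.
public class QueryServiceProxy
{
    public TResult Execute<TResult>(IQuery<TResult> query)
    {
        using (var client = new QueryServiceClient())
        {
            return (TResult)client.Execute(query);
        }
    }
}

With such a wrapper, calling proxy.Execute(new GetUnshippedOrdersForCurrentCustomerQuery { PageSize = 10 }) returns a typed OrderInfo[] instead of an object.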

Update: Instead of generating code on the (non-.NET) client side to communicate with this service, you can also generate code on the WCF service side, for instance using T4 templates. I added an example of this to the CodePlex project.

In fact, this is all it takes to write such a WCF service. Obviously your service should still do proper authentication, authorization, validation, logging, and all the other sorts of cross-cutting concerns. Authentication is typically done at the WCF layer, and almost all other cross-cutting concerns can be implemented by registering decorators for ICommandHandler<TCommand> and IQueryHandler<TQuery, TResult>. This keeps the CommandService and QueryService classes free of these sorts of checks, and it allows you to reuse this logic in other applications running on top of the same business layer.

When you get the hang of this way of designing your system, you will appreciate how easy and flexible it is. Still, please take the following things into consideration:

  • Don't forget that although adding new commands and queries can be done without changing the CommandService and QueryService classes, the service's contract will still change. Adding new commands and queries will usually not be a problem, but every change to an existing command or query object might break your clients. For example, changing the validation logic of a command could break a client. Managing the contract and backwards compatibility with existing clients is especially crucial when the clients are external; that's a problem this model doesn't solve. Of course, things are much easier when the client application is part of the same solution, because then contract changes can be made without a problem and you'll even get compiler warnings in the client application when you make them.
  • Make sure the service contract only contains commands and queries that must be accessible to clients. If they're not public, don't place them in the contract assembly; if there's no contract assembly, make sure the GetKnownTypes method does not return them. This should be as easy as changing the LINQ query in GetKnownTypes. Depending on the DI framework you use, you might be able to leverage features of the container to find out which registrations exist. Simple Injector, for instance, contains a GetCurrentRegistrations method that returns a list of registered types.
  • Decorators are a great mechanism to extend behavior of command handlers and query handlers with cross-cutting concerns like validation and authorization. This can be mixed with metadata (attributes) placed on the command and query objects to define what behavior they should have.
  • Find a mechanism to communicate validation errors efficiently to the client. For instance, try a model where you define validations in one place and have them executed on both the server and the client. You could, for instance, decorate command properties with Data Annotations attributes so they can be validated on both the client and the server (see the sketch after this list), and extend this with the configuration-based approach of the Enterprise Library Validation Application Block for the server-side-only validation.
  • When your architecture is based on commands and queries, setting up a web service is really easy and almost maintenance free. This means it can be very convenient to have multiple (almost identical) web services side by side with slightly different configurations. Imagine a service for public clients with access to a subset of the commands and queries of a second service meant for internal clients. This can be a nice extra layer of defense. Or both an (internal) WCF service and a public ASP.NET Web API service.
  • And of course apply WCF best practices when it comes to securing your web service (and do test this).
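As promised in the bullet about validation, here's a minimal sketch of a server-side validation decorator based on Data Annotations. It simply lets the built-in ValidationException bubble up; translating that exception into a proper WCF fault is left out of the sketch.

using System.ComponentModel.DataAnnotations;

// Validates any command using its Data Annotations attributes before the
// real handler is executed.
public class ValidationCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;

    public ValidationCommandHandlerDecorator(ICommandHandler<TCommand> decoratee)
    {
        this.decoratee = decoratee;
    }

    public void Handle(TCommand command)
    {
        var context = new ValidationContext(command, null, null);

        // Throws a ValidationException when one of the command's attributes
        // (e.g. [Required], [Range]) is violated.
        Validator.ValidateObject(command, context, validateAllProperties: true);

        this.decoratee.Handle(command);
    }
}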

Here is the CodePlex project where you can find a working demo solution: solidservices.codeplex.com. When you go to the Source Code tab you can download the latest version by clicking on the 'download' link.

This is how I roll on the service side of my architecture.



eight comments:

Steven,

Strictly speaking, when using command/query separation, your commands should not return a response. Changes in the domain should be handled using events. What's your point of view about this?

Great post! Interesting read, and I always love a code sample!

Mathijs
Mathijs - 09 09 12 - 12:30

If you are talking about command/query separation, I expect you are referring to the CQRS architectural pattern. That pattern does, AFAIK, indeed promote the use of asynchronous commands, which implies they cannot return any data. The model I describe on my blog can be used on top of any architectural pattern, not solely CQRS. In the code samples in the SolidServices CodePlex project, you'll therefore find an example of a command (the CreateOrderCommand) returning data (as explained here http://bit.ly/QvRiBH), which implies it is not CQRS. Changing this code to let commands be asynchronous would of course be trivial (just don't return data). By showing how to return data, this example becomes useful to a much bigger audience. Also note that you often don't want to send commands asynchronously because of the user experience, as Jimmy Bogard explains here http://bit.ly/PzLOcF. That's why CQRS will not always be a good fit.
Steven (URL) - 09 09 12 - 13:18

Hello junkie :) and cheers for your effort on all of this.
I'm a starter in DI, IoC & patterns programming. I'm trying to implement Huy Nguyen's generic repository pattern http://huyrua.wordpress.com/2012/09/16/e.. in this highly maintainable WCF service, and I'm also using Simple Injector for injecting the concrete objects. Simple Injector has a WCF integration package on NuGet with a method like container.RegisterPerWcfOperation().
I think I figured out that Huy's DbContext implements the Unit of Work pattern, and that one instance of the generic repo executes one unit of work transaction.
The question is that I can't figure out how to initialize Huy's unit of work in a command decorator per WCF operation. That's where I'm stuck.
I'm losing myself :)
So please help me understand that.

Thank you lot
Daniel - 05 11 12 - 13:35

Hi Daniel,

I'm sorry, I'm not familiar with Nguyen's implementation, so I'm unable to comment on that. All I can say is that you typically should register your unit of work (yes, indeed your DbContext) with a 'per request' (RegisterPerWcfOperation in your case) lifestyle.

You could try asking at Stack Overflow. Some tips: the problem isn't Simple Injector specific, but a general one. Showing some code of Nguyen's implementation might help. Keep the question short, but try to write it in such a way that Nguyen's article doesn't have to be read.

You might also be interested in this old article of mine:
http://www.cuttingedge.it/blogs/steven/p..
It too explains an implementation of the repository pattern.
Steven (URL) - 05 11 12 - 14:52

Hey, great article, as always.
I've been implementing this for a Silverlight app. With the old service, I created a proxy that contained all the available service methods and assigned each method a delegate.

_queryProxy = new MediaServiceClient();
_queryProxy.GetMediaList += ProxyOnGetMediaListCompleted;

Obviously I can't do that now.
The way I figure it, I create the proxy as normal, then assign the delegate, call the service, and then remove the delegate again.

_queryProxy = new QueryServiceClient();
...
_queryProxy.ExecuteCompleted += ProxyOnGetMediaListCompleted;
_queryProxy.ExecuteAsync(new GetMediaForCurrentUserSLQuery());
_queryProxy.ExecuteCompleted -= ProxyOnGetMediaListCompleted;

Or I could new up a proxy for each method, but newing up several instances of ServiceClient smells bad.

Or is there another way altogether for doing this?
Mike - 06 06 13 - 10:26

You mention ASP.NET Web API in the second-to-last bullet point. It is usually used to create REST APIs.
But a web service implemented with the approach you show here wouldn't conform to the REST principles.
Do you agree?
Daniel Hilgarth (URL) - 04 07 13 - 16:06

@Daniel, a design based on commands and command handlers is by nature use-case driven, compared to the resource-driven approach that the Richardson Maturity Model for RESTful services describes. Having a use-case-driven web API is most suited when you (as a team) build both the web service and the client applications that make use of it. When exposing your web API to third parties, however, you don't really know what use cases their applications implement, so in general it is better for an externally exposed web API to be resource driven.

Implementing a resource-based API with commands and queries will probably be cumbersome. In that case you will probably have more success implementing the web API on top of an IRepository<TEntity> abstraction instead of building it on top of an ICommandHandler<TCommand> abstraction.

Note that using a generic interface is still important, because this allows you to apply cross-cutting concerns more easily, which will help you reach the goal that this blog post describes: having a highly maintainable web service.
Steven (URL) - 10 07 13 - 20:27

There's an interesting video online from NDC Oslo 2013 about "CQRS Hypermedia with WebAPI" that goes deeper into the previous discussion about resource driven and use case driven architectures with Web API: http://vimeo.com/68320468
Steven - 23 07 13 - 21:29

