Sunday, 1 September 2013

Quantity trumps Quality?

This post was originally on my Posterous account; it's posted here without updating, so some of the links may not work.

After I decided today to blog much more often, Jeff Atwood tweeted about an old post of his:

Coincidence, I'm sure, but it rang true with a conversation I had this morning about TDD.  I'm newish to TDD, but a total convert, and one thing I've learnt is not to overthink where it is leading you.  Start from the most abstract business logic and write tests for the simple cases first.  Then, if you're finding it difficult to write the tests, try injecting some providers or adapters instead of complicated data structures, supplying the parameters you want, not what you have.  Don't be afraid to get it wrong, because you're going to go back and refactor it shortly anyway.  You may find that you end up throwing away a lot, or trying things a few different ways, because you chose the wrong approach, but this quantity seems to lead to quality in the end.

Asynchronous WCF calls

This post was originally on my Posterous account; it's posted here without updating, so some of the links may not work.

Update: I have created a CodeRush plugin to automate this which can be found

It is very easy to make asynchronous WCF calls without resorting to having a thread hanging around. Let's say you have a service contract like this:
[ServiceContract(Name = "MyService", Namespace = "")]
public interface IMyService
{
    [OperationContract]
    MyReturnType FetchMyInformation(InfoId informationId);
}

To call this service asynchronously create another interface like this:
[ServiceContract(Name = "MyService", Namespace = "")]
public interface IMyServiceAsync
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginFetchMyInformation(InfoId informationId, AsyncCallback callback, object state);

    MyReturnType EndFetchMyInformation(IAsyncResult asyncResult);
}

  1. The ServiceContract's Name and Namespace arguments must be identical to the original
  2. The service only needs to implement the original interface not the new async one. WCF only uses this new interface to generate the proxy.

The new interface can now be used in the normal way the .Net 1.0 Async pattern is used, or more easily with the .Net Task Parallel Library:
IMyServiceAsync channel = GetChannel();
Task<MyReturnType> task = Task.Factory.FromAsync(channel.BeginFetchMyInformation(id, null, null), asyncResult => channel.EndFetchMyInformation(asyncResult));

Note: It is very important that the same channel is used for both the Begin and the End, otherwise WCF will throw an exception on the End.

This will start the call and return immediately. You can then either wait on the result:
MyReturnType ret = task.Result;

Or use a continuation to act on the result once the task is complete:
task.ContinueWith(t => Information = t.Result, TaskScheduler.FromCurrentSynchronizationContext());

Note: The TaskScheduler.FromCurrentSynchronizationContext() argument is optional; it can be used to ensure the continuation is performed on the UI thread. Make sure it is called on the UI thread in that case; the result can be kept and reused across multiple tasks.

You can also chain async WCF calls together:
Task<string> task = Task.Factory.FromAsync(channel.BeginFetchMyInformation(id, null, null), asyncResult => channel.EndFetchMyInformation(asyncResult))
    .ContinueWith(t => Task.Factory.FromAsync(channel.BeginParse(t.Result, null, null), asyncResult => channel.EndParse(asyncResult)))
    .Unwrap();

This will give you a Task<string> that calls FetchMyInformation, then calls Parse with the result of FetchMyInformation. The Unwrap means you get a Task<string> instead of a Task<Task<string>>, and allows you to easily add more continuations to the chain of tasks. I would strongly advise wrapping the Task.Factory.FromAsync calls in another method, so your code is more readable.
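As a sketch of that wrapping advice, here is one way the FromAsync call could be hidden behind an extension method. This is my own illustration, not code from the post; it assumes the IMyServiceAsync contract shown above, and the method name FetchMyInformationAsync is invented for the example:

```csharp
// Hypothetical wrapper around the Begin/End pair from the post.
// Keeping Begin and End on the same channel instance (as the note
// above warns) happens naturally because both calls close over
// the same 'channel' parameter.
public static class MyServiceAsyncExtensions
{
    public static Task<MyReturnType> FetchMyInformationAsync(
        this IMyServiceAsync channel, InfoId id)
    {
        return Task<MyReturnType>.Factory.FromAsync(
            channel.BeginFetchMyInformation(id, null, null),
            channel.EndFetchMyInformation);
    }
}
```

Call sites then read as a single line: Task<MyReturnType> task = channel.FetchMyInformationAsync(id);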

Task also supports exception handling, cancellation and performing multiple tasks in parallel. The MSDN Task Parallel Library documentation provides more information.
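To illustrate those last points, here is a small self-contained sketch (not WCF-specific, and nothing to do with the service above) showing how faulted tasks surface as an AggregateException and how a pre-cancelled token produces a cancelled task:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Run several tasks in parallel and wait for them all.
        Task<int>[] tasks =
        {
            Task.Factory.StartNew(() => 1),
            Task.Factory.StartNew(() => 2),
            Task.Factory.StartNew<int>(() => { throw new InvalidOperationException("boom"); })
        };

        try
        {
            Task.WaitAll(tasks);
        }
        catch (AggregateException ex)
        {
            // Exceptions from all faulted tasks are collected here.
            Console.WriteLine("Faulted: " + ex.InnerExceptions.Count);
        }

        // Cancellation: a task started with an already-cancelled token
        // never runs and ends up in the Canceled state.
        var cts = new CancellationTokenSource();
        cts.Cancel();
        Task cancelled = Task.Factory.StartNew(() => { }, cts.Token);
        try { cancelled.Wait(); }
        catch (AggregateException) { }
        Console.WriteLine("Cancelled: " + cancelled.IsCanceled);
    }
}
```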

Your users do not care about your software. Deal with it!

This post was originally on my Posterous account; it's posted here without updating, so some of the links may not work.

This micro-blogging lark isn't going very well; I've not posted anything for months.  Oh well, let's try again...

When I first started working in commercial software development (nearly 10 years ago now) the first major piece of work I was involved in was a rewrite from the ground up of one of our applications.  The application processed data in realtime and controlled some hardware based on the data received.  There had been 5 of the best people in our group working for 3 years to replace an ageing, ugly, bug-riddled and almost impossible to maintain application with a shiny new one.  The old application was very basic: it was controlled through a couple of drop-down menus, some dialog boxes, and 2 text fields giving progress updates. The new one had a fancy new UI, in corporate colours, with big shiny buttons that had embossed icons on them showing what they did.  Gone were the text fields, replaced with page after page of graphical displays updating in real time to show exactly what was happening.

I was lucky enough to be able to visit with one of our most important customers to oversee the install.  I installed the software and gave a demo, being sure to show off the new UI.

They hated it!

OK, hate is a strong word. They did like the new features, and the better performance meant that the hardware control was more accurate (timing was very critical to this application), but they hated the UI.  It was big, took up too much of the screen, and they had to navigate around all over the place to get to the information they wanted to see.  For all its faults, the old application showed the important information, and only the important information.  This was when I realised that the customer could not care less about our software; all they wanted was their hardware doing its job, so they could get on with theirs.

This experience has stuck with me, and guides a lot of what I do whenever I'm involved in any kind of UI work.  I've also been reminded of it by a couple of pieces of software I've had to put up with recently.  The first is this:

This is the Samsung ODD Firmware LiveUpdate, and what it does is update the firmware on a Samsung Blu-ray drive. Or should I say, what it also does is update the firmware on a Samsung Blu-ray drive; its main purpose seems to be to sit in your task tray, popping up every day or so to remind you that there aren't any updates available at the moment.  This is for hardware firmware!  I expect that I may have to upgrade when I first get the hardware, and then only if I have problems; I do not care even the slightest that the Samsung software guys haven't delivered a new bit of whiz-bang firmware in the last 24 hours!  And look at that interface: someone has spent a lot of time and effort on that, with a lot of branding; it has an expanding black rectangle animation to let you know it's opening, and a non-rectangular window.  I'm sure they are very proud of it, but this software would have been perfectly fine as a little command line app that I can leave hanging around on the hard disc in case I ever need it.  It also requires that you give Samsung your contact details before it will work (or even close), which I personally think is disgusting behaviour; I've already paid you money and now you want me to give up my contact details as well. ( to the rescue)

After fighting with that monstrosity I discovered that Windows 7 doesn't support Blu-ray playback out of the box (didn't we have this pain with DVD as well?), but the drive comes with CyberLink PowerDVD, which I installed and was able to watch my first (and possibly last, but that's another story) Blu-ray, which I very much enjoyed.  After watching the film, I went to shut down my PC, and noticed a whole array of new software in my Start Menu.  When you install PowerDVD it comes with loads of other stuff you couldn't possibly want. I mean, who buys special discs that they can print labels on using the drive?  Who needs two video editing suites? Who thinks, you know, having only one way of writing a DVD isn't enough, I really need three more?

So CyberLink and Samsung haven't figured out that their users don't care about their software, but who has?  I think the perfect example of egoless software is Google Chrome.  I have to admit to being a bit of a Google fanboy, I'm blown away by almost everything they do, but Chrome is by far my favourite.  If you're using it to read this, take a look at the window it's running in; now, with the exception of the Task bar and the little one on the New Tab page, can you see a single Google or Chrome badge anywhere?  Notice how, when you maximise it, the title bar gets completely out of the way.  The whole application is designed to allow you to look at web sites, and do nothing else.  Have you ever been asked to restart Chrome or your computer to install an update? No, because it does it silently in the background; most of the time you're not at all aware it's happened.

This is how all software should be: focused on allowing its users to do what they want to do with the minimum of fuss. That's all they are really interested in, and that's what they pay for.

Task Parallel Library equivalent of Thread.Sleep()

This post was originally on my Posterous account; it's posted here without updating, so some of the links may not work.

Recently I had need to use Thread.Sleep() inside a TPL Task.  Thread.Sleep is usually something I avoid, but I was testing out a theory and Sleep was my "long running process".  However, I also needed this to scale to 50-60 tasks running in parallel, and Sleep was wasting an entire thread that could be better served doing real work.
So I needed a solution that would give me a sleeping task that returned its thread to the thread pool while it was sleeping.  Some searches on the net didn't find much, but it turns out to be pretty easy.  Here is the solution I eventually settled on:
    public class TimerTask
    {
        public static Task Wait(int millisecondsToWait)
        {
            var tcs = new TaskCompletionSource<object>();
            var timer = new System.Threading.Timer(delegate(object obj)
            {
                tcs.SetResult(null);
            }, null, millisecondsToWait, System.Threading.Timeout.Infinite);
            Task<object> waitTask = tcs.Task;
            waitTask.ContinueWith(antecedent => timer.Dispose());
            return waitTask;
        }
    }
Hopefully someone else will find this useful.
Note: I think there might be an issue with Timer resources running out if a very large number of these are in use at the same time, but I haven't seen this in practice.
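A minimal usage sketch: many concurrent "sleeps" waited on together, none of which blocks a thread pool thread while waiting. The TimerTask class is reproduced here (with the timer callback completing the TaskCompletionSource) so the snippet stands alone:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public class TimerTask
{
    public static Task Wait(int millisecondsToWait)
    {
        var tcs = new TaskCompletionSource<object>();
        // One-shot timer: completes the task when it fires.
        var timer = new System.Threading.Timer(_ => tcs.SetResult(null),
            null, millisecondsToWait, System.Threading.Timeout.Infinite);
        Task<object> waitTask = tcs.Task;
        waitTask.ContinueWith(antecedent => timer.Dispose());
        return waitTask;
    }
}

class Program
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        // 50 concurrent waits, but no 50 blocked threads.
        Task[] waits = new Task[50];
        for (int i = 0; i < waits.Length; i++)
            waits[i] = TimerTask.Wait(200);
        Task.WaitAll(waits);
        sw.Stop();
        // All waits overlap, so total elapsed is ~200ms, not 50 x 200ms.
        Console.WriteLine(sw.ElapsedMilliseconds >= 150);
    }
}
```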

Supporting Multiple Threads in Moq

This post was originally on my Posterous account; it's posted here without updating, so some of the links may not work.

I'm a big fan of the Moq library.  It provides a really simple way of introducing mock objects into unit tests.  However recently we stumbled onto an issue with it.  The mocks that are produced are not thread safe.

(See this issue.)

Purists would say this isn't an issue, as you shouldn't have threads in your unit tests, and this is a position I generally do agree with; however, there is always some code to manage multiple threads, and that should be unit tested just as much as all the other code.

The issue has existed for a while and a colleague of mine has also created a fix, but the fix isn't in a released version and we didn't really want to go to the trouble of building Moq ourselves just for the one failing unit test.
One solution to this is to make use of the same features Moq is built with.  Moq uses the Castle DynamicProxy library, which is usually ILMerge'd into the Moq.dll assembly, but Moq also comes as a version without DynamicProxy merged in.  By switching to this version and using DynamicProxy ourselves we can do this:

public class SynchronizedInterceptor : IInterceptor
{
    private object lockObject = new object();

    public void Intercept(IInvocation invocation)
    {
        lock (lockObject)
        {
            invocation.Proceed();
        }
    }
}

public static class MoqSynchronisedExtensions
{
    private static readonly ProxyGenerator _generator = new ProxyGenerator();

    public static TType GetSynchronizedObject<TType>(this Mock<TType> mock) where TType : class
    {
        var synchronizedObject = _generator.CreateInterfaceProxyWithTarget<TType>(mock.Object, new SynchronizedInterceptor());
        return synchronizedObject;
    }
}
This allows us to write this unit test:

public void Moq_WhenUsingSynchronizedObject_IsThreadSafe()
{
    // Arrange
    var m = new Mock<IDisposable>();
    var disposeable = m.GetSynchronizedObject();

    // Act
    Parallel.For(0, 1000000, d => disposeable.Dispose());
}

With this solution we are able to get our unit test to pass every time, without affecting any other tests, writing a manual mock, or resorting to our own build of Moq.

This solution might not work for everyone: GetSynchronizedObject() returns a new object every time, therefore it is only synchronized when accessed through that object.  Also, if you are doing other synchronization in callbacks you may introduce deadlocks into your unit tests.

This should make a good stopgap if you are experiencing this issue, until an official fix is built into Moq.

Wednesday, 27 March 2013

Moq As<> a code smell

The Moq As<> method allows you to create a mock object that implements more than one interface.

(If IFirstInterface implements ISecondInterface then the As<> is not required.)
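As a sketch of the usage described above: the interface names come from the text, but the members (DoFirst/DoSecond) are my own illustration, and this assumes the standard Moq As<T>() API.

```csharp
using Moq;

// Illustrative interfaces; the names match the text, the members are invented.
public interface IFirstInterface { void DoFirst(); }
public interface ISecondInterface { bool DoSecond(); }

class Example
{
    static void Demo()
    {
        var mock = new Mock<IFirstInterface>();

        // As<> adds ISecondInterface to the same mock, so the single proxy
        // object implements both interfaces and can be set up through either.
        mock.As<ISecondInterface>().Setup(s => s.DoSecond()).Returns(true);

        // The SUT can now cast the one object between the two interfaces.
        IFirstInterface first = mock.Object;
        ISecondInterface second = (ISecondInterface)first;
        bool result = second.DoSecond();
    }
}
```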

It's easy to see how this could help you write a test if the SUT casts the object of type IFirstInterface to ISecondInterface.

But what does this say about the code?

It's saying that you have a class that needs to implement more than one interface in order to perform its job correctly. When you have a class that implements more than one interface, it suggests it has more than one use, more than one reason to change. The need to use As<> is suggesting a Single Responsibility Principle violation.

While I wouldn't expect to see As<> used in tests for a Domain Model or Business Logic, it is possibly useful in "plumbing" layers, such as when adapting an external service. As always with code smells, use your own judgement.

To find a bug.

So, as I've suggested that debuggers are bad, I thought I should describe my "process" for tracking down bugs. I'm not going to claim this is a perfect approach, but it seems to work for me.

What is the bug?

The first thing I try and do when receiving a bug report is get information on what the bug actually is: what the expected behaviour was and what the actual behaviour was. Any experienced developer will at some point have misunderstood an issue and fixed the wrong problem. This step is vital, but not really what I want to talk about in this post.

Where is the bug?

Time to start theorising. What components are likely to be involved with the operation? This may be an easy question to answer if you know the code well, but if you don't, try doing a search for keywords in your code; this is where good naming (and also good spelling) can really make things easier. (I think there's another post in that statement.)

Now the most important step of all - read the code!

When I read code, knowing there is an issue with it, I gain a new perspective on it. I notice things I didn't see before and question the assumptions that it makes. I start to theorize about how this could possibly be doing what it is actually doing. Tests are important here as well, is there a test for the behaviour seen? If there is, then maybe the behaviour is by design, or maybe there are situations that the tests don't account for. The amount that can be learnt from this type of inspection goes far beyond that of a normal code review, where you see what you expect to see rather than what is actually there.

You can also learn a lot about how to write code from this. If a particular pattern is hard to understand this will become very obvious at this stage, and you can consider other patterns in the future. You can also spot what is a good name and what isn't. At this stage if the code is quite complicated I will often refactor it. This is particularly useful if there are no unit tests as refactoring will require you to write some.

I'd say the majority of bugs I look at will be found and understood by this point, usually much more quickly than it would take to fire up a debugger. Plus my understanding of the code will be greatly improved.

If I still haven't figured it out, now is the time to fire up the debugger. But this is made much easier due to my increased understanding of the code. I know where to put break points and have an idea of what I should be seeing.

Where else could this bug exist?

It always surprises me how often the same bug occurs more than once within a codebase. Patterns of usage tend to be copied and reused. The most obvious benefit of looking for other occurrences of a bug is that you may uncover another bug in your software, but it also gives you the opportunity to increase your understanding of the code and possibly improve it. Looking for these bugs you may find alternative approaches to the problem. If the bug is in business logic you may find DRY principle violations, or you may find ways of simplifying the problem so all places can be cleaned up and improved.

How do we reproduce it?

Write a test. This could be a unit test, an integration test or an acceptance test; which one really depends on the bug. Unit tests are light and easy to implement, but acceptance tests have the advantage that they are easier to show to the user to get them to confirm this is the behaviour they expected.

Fix, review, push the changes.

Go home.

OK, now I've finally got that post out of the way, I can get on to the interesting stuff - how to make this process easy.