Monthly Archives: January 2018

Short Walks – XUnit Tests Not Appearing in Test Explorer

On occasion, you may go into Test Explorer, knowing that you have XUnit tests within the solution: the tests are in a public class, the test methods are public, and they are decorated correctly (for example, with [Fact]); however, they do not appear in the Test Explorer.

If you have MS Test tests, you may find that they do appear in the Test Explorer – only the XUnit tests do not.


To run XUnit tests from the command line, you’ll need the xunit.runner.console package.

To run XUnit tests from within Visual Studio (and have them appear in the Test Explorer), you’ll need the xunit.runner.visualstudio package.


Using the Builder Pattern for Validation

When doing validation, there are a number of options for how you approach it: you could simply have a series of conditional statements testing logical criteria, you could follow the chain of responsibility pattern, use some form of polymorphism with the strategy pattern, or even, as I outline below, try using the builder pattern.

Let’s first break down the options. We’ll start with the strategy pattern, because that’s where I started when I was looking into this. It’s a bit like a screwdriver – it’s usually the first thing you reach for and, if you encounter a nail, you might just tap it with the blunt end.

Strategy Pattern

The strategy pattern is just a way of implementing polymorphism: the idea being that you implement some form of logic, and then override key parts of it; for example, in the validation case, you might come up with an abstract base class such as this:

public abstract class ValidatorBase<T>
{
    public ValidationResult Validate(T validateElement)
    {
        ValidationResult result = new ValidationResult();
        if (CheckIsValid(validateElement))
            result = OnIsValid();
        else
            result = OnIsNotValid();
        return result;
    }

    protected virtual ValidationResult OnIsValid()
    {
        return null;
    }

    . . .

You can inherit from this for each type of validation and then override key parts of the class (such as `CheckIsValid`).

Finally, you can call all the validation in a single function such as this:

public bool Validate()
{
    bool isValid = true;

    // find and instantiate every concrete validator in the assembly
    IEnumerable<ValidatorBase> validators = typeof(ValidatorBase).Assembly
        .GetTypes()
        .Where(t => t.IsSubclassOf(typeof(ValidatorBase)) && !t.IsAbstract)
        .Select(t => (ValidatorBase)Activator.CreateInstance(t));

    foreach (ValidatorBase validator in validators)
    {
        ValidationResult result = validator.Validate();
        if (!result.IsValid)
            isValid = false;
        if (result.StopValidation)
            break;
    }
    return isValid;
}

There are good and bad sides to this pattern: it’s familiar and well tried; unfortunately, it results in a potential explosion of code volume (if you have 30 validation checks, you’ll need 30 classes), which makes it difficult to read. It also doesn’t deal with the scenario whereby one validation condition depends on the success of another.

So what about the chain of responsibility that we mentioned earlier?

Chain of responsibility

This pattern, as described in the linked article above, works by implementing a link between a class that validates your data, and the class that will validate it next: in other words, a linked list.

This idea does work well, and is relatively easy to implement; however, it can become unwieldy to use; for example, you might have a class like this:

private static bool InvokeValidation(ValidationRule rule)
{
    bool result = rule.ValidationFunction.Invoke();
    if (result && rule.Successor != null)
        return InvokeValidation(rule.Successor);
    return result;
}

But to build up the rules, you might have a series of calls such as this:

ValidationRule rule2 = new ValidationRule();
rule2.ValidationFunction = () => MyTest2();

ValidationRule rule1 = new ValidationRule();
rule1.ValidationFunction = () => MyTest();
rule1.Successor = rule2;

As you can see, it doesn’t make for very readable code. Admittedly, with a little imagination, you could probably re-order it a little. What you could also do is use the Builder Pattern…

Builder Pattern

The builder pattern came to fame with Asp.Net Core, where, during configuration, you could write something like:
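The code sample that originally appeared here is missing; as an illustrative sketch (the middleware methods shown are standard Asp.Net Core calls, not necessarily the exact ones from the original post):

```csharp
// Each Use* call returns the IApplicationBuilder, which is what
// allows the calls to be chained together fluently.
public void Configure(IApplicationBuilder app)
{
    app.UseStaticFiles()
       .UseAuthentication()
       .UseMvc();
}
```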


So, the idea behind it is that you call a method that returns an instance of itself, allowing you to repeatedly call methods to build a state. This concept overlays quite neatly onto the concept of validation.

You might have something along the lines of:

public class Validator
{
    private List<ValidationRule> _logic = new List<ValidationRule>();

    public Validator AddRule(Func<bool> validationRule)
    {
        ValidationRule logic = new ValidationRule()
        {
            ValidationFunction = validationRule
        };
        _logic.Add(logic);
        return this;
    }

So now, you can call:

new Validator()
    .AddRule(() => MyTest())
    .AddRule(() => MyTest2());

I think you’ll agree, this makes the code much neater.
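To make the idea concrete, here’s a minimal, self-contained sketch of the whole thing. Note that the Validate method that actually runs the rules is my addition (the original snippet above doesn’t show one), and the rule lambdas are stand-ins for real checks:

```csharp
using System;
using System.Collections.Generic;

public class ValidationRule
{
    public Func<bool> ValidationFunction { get; set; }
}

public class Validator
{
    private readonly List<ValidationRule> _rules = new List<ValidationRule>();

    public Validator AddRule(Func<bool> validationRule)
    {
        _rules.Add(new ValidationRule { ValidationFunction = validationRule });
        return this; // returning ourselves is what enables the fluent chaining
    }

    public bool Validate()
    {
        // run each rule in the order it was added; stop at the first failure
        foreach (ValidationRule rule in _rules)
        {
            if (!rule.ValidationFunction())
                return false;
        }
        return true;
    }
}

public static class Program
{
    public static void Main()
    {
        bool result = new Validator()
            .AddRule(() => 1 + 1 == 2)
            .AddRule(() => "abc".Length == 3)
            .Validate();

        Console.WriteLine(result); // prints True
    }
}
```

Returning `this` from `AddRule` is the whole trick: each call hands you back the builder, so the chain reads as a single declarative statement.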


NDC London 2018

As usual with my posts, the target audience is ultimately me. In this case, I’m documenting the talks that I’ve seen so that I can remember to come back and look at some of the technology in more depth. I spent the first two days at a workshop, building a web app in Asp.Net Core 2.0…

Asp.Net Core 2.0 Workshop

These were two intense days with David Fowler and Damien Edwards where we created a conference app (suspiciously similar to the NDC app) from the ground up.

Notable things introduced were HTML helper tags, the authentication and authorisation features of .Net Core 2.0, and the ability to quickly get this running in Azure.

Day One

Keynote – What is Programming Anyway? – Felienne Hermans

This was mainly relating to learning, and how we learn and teach, how we treat each other as developers, and the nature of programming in general. Oddly, these themes came up again several times during the conference, so it clearly either struck a chord, or it’s something that’s on everyone’s mind at this time.

Sondheim, Seurat and Software: finding art in code – Jon Skeet

Okay, so let’s start with: this is definitely not the kind of talk I would normally go to; however, it was Jon Skeet, so I suppose I thought he’d just talk about C# for an hour and this was just a clever title. There was C# in there – and NodaTime, but not much. It was mainly a talk about the nature of programming. It had the same sort of vibe as the keynote – what is programming, what is good programming (or more accurately, elegant programming). At points throughout the talk, Jon suddenly burst into song; so all in all, one of the more surreal talks I’ve seen.

Authorization is hard! Implementing Authorization in Web Applications and APIs – Brock Allen & Dominick Baier

This was sort of part two of a talk on identity server. They discussed a new open source project that Microsoft have released that allows you to control authorisation; so, you can configure a policy, and within that policy you can have roles and features. What this means (as far as I could tell – and I need to have a play) is that out of the box, you can state that only people with a specific role are able to access, say an API function; or, only people that have roles with a specific feature are able to access an API function.

The example given used a medical environment: a nurse, a doctor and a patient; whilst they all live in the same system, only the nurse and doctor are able to prescribe medication, and it is then possible to configure the policy such that the nurse is able to prescribe less.

I’m Pwned. You’re Pwned. We’re All Pwned – Troy Hunt

This was the first of two talks I saw by Troy. This one was on security; although, oddly, the second was not. He did what he normally does which was start tapping around the internet, and showing just how susceptible everyone was to an attack.

He also mentioned that the passwords that he keeps in his database are available to be queried. I believe there’s an API endpoint, too. So the suggestion was that instead of the usual: “Your password must be 30 paragraphs long with a dollar sign, a hash and a semi-colon, and you have to change it every five minutes,” restriction on password entry, it would be better to simply ensure that the password doesn’t exist in that database.
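For reference, the Pwned Passwords range API works on the first five characters of the password’s SHA-1 hash (the k-anonymity model), so the full password never leaves your machine. Here’s a sketch of the client-side part – just the hashing and splitting; the API URL is shown in a comment rather than called, and the type and method names are mine:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class PwnedPasswordCheck
{
    // Compute the SHA-1 hash of a candidate password and split it into the
    // 5-character prefix (sent to the API) and the suffix (kept locally).
    public static (string Prefix, string Suffix) HashAndSplit(string password)
    {
        using (SHA1 sha1 = SHA1.Create())
        {
            byte[] hashBytes = sha1.ComputeHash(Encoding.UTF8.GetBytes(password));
            string hash = string.Concat(hashBytes.Select(b => b.ToString("X2")));
            return (hash.Substring(0, 5), hash.Substring(5));
        }
    }

    public static void Main()
    {
        // You would then GET https://api.pwnedpasswords.com/range/{prefix};
        // the response lists matching suffixes, which you compare locally.
        var (prefix, suffix) = HashAndSplit("password");
        Console.WriteLine(prefix); // prints 5BAA6
    }
}
```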

Compositional UIs – the Microservices Last Mile – Jimmy Bogard

The basic premise here is that, whilst many places have a partitioned, Microservice-architected back end, most front ends are still just one single application, effectively gluing together all the services. He argued that the next step is to think about ways that you could split the front end up. The examples he gave included Amazon, so this isn’t a problem that most people will realistically have to solve; but it’s certainly interesting; especially his suggestion that you could shape the model by introducing a kind of message bus architecture in the front end: each separate part of the system is polled and in turn “asked” if it has anything to add to the current request; that part of the system is then responsible for communicating with its service.

C# 7, 7.1 and 7.2 – Jon Skeet and Bill Wagner

This was actually two talks, but they kind of ended up as one, single, two-hour talk on all the new C# features. I have previously written about some of the new features of C# 7+. However, there were a few bits that I either overlooked, or just missed: pattern matching being one. The concept of deconstruction was also mentioned: I need to research this more.

Day Two

Building a Raspberry Pi Kubernetes Cluster and running .Net Core – Scott Hanselman & Alex Ellis

This was a fascinating talk where Alex had effectively built a mini-cloud set-up using a Raspberry Pi tower (IIRC six of them), using a piece of open source software called OpenFaaS to orchestrate them.

This is a particularly exciting area of growth in technology: the fact that you can buy a fully functional machine for around £20 – £30 and then chain them together to provide redundancy. The demonstration given was a series of flashing lights; they demonstrated pulling a cable out of one, and the software spotted this and moved the process across to another device.

An Opinionated Approach to Asp.Net Core – Scott Allen

In this talk, Scott presented a series of suggestions for code layout and architecture. There were a lot of ideas; obviously, these all work well for Scott, and there was a lot of stuff in there that made sense; for example, he suggested mirroring the file layout that MS have used in their open source projects.

How to win with Automation and Influence People – Gwen Diagram

Gwen gave a talk about the story of her time at Sky, and how she dealt with various challenges that arose from dealing with disparate technologies and personality traits within her testing team. She frequently referred back to the Dale Carnegie book “How to Win Friends and Influence People” – which presumably inspired the talk.

Hack Your Career – Troy Hunt

It’s strange listening to Troy talk about something that isn’t security related. He basically gave a rundown of how he ended up in the position that he’s in, and the challenges that lie therein.

HTTP: History & Performance – Ana Balica

This was basically a review of the HTTP standards from the early days of the modern internet, to now. Scott Hanselman touched on a similar theme later on, which was that it helps to understand where technology has come from in order to understand why it is like it is.

GitHub Beyond your Browser – Phil Haack

Phil demonstrated some new features of the GitHub client (which is written in Electron). He also demonstrated a new feature of GitHub that allows you to work with a third party on the same code (a little like the feature that VS have introduced recently).

.Net Rocks Live with Jon Skeet and Bill Wagner – Carl Franklin & Richard Campbell

I suppose if you’re reading this and you don’t know what .Net Rocks is then you should probably stop – or go and listen to an episode and then come back. The interview was based around C#, and the new features. You should look out for the episode and listen to it!

Keynote – The Modern Cloud – Scott Guthrie

Obviously, if you ask Scott to talk about the cloud, he’s going to focus on a specific vendor. I’ll leave this, and come back to it in Scott’s later talk on a similar subject.

Web Apps can’t really do *that* can they? – Steve Sanderson

Steve covered some new areas of web technology here; specifically: Service Workers, Web Assembly, Credential Management and Payment Requests.

The highlight of this talk was when he demonstrated the use of Blazor which basically allows you to write C# and run it in place of Javascript.

The Hello World Show Live with Scott Hanselman, Troy Hunt, Felienne and Jon Skeet

I’d never heard of The Hello World Show before. To make matters worse, it is not the only YouTube program called this. Now I’ve heard of it, I’ll definitely be watching some of the back catalogue.

I think the highlight of the show was Scott’s talk – which pretty much had me in stitches.

Tips & Tricks with Azure – Scott Guthrie

This is the talk that I referred to above. Scott described a series of useful features of Azure that many people weren’t aware of. For example, the Azure Advisor, which gives tailored recommendations for things like security settings, cost management, etc.

Other tips included the Security Centre, Hybrid Use Rights (reduced cost for a VM if you own the Windows license) and Cost Management.

Serverless – the brief past, the bewildering present, and the beautiful (?) future – Mike Roberts

Mike has worked with AWS for a while now, and imparted some of the experience that he had, gave a little history of how it all started, and talked about where it might be going.

Why I’m Not Leaving .Net – Mark Rendle

Mark introduced a series of tools and tricks in response to every reason he could think of that people gave for leaving .Net.

Amongst the useful information that he gave was a sort of ORM tool he’d written called Beeline. Basically, if all you’re doing with your ORM tool is reading from the DB and then serialising it to JSON, then this does that for you, but without populating a series of .Net classes first.

He also talked about CoreRT which allows you to compile .Net. There’s a long way to go with it, but the idea is that you can produce an executable that will run with no runtime libraries.

Getting Started with iOS for a C# Programmer – Part 6 – Graphics

This post is part of a series that walks through creating a basic game in Swift. In the preceding post I covered collision; now we are moving on to graphics.

Add Assets

The secret to graphics in Swift seems to be creating assets. If you look to the left hand side of your project, you should see an asset store:

This can be added to:

Create a new image here (in fact, we’ll need two):

Then drag and drop your icons. Mine were kindly provided by my daughter:

Use SKSpriteNode

Once you have an image, it’s a straightforward process to map it to the game sprite (for which we are currently using a rectangle). As you can see in GameScene.swift, very little actually changes:

    func createAlien(point: CGPoint) -> SKSpriteNode {
        let size = CGSize(width: 40, height: 30)
        //let alien = SKShapeNode(rectOf: size)
        let alien = SKSpriteNode(imageNamed: "alienImage")
        alien.size = size
        alien.position = point
        //alien.strokeColor = SKColor(red: 0.0/255.0, green: 200.0/255.0, blue: 0.0/255.0, alpha: 1.0)
        //alien.lineWidth = 4
        alien.physicsBody = SKPhysicsBody(rectangleOf: size)
        alien.physicsBody?.affectedByGravity = false
        alien.physicsBody?.isDynamic = true
        alien.physicsBody?.categoryBitMask = collisionAlien
        alien.physicsBody?.collisionBitMask = 0
        alien.physicsBody?.contactTestBitMask = collisionPlayer
        alien.name = "alien"
        return alien
    }

It’s worth bearing in mind that this will simply replace the existing rectangles with graphics. As you can probably see from the images above, mine are neither straight, trimmed, nor centered, and so it looks a little skewed in the game. Nevertheless, we’re now in the territory of a playable game:

We’re not there yet, though. The next post is the final one, in which we’ll add a score, deal with the overlapping aliens and probably reduce the size of the ship. Also, if you run this, you’ll see that after a short amount of time, it uses a huge amount of memory tracking all the aliens, so we’ll limit the number of aliens.


Serverless Computing – A Paradigm Shift

In the beginning

When I first started programming (on the ZX Spectrum), you would typically write a program such as this (apologies if it’s syntactically incorrect, but I don’t have a Speccy interpreter to hand):

10 PRINT "Hello World"
20 GOTO 10

You could type that in to a computer at Tandys and chuckle as the shop assistants tried to work out how to stop the program (sometimes, you might not even use “Hello World”, but something more profane).

However, no matter how long it took them to work out how to stop the program, they only paid for the electricity the computer used while it was on. Further, there would only ever be a finite and, presumably (I never tried), predictable number of “Hello World” messages produced in an hour.

The N-Tier Revolution

Fast forward a few years, and everyone’s talking about N-Tier computing. We’re writing programs that run on servers. Some of those servers are big and expensive, but pretty much the same statement is true. No matter how big and complex your program that you run on the server, it’s your server (well, actually, it probably belongs to a company that you work for in some capacity). For example, if you have a poorly written SQL Server procedure that scans an entire table, the same two statements are still true: no matter how long it takes to run, the price for execution is consistent, and the amount of time it takes to run is predictable (although, you may decide that if it’s slow, speeding it up might be a better use of your time than calculating exactly how slow).

Using Other People’s Computers

And now we come to cloud computing… or do we? The chronology is a little off on this, and that’s mainly because everyone keeps forgetting what cloud computing actually is. You’re renting time on somebody else’s computer. If I was twenty years older, I might have started this post by saying that “this was what was done back in the 70’s and 80’s”, in a manner of speaking. But I’m not, so we’ll jump back to the mid to late 90’s: and SETI. Anyone who had a computer back in those days of dial-up connections and 14.4K modems will remember that SETI (search for extra-terrestrial intelligence) were distributing a program for everyone to run on their computer instead of toaster screensavers*.

Wait – isn’t that what cloud computing is?

Yes – but SETI did it in reverse. You would dial-up, download a chunk of data and your machine would process it as a screensaver.

Everyone was happy to do that for free: because we all want to find aliens. But what if there had been a cost associated with each byte of data processed? Clearly something similar was in the mind of Amazon when they started with this idea.

Back to the Paradigm Shift

So, the title, and gist of this post is that the two statements that have been consistent right through from programs written on the ZX Spectrum to programs written in Turbo C and Pascal, to programs written in C# running on a dedicated server, have now changed. So, here are the four big changes brought about by the cloud**:

1. Development and Debugging

If you write a program, you can no longer be sure of the cost, or the execution time. This isn’t a scare post: both are broadly predictable; but the paradigm has now changed. If you’re in the middle of writing an Azure function, or a GCP BigQuery query, and it doesn’t work, you can’t just close your laptop and go for dinner while you have a think, because while you do, nodes are lighting up all over the world trying to complete your task. The lights are dimming in Seattle while your Azure function crashes again and again.

2. Scale and Accessibility

The second big change is the way that your code is scaled. For example, you might be used to parallelising slow code so that you can make use of all available threads; however, in our new world, if you do that, you may actually be making it harder for your cloud platform of choice to scale your code.

Because you pay per minute of computing time (typically this is more expensive than storage), code that is unnecessarily slow or inefficient may not cause your system to slow down – what it will probably do is cost you more money to run it at the same speed.

In some cases, you might find it’s more efficient to do the opposite of what you have thus-far believed to be accepted industry best practice. For example, it may prove more cost efficient to de-normalise data that you need to access.

3. Logging

This isn’t exactly new: you should always have been logging in your programs – it’s just common sense. However, there’s a new emphasis here: you’re not running this on your own server (you’re not even running it on a customer’s server) – it’s somebody else’s. That means that if it crashes, there’s a limited amount of investigation that you can do. As a result, you need to log profusely.

4. Microservices and Message Busses

IMHO, there are two reasons why the Microservice architecture has become so popular in the cloud world. The first is that it’s easy: spinning up a new Azure function endpoint takes minutes.

Secondly, it’s more scaleable. Microservices make your code more scaleable because it’s easy for the cloud provider to instantiate two instances of your program, and then three, and then a million. If your program does one small thing, then only that small thing needs to be instantiated. If your program does twenty different things, it can still scale, but it’ll cost more, because it will require more processing power.

Finally, instead of simply calling the service that you need, there is now the option to place a message on a queue; apart from separating your program into definable responsibility sectors, this means that, when your cloud provider of choice scales your service out, all the instances can pick up a message and deal with it.
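The scaling benefit of the queue can be illustrated with an in-process sketch: several “instances” of a service compete for messages on a shared queue, and each message is handled exactly once. This is only a simulation (a real system would use something like an Azure Service Bus queue, and the provider would spin up the instances):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

public static class QueueScalingDemo
{
    // Simulate instanceCount competing consumers draining messageCount
    // messages from a shared queue; returns how many were processed.
    public static int ProcessAll(int messageCount, int instanceCount)
    {
        var queue = new ConcurrentQueue<string>(
            Enumerable.Range(1, messageCount).Select(i => $"message-{i}"));
        var processed = new ConcurrentBag<string>();

        // each "instance" pulls messages until the queue is empty;
        // a cloud provider scaling out does conceptually the same thing
        Task[] instances = Enumerable.Range(1, instanceCount).Select(_ => Task.Run(() =>
        {
            while (queue.TryDequeue(out string message))
                processed.Add(message);
        })).ToArray();

        Task.WaitAll(instances);
        return processed.Count;
    }

    public static void Main()
    {
        Console.WriteLine(ProcessAll(100, 3)); // prints 100 – each message handled exactly once
    }
}
```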


So, what’s the conclusion: is cloud computing good, or bad? In five years’ time, the idea that someone in your organisation will know the spec of the machine your business critical application is running on will seem a little far-fetched. The days of having a trusted server that has a load of hugely important stuff on it, but nobody really knows what, and that has been running since 1973, are numbered.

Obviously, there’s a price to pay for everything. In the case of the cloud, it’s complexity – not of the code, and not really of the combined system, but typically you are introducing dozens of moving parts to a system. If you decide to segregate your database, you might find that you have several databases, tens, or even hundreds of independent processes and endpoints, you could even spread that across multiple cloud providers. It’s not that hard to lose track of where these pieces are living and what they are doing.


* If you don’t get this reference then you’re probably under 30.

** Clearly this is an opinion piece.

Setting up an e-mail Notification System using Logic Apps

One of the new features of Microsoft’s Azure offering is Logic Apps: these are basically a workflow system, not totally dissimilar to Windows Workflow (WF, so as not to get sued by panda bears). I’ve worked with a number of workflow systems in the past, from standard offerings to completely bespoke versions. The problem always seems to be that, once people start using them, they become the first thing you reach for to solve every problem. Not to say that you can’t solve every problem using a workflow (obviously, it depends which workflow and what you’re doing), but they are not always the best solution. In fact, they tend to be at their best when they are small and simple.

With that in mind, I thought I’d start with a very straightforward e-mail alert system. In fact, all this is going to do is to read a service bus queue and send an e-mail. I discussed a similar idea here, but that was using a custom written function.

Create a Logic App

The first step is to create a new Logic App project:

There are three options here: create a blank logic app, choose from a template (for example, process a service bus message), or define your own with a given trigger. We’ll start from a blank app:


Obviously, for a workflow to make sense, it has to start on an event or a schedule. In our case, we are going to run from a service bus entry, so let’s pick that from the menu that appears:

In this case, we’ll choose Peek-Lock, so that we don’t lose our message if something fails. I can now provide the connection details, or simply pick the service bus from a list that it already knows about:

It’s not immediately obvious, but you have to provide a connection name here:

If you choose Peek-Lock, you’ll be presented with an explanation of what that means, and then a screen such as the following:

In addition to picking the queue name, you can also choose the queue type (as well as listening to the queue itself, you can run your workflow from the dead-letter queue – which is very useful in its own right, and may even be a better use case for this type of workflow). Finally, you can choose how frequently to poll the queue.

If you now pick “New step”, you should be faced with an option:

In our case, let’s provide a condition (so that only queue messages with “e-mail” in the message result in an e-mail):

Before progressing to the next stage – let’s have a look at the output of this (you can do this by running the workflow and viewing the “raw output”):

Clearly the content data here is not what was entered. A quick search revealed that the data is Base64 encoded, so we have to make a small tweak in advanced mode:
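For reference, the tweak in question is typically a workflow expression along these lines (assuming the standard Service Bus trigger output, where the message body arrives in a `ContentData` property):

```
@base64ToString(triggerBody()?['ContentData'])
```

The `base64ToString` function is part of the Logic Apps workflow definition language, and decodes the Base64 payload back into the original text.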

Okay – finally, we can add the step that actually sends the e-mail. In this instance, I simply picked, and allowed Azure access to my account:

The last step is to complete the message. Because we only took a “peek-lock”, we now need to manually complete the message. In the designer, we just need to add an action:

Then tell it that we want to use the service bus again. As you can see – that’s one of the options in the list:

Finally, it wants the name of the queue, and asks for the lock token – which it helpfully offers from dynamic content:


To test this, we can add a message to our test queue using the Service Bus Explorer:

I won’t bother with a screenshot of the e-mail, but I will show this:

Which provides a detailed overview of exactly what has happened in the run.


Having a workflow system built into Azure seems like a double-edged sword. On the one hand, you could potentially use it to easily augment functionality and quickly plug holes; on the other, you might find very complex workflows popping up all over the system, creating an indecipherable architecture.

Working with Multiple Cloud Providers – Part 3 – Linking Azure and GCP

This is the third and final post in a short series on linking up Azure with GCP (for Christmas). In the first post, I set-up a basic Azure function that updated some data in table storage, and then in the second post, I configured the GCP link from PubSub into BigQuery.

In this post, we’ll square this off by adapting the Azure function to post a message directly to PubSub; then, we’ll call the Azure function with Santa’s data, and watch it appear in BigQuery. At least, that was my plan – but Microsoft had other ideas.

It turns out that Azure functions have a dependency on Newtonsoft Json 9.0.1, and the GCP client libraries require 10+. So instead of being a 10 minute job on Boxing day to link the two, it turned into a mammoth task. Obviously, I spent the first few hours searching for a way around this – surely other people have faced this, and there’s a redirect, setting, or way of banging the keyboard that makes it work? Turns out not.

The next idea was to experiment with contacting the Google server directly, as is described here. Unfortunately, you still need the Auth libraries.

Finally, I swapped out the function for a WebJob. WebJobs give you a little more flexibility, and have no hard dependencies. So, on with the show (albeit a little more involved than expected).


In this post I described how to create a basic WebJob. Here, we’re going to do something similar. In our case, we’re going to listen for an Azure Service Bus Message, and then update the Azure Storage table (as described in the previous post), and call out to GCP to publish a message to PubSub.

Handling a Service Bus Message

We weren’t originally going to take this approach, but I found that WebJobs play much nicer with a Service Bus message, than with trying to get them to fire on a specific endpoint. In terms of scaleability, adding a queue in the middle can only be a good thing. We’ll square off the contactable endpoint at the end with a function that will simply convert the endpoint to a message on the queue. Here’s what the WebJob Program looks like:

public static void ProcessQueueMessage(
    [ServiceBusTrigger("localsantaqueue")] string message,
    TextWriter log,
    [Table("Delivery")] ICollector<TableItem> outputTable)
{
    // parse the incoming message
    TableItem item = Newtonsoft.Json.JsonConvert.DeserializeObject<TableItem>(message);
    if (string.IsNullOrWhiteSpace(item.PartitionKey)) item.PartitionKey = item.childName.First().ToString();
    if (string.IsNullOrWhiteSpace(item.RowKey)) item.RowKey = item.childName;

    // write to Azure table storage, then push the same item out to GCP
    outputTable.Add(item);
    GCPHelper.AddMessageToPubSub(item).Wait();

    log.WriteLine("DeliveryComplete Finished");
}

Effectively, this is the same logic as the function (obviously, we now have the GCPHelper, and we’ll come to that in a minute). First, here’s the code for the TableItem model:

public class TableItem : TableEntity
{
    public string childName { get; set; }
    public string present { get; set; }
}

As you can see, we need to decorate the members with specific serialisation instructions. The reason being that this model is being used by both GCP (which only needs what you see on the screen) and Azure (which needs the inherited properties).


As described here, you’ll need to install the client package for GCP into the Azure Function App that we created in post one of this series (referenced above):

Install-Package Google.Cloud.PubSub.V1 -Pre

Here’s the helper code that I mentioned:

public static class GCPHelper
{
    public static async Task AddMessageToPubSub(TableItem toSend)
    {
        string jsonMsg = Newtonsoft.Json.JsonConvert.SerializeObject(toSend);

        // point the GCP client libraries at the credentials file (see below)
        Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS",
            Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Test-Project-8d8d83hs4hd.json"));
        GrpcEnvironment.SetLogger(new ConsoleLogger());

        string projectId = "test-project-123456";
        TopicName topicName = new TopicName(projectId, "test");
        SimplePublisher simplePublisher =
            await SimplePublisher.CreateAsync(topicName);
        string messageId =
            await simplePublisher.PublishAsync(jsonMsg);
        await simplePublisher.ShutdownAsync(TimeSpan.FromSeconds(15));
    }
}

I detailed in this post how to create a credentials file; you’ll need to do that to allow the WebJob to be authorised. The Json file referenced above was created using that process.

Azure Config

You’ll need to create an Azure message queue (I’ve called mine localsantaqueue):

I would also download the Service Bus Explorer (I’ll be using it later for testing).

GCP Config

We already have a DataFlow, a PubSub Topic and a BigQuery Database, so GCP should require no further configuration; except to ensure the permissions are correct.

The Service Account user (which I give more details of here) needs to have PubSub permissions. For now, we’ll make them an editor, although in this instance they probably only need publish:


We can do a quick test using the Service Bus Explorer and publish a message to the queue:

The ultimate test is that we can then see this in the BigQuery Table:

Lastly, the Function

This won’t be a completely function-free post. The last step is to create a function that adds a message to the queue:

public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Function, "post")]HttpRequestMessage req,             
    TraceWriter log,
    [ServiceBus("localsantaqueue")] ICollector<string> queue)
{
    log.Info("C# HTTP trigger function processed a request.");
    var parameters = req.GetQueryNameValuePairs();
    string childName = parameters.First(a => a.Key == "childName").Value;
    string present = parameters.First(a => a.Key == "present").Value;
    string json = $"{{ 'childName': '{childName}', 'present': '{present}' }}";            
    queue.Add(json);

    return req.CreateResponse(HttpStatusCode.OK);
}

So now we have an endpoint for our imaginary Xamarin app to call into.
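Until the Xamarin app exists, the endpoint can be exercised from a console app with a plain HttpClient (the function app name, route and key below are all placeholders):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class EndpointSmokeTest
{
    static async Task Main()
    {
        // Placeholder URL - substitute your own Function App name, route and key
        var url = "https://<your-function-app>.azurewebsites.net/api/<your-function>" +
                  "?code=<function-key>&childName=Dave&present=Bike";

        using (var client = new HttpClient())
        {
            // The function is triggered on POST; no body is needed because
            // the parameters travel in the query string
            var response = await client.PostAsync(url, null);
            Console.WriteLine(response.StatusCode);
        }
    }
}
```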


Both GCP and Azure are relatively immature platforms for this kind of interaction. The GCP client libraries seem to be missing functionality (and GCP is still heavily weighted away from .Net). The Azure libraries (especially functions) seem to be in a pickle, too – with strange dependencies that make it very difficult to communicate outside of Azure. As a result, this task (which should have taken an hour or so) took a great deal of time, much of which was completely unnecessary.

Having said that, it is clearly possible to link the two systems, if a little long-winded.


Getting Started With iOS for a C# Programmer – Part Five – Collision

In the previous post in this series, we covered moving an object around the screen. The next thing to consider is how to shoot the aliens, and how they can defeat the player.


The first stage is to create something to collide with. As with other game objects, our aliens will simply be rectangles at this stage. Let’s start with a familiar looking function:

    func createAlien(point: CGPoint) -> SKShapeNode {
        let size = CGSize(width: 40, height: 30)
        let alien = SKShapeNode(rectOf: size)
        alien.position = point
        alien.strokeColor = SKColor(red: 0.0/255.0, green: 200.0/255.0, blue: 0.0/255.0, alpha: 1.0)
        alien.lineWidth = 4
        alien.physicsBody = SKPhysicsBody(rectangleOf: size)
        alien.physicsBody?.affectedByGravity = false
        alien.physicsBody?.isDynamic = true = "alien"
        return alien
    }


So, that will create us a green rectangle – let’s cause them to appear at regular intervals:

    override func didMove(to view: SKView) {
        // Existing set-up from the previous posts goes here
        createAlienSpawnTimer()
    }

    func createAlienSpawnTimer() {
        _ = Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(self.timerUpdate), userInfo: nil, repeats: true)
    }

The scheduled timer calls self.timerUpdate:

    @objc func timerUpdate() {
        let xSpawn = CGFloat(CGFloat(arc4random_uniform(1334)) - CGFloat(667.0))
        let ySpawn = CGFloat(250)
        print (xSpawn, ySpawn)
        let spawnLocation = CGPoint(x: xSpawn, y: ySpawn)
        let newAlien = createAlien(point: spawnLocation)
        self.addChild(newAlien)
    }

So, every second, we’ll get a new alien… But they will just sit there at the minute; let’s get them to try and attack our player:

    override func update(_ currentTime: TimeInterval) {
        // Called before each frame is rendered
        player?.position.x += playerSpeed!
        self.enumerateChildNodes(withName: "bullet") {
            (node, _) in
            node.position.y += 1
        }
        moveAliens()
    }

    func moveAliens() {
        self.enumerateChildNodes(withName: "alien") {
            (node, _) in
            node.position.y -= 1
            if (node.position.x < (self.player?.position.x)!) {
                node.position.x += CGFloat(arc4random_uniform(5)) - 1 // Veer right
            } else if (node.position.x > (self.player?.position.x)!) {
                node.position.x += CGFloat(arc4random_uniform(5)) - 4 // Veer left
            }
        }
    }


The SpriteKit game engine actually handles most of the logic around collisions for you. There are a few changes needed to our game at this stage, though.


SKPhysicsContactDelegate is the protocol that actually surfaces the collision logic, so your GameScene class now needs to look more like this:

class GameScene: SKScene, SKPhysicsContactDelegate {

The game engine needs to be told where this SKPhysicsContactDelegate implementation is; in our case, it’s the same class:

    func createScene(){
        self.physicsBody = SKPhysicsBody(edgeLoopFrom: self.frame)
        self.physicsBody?.isDynamic = false
        self.physicsBody?.affectedByGravity = false
        self.physicsWorld.contactDelegate = self
        self.backgroundColor = SKColor(red: 255.0/255.0, green: 255.0/255.0, blue: 255.0/255.0, alpha: 1.0)
    }

Contact and Collision Masks

The next thing is to tell SpriteKit how these objects interact with each other. There are three concepts here: contact, collision and category.


The category allows each object to adhere to a type of behaviour; for example, the aliens need to pass through each other, but not through bullets; likewise, if we introduced a different type of alien (maybe with a different graphic), it might need the same collision behaviour as the existing ones.


The idea behind contact is that you get notified when two objects intersect each other; in our case, we’ll need to know when aliens intersect bullets, and when aliens intersect the player.


Collision deals with what happens when the objects intersect. Unlike contact, this isn’t about getting notified, but the physical interaction. Maybe we have a game where blocks are pushed out of the way – in which case, we might only need collision, but not contact; or, in our case, we don’t have any physical interaction, because contact between two opposing entities results in one of them being removed.


So, the result of all that is that we need three new properties setting for each new object (collisionAlien, collisionBullet and collisionPlayer are simply UInt32 constants, each occupying its own bit):

        alien.physicsBody?.categoryBitMask = collisionAlien
        alien.physicsBody?.collisionBitMask = 0
        alien.physicsBody?.contactTestBitMask = collisionPlayer = "alien"

. . .

        bullet.physicsBody?.categoryBitMask = collisionBullet
        bullet.physicsBody?.collisionBitMask = 0
        bullet.physicsBody?.contactTestBitMask = collisionAlien = "bullet"

. . .

        player.physicsBody?.categoryBitMask = collisionPlayer
        player.physicsBody?.collisionBitMask = 0
        player.physicsBody?.contactTestBitMask = 0 = "player"
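Since this series is aimed at C# programmers: the bit-mask mechanics are the same as combining [Flags] enum values. Each category occupies a single bit, and a contactTestBitMask is just a bitwise OR of the categories you want notifications for. A quick illustration in C# (the constant names mirror the Swift ones; the values are illustrative):

```csharp
using System;

class BitMaskDemo
{
    // Each category gets its own bit (values illustrative)
    const uint CollisionPlayer = 1u << 0;
    const uint CollisionAlien  = 1u << 1;
    const uint CollisionBullet = 1u << 2;

    static void Main()
    {
        // A bullet asks for contact notifications from aliens only
        uint bulletContactMask = CollisionAlien;

        // SpriteKit effectively performs this AND test when two bodies meet
        Console.WriteLine((bulletContactMask & CollisionAlien) != 0);  // True
        Console.WriteLine((bulletContactMask & CollisionPlayer) != 0); // False
    }
}
```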


Once this is done, you have access to the didBegin function; which, bizarrely named as it is, is the function that handles contact. Before we actually write any code in here, let’s create a helper method to determine whether two nodes have come into contact:

    func AreTwoObjectsTouching(objA: String, nodeA: SKNode, objB: String, nodeB: SKNode, toRemove: String) -> Bool {
        if (objA == && objB == {
            if (toRemove == objA) {
                RemoveGameItem(item: nodeA)
            } else if (toRemove == objB) {
                RemoveGameItem(item: nodeB)
            }
            return true
        } else if (objB == && objA == {
            if (toRemove == objA) {
                RemoveGameItem(item: nodeB)
            } else if (toRemove == objB) {
                RemoveGameItem(item: nodeA)
            }
            return true
        } else {
            return false
        }
    }
Since the accepted standard for iOS is bad naming, I felt it my duty to continue the tradition. This helper method is effectively all the logic that occurs. As you can see, as we don’t know what has touched what, we have reversed checks and then simply remove the stated item (in our case, that is just a game rule). The didBegin function simply calls this:

    func didBegin(_ contact: SKPhysicsContact) {
        print ("bodyA", contact.bodyA.node?.name)
        print ("bodyB", contact.bodyB.node?.name)
        // If player and alien collide then the player dies
        if (AreTwoObjectsTouching(objA: "alien", nodeA: contact.bodyA.node!,
                                  objB: "player", nodeB: contact.bodyB.node!, toRemove: "player")) {
        } else if (AreTwoObjectsTouching(objA: "bullet", nodeA: contact.bodyA.node!,
                                         objB: "alien", nodeB: contact.bodyB.node!, toRemove: "alien")) {
            RemoveGameItem(item: contact.bodyB.node!)
            RemoveGameItem(item: contact.bodyA.node!)
        }
    }
The debug statements at the top are clearly not necessary; however, they do give some insight into what is happening in the game.


We now have a vaguely working game:

You can control the ship and fire; the aliens are removed when they are hit, and the player is removed when it is hit. The next stage is to replace the rectangles with images.


Short Walks – GCP Credit Alerts

One of the things that is quite unnerving when you start using GCP is the billing. Unlike Azure (with your MSDN monthly credits), GCP just has a single introductory promotion and that’s it; consequently, there’s no real way to have your account automatically shut off instead of actually charging you (unlike the regular mails I get from MS telling me they’ve suspended my account until next month).

When you start messing around with BigTable and BigQuery, you can eat up tens, or even hundreds of pounds very quickly, and you might not even realise you’ve done it.

GCP does have a warning, and you can set it to e-mail you at certain intervals within a spending limit:

However, this doesn’t include credit by default. That is, if Google give you a credit to start with (for example, because you’re trying out GCP or, I imagine, if you load your account up beforehand), then that doesn’t get included in your alerts.

Credit Checkbox

There is a checkbox that allows you to switch this behaviour, so that these credit totals are included:

And now you can see how much of your credit you’ve used:

And even receive e-mail warnings:

Disable Billing

One other thing you can do is to disable billing:

Unfortunately, this works differently from Azure, and effectively suspends your project:

Short Walks – NSubstitute – Subclassing and Partial Substitutions

I’ve had this issue a few times recently. Each time I have it, after I’ve worked out what it was, it makes sense, but I keep running into it. The resulting frustration is this post – that way, it’ll come up the next time I search for it on t’internet.

The Error

The error is as follows:

“NSubstitute.Exceptions.CouldNotSetReturnDueToNoLastCallException: ‘Could not find a call to return from.’”

Typically, it seems to occur in one of two circumstances: substituting a concrete class, and partially substituting a concrete class; that is:

var myMock = Substitute.ForPartsOf<MyClass>();

or:

var myMock = Substitute.For<MyClass>();


If you were to manually mock out an interface, how would you do that? Well, say you had IMyClass, you’d just do something like this:

public class MyClassMock : IMyClass 
{
	// New and imaginative behaviour goes here
}

All’s good – you get a brand new implementation of the interface, and you can do anything with it. But what would you do if you were trying to unit test a method inside MyClass that called another method inside MyClass; for example:

public class MyClass : IMyClass
{
	public bool Method1()
	{
		int rowCount = ReadNumberOfRowsInFileOnDisk();
		return rowCount > 10;
	}
	public int ReadNumberOfRowsInFileOnDisk()
	{
		// Opens a file, reads it, and returns the number of rows
	}
}

(Let’s not get bogged down in how realistic this scenario is, or whether or not this is good practice – it illustrates a point)

If you want to unit test Method1(), but don’t want to actually read a file from the disk, you’ll want to replace ReadNumberOfRowsInFileOnDisk(). The only real way that you can do this is to subclass the class; for example:

public class MyClassMock : MyClass
{
	// Overrides of virtual members go here
}

You can now test the behaviour on MyClass, via MyClassMock; however, you still can’t* override the method ReadNumberOfRowsInFileOnDisk() because it isn’t virtual. If you make the method virtual, you can override it in the subclass.
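To make that concrete, here’s a minimal sketch of the virtual / override version (the hard-coded row count in the mock is obviously just for illustration):

```csharp
using System;

public class MyClass
{
    public bool Method1() =>
        ReadNumberOfRowsInFileOnDisk() > 10;

    // virtual, so a subclass (or NSubstitute) can replace it
    public virtual int ReadNumberOfRowsInFileOnDisk()
    {
        // Opens a file, reads it, and returns the number of rows
        throw new NotImplementedException("real file access lives here");
    }
}

public class MyClassMock : MyClass
{
    // No disk access - just pretend the file had 42 rows
    public override int ReadNumberOfRowsInFileOnDisk() => 42;
}
```

Calling Method1() on a MyClassMock now returns true without ever touching the disk.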

The same is true with NSubstitute – if you want to partially mock a class in this way, it follows the same rules as if you were rolling your own.
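Assuming ReadNumberOfRowsInFileOnDisk has been made virtual, the NSubstitute version might look like this sketch – DoNotCallBase stops the real, disk-reading implementation running while the return value is being configured:

```csharp
using NSubstitute;

var myMock = Substitute.ForPartsOf<MyClass>();

// Prevent the base implementation being invoked, then substitute the value -
// this only works because ReadNumberOfRowsInFileOnDisk is virtual
myMock.When(x => x.ReadNumberOfRowsInFileOnDisk()).DoNotCallBase();
myMock.ReadNumberOfRowsInFileOnDisk().Returns(42);

bool result = myMock.Method1(); // uses the substituted 42, not the disk
```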


* There may, or may not, be one or two ways to get around this restriction, but let’s at least agree that they are, at best, unpleasant.