Introduction to Unit Tests (with examples in .Net) – Part 1 – Structuring Tests

I’m intending this to be the first of a series on Unit Testing. In the series, I’ll discuss the basics of unit tests, the principles behind them, what makes a good unit test, what makes a bad unit test, and the technologies that you may choose to use to help you with them. I will not be covering test-driven development – this is simply about the mechanics and the reasons, not the methodology.

In this article, we’ll talk about what a unit test is, and how you might structure one. We will not be using any external tools for this, and what we do here should be possible in just about any language.

What is a Unit Test?

A unit test is any way in which a single unit of functionality can be verified: it doesn’t have to be written before the code to be a test, and it doesn’t have to be written in a test framework to be a test; it just has to run the code, and have some way of telling that the code has worked (the term “worked” is filled with ambiguity, but we’ll ignore that for the moment).

There are typically three parts to a unit test: they vary based on methodology, but essentially they are that you set up the test, run the test, and check that the test worked; this is sometimes referred to as arrange, act, and assert.

For this section, we’ll be testing the following simple method:

int AddNumbers(int a, int b) => a + b;

Arrange

This is the part of the test where you configure the system under test (SUT). Given that you’re only testing a single piece of functionality, this can sometimes be quite involved, in order to get the system to a place where it is ready to be tested, and actually running in a realistic manner. For our example above, this may look similar to the following:

// Arrange
int a = 4;
int b = 2;

Remember, we’re not using any external tools just yet – the above code could simply be in the Program.cs of a console application, or whatever the equivalent is in your language of choice; that is, just a simple program.

Act

The next part of the test actually exercises the code. The key thing here is that this is a unit test, so you would expect this to test a single unit of functionality; i.e., this should be a single line. In our case, it might look like this:

// Act
int result = AddNumbers(a, b);

We’ll come back to concepts such as mocking later in the series but, for now, let’s just agree that this part should exercise the actual code; for example, there would be no advantage to the following code:

// Act
int result = a + b;

Your test may pass, but all you’re really testing is your compiler / interpreter. Writing tests that don’t actually test anything that you’re interested in is one of the biggest mistakes that I’ve seen with people new to unit testing. I would argue that having no tests at all is more valuable than a test that appears to provide coverage, but does not. After all, if there is no test, then you know that you need to create a test.

Assert

The final part of the test is to validate that the test passes – arguably this is around 50% of what you get from a testing tool like XUnit or JUnit – however, the following will work:

// Assert
System.Diagnostics.Debug.Assert(result == 6);

As, in fact, will the more universal version below (which also works outside of a debug build – Debug.Assert is compiled out of release builds):

// Assert
if (result != 6) throw new Exception("Fail");

Unlike the Act section, you can check that several things are true here; however, the test should be geared towards a single assertion. It’s worth bearing in mind that your assertion is that the functionality works correctly, not that a specific result is produced. This means that the test that we’ve discussed in this post is too specific.

Broadening a Test

Thinking about other possible scenarios, it’s tempting to introduce a randomised element into the test; that is, given two random numbers, the function will return the same result as that which the system independently calculates. I’m not saying this is a bad approach, but it isn’t a consistent one. This kind of test often leads to tests failing on some runs, and passing on others.
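As a sketch of what such a test might look like (using the AddNumbers method from earlier – note that a failure here may not reproduce on the next run):

// Arrange - the inputs change on every run
var random = new Random();
int a = random.Next(-1000, 1000);
int b = random.Next(-1000, 1000);

// Act
int result = AddNumbers(a, b);

// Assert - note that the "independent" calculation here is really just the same logic again
if (result != a + b) throw new Exception($"Fail for inputs {a} and {b}");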

First Principles

I have no doubt that this has fallen out of favour somewhere, but the FIRST acronym provides some useful principles for testing:

Fast. Independent. Repeatable. Self-validating. Timely.

I won’t cover each one of these, but the essence of this principle is that when you run a test, you should have confidence that you can re-run the test with the same result (given the same inputs), and that your tests should be relevant to what you’re testing.

How to Broaden Our Test

Given our constraints, one easy way to broaden the test scope is to simply introduce multiple defined input parameters. In our case, perhaps instead of having two integers to feed in, we have an array and iterate through the array.
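Sticking with plain code (still no test framework), a minimal sketch of that might look like this, with the expected values chosen by hand:

// Arrange - each entry is (a, b, expected)
var testCases = new (int A, int B, int Expected)[]
{
    (4, 2, 6),
    (0, 0, 0),
    (-3, 5, 2),
};

foreach (var (a, b, expected) in testCases)
{
    // Act
    int result = AddNumbers(a, b);

    // Assert
    if (result != expected) throw new Exception($"Fail for inputs {a} and {b}");
}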

Naming a Test

The final thing that I want to cover in this first section is naming. There are many opinions on this, so there’s no right way; however, there probably are wrong ways. In general terms, the test should be named in a way that any person reading it could ascertain what is being tested; one popular version of this is to use the Given/When/Then form:

Given_TwoValidNumbers_When_AddNumbers_Then_CorrectResultIsReturned

Another one, that I personally use, is the format: Method Name/State Under Test/Expected Result; for example:

AddNumbers_TwoValidNumbers_CorrectResultIsReturned

The key here is consistency (i.e., don’t mix and match), and clarity; the following is an example of a bad test name:

AddNumbers_Works

In the next post in this series, we’ll talk about more complex tests, and mocking.

Cyclomatic Complexity – What it is, why you should care, and how to reduce it using the Strategy Pattern

Cyclomatic complexity is one of those terms that makes you think you missed something when you were learning programming. However, the concept is a really simple one. Cyclomatic complexity is simply the number of paths through your code. There are more detailed explanations if you scan the web (involving edges and nodes), but for this post, we’ll just work with that.

Code Metrics

This article is not about .Net per se – what I’m writing here applies to any OO language; however, for the purpose of illustration, I’ll be using C#, and the Code Metrics Window in Visual Studio.

Cyclomatic Complexity

Given what we’ve just said, we can take the following program:

int x = 1;

And we can determine that the cyclomatic complexity is 1 – that is, there is only one way that this code can execute: no branches, no loops, just one statement.

Okay, let’s now add a single branch:
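For instance, a minimal sketch (any single condition will do):

int x = 1;

if (x == 1)
{
    Console.WriteLine("x is one");
}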

There are now two ways this code can execute (in actuality, there is only one, but cyclomatic complexity doesn’t follow the actual logical course – so let’s agree on two for the sake of argument).

We can now add a second condition:
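Again, as a sketch:

int x = 1;

if (x == 1)
{
    Console.WriteLine("x is one");
}
else if (x == 2)
{
    Console.WriteLine("x is two");
}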

The complexity now goes to three.

Okay, so that’s all very interesting, but what does that actually mean – why is it useful to know this number? The main answer to this is testability: if you know there are 3 possible execution paths through the code, then you know that you need a minimum of 3 tests to cover those paths. There’s a second part to that, which is that a method that requires one to three tests might be easy to change or debug; one that requires ten to twenty tests is definitely not.

So, the question is, how can we reduce this figure?

Reducing Cyclomatic Complexity

There are, obviously, more answers to this than would fit in this post, but here I’m going to focus on bringing in a strategy pattern.

Let’s consider the following code:

    // Night
    switch (thingToAutomate)
    {
        case "Door":
            Console.WriteLine("Lock");
            break;

        case "Window":
            Console.WriteLine("Close");
            break;

        case "TV":
            Console.WriteLine("Turn off");
            break;

        case "Lights":
            Console.WriteLine("Turn off");
            break;
    }

This code has a cyclomatic complexity of 5 – there are 4 options, but also the possibility that none of them match.

The essence behind the strategy pattern is just that you assign functionality using polymorphism. If we consider the code above, doing that would actually increase the number of lines of code and, at the project level, the total cyclomatic complexity. However, what if, in addition to the automation routine at night, we needed one for the morning:

// Morning
switch (thingToAutomate)
{
    case "Door":
        Console.WriteLine("Unlock");
        break;

    case "Window":
        Console.WriteLine("Open");
        break;

    case "TV":        
    case "Lights":        
        break;
}

Okay, now our cyclomatic complexity increases (to 9) – and will increase every time we need to vary our behaviour based on the thing that we’re automating. Instead, let’s consider the following:

    internal interface IThingToAutomateStrategy
    {
        void Morning();
        void Night();
    }

Let’s now imagine that we implement the interface for each thing:

    internal class Door : IThingToAutomateStrategy
    {
        public void Morning()
        {
            Console.WriteLine("Unlock");
        }

        public void Night()
        {
            Console.WriteLine("Lock");
        }
    }

    . . .

Admittedly, this does increase the lines of code, but we end up with simpler code, and it has a lower overall cyclomatic complexity:

    internal class Automation
    {
        public IThingToAutomateStrategy AutomateStrategy { get; set; }

        public void AutomateMorning()
        {
            AutomateStrategy.Morning();            
        }

        public void AutomateNight()
        {
            AutomateStrategy.Night();
        }
    }

We can then use this in our program like this:

IThingToAutomateStrategy automateStrategy;

switch (thingToAutomate)
{
    case "Door":
        automateStrategy = new Door();
        break;

    case "Window":
        automateStrategy = new Window();
        break;

    case "TV":
        automateStrategy = new TV();
        break;

    case "Lights":
        automateStrategy = new Lights();
        break;

    default:
        // without this, automateStrategy would be unassigned for an unrecognised input, and the code wouldn't compile
        throw new ArgumentException($"Unknown thing to automate: {thingToAutomate}");
}

automateStrategy.Morning();
automateStrategy.Night();

The cyclomatic complexity of this is back to 5; but the best part is that it doesn’t increase. Imagine the following new method:

    internal interface IThingToAutomateStrategy
    {
        void Morning();
        void Darkness();
        void Night();
    }

And the implementation:

    internal class Lights : IThingToAutomateStrategy
    {
        public void Darkness()
        {
            Console.WriteLine("Switch on");
        }

        public void Morning()
        {
            // Nothing to do here
        }

        public void Night()
        {
            Console.WriteLine("Turn Off");
        }
    }

It’s worth pointing out that we’re in breach of the Interface Segregation Principle (ISP) here – but since we’re only doing it to make a point, we’ll agree to let it slide.

Adding this to the code flow doesn’t affect the cyclomatic complexity score of that code file at all:

IThingToAutomateStrategy automateStrategy;

switch (thingToAutomate)
{
    case "Door":
        automateStrategy = new Door();
        break;

    case "Window":
        automateStrategy = new Window();
        break;

    case "TV":
        automateStrategy = new TV();
        break;

    case "Lights":
        automateStrategy = new Lights();
        break;

    default:
        // as before, an unrecognised input needs handling
        throw new ArgumentException($"Unknown thing to automate: {thingToAutomate}");
}

automateStrategy.Morning();
automateStrategy.Darkness();
automateStrategy.Night();

It’s worth noting that it does increase the overall project complexity, as each thing that we automate adds a single additional code path.

References

https://docs.microsoft.com/en-us/visualstudio/code-quality/code-metrics-cyclomatic-complexity?WT.mc_id=DT-MVP-5004601

https://docs.microsoft.com/en-us/visualstudio/code-quality/how-to-generate-code-metrics-data?WT.mc_id=DT-MVP-5004601

https://codinghelmet.com/articles/reduce-cyclomatic-complexity-composite-design-pattern

https://www.c-sharpcorner.com/UploadFile/shinuraj587/strategy-pattern-in-net/

https://codewithshadman.com/strategy-pattern-csharp/

https://github.com/pcmichaels/StrategyExample

Integration Testing With In-Memory Entity Framework

As part of a project that I’m working on, I’ve been playing around with integration tests. In this post, I’m going to build on a previous post to cover a full end-to-end test that creates and tests an in-memory representation of the database.

As a quick caveat, there are some concerns over this type of in-memory database: for complex databases, those concerns may well be justified, but this is a very simple example. Even so, you may find that if you do try to apply this to something more complex, it doesn’t work as you’d expect.

Let’s start with setting up the WebApplicationFactory:

var appFactory = new WebApplicationFactory<Program>()
    .WithWebHostBuilder(host =>
    {
        host.ConfigureServices(services =>
        {
            // find the app's real DbContext registration...
            var descriptor = services.SingleOrDefault(
                d => d.ServiceType ==
                typeof(DbContextOptions<MyDbContext>));

            if (descriptor != null)
                services.Remove(descriptor);

            // ...and replace it with the in-memory provider
            services.AddDbContext<MyDbContext>(options =>
            {
                options.UseInMemoryDatabase("InMemoryDB");
            });
        });
    });
var httpClient = appFactory.CreateClient();

What we’re basically doing here is replacing the existing DB context with our in-memory version. Next, we’ll prepare the payload:

var myViewModel = new MyViewModel()
{
    MyValue = new Models.NewValue()
    {
        Name = "test",
        Uri = "www.test.com",
        Description = "description"
    }
};

var json = JsonSerializer.Serialize(myViewModel);
var content = new StringContent(
    json,
    System.Text.Encoding.UTF8,
    "application/json");

Finally, we can call the endpoint:

// Act
using var response = await httpClient.PostAsync(
    "/home/myendpoint", content);

In order to interrogate the context, we need to get the service scope:

var scope = appFactory.Services.GetService<IServiceScopeFactory>()!.CreateScope();
var dbContext = scope.ServiceProvider.GetService<MyDbContext>();

Assert.NotNull(dbContext);
Assert.Single(dbContext!.NewValues);

That should be all that you need. In addition to the caveats above, it’s not lightning fast either.

References

Integration Tests in Asp.Net

StackOverflow question relating to adding a DbContext to an integration test

Testing an Asp.Net Web App Using Integration Testing

Manually adding a DbContext for an integration test

Configuring your models with Entity Framework

One of the tricks I’ve used for a while when setting EF up in a project is to use inheritance in order to share code, but not objects, across the business and data layers of the application.

Let’s take the following example; first, the model:

namespace MyProject.Models
{
    public class ResourceModel
    {
        // ... shared properties ...
    }
}

And now the data layer:

namespace MyProject.Data.Resources
{
    public class ResourceEntity : ResourceModel
    {
        // ... data-specific properties ...
    }
}

This gives us a number of advantages: the code is shared between the objects, but the two objects are not the same. However, until recently, this presented an issue. Imagine that we had a property in the model that looked like this:

public TagModel[] Tags { get; } 

TagModel itself might have a similar inheritance structure; however, how would you return the entity equivalent from the data layer, when the inheritance forces the property to return the same type?

Covariant Return Types

Fortunately, since C# 9 you can use covariant return types (covariance is basically the idea that you can substitute a sub-type where its base type is expected).

What this means is that you can override the relevant property in the sub class (the data layer). First, you’ll need to make the property virtual:

    public class ResourceModel
    {
        public virtual TagModel[] Tags { get; } 
    }

And then just override it:

    public class ResourceEntity : ResourceModel
    {
        public int Id { get; set; }

        public override TagEntity[] Tags { get; }
    }

You can’t use anything like a List<T> here, because (for example) a List<TagEntity> has no inheritance relationship to a List<TagModel> – arrays are covariant, but generic classes are not.
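As an illustrative sketch of what this buys you (assuming TagEntity inherits from TagModel in the same way as the resource classes):

ResourceEntity entity = new ResourceEntity();
TagEntity[] entityTags = entity.Tags;   // no cast needed - the override returns the derived type

ResourceModel model = entity;
TagModel[] modelTags = model.Tags;      // code written against the model still works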

Hope this helps.

References

https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/proposals/csharp-9.0/covariant-returns?WT.mc_id=DT-MVP-5004601

https://github.com/dotnet/csharplang/issues/49

https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/override?WT.mc_id=DT-MVP-5004601

Using Pub/Sub (or the Fanout Pattern) in Rabbit MQ in .Net 6

I’ve previously spoken and written quite extensively about the pub/sub pattern using message brokers such as GCP and Azure. I’ve also posted a few articles about Rabbit MQ.

In this post, I’d like to cover the Rabbit MQ concept of pub/sub.

The Concept

Most message brokers broadly support two types of message exchange. The first type is a queue: that is, a single, persistent list of messages that can be read by one, or multiple, consumers. The use case I usually give for this is sending e-mails: imagine you have a massive number of e-mails to send: write them all to a queue, and then set 3 or 4 consumers reading the queue and sending the mails.

The second type is publish / subscribe, or pub/sub. This is, essentially, the concept that each consumer has its own private queue. Imagine that you want to notify all the applications in your system that a sales order has been raised: each interested party would register itself as a consumer and, when a message is sent, they would all receive that message. This pattern works well for distributed systems.

As I said, most message brokers broadly support these two concepts, although annoyingly, in different ways and with different labels. Here, we’ll show how RabbitMQ deals with this.

Setting up RabbitMQ

Technology has moved on since the last time I wrote about installing and running RabbitMQ. The following docker command should have you set up in a couple of seconds:

docker run --rm -it --hostname my-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management

Once it’s running, you can view the dashboard at http://localhost:15672. If you haven’t changed anything, the default username / password is guest / guest.

Receiver

Before we get into any actual code, you’ll need to install the RabbitMQ.Client NuGet package.

For pub/sub, the first task is to set up a receiver. The following code should do that for you:

using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory() { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.ExchangeDeclare("SalesOrder", ExchangeType.Fanout);

var result = channel.QueueDeclare("OrderRaised", false, false, false, null);
string queueName = result.QueueName;
channel.QueueBind(queueName, "SalesOrder", "");

Console.WriteLine(result);
  
EventingBasicConsumer consumer = new EventingBasicConsumer(channel);
consumer.Received += Consumer_Received;
  
channel.BasicConsume(queueName, true, consumer);


Console.WriteLine("Receiving...");
Console.ReadLine();

static void Consumer_Received(object sender, BasicDeliverEventArgs e)
{
    var body = e.Body.ToArray();
    var message = Encoding.UTF8.GetString(body);

    Console.WriteLine(message);
}

In the code above, you’ll see that we first set up an exchange called SalesOrder, and we tell that exchange that it’s a Fanout exchange.

We then declare a queue, and bind it to the exchange – that is, the queue will receive messages sent to that exchange. Notice that we consume from the queue, not from the exchange directly.

Finally, we set up the consumer, and tell it what to do when a message is received (in this case, just output to the console window).

Sender

For the sender, the code is much simpler:

static void SendNewMessage(string message)
{
    var factory = new ConnectionFactory() { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    channel.ExchangeDeclare("SalesOrder", ExchangeType.Fanout);

    channel.BasicPublish("SalesOrder", "", false, null, Encoding.UTF8.GetBytes(message));
}

Notice that we don’t have any concept of the queue here, we simply publish to the exchange – what happens after that is no concern of the publisher.
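As a hypothetical usage (the message text here is arbitrary), every consumer whose queue is bound to the SalesOrder exchange will receive both of these:

SendNewMessage("Sales order 1001 raised");
SendNewMessage("Sales order 1002 raised");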

Summary

I keep coming back to Rabbit – especially for demos and concepts, as it runs locally easily, and has many more options than the main cloud providers – at least in terms of actual messaging capability. If you’re just learning about message brokers, Rabbit is definitely a good place to start.

Testing an Asp.Net Web App Using Integration Testing

I’ve recently been playing around with a tool called Scrutor. I’m using this in a project and it’s working really well; however, I came across an issue (not related to the tool per se). I had created an interface, but hadn’t yet written a class to implement it. Scrutor realised this was the case and started moaning at me. Obviously, I hadn’t written any unit tests around the non-existent class, but I did have reasonably good test coverage for the rest of the project; however, the project wouldn’t actually run.

To be clear, what I’m saying here is that, despite the test suite that I had running successfully, the program wouldn’t even start. This feels like a very askew state of affairs.

Some irrelevant background: I had a very strange issue with my Lenovo laptop whereby, following a firmware update, the USB-C ports just stopped working – including for charging – so my laptop died. Sadly, I hadn’t followed good practice with commits, and so part of my code was lost.

I’ve previously played around with the concept of integration tests in Asp.Net Core+, so I thought that’s probably what I needed here. There are a few articles and examples out there, but I couldn’t find anything that worked with Asp.Net 6 – so this is that.

In this post, we’ll walk through the steps necessary to add a basic test to your Asp.Net 6 web site. Note that this is not comprehensive – some dependencies will trip this up (e.g. database access); however, it’s a start. The important thing is that the test will fail where there are basic set-up and configuration issues with the web app.

The Test Project

The first step is to configure a test project. Obviously, your dependencies will vary based on what tools you decide to use, but the following will work for Xunit:

<PackageReference Include="Microsoft.AspNetCore.Mvc.Testing" Version="6.0.5" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.2.0" />		
<PackageReference Include="xunit" Version="2.4.1" />
<PackageReference Include="xunit.runner.console" Version="2.4.1" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.4.5" />

(See this post on Xunit libraries for details on the basic Xunit dependency list for .Net 6.)

The key here is to set-up the Web Application Factory:

var appFactory = new WebApplicationFactory<Program>();
var httpClient = appFactory.CreateClient();

We’ll come back to some specific issues with this exact code shortly, but basically, we’re setting up the in-memory test harness for the service (which in this case, is our web-app). You can obviously do this for an API in exactly the same manner. The rest of our test then looks like this:

using var response = await httpClient.GetAsync("/");

Assert.True(response.IsSuccessStatusCode);

If your test fails, and you want a fighting chance of working out why, you may wish to replace the assertion with this:

var content = await response.Content.ReadAsStringAsync();

That’s basically it; however, as that currently stands, you’ll start getting errors (some that you can see, and some that you cannot). It makes sense to make the HttpClient static, or at least raise it to the class level, as you only need to actually create it once.
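One way to do that with Xunit is the class fixture pattern – a sketch (the class and test names here are illustrative):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class SmokeTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _httpClient;

    public SmokeTests(WebApplicationFactory<Program> appFactory)
    {
        // the factory (and so the client) is created once and shared across the tests in this class
        _httpClient = appFactory.CreateClient();
    }

    [Fact]
    public async Task Root_ReturnsSuccess()
    {
        using var response = await _httpClient.GetAsync("/");
        Assert.True(response.IsSuccessStatusCode);
    }
}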

Accessing the Main Project

The other issue that you’ll get here is that, because we’re using .Net 6 top level statements in Program.cs, it will tell you that it’s inaccessible. In fact, top level code does generate an implicit class, but it’s internal. This can be worked around by simply adding the following to the end of your Program.cs code:

public partial class Program { } // so you can reference it from tests

(See the references below for details of where this idea came from.)

Summary

In this post, we’ve seen how you can create an integration test that will assert that, at the very least, your web app runs. This method is much faster than constantly having to actually run your project. It obviously makes no assertions about how it runs, and whether what it’s doing is correct.

References

Example of testing top level statements

GitHub Issue reporting error with top level statements being tested

Stack Overflow question on how to access Program.cs from in program using top level statements

Tutorial video on integration tests

A Cleaner Program.cs / Startup.cs with Scrutor

I’ve previously written about the Scrutor library. However, this post covers something that has long irritated me about using an IoC container. Typically, when you have a fairly complex site, you’ll end up with dozens of statements like the following:

builder.Services.AddScoped<ISearchService, SearchService>();
builder.Services.AddScoped<IResourceDataAccess, ResourceDataAccess>();

It turns out that one of the other things that Scrutor can do for you is to work out which dependencies you need to register. For example, let’s consider the two classes above; let’s say that the first is in the main assembly of the project:

builder.Services.Scan(scan => scan
    .FromCallingAssembly()
        .AddClasses(true)
            .AsMatchingInterface()
            .WithScopedLifetime());

So, what does this do?

Well, FromCallingAssembly points it at the main assembly (that is, the one from which you’re calling this registration). AddClasses(true) then selects all of the public, non-abstract classes in that assembly (passing true restricts it to public types).

Finally, AsMatchingInterface matches classes with their interfaces, assuming a one-to-one pairing (if you don’t have that, there are other options, such as AsImplementedInterfaces, to cope with it); and WithScopedLifetime will register them as scoped.

That worked well, but when I ran it, I realised that the second class (ResourceDataAccess) hadn’t registered. The reason being that it wasn’t from the calling assembly, but lived in a referenced project. An easy way to fix this was:

builder.Services.Scan(scan => scan
    .FromCallingAssembly()
        .AddClasses(true)
            .AsMatchingInterface()
            .WithScopedLifetime()
    .FromAssemblyOf<IResourceDataAccess>()
        .AddClasses(true)
            .AsMatchingInterface()
            .WithScopedLifetime());

Notice that we can simply return to the start following the With…Lifetime(), and this time, we tell it to register any classes found in the same assembly as IResourceDataAccess.

If we look at the list of registered services, we can see this has worked.

What this means is that, each time you add a new class, you don’t have to add a registration in the startup / program file. This is perhaps a good and bad thing: arguably, if the list gets so large that it’s noticeable, then you might have gauged your decomposition incorrectly.

Unit Testing a Console Application

I’ve previously written about some Unusual things to do with a Console Application, including creating a game in a console application.

This post covers another unusual thing to want to do; I was recently writing a console application, and wondered how you could test it – that is, without mocking the Console out completely. It turns out that, not only is this possible, it’s actually quite straightforward.

The key here is the pair of methods Console.SetIn and Console.SetOut, which allow you to redirect the console input and output. Let’s take the Hello World example – to unit test this, the first thing to do is to redirect Console.Out:

var writer = new StringWriter();        
Console.SetOut(writer); 
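The RunHelloWorld method exercised by the test below isn’t shown here; assume something minimal like the following:

public static void RunHelloWorld()
{
    Console.WriteLine("Hello, World!");
}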

You can now unit test this by simply checking the StringWriter:

        [Fact]
        public void HelloWorldTest()
        {
            // Arrange
            var writer = new StringWriter();        
            Console.SetOut(writer); 

            // Act
            RunHelloWorld();

            // Assert
            var sb = writer.GetStringBuilder();
            Assert.Equal("Hello, World!", sb.ToString().Trim());
        }

You can similarly test an input; let’s take the following method:

        public static void GetName()
        {
            Console.WriteLine("What is your name?");
            string name = Console.ReadLine();
            Console.WriteLine($"Hello, {name}");            
        }

We can test both the input and the output of this method:

        [Fact]
        public void GetNameTest()
        {
            // Arrange
            var writer = new StringWriter();        
            Console.SetOut(writer); 

            var textReader = new StringReader("Susan");
            Console.SetIn(textReader);

            // Act
            GetName();

            // Assert
            var sb = writer.GetStringBuilder();
            var lines = sb.ToString().Split(Environment.NewLine, StringSplitOptions.TrimEntries);
            Assert.Equal("Hello, Susan", lines[1]);

        }

I’m not saying it’s necessarily good practice to unit test what is, essentially, logging – but it’s interesting to know that it’s possible!

Using Scrutor to Implement the Decorator Pattern

I recently came across a very cool library, thanks to this video by Nick Chapsas. The library is Scrutor. In this post, I’m going to run through a version of the Open-Closed Principle that this makes possible.

An Overly Complex Hello World App

Let’s start by creating a needlessly complex app that prints Hello World. Instead of simply printing Hello World we’ll use DI to inject a service that prints it. Let’s start with the main program.cs code (in .Net 6):

using Microsoft.Extensions.DependencyInjection;
using scrutortest;

var serviceCollection = new ServiceCollection();

serviceCollection.AddSingleton<ITestLogger, TestLogger>();

var serviceProvider = serviceCollection.BuildServiceProvider();

var testLogger = serviceProvider.GetRequiredService<ITestLogger>();
testLogger.Log("hello world");

Impressive, eh? Here’s the interface that we now rely on:

internal interface ITestLogger
{
    public void Log(string message);
}

And here is our TestLogger class:

    internal class TestLogger : ITestLogger
    {
        public void Log(string message)
        {
            Console.WriteLine(message);
        }
    }

If you implement this, and run it, you’ll see that it works fine – almost as well as the one line version. However, let’s imagine that we now have a requirement to extend this class. After every message, we need to display —OVER— for… some reason.

Extending Our Overly Complex App to be Even More Pointless

There are a few ways to do this: you can obviously just change the class itself, but that breaches the Open-Closed Principle. That’s where the Decorator Pattern comes in; it allows us to create a new class that looks like this:

    internal class TestLoggerExtended : ITestLogger
    {
        private readonly ITestLogger _testLogger;

        public TestLoggerExtended(ITestLogger testLogger)
        {
            _testLogger = testLogger;
        }

        public void Log(string message)
        {
            _testLogger.Log(message);
            _testLogger.Log("---OVER---");
        }
    }

There are a few things of note here: firstly, we’re implementing the same interface as the main / first class; secondly, we’re injecting said interface into our constructor; and finally, in the Log method, we’re calling the original class. Obviously, if you just register this in the DI container as normal, bad things will happen; so we use the Scrutor Decorate method:

using Microsoft.Extensions.DependencyInjection;
using scrutortest;

var serviceCollection = new ServiceCollection();

serviceCollection.AddSingleton<ITestLogger, TestLogger>();
serviceCollection.Decorate<ITestLogger, TestLoggerExtended>();

var serviceProvider = serviceCollection.BuildServiceProvider();

var testLogger = serviceProvider.GetRequiredService<ITestLogger>();
testLogger.Log("hello world");

If you now run this, you’ll see that the functionality is very similar to inheritance, but you haven’t coupled the two services directly: the output is hello world, followed by ---OVER---.

Introduction to Azure Chaos Studio

Some time ago, I investigated the concept of chaos engineering. The principle behind Chaos Engineering is a very simple one: since your software is likely to encounter hostile conditions in the wild, why not introduce those conditions while (and when) you can control them, and deal with the fallout then, instead of at 3am on a Sunday.

At the time, I was trying to deal with an on-site issue where the connection seemed to be randomly dropping. In the end, I solved this by writing something similar to Polly – albeit a much simpler version.

Microsoft have recently released a preview of something called Chaos Studio. It’s very much in its infancy now, but what is there looks very interesting.

The product is essentially divided into two sections: targets and experiments. Targets represent the thing that you intend to wreak chaos upon, and experiments are how that chaos will be wrought.

Scope

For this test, I’m going to use a VM. That’s mainly because what you can do with this product is currently limited to VMs, AKS, and Redis.

Create a VM and Check Availability

The first step is to create a VM. To be honest, it doesn’t matter what the VM is, because all we’ll be doing is switching it off. Start by checking the availability – you should be able to do that in Logs – and you should notice 100% availability, unless something has gone catastrophically wrong with your deployment.

Targets

The next step is to configure our target. In chaos studio, select Targets and pick the new VM:

Now that you’ve enabled the targets, you’ll need to grant the chaos studio permission to the VMs. Inside the VM blade, select Access Control:

If you don’t grant this access, you’ll get a permissions error when you run the experiment. The next step is to create the experiment. In Chaos Studio, select Experiments and then Create:

This will bring up a screen similar to the following:

Let’s discuss the concepts here a little: we have steps, branches, and faults. A step is a sequential action that you will execute, whilst a branch is a parallel action; that is, actions in different branches can happen at the same time. A fault is what you actually do – so the fault is the chaos! Let’s add a fault:

This asks me two things: what do I want the fault to happen on (you can only select targets that have previously been created), and what do I want the fault to be. In my case, I’ve created a two-step process that turns the machine off, waits a minute, then turns it off again:

Now that the experiment is created, you can start it. You get a warning at this point that basically says “it’s your foot, and you’re currently pointing a high powered rifle at it!”:

If you now run this – and it’s worth bearing in mind that there’s no simulation here: if you do this on production infrastructure, it will genuinely shut it down for you – then you’ll see the status of it running:

You can drill down into the details to see exactly what it’s doing, what stage, etc.:

The experiment kills the machine for 1 minute, then waits for a minute, then kills it again. If you have a look at the availability graph, you should be able to see that:

Summary

So far, I’m pretty impressed with this tool. When they’ve finished (and by that, I mean, they’ve given the ability to create your own chaos, and have expanded the targets to cover the entire Azure ecosystem), it’s going to be a really interesting testing tool.

References

Azure Friday Introduction to Chaos Studio