Integration Testing With In-Memory Entity Framework

As part of a project that I’m working on, I’ve been playing around with integration tests. In this post, I’m going to build on a previous post to cover a full end-to-end test that creates and exercises an in-memory representation of the database.

As a quick caveat, there are some concerns over this type of in-memory database testing: an in-memory provider doesn’t behave exactly like a real database, and for complex databases that difference starts to matter. This is a very simple example, but if you apply the technique to something more complex, you may find that it doesn’t work as you’d expect.

Let’s start with setting up the WebApplicationFactory:

            var appFactory = new WebApplicationFactory<Program>()
                .WithWebHostBuilder(host =>
                {
                    host.ConfigureServices(services =>
                    {
                        // find the DbContext registration added by the real app
                        var descriptor = services.SingleOrDefault(
                            d => d.ServiceType ==
                            typeof(DbContextOptions<MyDbContext>));

                        if (descriptor is not null)
                        {
                            services.Remove(descriptor);
                        }

                        // replace it with the in-memory provider (this requires
                        // the Microsoft.EntityFrameworkCore.InMemory package)
                        services.AddDbContext<MyDbContext>(options =>
                        {
                            options.UseInMemoryDatabase("InMemoryDB");
                        });
                    });
                });
            var httpClient = appFactory.CreateClient();

What we’re doing here is replacing the app’s existing DB context registration with our in-memory version. Next, we’ll prepare the payload:

            var viewModel = new MyViewModel()
            {
                MyValue = new Models.NewValue()
                {
                    Name = "test",
                    Uri = "www.test.com",
                    Description = "description"
                }
            };

            var json = JsonSerializer.Serialize(viewModel);
            var content = new StringContent(
                json,
                System.Text.Encoding.UTF8,
                "application/json");

Finally, we can call the endpoint:

            // Act
            using var response = await httpClient.PostAsync(
                "/home/myendpoint", content);

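As an aside, the System.Net.Http.Json extensions that ship with .Net 6 can collapse the serialisation and the POST into a single call; an equivalent sketch:

            using var response = await httpClient.PostAsJsonAsync(
                "/home/myendpoint", viewModel);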
In order to interrogate the context, we need to get the service scope:

            using var scope = appFactory.Services.GetService<IServiceScopeFactory>()!.CreateScope();
            var dbContext = scope.ServiceProvider.GetService<MyDbContext>();

            Assert.NotNull(dbContext);
            Assert.Single(dbContext!.NewValues);
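If you want to go a step further, you could also assert the saved values; a minimal sketch, assuming the NewValue entity exposes the same properties as the payload above:

            var saved = dbContext!.NewValues.Single();
            Assert.Equal("test", saved.Name);
            Assert.Equal("www.test.com", saved.Uri);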

That should be all that you need. In addition to the caveats above, it’s not lightning fast either.

References

Integration Tests in Asp.Net

Stack Overflow question relating to adding a DbContext to an integration test

Testing an Asp.Net Web App Using Integration Tests

Manually adding a DbContext for an integration test

Configuring your models with Entity Framework

One of the tricks I’ve used for a while when setting up EF in a project is to use inheritance to share code, but not objects, across the business and data layers of the application.

Let’s take the following example; first, the business layer model:

namespace MyProject.Models
{
    public class ResourceModel
    {
        // shared properties live here
    }
}

And now the data layer:

namespace MyProject.Data.Resources
{
    public class ResourceEntity : ResourceModel
    {
        // data-layer-only properties live here
    }
}

This gives us a number of advantages: the code is shared between the objects, but the two objects are not the same. However, until recently, this presented an issue. Imagine that we had a property in the model that looked like this:

public TagModel[] Tags { get; } 

TagModel itself might have a similar inheritance structure (a TagEntity, say); so how would you return the entity version from the data layer, given that the inherited property forces the override to return the same type?

Covariant return types

Fortunately, since C# 9 you can use covariant return types (covariance is basically the idea that you can substitute a more derived type where its base type is expected).

What this means is that you can override the relevant property in the sub class (the data layer). First, you’ll need to make the property virtual:

    public class ResourceModel
    {
        public virtual TagModel[] Tags { get; } 
    }

And then just override it:

    public class ResourceEntity : ResourceModel
    {
        public int Id { get; set; }

        public override TagEntity[] Tags { get; }
    }
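To see the covariant return in action, here’s a minimal sketch; the static type of Tags depends on the type of the reference you access it through:

ResourceModel model = new ResourceEntity();
TagModel[] modelTags = model.Tags;     // through the base type, Tags is TagModel[]

ResourceEntity entity = new ResourceEntity();
TagEntity[] entityTags = entity.Tags;  // through the derived type, Tags is TagEntity[]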

You can’t use anything like a List<T> here, because List<T> is invariant: a List<TagEntity> has no conversion to a List<TagModel>. Arrays, by contrast, are covariant, which is why TagEntity[] can stand in for TagModel[].

Hope this helps.

References

https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/proposals/csharp-9.0/covariant-returns?WT.mc_id=DT-MVP-5004601

https://github.com/dotnet/csharplang/issues/49

https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/override?WT.mc_id=DT-MVP-5004601

Using Pub/Sub (or the Fanout Pattern) in Rabbit MQ in .Net 6

I’ve previously spoken and written quite extensively about the pub/sub pattern using the message brokers offered by GCP and Azure. I’ve also posted a few articles about Rabbit MQ.

In this post, I’d like to cover the Rabbit MQ concept of pub/sub.

The Concept

Most message brokers broadly support two types of message exchange. The first type is a queue: that is, a single, persistent list of messages that can be read by one or more consumers. The use case I usually give for this is sending e-mails: imagine you have a massive number of e-mails to send; write them all to a queue, and then set three or four consumers reading the queue and sending the mails.

The second type is publish / subscribe, or pub/sub. This is, essentially, the concept that each consumer has its own private queue. Imagine that you want to notify all the applications in your system that a sales order has been raised: each interested party would register itself as a consumer and, when a message is sent, they would all receive that message. This pattern works well for distributed systems.

As I said, most message brokers broadly support these two concepts, although annoyingly, in different ways and with different labels. Here, we’ll show how RabbitMQ deals with this.

Setting up RabbitMQ

Technology has moved on since the last time I wrote about installing and running RabbitMQ. The following docker command should have you set up in a couple of seconds:

docker run --rm -it --hostname my-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management

Once it’s running, you can view the dashboard at http://localhost:15672. If you haven’t changed anything, the default username / password is guest / guest.

Receiver

Before we get into any actual code, you’ll need to install the RabbitMQ.Client NuGet package.

For pub/sub, the first task is to set up a receiver. The following code should do that for you:

using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory() { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.ExchangeDeclare("SalesOrder", ExchangeType.Fanout);

var result = channel.QueueDeclare("OrderRaised", false, false, false, null);
string queueName = result.QueueName;
channel.QueueBind(queueName, "SalesOrder", "");

Console.WriteLine(result);
  
EventingBasicConsumer consumer = new EventingBasicConsumer(channel);
consumer.Received += Consumer_Received;
  
channel.BasicConsume(queueName, true, consumer);


Console.WriteLine("Receiving...");
Console.ReadLine();

static void Consumer_Received(object sender, BasicDeliverEventArgs e)
{
    var body = e.Body.ToArray();
    var message = Encoding.UTF8.GetString(body);

    Console.WriteLine(message);
}

In the code above, you’ll see that we first declare an exchange called SalesOrder, and tell RabbitMQ that it’s a Fanout exchange.

We then declare a queue and bind it to the exchange – that is, the queue will receive messages sent to that exchange. Notice that we consume from the queue, not from the exchange directly.

Finally, we set up the consumer and tell it what to do when a message is received (in this case, just write the message out to the console window).

Sender

For the sender, the code is much simpler:

static void SendNewMessage(string message)
{
    var factory = new ConnectionFactory() { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    channel.ExchangeDeclare("SalesOrder", ExchangeType.Fanout);

    channel.BasicPublish("SalesOrder", "", false, null, Encoding.UTF8.GetBytes(message));
}

Notice that we don’t have any concept of the queue here; we simply publish to the exchange, and what happens after that is no concern of the publisher.
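As a quick illustration, publishing a few test messages (with one or more receivers running in separate processes) might look like this:

for (int i = 1; i <= 3; i++)
{
    SendNewMessage($"Sales order {i} raised");
}

Every receiver that has bound a queue to the SalesOrder exchange will print all three messages, because the fanout exchange delivers a copy to each bound queue.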

Summary

I keep coming back to Rabbit, especially for demos and proofs of concept, as it runs easily on a local machine and has many more options than the main cloud providers – at least in terms of actual messaging capability. If you’re just learning about message brokers, Rabbit is definitely a good place to start.

Testing an Asp.Net Web App Using Integration Testing

I’ve recently been playing around with a tool called Scrutor. I’m using this in a project and it’s working really well; however, I came across an issue (not related to the tool per se): I had created an interface, but hadn’t yet written a class to implement it. Scrutor realised this was the case and started moaning at me. Obviously, I hadn’t written any unit tests around the non-existent class, but I did have reasonably good test coverage for the rest of the project; however, the project wouldn’t actually run.

To be clear, what I’m saying here is that, despite the test suite that I had running successfully, the program wouldn’t even start. This feels like a very askew state of affairs.

Some irrelevant background: I had a very strange issue with my Lenovo laptop whereby, following a firmware update, the USB-C ports just stopped working – including to accept charge – and so my laptop died. Sadly, I hadn’t followed good practice with commits, and so part of my code was lost.

I’ve previously played around with the concept of integration tests in Asp.Net Core+, so I thought that’s probably what I needed here. There are a few articles and examples out there, but I couldn’t find anything that worked with Asp.Net 6 – so this is that.

In this post, we’ll walk through the steps necessary to add a basic test to your Asp.Net 6 web site. Note that this is not comprehensive – some dependencies will trip this up (e.g. database access); however, it’s a start. The important thing is that the test will fail where there are basic set-up and configuration issues with the web app.

The Test Project

The first step is to configure a test project. Obviously, your dependencies will vary based on what tools you decide to use, but the following will work for Xunit:

<PackageReference Include="Microsoft.AspNetCore.Mvc.Testing" Version="6.0.5" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.2.0" />		
<PackageReference Include="xunit" Version="2.4.1" />
<PackageReference Include="xunit.runner.console" Version="2.4.1" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.4.5" />

(See this post on Xunit libraries for details on the basic Xunit dependency list for .Net 6.)

The key here is to set up the Web Application Factory:

var appFactory = new WebApplicationFactory<Program>();
var httpClient = appFactory.CreateClient();

We’ll come back to some specific issues with this exact code shortly, but basically, we’re setting up the in-memory test harness for the service (which, in this case, is our web-app). You can do exactly the same for an API. The rest of our test then looks like this:

using var response = await httpClient.GetAsync("/");

Assert.True(response.IsSuccessStatusCode);

If your test fails, and you want a fighting chance of working out why, you may wish to read the response content before asserting:

var content = await response.Content.ReadAsStringAsync();

That’s basically it; however, as the code currently stands, you’ll start getting errors (some that you can see, and some that you cannot). It makes sense to make the HttpClient static, or at least raise it to class level, as you only need to create it once.
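One way to do that with Xunit is a class fixture, which creates the factory once and shares it across all the tests in the class; a sketch (the class and test names here are just for illustration):

using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class WebAppTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _httpClient;

    public WebAppTests(WebApplicationFactory<Program> appFactory)
    {
        // the factory (and therefore the client) is shared by the tests in this class
        _httpClient = appFactory.CreateClient();
    }

    [Fact]
    public async Task Root_ReturnsSuccess()
    {
        using var response = await _httpClient.GetAsync("/");
        Assert.True(response.IsSuccessStatusCode);
    }
}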

Accessing the Main Project

The other issue that you’ll get here is that, because we’re using .Net 6 top-level statements in Program.cs, the compiler will tell you that Program is inaccessible. In fact, top-level code does generate an implicit Program class, but it’s internal. This can be worked around by simply adding the following to the end of your Program.cs code:

public partial class Program { } // so you can reference it from tests

(See the references below for details of where this idea came from.)

Summary

In this post, we’ve seen how you can create an integration test that will assert that, at the very least, your web app runs. This method is much faster than constantly having to actually run your project. It obviously makes no assertion about how the app runs, or whether what it’s doing is correct.

References

Example of testing top level statements

GitHub Issue reporting error with top level statements being tested

Stack Overflow question on how to access Program.cs from a test project when using top-level statements

Tutorial video on integration tests

A Cleaner Program.cs / Startup.cs with Scrutor

I’ve previously written about the Scrutor library. However, this post covers something that has long irritated me about using an IoC container. Typically, when you have a fairly complex site, you’ll end up with dozens of statements like the following:

builder.Services.AddScoped<ISearchService, SearchService>();
builder.Services.AddScoped<IResourceDataAccess, ResourceDataAccess>();

It turns out that one of the other things that Scrutor can do for you is to work out which dependencies you need to register. For example, let’s consider the two classes above; let’s say that the first is in the main assembly of the project:

builder.Services.Scan(scan => scan
    .FromCallingAssembly()
        .AddClasses(true)
            .AsMatchingInterface()
            .WithScopedLifetime());

So, what does this do?

Well, FromCallingAssembly points Scrutor at the main assembly (that is, the one from which you’re calling this registration). AddClasses(true) then returns a list of all the public, non-abstract classes in that assembly.

Finally, AsMatchingInterface matches classes with their interfaces, assuming a one-to-one pairing (if you don’t have that, there are other options to cope with it); and WithScopedLifetime registers them all as scoped.
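For example, a one-to-one pairing like the following is picked up by the scan above, just as though we’d written builder.Services.AddScoped<ISearchService, SearchService>() by hand (members omitted for brevity):

public interface ISearchService { }

// AsMatchingInterface pairs SearchService with ISearchService by naming convention
public class SearchService : ISearchService { }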

That worked well; but when I ran it, I realised that the second class (ResourceDataAccess) hadn’t been registered. The reason is that it doesn’t live in the calling assembly, but in a referenced project. An easy way to fix this was:

builder.Services.Scan(scan => scan
    .FromCallingAssembly()
        .AddClasses(true)
            .AsMatchingInterface()
            .WithScopedLifetime()
    .FromAssemblyOf<IResourceDataAccess>()
        .AddClasses(true)
            .AsMatchingInterface()
            .WithScopedLifetime());

Notice that we can simply chain a second scan after the With…Lifetime(); this time, we tell it to register any classes found in the same assembly as IResourceDataAccess.

If we inspect the list of registered services, we can see that this has worked.

What this means is that, each time you add a new class, you don’t have to add a registration in the startup / program file. This is perhaps both a good and a bad thing: arguably, if the list of registrations gets so large that it’s noticeable, then you may have got your decomposition wrong.

Unit Testing a Console Application

I’ve previously written about some Unusual things to do with a Console Application, including creating a game in a console application.

This post covers another unusual thing to want to do: I was recently writing a console application, and wondered how you could test it – that is, without mocking the Console out completely. It turns out that, not only is this possible, it’s actually quite straightforward.

The key here is the pair of methods Console.SetIn and Console.SetOut, which allow you to redirect the console input and output. Let’s take the Hello World example – to unit test this, the first thing to do is to redirect Console.Out:

var writer = new StringWriter();        
Console.SetOut(writer); 

You can now unit test this by simply checking the StringWriter:

        [Fact]
        public void HelloWorldTest()
        {
            // Arrange
            var writer = new StringWriter();        
            Console.SetOut(writer); 

            // Act
            RunHelloWorld();

            // Assert
            var sb = writer.GetStringBuilder();
            Assert.Equal("Hello, World!", sb.ToString().Trim());
        }
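For reference, the method under test here could be as simple as the following (a stand-in for whatever your console app actually does):

        public static void RunHelloWorld()
        {
            Console.WriteLine("Hello, World!");
        }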

You can similarly test an input; let’s take the following method:

        public static void GetName()
        {
            Console.WriteLine("What is your name?");
            string name = Console.ReadLine();
            Console.WriteLine($"Hello, {name}");            
        }

We can test both the input and the output of this method:

        [Fact]
        public void GetNameTest()
        {
            // Arrange
            var writer = new StringWriter();        
            Console.SetOut(writer); 

            var textReader = new StringReader("Susan");
            Console.SetIn(textReader);

            // Act
            GetName();

            // Assert
            var sb = writer.GetStringBuilder();
            var lines = sb.ToString().Split(Environment.NewLine, StringSplitOptions.TrimEntries);
            Assert.Equal("Hello, Susan", lines[1]);

        }

I’m not saying it’s necessarily good practice to unit test what is, essentially, logging – but it’s interesting to know that it’s possible!

Using Scrutor to Implement the Decorator Pattern

I recently came across a very cool library, thanks to this video by Nick Chapsas. The library is Scrutor. In this post, I’m going to run through how it lets you apply the decorator pattern, keeping your code in line with the Open-Closed Principle.

An Overly Complex Hello World App

Let’s start by creating a needlessly complex app that prints Hello World. Instead of simply printing Hello World we’ll use DI to inject a service that prints it. Let’s start with the main program.cs code (in .Net 6):

using Microsoft.Extensions.DependencyInjection;
using scrutortest;

var serviceCollection = new ServiceCollection();

serviceCollection.AddSingleton<ITestLogger, TestLogger>();

var serviceProvider = serviceCollection.BuildServiceProvider();

var testLogger = serviceProvider.GetRequiredService<ITestLogger>();
testLogger.Log("hello world");

Impressive, eh? Here’s the interface that we now rely on:

internal interface ITestLogger
{
    public void Log(string message);
}

And here is our TestLogger class:

    internal class TestLogger : ITestLogger
    {
        public void Log(string message)
        {
            Console.WriteLine(message);
        }
    }

If you implement and run this, you’ll see that it works fine – almost as well as the one-line version. However, let’s imagine that we now have a requirement to extend this class: after every message, we need to display —OVER— for… some reason.

Extending Our Overly Complex App to be Even More Pointless

There are a few ways to do this: you can obviously just change the class itself, but that breaches the Open-Closed Principle. That’s where the decorator pattern comes in. Scrutor allows us to create a new class that looks like this:

    internal class TestLoggerExtended : ITestLogger
    {
        private readonly ITestLogger _testLogger;

        public TestLoggerExtended(ITestLogger testLogger)
        {
            _testLogger = testLogger;
        }

        public void Log(string message)
        {
            _testLogger.Log(message);
            _testLogger.Log("---OVER---");
        }
    }

There are a few things of note here: firstly, we’re implementing the same interface as the original class; secondly, we’re injecting that same interface into our constructor; and finally, in the Log method, we’re calling the original class. Obviously, if you just register this in the DI container as normal, bad things will happen (the container would be asked to resolve ITestLogger into a class that itself depends on ITestLogger); so we use the Scrutor Decorate method:

using Microsoft.Extensions.DependencyInjection;
using scrutortest;

var serviceCollection = new ServiceCollection();

serviceCollection.AddSingleton<ITestLogger, TestLogger>();
serviceCollection.Decorate<ITestLogger, TestLoggerExtended>();

var serviceProvider = serviceCollection.BuildServiceProvider();

var testLogger = serviceProvider.GetRequiredService<ITestLogger>();
testLogger.Log("hello world");

If you now run this, you’ll see hello world followed by ---OVER---: the functionality is very similar to inheritance, but without coupling the two services directly.

Introduction to Azure Chaos Studio

Some time ago, I investigated the concept of chaos engineering. The principle behind chaos engineering is a very simple one: since your software is likely to encounter hostile conditions in the wild, why not introduce those conditions while (and when) you can control them, and deal with the fallout then, instead of at 3am on a Sunday?

At the time, I was trying to deal with an on-site issue where the connection seemed to be randomly dropping. In the end, I solved this by writing something similar to Polly – albeit a much simpler version.

Microsoft have recently released a preview of something called Chaos Studio. It’s very much in its infancy now, but what is there looks very interesting.

The product is essentially divided into two sections: targets and experiments. Targets represent the things upon which you intend to wreak chaos, and experiments are how that chaos will be wrought.

Scope

For this test, I’m going to use a VM. That’s mainly because what you can do with this product is currently limited to VMs, AKS, and Redis.

Create a VM and Check Availability

The first step is to create a VM. To be honest, it doesn’t matter what the VM is, because all we’ll be doing is switching it off. Start by checking the availability – you should be able to do that in Logs – and you should notice 100% availability, unless something has gone catastrophically wrong with your deployment.

Targets

The next step is to configure our target. In Chaos Studio, select Targets and pick the new VM.

Now that you’ve enabled the target, you’ll need to grant Chaos Studio permissions on the VM. Inside the VM blade, select Access Control.

If you don’t grant this access, you’ll get a permissions error when you run the experiment. The next step is to create the experiment: in Chaos Studio, select Experiments and then Create. This will bring up the experiment designer.

Let’s briefly discuss the concepts here: we have steps, branches, and faults. A step is a sequential action that you execute, whilst a branch is a parallel action; that is, actions in different branches can happen at the same time. A fault is what you actually do – so the fault is the chaos! Let’s add a fault.

This asks me two things: what I want the fault to happen on (you can only select targets that have previously been enabled), and what I want the fault to be. In my case, I’ve created a two-step process that turns the machine off, waits a minute, then turns it off again.

Now that the experiment is created, you can start it. You get a warning at this point that basically says: “it’s your foot, and you’re currently pointing a high-powered rifle at it!”

It’s worth bearing in mind that there’s no simulation here: if you run this against production infrastructure, it really will shut it down for you. Once the experiment is running, you can watch its progress update.

You can drill down into the details to see exactly what it’s doing, what stage it has reached, and so on.

The experiment kills the machine for a minute, then waits for a minute, then kills it again. If you have a look at the availability graph afterwards, you should be able to see the effect.

Summary

So far, I’m pretty impressed with this tool. When it’s finished (and by that I mean: when they’ve added the ability to create your own chaos, and expanded the targets to cover the entire Azure ecosystem), it’s going to be a really interesting testing tool.

References

Azure Friday Introduction to Chaos Studio

Azure Monitor – Failures and Triggering an Alert from Application Insights

Azure Application Monitoring allows for a lot more functionality than just Application Insights. In this post, I’m going to walk through setting up and triggering an alert.

Before we trigger an alert, we need to have something to trigger an alert for. If you’re reading this, you may already have an app service deployed into Azure; but for testing purposes, I’ve created a fresh MVC template and added some code to make the Privacy action fail intermittently.
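A sketch of the idea (the exact code here is an assumption, not the original):

public IActionResult Privacy()
{
    // deliberately fail roughly one request in three
    if (Random.Shared.Next(3) == 0)
    {
        throw new InvalidOperationException("Deliberate test exception");
    }

    return View();
}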

This will cause an error when the Privacy menu option is pressed, around 1 in 3 times (ctrl-F5 can give you several).

Failures

If you have Application Insights set up for the app service, you can select the Failures blade:

Looking at this screen, there are three points of particular interest. First, I’ve selected the Exceptions tab – this lets me see the requests and any exceptions that resulted. Second, there was a spike in exceptions where I’ve highlighted. Finally, on the right-hand side of the screen, the exceptions are broken down by type; in this case, I only have one. I can drill into it by selecting the count, or by selecting Samples at the bottom.

The next step is to set up an alert when I get a given number of exceptions.

Creating an Alert

To set up a new alert, select Alerts under Monitoring:

As you can see, it tells us that we have no alerts. In fact, there’s a distinction to be drawn here: what this actually means is that no alerts have been activated. To create a new alert rule, select Create -> Alert rule:

In creating a new alert, there are three main things to consider: scope, condition, and action.

Scope

The scope of the alert is what it can monitor – that is, the thing you want to be alerted about. In our case, that’s the app service:

Condition

The next section is the condition. There are many options here, but the one that we’re interested in for this post is Exceptions:

After selecting this, you’ll be asked about the Signal Logic – that is, what it is about the exceptions that should cause an alert. In our case, we want an alert when the number (count) of exceptions exceeds 3 in a 5-minute period:

Once you select this, it’ll give you an idea of what the alert might cost. In my tests so far, this has been around $1–2 a year.

Actions

The next section is Actions: once the alert fires, what do you want it to do? This brings Action Groups into play. Here, we’ll create a new Action Group:

You can tell it here to send an e-mail, an SMS, and so on:

It is not obvious (at least to me) from the screen above, but on the right-hand side, you need to select OK before it will let you continue.

We’re going to skip the other tabs in the Action Group, jump to Review + create, and then select Create. This will bring you back to the Actions tab, with the new group selected as the default action. You’ll also get a notification that you’re in that action group:

Finally, in Details you can name the alert, and give it a severity; for example:

Once you create this, you’ll be taken back to the Alerts tab – which will still, and confusingly, be empty. You can see your new Alert in the Alert rules section:

Triggering the Alert

To trigger the alert, I’m now going to force my site to crash (as described earlier) – remember that the condition is more than 3 exceptions. Once I get four exceptions, I wait for the alert to trigger; at this point, I should get an e-mail telling me that the alert has been triggered:

Finally, we can see that this triggered in the Alerts section. If you drill into this, it will helpfully tell you why it has triggered:

Once the period has passed, and the exceptions have dropped below 4, you’ll get another mail informing you that the danger has passed.

Summary

We’ve just scratched the surface of Azure Alerts here, but even this gives us a taste for how useful these things are. In future posts, I’m going to drill into this a bit further.

Azure Automation Run Books – Setup

I’ve recently been investigating Azure Automation Runbooks. Essentially, this gives you a way to execute some code (currently PowerShell or Python) to perform basic tasks against your infrastructure.

For this post, we’ll focus on setting up a (default) runbook, and just making it run. Let’s start by creating an automation account:

From here, you can create your automation account:

Once this has been created, it gives you a couple of example runbooks:

If we have a look at the tutorial with identity, it gives us the following PowerShell script:

<#
    .DESCRIPTION
        An example runbook which gets all the ARM resources using the Managed Identity
    .NOTES
        AUTHOR: Azure Automation Team
        LASTEDIT: Oct 26, 2021
#>
"Please enable appropriate RBAC permissions to the system identity of this automation account. Otherwise, the runbook may fail..."
try
{
    "Logging in to Azure..."
    Connect-AzAccount -Identity
}
catch {
    Write-Error -Message $_.Exception
    throw $_.Exception
}
#Get all ARM resources from all resource groups
$ResourceGroups = Get-AzResourceGroup
foreach ($ResourceGroup in $ResourceGroups)
{    
    Write-Output ("Showing resources in resource group " + $ResourceGroup.ResourceGroupName)
    $Resources = Get-AzResource -ResourceGroupName $ResourceGroup.ResourceGroupName
    foreach ($Resource in $Resources)
    {
        Write-Output ($Resource.Name + " of type " +  $Resource.ResourceType)
    }
    Write-Output ("")
}

Looking at this script, it only really does two things: connects to Azure using the managed identity, and then runs through all the resource groups in the subscription, printing out the resources in each.

If you run this, you’ll see a warning in the output (basically saying that you should set up the permissions, or things won’t work).

If you then switch to Errors, you’ll see a confusing error (caused by the fact that we haven’t set up the permissions, and so things don’t work).

In order to correct this, you need to give the runbook appropriate permissions. Head over to the automation account resource and select Identity:

Select Add role assignments.

Because this script lists the resources in the subscription, you’ll need to be generous with the permissions (for example, a role with read access across the whole subscription):

If you run the runbook again now, it should display all the resource groups without any errors.