Category Archives: .Net 6

Using Pub/Sub (or the Fanout Pattern) in Rabbit MQ in .Net 6

I’ve previously spoken and written quite extensively about the pub/sub pattern using message brokers such as GCP Pub/Sub and Azure Service Bus. I’ve also posted a few articles about Rabbit MQ.

In this post, I’d like to cover the Rabbit MQ concept of pub/sub.

The Concept

Most message brokers broadly support two types of message exchange. The first type is a queue: that is, a single, persistent list of messages that can be read by one or more consumers. The use case I usually reach for here is sending e-mails: imagine you have a huge number of e-mails to send; write them all to a queue, and then set three or four consumers reading the queue and sending the mails.

The second type is publish / subscribe, or pub/sub. This is, essentially, the concept that each consumer has its own private queue. Imagine that you want to notify all the applications in your system that a sales order has been raised: each interested party would register itself as a consumer and, when a message is sent, they would all receive that message. This pattern works well for distributed systems.

As I said, most message brokers broadly support these two concepts, although, annoyingly, in different ways and with different labels. Here, we’ll show how Rabbit MQ deals with this.

Setting up RabbitMQ

Technology has moved on since the last time I wrote about installing and running it. The following docker command should have you set up in a couple of seconds:

docker run --rm -it --hostname my-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management

Once it’s running, you can view the management dashboard at http://localhost:15672. If you haven’t changed anything, the default username / password is guest / guest.

Receiver

Before we get into any actual code, you’ll need to install the RabbitMQ.Client NuGet package.

For pub/sub, the first task is to set up a receiver. The following code should do that for you:

using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory() { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Declare a fanout exchange: every queue bound to it receives a copy of each message
channel.ExchangeDeclare("SalesOrder", ExchangeType.Fanout);

// Declare our queue (durable, exclusive, autoDelete, arguments) and bind it to the exchange
var result = channel.QueueDeclare("OrderRaised", false, false, false, null);
string queueName = result.QueueName;
channel.QueueBind(queueName, "SalesOrder", "");

Console.WriteLine(result);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += Consumer_Received;

// autoAck is true here, so messages are acknowledged as soon as they're delivered
channel.BasicConsume(queueName, true, consumer);

Console.WriteLine("Receiving...");
Console.ReadLine();

static void Consumer_Received(object sender, BasicDeliverEventArgs e)
{
    var body = e.Body.ToArray();
    var message = Encoding.UTF8.GetString(body);

    Console.WriteLine(message);
}

In the code above, you’ll see that we first set up an exchange called SalesOrder, and we tell that exchange that it’s a Fanout exchange.

We then declare a queue and bind it to the exchange – that is, the queue will receive messages sent to that exchange. Notice that we consume from the queue, not from the exchange directly.

Finally, we set up the consumer and tell it what to do when a message is received (in this case, just write it to the console window).

Sender

For the sender, the code is much simpler:

static void SendNewMessage(string message)
{
    var factory = new ConnectionFactory() { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    // Declaring the exchange is idempotent; it just makes sure it exists, whichever side starts first
    channel.ExchangeDeclare("SalesOrder", ExchangeType.Fanout);

    // Publish straight to the exchange (fanout ignores the routing key); no queue is mentioned here
    channel.BasicPublish("SalesOrder", "", false, null, Encoding.UTF8.GetBytes(message));
}

Notice that we don’t have any concept of the queue here; we simply publish to the exchange – what happens after that is of no concern to the publisher.
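For completeness, here’s a minimal, hypothetical way to drive the sender from a console app; each line you type is published, and every receiver that’s currently running gets its own copy:

Console.WriteLine("Type messages to publish (blank line to quit):");

string? input;
while (!string.IsNullOrWhiteSpace(input = Console.ReadLine()))
{
    // Every queue bound to the SalesOrder exchange receives a copy of this message
    SendNewMessage(input);
}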

Summary

I keep coming back to Rabbit – especially for demos and concepts, as it runs locally easily, and has many more options than the main cloud providers – at least in terms of actual messaging capability. If you’re just learning about message brokers, Rabbit is definitely a good place to start.

Testing an Asp.Net Web App Using Integration Testing

I’ve recently been playing around with a tool called Scrutor. I’m using this in a project and it’s working really well; however, I came across an issue (not related to the tool per se). I had created an interface, but hadn’t yet written a class to implement it. Scrutor realised this was the case and started moaning at me. Obviously, I hadn’t written any unit tests around the non-existent class, but I did have reasonably good test coverage for the rest of the project; however, the project wouldn’t actually run.

To be clear, what I’m saying here is that, despite my test suite running successfully, the program wouldn’t even start. This feels like a very askew state of affairs.

Some irrelevant background: I had a very strange issue with my Lenovo laptop whereby, following a firmware update, the USB-C ports just stopped working – including for charging – so my laptop died. Sadly, I hadn’t followed good practice with commits, and so part of my code was lost.

I’ve previously played around with the concept of integration tests in Asp.Net Core+, so I thought that’s probably what I needed here. There are a few articles and examples out there, but I couldn’t find anything that worked with Asp.Net 6 – so this is that.

In this post, we’ll walk through the steps necessary to add a basic test to your Asp.Net 6 web site. Note that this is not comprehensive – some dependencies will trip this up (e.g. database access); however, it’s a start. The important thing is that the test will fail where there are basic set-up and configuration issues with the web app.

The Test Project

The first step is to configure a test project. Obviously, your dependencies will vary based on what tools you decide to use, but the following will work for Xunit:

<PackageReference Include="Microsoft.AspNetCore.Mvc.Testing" Version="6.0.5" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.2.0" />		
<PackageReference Include="xunit" Version="2.4.1" />
<PackageReference Include="xunit.runner.console" Version="2.4.1" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.4.5" />

(See this post on Xunit libraries for details on the basic Xunit dependency list for .Net 6.)

The key here is to set up the Web Application Factory:

var appFactory = new WebApplicationFactory<Program>();
var httpClient = appFactory.CreateClient();

We’ll come back to some specific issues with this exact code shortly but, basically, we’re setting up an in-memory test harness for the service (which, in this case, is our web app). You can obviously do this for an API in exactly the same manner. The rest of our test then looks like this:

using var response = await httpClient.GetAsync("/");

Assert.True(response.IsSuccessStatusCode);

If your test fails, and you want a fighting chance of working out why, you may wish to read the response body before the assertion:

var content = await response.Content.ReadAsStringAsync();

That’s basically it; however, as that currently stands, you’ll start getting errors (some that you can see, and some that you cannot). It makes sense to make the HttpClient static, or at least raise it to the class level, as you only need to actually create it once.
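Putting all of that together, a complete test class might look something like the following sketch (the class and method names here are mine; it uses xunit’s IClassFixture so that the factory and client are only created once per test class, and it assumes the Program accessibility fix described in the next section):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class WebAppSmokeTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _httpClient;

    public WebAppSmokeTests(WebApplicationFactory<Program> appFactory)
    {
        // The factory boots the app in-memory; CreateClient returns an HttpClient wired directly to it
        _httpClient = appFactory.CreateClient();
    }

    [Fact]
    public async Task HomePage_Returns_Success()
    {
        using var response = await _httpClient.GetAsync("/");

        // If this fails, reading the body first gives you a fighting chance of working out why:
        // var content = await response.Content.ReadAsStringAsync();

        Assert.True(response.IsSuccessStatusCode);
    }
}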

Accessing the Main Project

The other issue that you’ll get here is that, because we’re using .Net 6 top-level statements in Program.cs, the compiler will tell you that Program is inaccessible. In fact, top-level code does generate an implicit Program class, but it’s internal. This can be worked around by simply adding the following to the end of your Program.cs code:

public partial class Program { } // so you can reference it from tests

(See the references below for details of where this idea came from.)

Summary

In this post, we’ve seen how you can create an integration test that will assert that, at the very least, your web app runs. This method is much faster than constantly having to actually run your project. It obviously makes no assertions about how it runs, and whether what it’s doing is correct.

References

Example of testing top level statements

GitHub Issue reporting error with top level statements being tested

Stack Overflow question on how to access Program.cs from in program using top level statements

Tutorial video on integration tests

A Cleaner Program.cs / Startup.cs with Scrutor

I’ve previously written about the Scrutor library. However, this post covers something that has long irritated me about using an IoC container. Typically, when you have a fairly complex site, you’ll end up with dozens of statements like the following:

builder.Services.AddScoped<ISearchService, SearchService>();
builder.Services.AddScoped<IResourceDataAccess, ResourceDataAccess>();

It turns out that one of the other things that Scrutor can do for you is to work out which dependencies you need to register. For example, let’s consider the two classes above; let’s say that the first is in the main assembly of the project:

builder.Services.Scan(scan => scan
    .FromCallingAssembly()
        .AddClasses(true)
            .AsMatchingInterface()
            .WithScopedLifetime());

So, what does this do?

Well, FromCallingAssembly points it at the main assembly (that is, the one you’re calling this registration from). AddClasses(true) then selects all of the public, non-abstract classes in that assembly.

Finally, AsMatchingInterface matches each class with its interface, assuming a one-to-one pairing by name (there are other options if your classes and interfaces don’t pair up like this; one is shown below); and WithScopedLifetime registers them all with a scoped lifetime.
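For instance, one of those other options is Scrutor’s AsImplementedInterfaces(), which registers each class against every interface it implements, rather than relying on the I{ClassName} naming convention:

builder.Services.Scan(scan => scan
    .FromCallingAssembly()
        .AddClasses(true)
            // No one-to-one naming convention required here:
            // each class is registered against all of the interfaces it implements
            .AsImplementedInterfaces()
            .WithScopedLifetime());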

That worked well but, when I ran it, I realised that the second class (ResourceDataAccess) hadn’t been registered. The reason is that it isn’t in the calling assembly, but lives in a referenced project. An easy way to fix this is:

builder.Services.Scan(scan => scan
    .FromCallingAssembly()
        .AddClasses(true)
            .AsMatchingInterface()
            .WithScopedLifetime()
    .FromAssemblyOf<IResourceDataAccess>()
        .AddClasses(true)
            .AsMatchingInterface()
            .WithScopedLifetime());

Notice that we can simply start a second selection after the first WithScopedLifetime(); this time, we tell it to register any classes found in the same assembly as IResourceDataAccess.

If we inspect the registered services, we can see that this has worked.
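One quick, throwaway way to check this yourself (using the two example services from earlier) is to dump the relevant registrations after the Scan call:

// Temporary diagnostic: list what has been registered for our two example interfaces
foreach (var descriptor in builder.Services.Where(d =>
             d.ServiceType == typeof(ISearchService) ||
             d.ServiceType == typeof(IResourceDataAccess)))
{
    Console.WriteLine(
        $"{descriptor.ServiceType.Name} -> {descriptor.ImplementationType?.Name} ({descriptor.Lifetime})");
}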

What this means is that, each time you add a new class, you don’t have to add a registration in the startup / program file. This is perhaps both a good and a bad thing: arguably, if the list of registrations gets so large that it’s noticeable, then you may have got your decomposition wrong.

Unit Testing a Console Application

I’ve previously written about some Unusual things to do with a Console Application, including creating a game in a console application.

This post covers another unusual thing to want to do: I was recently writing a console application and wondered how you could test it without mocking the Console out completely. It turns out that not only is this possible, it’s actually quite straightforward.

The key here is the pair of methods Console.SetIn and Console.SetOut. These allow you to redirect the console input and output. Let’s take the Hello World example: to unit test this, the first thing to do is to redirect Console.Out:

var writer = new StringWriter();        
Console.SetOut(writer); 

You can now unit test this by simply checking the StringWriter:

        [Fact]
        public void HelloWorldTest()
        {
            // Arrange
            var writer = new StringWriter();        
            Console.SetOut(writer); 

            // Act
            RunHelloWorld();

            // Assert
            var sb = writer.GetStringBuilder();
            Assert.Equal("Hello, World!", sb.ToString().Trim());
        }
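For reference, the RunHelloWorld method being tested isn’t shown above; it’s assumed to be something as trivial as this:

        public static void RunHelloWorld()
        {
            Console.WriteLine("Hello, World!");
        }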

You can similarly test an input; let’s take the following method:

        public static void GetName()
        {
            Console.WriteLine("What is your name?");
            string name = Console.ReadLine();
            Console.WriteLine($"Hello, {name}");            
        }

We can test both the input and the output of this method:

        [Fact]
        public void GetNameTest()
        {
            // Arrange
            var writer = new StringWriter();        
            Console.SetOut(writer); 

            var textReader = new StringReader("Susan");
            Console.SetIn(textReader);

            // Act
            GetName();

            // Assert
            var sb = writer.GetStringBuilder();
            var lines = sb.ToString().Split(Environment.NewLine, StringSplitOptions.TrimEntries);
            Assert.Equal("Hello, Susan", lines[1]);

        }

I’m not saying it’s necessarily good practice to unit test what is, essentially, logging, but it’s interesting to know that it’s possible!

Using Scrutor to Implement the Decorator Pattern

I recently came across a very cool library, thanks to this video by Nick Chapsas. The library is Scrutor. In this post, I’m going to run through an application of the Open-Closed Principle that this library makes possible.

An Overly Complex Hello World App

Let’s start by creating a needlessly complex app that prints Hello World. Instead of simply printing Hello World, we’ll use DI to inject a service that prints it for us. Here’s the main Program.cs code (in .Net 6):

using Microsoft.Extensions.DependencyInjection;
using scrutortest;

var serviceCollection = new ServiceCollection();

serviceCollection.AddSingleton<ITestLogger, TestLogger>();

var serviceProvider = serviceCollection.BuildServiceProvider();

var testLogger = serviceProvider.GetRequiredService<ITestLogger>();
testLogger.Log("hello world");

Impressive, eh? Here’s the interface that we now rely on:

internal interface ITestLogger
{
    public void Log(string message);
}

And here is our TestLogger class:

    internal class TestLogger : ITestLogger
    {
        public void Log(string message)
        {
            Console.WriteLine(message);
        }
    }

If you implement this and run it, you’ll see that it works fine – almost as well as the one-line version. However, let’s imagine that we now have a requirement to extend this class: after every message, we need to display ---OVER--- for… some reason.

Extending Our Overly Complex App to be Even More Pointless

There are a few ways to do this: you could obviously just change the class itself, but that breaches the Open-Closed Principle. That’s where the Decorator Pattern comes in: we create a new class that looks like this:

    internal class TestLoggerExtended : ITestLogger
    {
        private readonly ITestLogger _testLogger;

        public TestLoggerExtended(ITestLogger testLogger)
        {
            _testLogger = testLogger;
        }

        public void Log(string message)
        {
            _testLogger.Log(message);
            _testLogger.Log("---OVER---");
        }
    }

There are a few things of note here: firstly, we’re implementing the same interface as the original class; secondly, we’re injecting that interface into our constructor; and finally, in the Log method, we’re calling the original class. Obviously, if you just register this in the DI container as normal, bad things will happen; so we use the Scrutor Decorate method:

using Microsoft.Extensions.DependencyInjection;
using scrutortest;

var serviceCollection = new ServiceCollection();

serviceCollection.AddSingleton<ITestLogger, TestLogger>();
serviceCollection.Decorate<ITestLogger, TestLoggerExtended>();

var serviceProvider = serviceCollection.BuildServiceProvider();

var testLogger = serviceProvider.GetRequiredService<ITestLogger>();
testLogger.Log("hello world");

If you now run this, you’ll see that the functionality is very similar to inheritance, but without coupling the two services directly: the output is the original hello world, followed by ---OVER---.

Isolated Azure Function in .Net 6

I’ve recently been working with Azure Isolated Functions for .Net 6. This is a kind of getting started guide – especially if you’re coming across from non-isolated.

What’s an Isolated Function

As is explained here, an isolated function is a function that runs out of process and is self-hosted. Previously, there were issues with dependency conflicts because you were married to the function host.

What’s the Difference Between an Isolated and a Non-Isolated Function?

A non-isolated function project can get away with having just one file; in that file, you can have a single method, decorated with a FunctionName attribute:

        [FunctionName("Function1")]
        public async Task<IActionResult> Run(. . .

However, for an Isolated Function, you’ll need a Program.cs, with something along the lines of the following as a minimum:

        public static async Task Main(string[] args)
        {
            var host = new HostBuilder()
                .ConfigureFunctionsWorkerDefaults()
                .Build();

            await host.RunAsync();
        }

Further, the dependency libraries change; Isolated Functions use the following libraries (these obviously depend slightly on your bindings, but are a good start):

  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Abstractions" Version="1.1.0" />	 
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.3.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.6.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.0.13" />

Finally, you’ll need to change your decorator to:

[Function("Function1")]

From:

[FunctionName("Function1")]

FunctionName uses the old WebJobs namespace.
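For reference, a minimal HTTP-triggered function in the isolated model might look something like the following (the class and function names are just placeholders):

using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class Function1
{
    [Function("Function1")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req)
    {
        // In the isolated model you build the response from the request;
        // there's no direct HttpContext as there is in the in-process model
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("Hello from an isolated function");
        return response;
    }
}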

Some possible errors…

At least one binding must be declared.

This error typically happens in the following scenario: the method has a [Function] decorator, but within the method signature there are no valid bindings – that is, nothing that the Azure Functions ecosystem understands. For example, the following signature would give that error:

[Function("Function1")]
public void MyFunc()
{
}

Specified condition "$(SelfContained)" evaluates to "" instead of a boolean.

For this, you need to specify the output type to be an executable:

<PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <AzureFunctionsVersion>v4</AzureFunctionsVersion>
    <OutputType>Exe</OutputType>
    <Nullable>enable</Nullable>
</PropertyGroup>

Xunit Tests Won’t Run After Upgrade to .Net 6

Some time ago, while trying to get .Net Core 3.1 to work with Xunit, I discovered that 2.4.1 was the correct version of xunit.runner.visualstudio to use. At the time, I wasn’t sure why this was the case.

Recently, after upgrading an Azure Function from .Net 5 to 6, I came across almost the reverse problem. It turns out that 2.4.3 of xunit.runner.visualstudio actually works fine; however, you need to include the following library as well:

Microsoft.NET.Test.Sdk

For .Net 6, if you want to run Xunit, then you need the following libraries:

<ItemGroup>
	<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.0.0" />
	<PackageReference Include="xunit" Version="2.4.1" />
	<PackageReference Include="xunit.runner.console" Version="2.4.1" />
	<PackageReference Include="xunit.runner.visualstudio" Version="2.4.3" />
</ItemGroup>

References

https://stackoverflow.com/questions/69972184/xunit-tests-no-longer-working-after-upgrade-from-net-5-to-net-6-q-a

Chaos Monkey – Part 4 – Creating an Asp.Net 6 Application that Caches an Error

This is a really strange post, but it’s a lead-up to a different post; however, I felt it made sense as a post in its own right – it follows a trend I have of creating things that break on purpose. For example, here’s a post from a few years ago where I discussed how you might force a machine to run out of memory.

In this case, I’m creating a simple application that runs fine until, at a random point, it generates an error, caches it, and is then broken until the application is restarted.

Why?

I’m working on some alerting and resilience experiments at the minute, and having an unstable application is useful for those tests. Also, this is not an unusual scenario – obviously, writing an application that purposely crashes and then stays broken is unusual; but having an application that behaves this way somewhere in your estate may not be.

How

I’ve set up a bog-standard Asp.Net MVC 6 application. I then installed the following package:

Install-Package System.Runtime.Caching

Finally, I changed the default Privacy controller action to potentially crash:

public IActionResult Privacy()
{
    string result = Crash();
    return View(model: result);
}

Here, I’m feeding a string into the privacy view as its model. The Crash method has a 1 in 10 chance of caching an error:

        private string Crash()
        {
            if (!_memoryCache.TryGetValue("Error", out string errorCache))
            {
                if (_random.Next(10) == 1)
                {
                    _memoryCache.Set("Error", "Now broken!");
                    return "Now broken";
                }
            }
            else
            {
                throw new Exception("Some exception");
            }

            return "Working fine";
        }
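For completeness, the _memoryCache and _random fields used above aren’t shown in the snippet. The TryGetValue / Set calls match the IMemoryCache API from Microsoft.Extensions.Caching.Memory (which ships with Asp.Net Core), so the surrounding controller is assumed to look roughly like this:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;

public class HomeController : Controller
{
    private static readonly Random _random = new();
    private readonly IMemoryCache _memoryCache;

    // IMemoryCache comes from DI; add builder.Services.AddMemoryCache() in Program.cs if it isn't already registered
    public HomeController(IMemoryCache memoryCache)
    {
        _memoryCache = memoryCache;
    }

    // ... Privacy() and Crash() as shown above
}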

I then just display the model in the view (privacy.cshtml):

@model string
@{
    ViewData["Title"] = "Privacy Policy";
}
<h1>@ViewData["Title"]</h1>
<h1>@Model</h1>

<p>Use this page to detail your site's privacy policy.</p>

Now, if you run it and visit the Privacy page a few times (each visit has a 1 in 10 chance of tripping the error), you’re likely to see it break within a dozen or so requests, and you’ll need to restart the application to fix it.

How to Set-up Hangfire with a Dashboard in .Net 6 Inside a Docker Container

In this earlier post I wrote about how you might set up Hangfire in .Net 6 using LiteDB storage.

In this post, we’ll talk about the Hangfire dashboard, and specifically, some challenges that may arise when trying to run that inside a container.

I won’t go into the container specifically, although if you’re interested in how the container might be set up then see this beginner’s guide to Docker.

Let’s quickly look at the Docker Compose file, though:

services:
  my-api:
    build: .\MyApi
    ports: 
      - "5010:80"      
    logging: 
      driver: "json-file"

Here you can see that my-api maps host port 5010 to container port 80.

Hangfire

Let’s see how we would set up the Hangfire dashboard:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHttpClient();
builder.Services.AddLogging();

builder.Services.AddHangfire(configuration =>
{
    configuration.UseLiteDbStorage("./hf.db");
    
});
builder.Services.AddHangfireServer();

// Add services here

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

var options = new DashboardOptions()
{
    Authorization = new[] { new MyAuthorizationFilter() }
};
app.UseHangfireDashboard("/hangfire", options);

app.MapPost(" . . .

app.Run();

public class MyAuthorizationFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context) => true;
}

This is the set-up, but there are a few bits to unpack here.

UseHangfireDashboard

The UseHangfireDashboard call basically lets Hangfire know that you want the dashboard set up. However, by default, it will only allow local connections, which does not include connections mapped through Docker.

DashboardOptions.Authorization

The Authorization property allows you to specify who can view the dashboard. As you can see here, I’ve passed in a custom filter that bypasses all security – probably don’t do this in production – but you can substitute MyAuthorizationFilter for any implementation you see fit.
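For example, a slightly less permissive filter (a sketch; adjust it to whatever authentication your app actually uses) might only allow users that the hosting app has already authenticated:

using Hangfire.Dashboard;

public class AuthenticatedUserFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        // GetHttpContext is an extension provided by the Hangfire.AspNetCore package
        var httpContext = context.GetHttpContext();
        return httpContext.User.Identity?.IsAuthenticated ?? false;
    }
}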

Note that if you don’t override this, then attempting to access the dashboard will return a 401 error if you’re running the dashboard from inside a container.

Accessing the Dashboard

Navigating to http://localhost:5010/hangfire on the host will take you to the dashboard.

Docker error with .Net 6 runtime image

While trying to set up a Docker image for a .Net 6 console application, I found that, although it built fine, I got an error when trying to run it. Let’s imagine that I used the following commands to build and run:

docker build -t pcm-exe .
docker run pcm-exe

The result from the run command was the following:

It was not possible to find any compatible framework version
The framework 'Microsoft.NETCore.App', version '6.0.0' (x64) was not found.
  - The following frameworks were found:
      6.0.0-rc.2.21480.5 at [/usr/share/dotnet/shared/Microsoft.NETCore.App]

You can resolve the problem by installing the specified framework and/or SDK.

The specified framework can be found at:
  - https://aka.ms/dotnet-core-applaunch?framework=Microsoft.NETCore.App&framework_version=6.0.0&arch=x64&rid=debian.11-x64

This had me stumped for a while, as I was under the impression that, when images are updated, Docker knows to go and download them again – this is not the case. I discovered this by running an inspect on the runtime image from the Dockerfile, defined here:

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base

The inspect command:

docker inspect mcr.microsoft.com/dotnet/runtime:6.0

This gave the following result:

"Env": [
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "ASPNETCORE_URLS=http://+:80",
    "DOTNET_RUNNING_IN_CONTAINER=true",
    "DOTNET_VERSION=6.0.0-rc.2.21480.5"
],

At this point there seemed to be two options: you could just remove the image, which would force a re-download; however, a quicker way is to pull the new version of the image:

docker pull mcr.microsoft.com/dotnet/runtime:6.0 

After rebuilding the image, running a subsequent inspect shows that we’re now on the correct version:

          "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "ASPNETCORE_URLS=http://+:80",
                "DOTNET_RUNNING_IN_CONTAINER=true",
                "DOTNET_VERSION=6.0.0"
            ],