How to Set Up Hangfire with a Dashboard in .Net 6 Inside a Docker Container

In this earlier post I wrote about how you might set up Hangfire in .Net 6 using Lite Storage.

In this post, we’ll talk about the Hangfire dashboard and, specifically, some challenges that may arise when trying to run it inside a container.

I won’t go into the container specifically, although if you’re interested in how the container might be set up, see this beginner’s guide to Docker.

Let’s quickly look at the Docker Compose file, though:

services:
  my-api:
    build: .\MyApi
    ports: 
      - "5010:80"      
    logging: 
      driver: "json-file"

Here you can see that my-api maps port 5010 on the host to port 80 inside the container.

Hangfire

Let’s see how we would set up the Hangfire Dashboard:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHttpClient();
builder.Services.AddLogging();

builder.Services.AddHangfire(configuration =>
{
    configuration.UseLiteDbStorage("./hf.db");
});
builder.Services.AddHangfireServer();

// Add services here

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

var options = new DashboardOptions()
{
    Authorization = new[] { new MyAuthorizationFilter() }
};
app.UseHangfireDashboard("/hangfire", options);

app.MapPost(" . . .

app.Run();

public class MyAuthorizationFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context) => true;
}

This is the set-up, but there are a few bits to unpack here.

UseHangfireDashboard

The UseHangfireDashboard call basically lets Hangfire know that you want the dashboard setting up. However, by default, it will only allow local connections; which does not include connections mapped via Docker.
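As I understand it, the default behaviour is equivalent to explicitly passing Hangfire’s own local-only filter; something like this sketch:

// The default is effectively a local-requests-only filter
// (LocalRequestsOnlyAuthorizationFilter ships with Hangfire)
var defaultOptions = new DashboardOptions()
{
    Authorization = new[] { new LocalRequestsOnlyAuthorizationFilter() }
};
app.UseHangfireDashboard("/hangfire", defaultOptions);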

DashboardOptions.Authorization

The Authorization property allows you to specify who can view the dashboard. As you can see here, I’ve passed in a custom filter that bypasses all security – probably don’t do this in production – but you can substitute the MyAuthorizationFilter for any implementation you see fit.

Note that if you don’t override this, then attempting to access the dashboard will return a 401 error if you’re running the dashboard from inside a container.
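If you want something a little less permissive than MyAuthorizationFilter above, you can base the decision on the current user instead; here’s a sketch (it assumes you have authentication middleware configured, and uses the GetHttpContext() extension from Hangfire.AspNetCore):

public class AuthenticatedUserAuthorizationFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        // Only allow users that the pipeline has already authenticated
        var httpContext = context.GetHttpContext();
        return httpContext.User.Identity?.IsAuthenticated == true;
    }
}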

Accessing the Dashboard

Navigating to localhost:5010/hangfire on the host will take you here:

Configuring Docker to use a Dev Cert When Calling out to the Host Machine

I’ve recently been wrestling with trying to get an ASP.Net service within a docker container to call a service running outside of that container (but on the same machine). As you’ll see as we get further into the post, this is a lot more difficult than it first appears. Let’s start with a diagram to illustrate the problem:

The diagram above illustrates, at a very basic level, what I was trying to achieve. Essentially, have the service running inside docker call to a service outside of docker. In real life, the service (2) would be remote: very likely on a different physical server, and definitely have an allocated domain address; however, for this experiment, it lives on the same physical (or virtual) machine as the docker host.

Before I continue, I must point out that the solution to this comes by way of some amazing help from Rob Richardson: he gave a talk at NDC Porto that got me about 70% of the way there, and helped me out further to actually get this working!

Referencing a Service outside of Docker from within Docker

Firstly, let’s consider a traditional docker problem: if I load Asp.Net Service (2) then I would do so in a browser referencing localhost. However, if I reference localhost from within docker, that refers to the localhost of the container, not the host machine. The way around this is with host.docker.internal: this gives you a path to the host machine of the docker container.
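For example, from inside the container, calling out to the host might look something like this (the port and route here are assumptions for illustration):

// localhost here would be the container itself; host.docker.internal
// resolves to the docker host machine instead
using HttpClient client = new HttpClient();
string response = await client.GetStringAsync(
    "https://host.docker.internal:5001/weatherforecast");
Console.WriteLine(response);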

Certificates – The Problem

Okay, so onto the next (and main) issue: when I try to call Asp.Net Service (2) from the docker container, I get an SSL error:

The remote certificate is invalid according to the validation procedure: RemoteCertificateNameMismatch, RemoteCertificateChainErrors

Why

The reason has to do with the way that certificates work; and, in some cases, don’t work. Firstly, if you watch the linked video, you’ll see that the dev-cert functionality in Linux has a slight flaw – in that it doesn’t do anything*. Secondly, because you’re (effectively) jumping across machines here, you can’t just issue a dev cert to each anyway, as it would be a different dev cert; and thirdly, dev-certs are issued to localhost by default: but as we saw, we’re actually trying to contact host.docker.internal.

Just to elaborate on the trust chain; let’s consider the following diagram:

In this diagram, Certificate A is based on the Root Certificate – if the machine trusts the root certificate, then it will trust Certificate A – however, the same machine will only trust Certificate B if it is explicitly told to do so.

This means that the dev cert for the container will not be trusted on the host, and vice-versa, as neither has any trust relationship with the other – this is the problem, but it’s also the solution.
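If you want to see .Net making this exact decision, the following sketch asks X509Chain whether a certificate chains to a root that the machine trusts (the file name is an assumption):

using System.Security.Cryptography.X509Certificates;

// Build returns false when no path to a trusted root certificate exists
var cert = X509Certificate2.CreateFromPemFile("localhost.pem");
using var chain = new X509Chain();
bool trusted = chain.Build(cert);
Console.WriteLine(trusted
    ? "This machine trusts the certificate"
    : "This machine does not trust the certificate");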

Okay, so that’s the why – onto the how…

mkcert

Let’s start by introducing mkcert – this is an incredibly useful tool that hugely simplifies the whole process; it can be installed via Chocolatey:

choco install mkcert

If you don’t want to use Chocolatey, then the repo is here.

Essentially, what this allows us to do is create a trusted root certificate, from which we can derive our other certificates. So, once this is installed, we can create a new trusted root certificate like this:

mkcert -install

This installs our trusted root certificate, which we can see here:

This will also generate the following files (on Windows, these will be in %localappdata%\mkcert):

rootCA.pem
rootCA-key.pem

These are the root certificates, so the next thing we need is a certificate that covers the specific domain. You can do that by simply calling mkcert with the appropriate domain(s):

mkcert localhost host.docker.internal

This creates a valid cert for both localhost and host.docker.internal:

localhost.pem
localhost-key.pem

You may wish to rename these to be something slightly more descriptive, but for the purpose of this post, this is sufficient.

Docker

Almost there now – we have our certificates, but we need to copy them to the correct location. Because we’ve run mkcert -install, the root certificate is already on the local machine; however, we now need that inside the docker container (Asp.Net Service (1) from the diagram above). Firstly, let’s download the mkcert binary from here for the relevant version of Linux that you’re running.

Let’s copy both the rootCA.pem and rootCA-key.pem into our Asp.Net Service (1) project and then change the dockerfile:

. . .
FROM base AS final
WORKDIR /app
COPY mkcert /usr/local/bin
COPY rootCA*.pem /root/.local/share/mkcert/
RUN chmod +x /usr/local/bin/mkcert \
  && mkcert -install \
  && rm -rf /usr/local/bin/mkcert 
COPY --from=publish /app/publish .
. . .

A few things to mention here:

1. The rest of this file is from the standard Asp.Net docker file. See this post for possible modifications to that file.
2. Each time you execute a RUN command, docker creates a temporary image, hence why combining the three commands in a single RUN with && makes sense.
3. When you run the mkcert -install it will pick up the root certificate that you copy into the /root/.local/share/mkcert.
4. Make sure that these lines apply to the runtime version of the image, and not the SDK version (there’s absolutely no point in adding a certificate to the SDK version).
5. The last line (rm -rf /usr/local/bin/mkcert) just cleans up the mkcert files.

The Service

The final part is to copy the generated certificates (localhost.pem and localhost-key.pem) over to the service application (Asp.Net Service (2)). Finally, in the appsettings.json, we need to tell Kestrel to use that key:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Kestrel": {
    "Certificates": {
      "Default": {
        "Path": "localhost.pem",
        "KeyPath": "localhost-key.pem"
      }
    }
  }
}
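As an aside, if you’d rather keep this out of config, I believe the equivalent can be done in code when building the app; here’s a sketch (assuming the minimal-hosting builder and the file names above):

// Alternative to the appsettings.json approach: point Kestrel at the
// mkcert-generated pem files when configuring the web host
builder.WebHost.ConfigureKestrel(options =>
{
    options.ConfigureHttpsDefaults(https =>
    {
        https.ServerCertificate =
            System.Security.Cryptography.X509Certificates.X509Certificate2
                .CreateFromPemFile("localhost.pem", "localhost-key.pem");
    });
});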

That’s it! If you open up the Asp.Net Service (2), you can check the certificate, and see that it’s based on the mkcert root:

References and Acknowledgements

As I said at the start, this video and Rob himself helped me a lot with this – so thanks to him!

It’s also worth mentioning that without mkcert this process would be considerably more difficult!

Footnotes

* actually, that’s not strictly true – Rob points out in his video the nuance here; but the takeaway is that it’s unlikely to be helpful

Beginner’s Guide to Docker – Part 4 – Viewing Docker Logs

In this post (part of a series around docker basics) I’ll be discussing how you can view logs for running docker containers.

As usual with my posts, there’s very little here that you can’t get from the docs – although I’d like to think these posts are slightly more accessible (especially for myself).

Viewing Logs

In fact, the process of viewing logs is remarkably easy; once you’ve launched the container, you need the ID:

docker ps

Once you have the container ID, you can feed it into the docker container logs command:

docker container logs --follow dcd204c97421

The --follow argument basically leaves a session open and updates with any new log entries (leave it out for a static view of the logs).

The Docker Desktop app actually provides the same functionality out of the box (and adds a nice search and formatting facility).

Writing Logs

Obviously, this varies from language to language, so I’ll stick to .Net here. It’s actually very straightforward:

builder.Services.AddLogging();

You can now log in the same way as you would for any other .Net application; simply inject an ILogger into the constructor:

public class MyService
{
    private readonly ILogger<MyService> _logger;

    public MyService(ILogger<MyService> logger)
    {
        _logger = logger;
    }
}

And then log as you would normally:

_logger.LogWarning("something happened");
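One small tip: prefer the structured overloads over string interpolation, so that log providers capture the values as named properties (orderId here is just a made-up example):

// {OrderId} becomes a named property on the log entry, rather than
// being baked into the message string
_logger.LogWarning("Order {OrderId} failed to process", orderId);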

References

Constructor Injection in .Net

Docker Container Logs

Call the Azure DevOps REST API

In this post, I introduced the DevOps CLI. Here, I’m going to expand on that by interrogating the DevOps API, and generating a new work item in the board. A few years ago I did the same thing in TFS.

Configuration

The first step here is to generate a personal access token. You can do this from the CLI, see here for details on how to do that.

From the UI, generating a personal access token is trivial; from your project, select Personal Access Tokens from the drop down menu:

In real life, the next screen is quite important, as you’ll want to scope down the access to the bare minimum. However, we’re just playing around, so for test purposes, we can grant full access:

You’ll then be given the token – take a copy of this:

The Code – Getting a list of projects in the organisation

The following code (heavily based on this link) should get a list of team projects within the organisation that you provide:

using System.Net.Http.Headers;

string personalaccesstoken = "abcdef";
string organisation = "devopsplayground1";

using HttpClient client = new HttpClient();

client.DefaultRequestHeaders.Accept.Add(
	new MediaTypeWithQualityHeaderValue("application/json"));

client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic",
	Convert.ToBase64String(
		System.Text.ASCIIEncoding.ASCII.GetBytes(
			string.Format("{0}:{1}", "", personalaccesstoken))));

using HttpResponseMessage response = await client.GetAsync(
			$"https://dev.azure.com/{organisation}/_apis/projects");

response.EnsureSuccessStatusCode();
string responseBody = await response.Content.ReadAsStringAsync();
Console.WriteLine(responseBody);

personalaccesstoken is taken from the access token that you generated earlier, and the organisation is the name of your DevOps organisation; you can find it here if you’re unsure:
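If you want to do more than dump the response to the console, you can pull out the project names with System.Text.Json; a quick sketch (the response is shaped as a count plus a value array):

using System.Text.Json;

// The projects response looks like { "count": n, "value": [ { "name": ... }, ... ] }
using JsonDocument doc = JsonDocument.Parse(responseBody);
foreach (JsonElement project in doc.RootElement.GetProperty("value").EnumerateArray())
{
    Console.WriteLine(project.GetProperty("name").GetString());
}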

The Code – Creating a DevOps Work Item

Now that we can get a list of projects, we can pretty much do anything via the API; for example, if you wanted a list of work item types, you might use this:

    HttpResponseMessage responseWIT = await client.GetAsync(
                $"https://dev.azure.com/{organisation}/{project}/_apis/wit/workitemtypes?api-version=6.1-preview.2");

    responseWIT.EnsureSuccessStatusCode();
    string responseBodyWIT = await responseWIT.Content.ReadAsStringAsync();
    Console.WriteLine(responseBodyWIT);    

Updating or creating is a little different; let’s take creating a new work item. If I’m honest, the interface here doesn’t feel particularly RESTful, but nevertheless:

    var obj = new[] { new { from = (string?)null, op = "add", path = "/fields/System.Title", value = "pcmtest" } };
    string serialisedObj = System.Text.Json.JsonSerializer.Serialize(obj);
    var content = new StringContent(
        serialisedObj,
        System.Text.Encoding.UTF8,
        "application/json-patch+json");

    string type = "bug";
    HttpResponseMessage response = await client.PatchAsync(
                $"https://dev.azure.com/{organisation}/{project}/_apis/wit/workitems/${type}?api-version=6.1-preview.3",
                content);

    //response.EnsureSuccessStatusCode();
    string responseBody = await response.Content.ReadAsStringAsync();
    Console.WriteLine(responseBody);

See here for the docs. There are a few things to note here:

1. The type = “bug” comes from the types that we got above.
2. Note that the content that’s passed in to the API call is an array – if you miss that and simply pass in a single object, you’ll get the error:

You must pass a valid patch document in the body of the request.

3. You will not see the above error unless, like me, you remove the EnsureSuccessStatusCode() call (see the sketch after this list)!
4. The API version matters, but it will give you a sensible error if you get it wrong.
5. Yes, this is a Patch call, even though you’re creating a new resource!
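Given point 3, a reasonable middle ground is to check the status yourself, so the error body is still visible; for example:

// Inspect the status manually, rather than throwing via
// EnsureSuccessStatusCode(), so the error body is still printed
string responseBody = await response.Content.ReadAsStringAsync();
if (!response.IsSuccessStatusCode)
{
    Console.WriteLine($"Request failed ({(int)response.StatusCode}): {responseBody}");
}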

Here’s the new work item:

References

https://docs.microsoft.com/en-us/rest/api/azure/devops/?view=azure-devops-rest-6.1&WT.mc_id=DT-MVP-5004601

Running Docker Compose Interactively

I’ve recently been doing a fair bit of investigation into docker. My latest post was investigating the idea of setting up a sidecar pattern just using docker compose. In this post, I’m exploring the concept of launching an interactive shell using docker compose.

Why is this so hard?

Docker, and by extension, docker compose, are not really designed to run interactive tasks: they are best suited to ephemeral background tasks. However, in my particular use case, I needed to run a console app which accepted keyboard input.

Docker will happily build and create the console app; but once you issue a docker compose up, you’ll get something like the following error:

Unhandled exception. System.InvalidOperationException: Cannot read keys when either application does not have a console or when console input has been redirected. Try Console.Read.
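For context, the kind of code that triggers this is anything reading from the keyboard; a minimal example:

// Console.ReadKey needs a real console attached to stdin; under a plain
// docker compose up there isn't one, hence the exception above
Console.WriteLine("Press any key...");
ConsoleKeyInfo key = Console.ReadKey();
Console.WriteLine($"You pressed {key.Key}");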

How to run Interactively

In fact, there are a few issues here; the first (as pointed out by this post) is that you need to tell docker compose to pass through the interactive mode to the container:

     
  my-app:
    build: .\My.App
    stdin_open: true
    tty: true    
    depends_on: 
      - "some-api"

stdin_open and tty translate to passing -it to the container. However, that doesn’t solve the problem entirely; if you issue a docker compose up you’ll still see the same issue.

The reason here is that there are (or can be) multiple containers, and so there’s no sensible way of knowing which one to attach to. In fact, the solution is to simply tell docker compose which one to run; for example:

docker compose build --no-cache
docker compose run my-app

In the example above, that will also run the api some-api, because we’ve declared a dependency. Note that the build step is optional, although while developing and debugging, I find it useful to execute both.

Docker error with .Net 6 runtime image

While trying to set up a docker image for a .Net 6 console application, I found that, although it built fine, I got an error when trying to run it. Let’s imagine that I used the following commands to build and run:

docker build -t pcm-exe .
docker run pcm-exe

The result from the run command was the following:

It was not possible to find any compatible framework version
The framework 'Microsoft.NETCore.App', version '6.0.0' (x64) was not found.
  - The following frameworks were found:
      6.0.0-rc.2.21480.5 at [/usr/share/dotnet/shared/Microsoft.NETCore.App]

You can resolve the problem by installing the specified framework and/or SDK.

The specified framework can be found at:
  - https://aka.ms/dotnet-core-applaunch?framework=Microsoft.NETCore.App&framework_version=6.0.0&arch=x64&rid=debian.11-x64

This had me stumped for a while, as I was under the impression that when images are updated, docker knows to go and download them – this is not the case. I discovered this by running an inspect on the runtime image from the dockerfile defined here:

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base

The inspect command:

docker inspect mcr.microsoft.com/dotnet/runtime:6.0

This gave the following result:

"Env": [
    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
    "ASPNETCORE_URLS=http://+:80",
    "DOTNET_RUNNING_IN_CONTAINER=true",
    "DOTNET_VERSION=6.0.0-rc.2.21480.5"
],

At this point, there seemed to be two options: you could just remove the image, which would force a re-download; however, a quicker way is to pull the new version of the image:

docker pull mcr.microsoft.com/dotnet/runtime:6.0 

After rebuilding the image, running a subsequent inspect shows that we’re now on the correct version:

          "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "ASPNETCORE_URLS=http://+:80",
                "DOTNET_RUNNING_IN_CONTAINER=true",
                "DOTNET_VERSION=6.0.0"
            ],

Dependency Injection in Minimal APIs in .Net 6

Minimal APIs in .Net 6 are an absolutely amazing feature – you can create an API in about 5 or 6 lines of code. If you have a look online, you’ll see a plethora of examples… but unfortunately, they all show you how to write a “Hello World” API.

I’ve recently been playing with these, and initially found the least obvious part to be the DI part. In a controller, you’d simply inject the dependencies into the constructor of the controller; however, the approach is more nuanced with the minimal APIs. Essentially, you use the parameters of the method itself.

Let’s investigate this by looking at a very simple post method:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapPost("/test", () =>
{
    return Results.Ok();
});

app.Run();

This is a cut-down version of the code you’ll get from the weather forecast example in the default template. Now, let’s assume that we want to use the IHttpClientFactory in our endpoint; we can simply do this:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHttpClient(); // Add HttpClient here

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapPost("/test", (IHttpClientFactory httpClientFactory) =>
{
    var client = httpClientFactory.CreateClient();
    return Results.Ok();
});

app.Run();

As you can see, we’ve injected IHttpClientFactory into our method. This works well, but what if we now want to accept a serialised body in our request; consider the following example:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHttpClient(); // Add HttpClient here

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapPost("/test", (MyClass myClass, IHttpClientFactory httpClientFactory) =>
{
    var client = httpClientFactory.CreateClient();
    return Results.Ok();
});

app.Run();

We’re passing in MyClass; and, in fact, this will now work – when you run it, you’ll see from swagger that it will expect you to pass the contents of MyClass into the endpoint. “That’s great,” I hear you say, “But what if I want to register MyClass in the DI and pass that in?” To which I’ll respond: “Well – this actually works out of the box”:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHttpClient(); // Add HttpClient here
builder.Services.AddSingleton<MyClass>(new MyClass() { SomeProperty = "aardvark"});

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapPost("/test", (MyClass myClass, IHttpClientFactory httpClientFactory) =>
{
    var client = httpClientFactory.CreateClient();
    return Results.Ok();
});

app.Run();

The DI is clever enough to determine that you’ve registered a singleton, and so you don’t need to pass that information into the endpoint. Again, I hear the reader query this: “All very well, but what if I haven’t registered the class in the DI, but I wanted to and forgot?” Again, I’d respond: “Ah – an excellent question!” You can, indeed, force the issue: just as you can use the [FromBody] attribute to insist that the endpoint takes the value from the request body, so you can use the [FromServices] attribute to tell it that you want the class to be resolved from the DI:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHttpClient(); // Add HttpClient here
//builder.Services.AddSingleton<MyClass>(new MyClass() { SomeProperty = "aardvark"});

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

app.MapPost("/test", ([FromServices]MyClass myClass, IHttpClientFactory httpClientFactory) =>
{
    var client = httpClientFactory.CreateClient();
    return Results.Ok();
});

app.Run();

The above code will run, but when you call the endpoint, it will crash, telling you that you haven’t told it what you want MyClass to be.

Summary

All pretty nifty if you ask me – it works by default for the 90% case, and for the last 10% there are some simple attributes you can set to force the issue.

I should probably just add that, generally speaking, adding functionality directly to the controller / endpoint method is a bad idea. Most of the time, what you’ll really want to do in the endpoint is simply resolve a service and call a method there.
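So, in practice, an endpoint probably ends up looking more like this (IOrderService and its Process method are hypothetical stand-ins for your own registered service):

// The endpoint stays thin: resolve a registered service and delegate to it
app.MapPost("/test", (MyClass myClass, IOrderService orderService) =>
{
    var result = orderService.Process(myClass);
    return Results.Ok(result);
});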

References

https://benfoster.io/blog/mvc-to-minimal-apis-aspnet-6/

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/minimal-apis

https://www.hanselman.com/blog/exploring-a-minimal-web-api-with-aspnet-core-6

Printing Extended ASCII Characters in Console Apps

I’ve written a fair few articles on using console apps, especially for the purpose of writing games. This post, for example, is the start of a series on creating a snake game using a C# console app.

Disclaimer: this post is largely just bringing together the information listed at the bottom of this article into a single place. I take no credit for the code herein (with the exception of the rectangle itself).

One thing that used to be common knowledge, back when I wrote games and apps in Turbo Pascal and C, was that you could use the extended character set in order to create a rudimentary set of graphics. In this post, I’m going to cover my re-discovery of those characters. We’ll draw a rectangle in a console app.

Back in the Borland days, the way to add these to your program was to hold down Alt and type the character code on the numeric keypad. In .Net Framework, these code pages were, broadly, included; however, in .Net Core+, you need to add a NuGet package:

<PackageReference Include="System.Text.Encoding.CodePages" Version="6.0.0" />

Once you’ve done this, you need the following magic line to allow you to actually use the code pages:

System.Text.Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

Before we get into using this, let’s see what’s available. Firstly, this is how you would list the code pages:

var encs = CodePagesEncodingProvider.Instance.GetEncodings();
foreach (var enc in encs.OrderBy(a => a.Name))
{
    Console.WriteLine($"{enc.Name} {enc.CodePage}");
}

The specific code page that we’re interested in is IBM437:

Encoding cp437 = Encoding.GetEncoding("IBM437");
byte[] source = new byte[1];
for (byte i = 0x20; i < 0xFE; i++)
{
    source[0] = i;
    Console.WriteLine($"{i}, {cp437.GetString(source)}");
}

We can display a single character like this:

Console.WriteLine($"{cp437.GetString(new byte[1] { 217 })}");

Okay, so we now have all the tools; it’s just a matter of assembling them in the right order:

// Top edge: 218 is the top-left corner, 196 the horizontal line, 191 the top-right corner
Console.Write($"{cp437.GetString(new byte[1] { 218 })}");
for (int i = 1; i < 20; i++)
{
    Console.Write($"{cp437.GetString(new byte[1] { 196 })}");
}
Console.WriteLine($"{cp437.GetString(new byte[1] { 191 })}");

// Sides: 179 is the vertical line
for (int i = 1; i < 5; i++)
{
    Console.Write($"{cp437.GetString(new byte[1] { 179 })}");
    Console.Write(new String(' ', 19));
    Console.WriteLine($"{cp437.GetString(new byte[1] { 179 })}");
}

// Bottom edge: 192 and 217 are the bottom-left and bottom-right corners
Console.Write($"{cp437.GetString(new byte[1] { 192 })}");
for (int i = 1; i < 20; i++)
{
    Console.Write($"{cp437.GetString(new byte[1] { 196 })}");
}
Console.WriteLine($"{cp437.GetString(new byte[1] { 217 })}");

References

https://stackoverflow.com/questions/28005405/how-can-i-turn-on-special-ascii-characters-in-visual-studio

https://stackoverflow.com/questions/58439415/visual-studio-2019-dont-recognize-ascii-characters

https://stackoverflow.com/questions/17619279/extended-ascii-in-c-sharp

https://stackoverflow.com/questions/49215791/vs-code-c-sharp-system-notsupportedexception-no-data-is-available-for-encodin

https://stackoverflow.com/questions/50858209/system-notsupportedexception-no-data-is-available-for-encoding-1252

Running Hangfire with .Net 6 and Lite Storage

One of the things that I’ve been looking into recently is Hangfire. I’ve used this in the past, but not for a while, and so I wanted to get familiar with it again. I imagine there’ll be a couple more posts on the way, but in this one, I’m going to detail how to get Hangfire running using .Net 6 and LiteDB storage. I did initially intend to create this with a SQLite DB; however, when I did, I found that the Hangfire jobs were firing multiple times; after some searching, I came across this GitHub issue, which suggested the Hangfire LiteDB storage.

In this particular article, we’ll look specifically at running Hangfire with an Asp.Net MVC project. In .Net 6, the default template looks like this:

The first step here is that you’ll need some additional references; here’s the csproj:

<Project Sdk="Microsoft.NET.Sdk.Web">

	<PropertyGroup>
		<TargetFramework>net6.0</TargetFramework>
		<Nullable>enable</Nullable>
		<ImplicitUsings>enable</ImplicitUsings>
	</PropertyGroup>

	<ItemGroup>
		<PackageReference Include="Hangfire.AspNetCore" Version="1.7.27" />
		<PackageReference Include="Hangfire.Core" Version="1.7.27" />
		<PackageReference Include="Hangfire.LiteDB" Version="0.4.1" />
	</ItemGroup>

</Project>

In addition to the Hangfire Core and AspNetCore projects, there’s the LiteDB package.

The next step is in Program.cs:

using Hangfire;
using Hangfire.LiteDB;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllersWithViews();

// HF
builder.Services.AddHangfire(configuration =>
{
    configuration.UseLiteDbStorage("./hf.db");    
});
builder.Services.AddHangfireServer();
// HF - End

var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Home/Error");
    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();
}
. . .

That’s actually it – that’s all you need to add Hangfire! However, it won’t, technically speaking, do anything yet. To test it, edit HomeController.cs:

public IActionResult Index()
{
    var id = BackgroundJob.Enqueue(() => DoSomething());
    return View();
}

public void DoSomething()
{
    Console.WriteLine("Test");
    Console.WriteLine($"test2 {Guid.NewGuid()}");
}

When you run the project, you should see Hangfire fire up and start printing outputs:

When you run it, don’t be too impatient – it may take a few moments, but it should print the test output.

References

https://github.com/HangfireIO/Hangfire/issues/1025

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/routing?view=aspnetcore-6.0

https://docs.hangfire.io/en/latest/getting-started/index.html