MySql Auto-Increment

While playing around with MySql recently, I discovered a strange little quirk: an auto-increment field cannot simply be reset – that is, it cannot be set to a lower value than its current one. For example, say that you insert a lot of records into a table, or that you manually add a key that’s very high; subsequent inserts can then fail with an error like this:

MySql.Data.MySqlClient.MySqlException: 'Duplicate entry '2147483647' for key 'test_table.PRIMARY''

In my case, I was simply playing around with some settings – if you have a table with more records than that, you may well have bigger issues than this one. My initial idea was to reset the counter:

ALTER TABLE myschema.test_table AUTO_INCREMENT = 1

It turns out that, whilst you can run this statement, it only takes effect for values higher than the current counter – presumably to prevent key conflicts. The way around it for me was to insert a new record, but to override the auto-increment value explicitly:

INSERT INTO `myschema`.`test_table`
(`firstname`,
`surname`,
`key`)
VALUES
('',
'',
1);

You would then need to remove this record. Not ideal, but the only way I could get this to work.

datagen

A few weeks ago, I was looking into making a change to a project in work that uses DbUp. For some reason, I took away from that the overwhelming urge to write my own data generator. It’s far from finished, but I came up with datagen. This currently only comes in a MySql flavour, but my plan is to add a few more database engines.

The idea behind this is that you can generate pseudo data in your database. It’s not a standalone tool, because I wanted to allow it to be customisable. To install it, simply reference the package:

<PackageReference Include="datagen.MySql" Version="1.0.0" />

You can then populate the data in an entire schema. Just create a console app (this works with any app type that can physically access the database):

using datagen.Core;
using datagen.MySql;
using datagen.MySql.MySql;

var valueGenerator = new ValueGenerator(
    true,
    DateTime.Now,
    DateTime.Now.AddDays(-100),
    DateTime.Now.AddDays(10));

string connectionString = "Server=127.0.0.1;Port=3306;Database=datagentest;Uid=root;Pwd=password;AllowUserVariables=True";

var mySqlDefaults = new MySqlDefaults(connectionString);

var generate = new Generate(
    connectionString,
    valueGenerator,
    mySqlDefaults.DataTypeParser,
    mySqlDefaults.UniqueKeyGenerator);
await generate.FillSchema(20, "datagentest");

The code above allows you to create a ValueGenerator – there is a default one in the package, but you can easily write your own. FillSchema then adds 20 rows to every table in the schema.

Limitations

There are a few limitations – the main two being that this currently only works with MySql, and that it does not deal with foreign keys (it will simply omit that data).

Feel free to contribute or offer suggestions.

Terraform adding control of an object not created through Terraform

In recent investigations into Terraform, I came across a situation where I’d created a resource not known to Terraform, but which I now wished to manage through it. Initially, I simply tried to create the resource. Interestingly, this goes through the plan stage, but errors at the apply stage with something similar to the following error:

│ Error: A resource with the ID "/subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_logic_app_workflow" for more information.

│ with azurerm_logic_app_workflow.logicapp,
│ on basic.tf line 22, in resource "azurerm_logic_app_workflow" "logicapp":
│ 22: resource "azurerm_logic_app_workflow" "logicapp" {

This is a really useful error – if you do, in fact, look at the docs for that resource then you’ll see at the end it gives the following way of importing:

terraform import azurerm_logic_app_workflow.workflow1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Logic/workflows/workflow1

So, simply replace the values that you’ve been given in the error message and you end up with:

terraform import azurerm_logic_app_workflow.logicapp /subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter

The result of running this is:

PS C:\Users\pcmic\source\repos\tf-logicapp-test> terraform import azurerm_logic_app_workflow.logicapp /subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter
azurerm_logic_app_workflow.logicapp: Importing from ID "/subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter"...
azurerm_logic_app_workflow.logicapp: Import prepared!
Prepared azurerm_logic_app_workflow for import
azurerm_logic_app_workflow.logicapp: Refreshing state… [id=/subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

This now adds that resource into your state file (.tfstate):

    {
      "mode": "managed",
      "type": "azurerm_logic_app_workflow",
      "name": "logicapp",
. . .

Reading an Azure Dead Letter Queue with Azure.Messaging.ServiceBus

Some time ago, I wrote about how you can read the dead letter queue. I also wrote this post on the evolution of Azure Service Bus libraries.

By way of a recap, a Dead Letter Queue is a sub-queue to which a message that would otherwise clog up the main queue (because, for example, it can’t be processed for some reason) can be moved, taking it out of normal processing.

In this post, I’ll cover how you can read a dead letter queue using the new Azure.Messaging.ServiceBus library.

Force a Dead Letter Message

There are basically two ways that a message can end up in a dead letter queue: either it breaks the rules (for example, it exceeds the maximum delivery count, or it expires), or it is explicitly dead-lettered. To do the latter, the process is as follows:

            var serviceBusClient = new ServiceBusClient(connectionString);
            var messageReceiver = serviceBusClient.CreateReceiver(QUEUE_NAME);
            var message = await messageReceiver.ReceiveMessageAsync();

            string messageBody = Encoding.UTF8.GetString(message.Body);

            await messageReceiver.DeadLetterMessageAsync(message, "Really bad message");

The main difference here (other than that the previous method was DeadLetterAsync) is that you pass the entire message, rather than just the lock token.

Reading a Dead Letter Message

There are a few quirks here – firstly, the dead letter reason, delivery count, etc. used to be part of a collection known as SystemProperties, whereas they are now just properties – which does make them far more accessible and discoverable. Here’s the code to read the dead letter queue:

            var serviceBusClient = new ServiceBusClient(connectionString);
            var deadLetterReceiver = serviceBusClient.CreateReceiver(FormatDeadLetterPath());
            
            var message = await deadLetterReceiver.ReceiveMessageAsync();

            string messageBody = Encoding.UTF8.GetString(message.Body);

            Console.WriteLine("Message received: {0}", messageBody);

            // Previous versions had these as properties
            // https://www.pmichaels.net/2021/01/23/read-the-dead-letter-queue/
            if (!string.IsNullOrWhiteSpace(message.DeadLetterReason))
            {
                Console.WriteLine("Reason: {0} ", message.DeadLetterReason);
            }
            if (!string.IsNullOrWhiteSpace(message.DeadLetterErrorDescription))
            {
                Console.WriteLine("Description: {0} ", message.DeadLetterErrorDescription);
            }

            Console.WriteLine($"Message {message.MessageId} ({messageBody}) had a delivery count of {message.DeliveryCount}");

Again, most of the changes are simply naming. It’s worth mentioning the FormatDeadLetterPath() function. This was previously part of a static helper class EntityNameHelper; here, I’ve tried to replicate that behaviour locally (as it seems to have been removed):

        private static string QUEUE_NAME = "dead-letter-demo";
        private static string DEAD_LETTER_PATH = "$deadletterqueue";
        
        static string FormatDeadLetterPath() =>
            $"{QUEUE_NAME}/{DEAD_LETTER_PATH}";

Resubmitting a Dead Letter Message

This is something that I covered in my original post on this. It’s not inbuilt behaviour – but you basically copy the message and re-submit. In fact, this is much, much easier now:

            var serviceBusClient = new ServiceBusClient(connectionString);

            var deadLetterReceiver = serviceBusClient.CreateReceiver(FormatDeadLetterPath());
            var sender = serviceBusClient.CreateSender(QUEUE_NAME);

            var deadLetterMessage = await deadLetterReceiver.ReceiveMessageAsync();            

            using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);

            var resubmitMessage = new ServiceBusMessage(deadLetterMessage);
            await sender.SendMessageAsync(resubmitMessage);
            //throw new Exception("aa"); // - to prove the transaction
            await deadLetterReceiver.CompleteMessageAsync(deadLetterMessage);

            scope.Complete();            

Most of what’s here has previously been covered; the old Message.Clone is now much neater (but slightly less obvious) in that you simply pass the old message in as a constructor parameter. Because the Dead Letter Reason, et al, are now properties, there’s no longer a need to manually deal with them not getting copied across.

The transaction simply ensures that either the dead letter message is correctly re-submitted and completed, or it remains on the dead letter queue.

Summary

The new library makes the code much more concise and discoverable. We’ve seen how to force a dead letter; how to receive and view the contents of the Dead Letter Queue, and finally, how to resubmit a dead lettered message.

Isolated Azure Function in .Net 6

I’ve recently been working with Azure Isolated Functions for .Net 6. This is a kind of getting started guide – especially if you’re coming across from non-isolated.

What’s an Isolated Function

As is explained here, an isolated function is a function that runs out of process and is self-hosted. Previously, there were issues with dependency conflicts because you were married to the function host.

What’s the Difference Between an Isolated and a Non-Isolated Function?

A non-isolated function can have just one file in the project.

And in that file, you can have a single method, decorated with a Function attribute:

        [FunctionName("Function1")]
        public async Task<HttpResponseData> Run(. . .

However, for an Isolated Function, you’ll need a Program.cs, with something along the lines of the following as a minimum:

        public static async Task Main(string[] args)
        {
            var host = new HostBuilder()
                .ConfigureFunctionsWorkerDefaults()
                .Build();

            await host.RunAsync();
        }
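
As an aside, if your functions need dependencies registering, the same HostBuilder is the natural place for it – a minimal sketch (the registrations themselves are purely illustrative):

            var host = new HostBuilder()
                .ConfigureFunctionsWorkerDefaults()
                .ConfigureServices(services =>
                {
                    // Illustrative only - register whatever your functions depend on
                    // services.AddSingleton<IMyService, MyService>();
                })
                .Build();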

Further, the dependency libraries change; Isolated Functions use the following libraries (these obviously depend slightly on your bindings, but are a good start):

  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Abstractions" Version="1.1.0" />	 
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.3.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.6.0" />
  <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http" Version="3.0.13" />

Finally, you’ll need to change your decorator to:

[Function("Function1")]

From:

[FunctionName("Function1")]

The FunctionName attribute comes from the old WebJobs namespace (Microsoft.Azure.WebJobs).

Some possible errors…

At least one binding must be declared.

This error typically happens in the following scenario: the method has a [Function] decorator, but within the method signature there are no valid bindings – that is, nothing that the Azure Functions ecosystem understands. For example, the following signature would give that error:

[Function("Function1")]
public void MyFunc()
{
}
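
By way of contrast, a signature with a recognised binding keeps the runtime happy – a rough sketch of an HTTP-triggered version (assuming the Microsoft.Azure.Functions.Worker.Extensions.Http package listed above, plus using directives for System.Net, Microsoft.Azure.Functions.Worker and Microsoft.Azure.Functions.Worker.Http):

[Function("Function1")]
public async Task<HttpResponseData> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req)
{
    // The HttpTrigger attribute is a binding that the Functions runtime understands
    var response = req.CreateResponse(HttpStatusCode.OK);
    await response.WriteStringAsync("Hello from an isolated function");
    return response;
}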

Specified condition "$(SelfContained)" evaluates to "" instead of a boolean.

For this, you need to specify the output type to be an executable:

<PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <AzureFunctionsVersion>v4</AzureFunctionsVersion>
    <OutputType>Exe</OutputType>
    <Nullable>enable</Nullable>
</PropertyGroup>

Xunit Tests Won’t Run After Upgrade to .Net 6

Some time ago, while trying to get .Net Core 3.1 to work with Xunit, I discovered that 2.4.1 was the correct version of xunit.runner.visualstudio to use. At the time, I wasn’t sure why this was the case.

Recently, after upgrading an Azure Function from .Net 5 to 6, I came across almost the reverse problem. It turns out that 2.4.3 of xunit.runner.visualstudio actually works fine; however, you need to include the following library as well:

Microsoft.NET.Test.Sdk

For .Net 6, if you want to run Xunit, then you need the following libraries:

<ItemGroup>

	<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.0.0" />

	<PackageReference Include="xunit" Version="2.4.1" />
	<PackageReference Include="xunit.runner.console" Version="2.4.1" />
	<PackageReference Include="xunit.runner.visualstudio" Version="2.4.3" />
</ItemGroup>
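
With those packages in place, a trivial test is enough to confirm that the runner is discovering and executing tests again:

using Xunit;

public class SmokeTests
{
    [Fact]
    public void Runner_Discovers_Tests() => Assert.True(true);
}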

References

https://stackoverflow.com/questions/69972184/xunit-tests-no-longer-working-after-upgrade-from-net-5-to-net-6-q-a

Chaos Monkey – Part 4 – Creating an Asp.Net 6 Application that Caches an Error

This is a really strange post, but it’s a lead-up to a different one; I felt it made sense as a post in its own right, though – it follows a trend I have of creating things that break on purpose. For example, here’s a post from a few years ago where I discussed how you might force a machine to run out of memory.

In this case, I’m creating a simple application that runs fine until, at a random point, it generates an error; it caches that error state, and is then broken until the application is restarted.

Why?

I’m working on some alerting and resilience experiments at the minute, and having an unstable application is useful for those tests. Also, this is not as unusual a scenario as it sounds – obviously, writing an application that deliberately crashes and then stays broken is unusual; but having an application somewhere in your estate that behaves this way may not be.

How

I’ve set up a bog-standard Asp.Net MVC 6 application. I then installed the following package:

Install-Package System.Runtime.Caching

Finally, I changed the default Privacy controller action to potentially crash:

public IActionResult Privacy()
{
    string result = Crash();
    return View(model: result);
}

Here, I’m feeding a string into the privacy view as its model. The Crash method has a 1 in 10 chance of caching an error:

private string Crash()
{
    // If no error has been cached yet, roll the dice
    if (!_memoryCache.TryGetValue("Error", out string errorCache))
    {
        if (_random.Next(10) == 1)
        {
            // Cache the error state - every subsequent call will now throw
            _memoryCache.Set("Error", "Now broken!");
            return "Now broken";
        }
    }
    else
    {
        // An error was cached on a previous call, so the app stays broken
        throw new Exception("Some exception");
    }

    return "Working fine";
}
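
The _memoryCache and _random fields aren’t shown above – the TryGetValue/Set calls look like IMemoryCache from Microsoft.Extensions.Caching.Memory, so, assuming that (and that AddMemoryCache has been registered in Program.cs), the wiring might look something like this:

// Assumed wiring - IMemoryCache injected into the default HomeController
private readonly IMemoryCache _memoryCache;
private static readonly Random _random = new();

public HomeController(IMemoryCache memoryCache)
{
    _memoryCache = memoryCache;
}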

I then just display the model in the view (privacy.cshtml):

@model string
@{
    ViewData["Title"] = "Privacy Policy";
}
<h1>@ViewData["Title"]</h1>
<h1>@Model</h1>

<p>Use this page to detail your site's privacy policy.</p>

Now, if you run it and hit the Privacy page somewhere between 2 and 15 times, you’re likely to see it break, and you’ll need to restart the application to fix it.

How to Set-up Hangfire with a Dashboard in .Net 6 Inside a Docker Container

In this earlier post I wrote about how you might set up Hangfire in .Net 6 using LiteDB storage.

In this post, we’ll talk about the Hangfire dashboard, and specifically, some challenges that may arise when trying to run that inside a container.

I won’t go into the container specifically, although if you’re interested in how the container might be set-up then see this beginner’s guide to Docker.

Let’s quickly look at the Docker Compose file, though:

services:
  my-api:
    build: .\MyApi
    ports: 
      - "5010:80"      
    logging: 
      driver: "json-file"

Here you can see that my-api maps port 5010 on the host to port 80 inside the container.

Hangfire

Let’s see how we would set-up the Hangfire Dashboard:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
builder.Services.AddHttpClient();
builder.Services.AddLogging();

builder.Services.AddHangfire(configuration =>
{
    configuration.UseLiteDbStorage("./hf.db");
    
});
builder.Services.AddHangfireServer();

// Add services here

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();

var options = new DashboardOptions()
{
    Authorization = new[] { new MyAuthorizationFilter() }
};
app.UseHangfireDashboard("/hangfire", options);

app.MapPost(" . . .

app.Run();

public class MyAuthorizationFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context) => true;
}

This is the set-up, but there are a few bits to unpack here.

UseHangfireDashboard

The UseHangfireDashboard call basically lets Hangfire know that you want the dashboard set up. However, by default, it will only allow local connections, which does not include connections mapped through Docker.

DashboardOptions.Authorization

The Authorization property allows you to specify who can view the dashboard. As you can see here, I’ve passed in a custom filter that bypasses all security – probably don’t do this in production – but you can swap MyAuthorizationFilter for any implementation you see fit.

Note that if you don’t override this, then attempting to access the dashboard will return a 401 error if you’re running the dashboard from inside a container.
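
If you want something a little less permissive than the pass-through filter above, the same interface can gate on the authenticated user – a rough sketch, assuming authentication is already configured in the pipeline:

public class AuthenticatedUserFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        // GetHttpContext() is provided by Hangfire.AspNetCore
        var httpContext = context.GetHttpContext();
        return httpContext.User.Identity?.IsAuthenticated ?? false;
    }
}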

Accessing the Dashboard

Navigating to localhost:5010/hangfire on the host will take you to the dashboard.

Configuring Docker to use a Dev Cert When Calling out to the Host Machine

I’ve recently been wrestling with trying to get an ASP.Net service within a docker container to call a service running outside of that container (but on the same machine). As you’ll see as we get further into the post, this is a lot more difficult than it first appears. Let’s start with a diagram to illustrate the problem:

The diagram above illustrates, at a very basic level, what I was trying to achieve: essentially, to have the service running inside docker call a service outside of docker. In real life, the service (2) would be remote: very likely on a different physical server, and definitely with an allocated domain address; however, for this experiment, it lives on the same physical (or virtual) machine as the docker host.

Before I continue, I must point out that the solution to this comes by way of some amazing help from Rob Richardson: he gave a talk at NDC Porto that got me about 70% of the way there, and then helped me out further to actually get this working!

Referencing a Service outside of Docker from within Docker

Firstly, let’s consider a traditional docker problem: if I load Asp.Net Service (2) then I would do so in a browser referencing localhost. However, if I reference localhost from within docker, that refers to the localhost of the container, not the host machine. The way around this is with host.docker.internal: this gives you a path to the host machine of the docker container.
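
For example, a call out of the container to the host might look something like this (the port and route are purely illustrative):

using var client = new HttpClient();
// host.docker.internal resolves to the docker host, not the container itself
var response = await client.GetAsync("https://host.docker.internal:5001/weatherforecast");
Console.WriteLine(response.StatusCode);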

Certificates – The Problem

Okay, so onto the next (and main) issue: when I try to call Asp.Net Service (2) from the docker container, I get an SSL error:

The remote certificate is invalid according to the validation procedure: RemoteCertificateNameMismatch, RemoteCertificateChainErrors

Why

The reason has to do with the way that certificates work; and, in some cases, don’t work. Firstly, if you watch the linked video, you’ll see that the dev-cert functionality in Linux has a slight flaw – in that it doesn’t do anything*. Secondly, because you’re (effectively) jumping across machines here, you can’t just issue a dev cert to each anyway, as they will be different dev certs; and thirdly, dev-certs are issued to localhost by default: but as we saw, we’re actually trying to contact host.docker.internal.

Just to elaborate on the trust chain; let’s consider the following diagram:

In this diagram, Certificate A is based on the Root Certificate – if the machine trusts the root certificate, then it will trust Certificate A – however, the same machine will only trust Certificate B if it is explicitly told to do so.

This means that the dev cert for the container will not be trusted on the host, and vice-versa – as neither has any trust chain or relationship with the other. This is the problem, but it also points to the solution.

Okay, so that’s the why – onto the how…

mkcert

Let’s start by introducing mkcert – an incredibly useful tool that hugely simplifies the whole process; it can be installed via chocolatey:

choco install mkcert

If you don’t want to use Chocolatey, then the repo is here.

Essentially, what this allows us to do is create a trusted root certificate, on which we can then base our other certificates. So, once it’s installed, we can create a new trusted root certificate like this:

mkcert -install

This installs our trusted root certificate into the machine’s trusted root store.

This will also generate the following files (on Windows, these will be in %localappdata%\mkcert):

rootCA.pem
rootCA-key.pem

These are the root certificates, so the next thing we need is a certificate that covers the specific domain. You can do that by simply calling mkcert with the appropriate domain(s):

mkcert localhost host.docker.internal

This creates a valid cert for both localhost and host.docker.internal:

localhost.pem
localhost-key.pem

You may wish to rename these to be something slightly more descriptive, but for the purpose of this post, this is sufficient.

Docker

Almost there now – we have our certificates, but we need to copy them to the correct location. Because we’ve run mkcert -install, the root certificate is already trusted on the local machine; however, we now need it inside the docker container (Asp.Net Service (1) from the diagram above). Firstly, let’s download the mkcert binary from here for the relevant version of Linux that the container image is running.

Let’s copy both the rootCA.pem and rootCA-key.pem into our Asp.Net Service (1) project and then change the dockerfile:

. . .
FROM base AS final
WORKDIR /app
COPY mkcert /usr/local/bin
COPY rootCA*.pem /root/.local/share/mkcert/
RUN chmod +x /usr/local/bin/mkcert \
  && mkcert -install \
  && rm -rf /usr/local/bin/mkcert 
COPY --from=publish /app/publish .
. . .

A few things to mention here:

1. The rest of this file is from the standard Asp.Net docker file. See this post for possible modifications to that file.
2. Each time you execute a RUN command, docker creates a new layer, which is why chaining the three commands into a single RUN with && makes sense.
3. When you run mkcert -install, it will pick up the root certificate that you copied into /root/.local/share/mkcert.
4. Make sure that these lines apply to the runtime version of the image, and not the SDK version (there’s absolutely no point in adding a certificate to the SDK image).
5. The final rm -rf /usr/local/bin/mkcert just cleans up the mkcert binary once it’s no longer needed.

The Service

The final part is to copy the generated certificates (localhost.pem and localhost-key.pem – renamed here to localhost-host.pem and localhost-host-key.pem) over to the service application (Asp.Net Service (2)). Finally, in appsettings.json, we need to tell Kestrel to use that certificate and key:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Kestrel": {
    "Certificates": {
      "Default": {
        "Path": "localhost-host.pem",
        "KeyPath": "localhost-host-key.pem"
      }
    }
  }
}
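
Alternatively, if you prefer to configure this in code rather than appsettings.json, Kestrel can be pointed at the same PEM pair in Program.cs – a sketch, assuming .Net 6 minimal hosting and the renamed files above:

builder.WebHost.ConfigureKestrel(kestrel =>
{
    kestrel.ConfigureHttpsDefaults(https =>
    {
        // Requires using System.Security.Cryptography.X509Certificates
        var pemCert = X509Certificate2.CreateFromPemFile(
            "localhost-host.pem", "localhost-host-key.pem");

        // Re-exporting avoids an ephemeral-key issue that some Windows set-ups
        // hit when using PEM certificates directly
        https.ServerCertificate = new X509Certificate2(
            pemCert.Export(X509ContentType.Pfx));
    });
});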

That’s it! If you open up Asp.Net Service (2), you can inspect the certificate and see that it’s based on the mkcert root.

References and Acknowledgements

As I said at the start, this video and Rob himself helped me a lot with this – so thanks to him!

It’s also worth mentioning that without mkcert this process would be considerably more difficult!

Footnotes

* Actually, that’s not strictly true – Rob points out the nuance in his video; but the takeaway is that it’s unlikely to be helpful here.