Using Scrutor to Implement the Decorator Pattern

I recently came across a very cool library, thanks to this video by Nick Chapsas: the library is Scrutor. In this post, I’m going to run through how it can help you honour the Open-Closed Principle by implementing the Decorator Pattern.

An Overly Complex Hello World App

Let’s start by creating a needlessly complex app that prints Hello World. Instead of simply printing Hello World, we’ll use DI to inject a service that prints it. Let’s start with the main Program.cs code (in .NET 6):

using Microsoft.Extensions.DependencyInjection;
using scrutortest;

var serviceCollection = new ServiceCollection();

serviceCollection.AddSingleton<ITestLogger, TestLogger>();

var serviceProvider = serviceCollection.BuildServiceProvider();

var testLogger = serviceProvider.GetRequiredService<ITestLogger>();
testLogger.Log("hello world");

Impressive, eh? Here’s the interface that we now rely on:

internal interface ITestLogger
{
    public void Log(string message);
}

And here is our TestLogger class:

internal class TestLogger : ITestLogger
{
    public void Log(string message)
    {
        Console.WriteLine(message);
    }
}

If you implement this and run it, you’ll see that it works fine – almost as well as the one-line version. However, let’s imagine that we now have a requirement to extend this class: after every message, we need to display ---OVER--- for… some reason.

Extending Our Overly Complex App to be Even More Pointless

There are a few ways to do this: you could obviously just change the class itself, but that breaches the Open-Closed Principle. That’s where the Decorator Pattern comes in. Scrutor allows us to create a new class that looks like this:

internal class TestLoggerExtended : ITestLogger
{
    private readonly ITestLogger _testLogger;

    public TestLoggerExtended(ITestLogger testLogger)
    {
        _testLogger = testLogger;
    }

    public void Log(string message)
    {
        _testLogger.Log(message);
        _testLogger.Log("---OVER---");
    }
}

There are a few things of note here: firstly, we’re implementing the same interface as the original class; secondly, we’re injecting that interface into our constructor; and finally, in the Log method, we’re calling the original class. Obviously, if you just register this in the DI container as normal, bad things will happen; so we use the Scrutor Decorate method:

using Microsoft.Extensions.DependencyInjection;
using scrutortest;

var serviceCollection = new ServiceCollection();

serviceCollection.AddSingleton<ITestLogger, TestLogger>();
serviceCollection.Decorate<ITestLogger, TestLoggerExtended>();

var serviceProvider = serviceCollection.BuildServiceProvider();

var testLogger = serviceProvider.GetRequiredService<ITestLogger>();
testLogger.Log("hello world");

If you now run this, you’ll see that the functionality is very similar to inheritance, but you haven’t coupled the two services directly.
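
For reference, with the decorator registered, the console output from the program above should look something like this:

hello world
---OVER---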

Introduction to Azure Chaos Studio

Some time ago, I investigated the concept of chaos engineering. The principle behind chaos engineering is a very simple one: since your software is likely to encounter hostile conditions in the wild, why not introduce those conditions while you can control them, and deal with the fallout then, rather than at 3am on a Sunday.

At the time, I was trying to deal with an on-site issue where the connection seemed to be randomly dropping. In the end, I solved this by writing something similar to Polly – albeit a much simpler version.

Microsoft have recently released a preview of something called Chaos Studio. It’s very much in its infancy now, but what is there looks very interesting.

The product is essentially divided into two sections: targets and experiments. Targets represent the things that you intend to wreak chaos upon, and experiments are how that chaos will be wrought.

Scope

For this test, I’m going to use a VM. That’s mainly because what you can do with this product is currently limited to VMs, AKS, and Redis.

Create a VM and Check Availability

The first step is to create a VM. To be honest, it doesn’t matter what the VM is, because all we’ll be doing is switching it off. Start by checking the availability – you should be able to do that in Logs – and you should notice 100% availability, unless something has gone catastrophically wrong with your deployment.

Targets

The next step is to configure our target. In chaos studio, select Targets and pick the new VM:

Now that you’ve enabled the target, you’ll need to grant Chaos Studio permission to the VM. Inside the VM blade, select Access Control:

If you don’t grant this access, you’ll get a permissions error when you run the experiment. The next step is to create the experiment. In Chaos Studio, select Experiments and then Create:

This will bring up a screen similar to the following:

Let’s discuss the concepts here a little: we have step, branch, and fault. A step is a sequential action that you will execute, whilst a branch is a parallel action; that is, actions in different branches can happen at the same time. A fault is what you actually do – so the fault is the chaos! Let’s add a fault:

This asks two things: what I want the fault to happen on (you can only select targets that have previously been created), and what I want the fault to be. In my case, I’ve created a two-step process that turns the machine off, waits a minute, then turns it off again:

Now that the experiment is created, you can start it. You get a warning at this point that basically says “it’s your foot, and you’re currently pointing a high powered rifle at it!”:

If you now run this – and it’s worth bearing in mind that there’s no simulation here: if you do this on production infrastructure, it really will shut it down – you’ll see its status update as it runs:

You can drill down into the details to see exactly what it’s doing, what stage, etc.:

The experiment kills the machine for 1 minute, then waits for a minute, then kills it again. If you have a look at the availability graph, you should be able to see that:

Summary

So far, I’m pretty impressed with this tool. When it’s finished (and by that, I mean when you can create your own chaos, and the targets have expanded to cover the entire Azure ecosystem), it’s going to be a really interesting testing tool.

References

Azure Friday Introduction to Chaos Studio

Azure Monitor – Failures and Triggering an Alert from Application Insights

Azure Monitor offers a lot more functionality than just Application Insights. In this post, I’m going to walk through setting up and triggering an alert.

Before we trigger an alert, we need to have something to trigger an alert for. If you’re reading this, you may already have an app service deployed into Azure, but for testing purposes, I’ve created a fresh MVC template, and added the following code:
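
A minimal sketch of something that will do the job (assuming the default .NET 6 MVC template and its HomeController – the exact code doesn’t matter, only that it throws occasionally) might look like this:

using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    public IActionResult Index() => View();

    // Throws roughly one time in three, to generate some exceptions for Application Insights
    public IActionResult Privacy()
    {
        if (Random.Shared.Next(3) == 0)
        {
            throw new InvalidOperationException("Test exception for alerting");
        }

        return View();
    }
}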

This will cause an error when the Privacy menu option is pressed, around 1 in 3 times (refreshing with Ctrl+F5 a few times should give you several).

Failures

If you have Application Insights set up for the app service, you can select the Failures blade:

Looking at this image, there are three points of particular interest. First, I’ve selected the Exceptions tab – this lets me see the requests and any exceptions that resulted. Second, you can see the spike that I’ve highlighted. Finally, on the right-hand side of the screen, the exceptions are broken down by type; in this case, I only have one. I can drill into this by selecting the count, or by selecting Samples at the bottom.

The next step is to set up an alert when I get a given number of exceptions.

Creating an Alert

To set up a new alert, select Alerts under Monitoring:

As you can see, it tells us that we have no alerts. In fact, there’s a distinction to be drawn here: what this means is that no alerts have actually fired, not that no alert rules exist. To create a new alert rule, select Create -> Alert rule:

In creating a new alert, there are three main things to consider: scope, condition, and action.

Scope

The scope of the alert is what it can monitor: that is, what do you want to be alerted about. In our case, that’s the app service:

Condition

The next section is condition. There are many options here, but the one that we’re interested in for this post is Exceptions:

After selecting this, you’ll be asked about the Signal Logic – that is, what it is about the exceptions that should cause an alert. In our case, we want an alert where the number (count) of exceptions exceeds 3 in a 5-minute period:

Once you select this, it’ll give you an idea of what the alert rule might cost. In my tests so far, this has been around $1–2 a year.

Actions

The next section is Actions: once the alert fires, what do you want it to do? This brings into play Action Groups. Here, we’ll create a new Action Group:

You can tell it here to e-mail, send an SMS, etc.:

It is not obvious (at least to me) from the screen above, but on the right-hand side, you need to select OK before it will let you continue.

We’re going to skip the other tabs in the Action Group and jump to Review + create, then select Create. This will bring you back to the Actions tab, with the new action group selected as the default action. You’ll also get a notification that you’ve been added to that action group:

Finally, in Details you can name the alert, and give it a severity; for example:

Once you create this, you’ll be taken back to the Alerts tab – which will, confusingly, still be empty. You can see your new alert in the Alert rules section:

Triggering the Alert

To trigger the alert, I’m now going to force my site to crash (as described earlier) – remember that the condition is greater than 3. Once I’ve generated four exceptions, I wait for the alert to trigger. At this point, I should get an e-mail telling me that the alert has fired:

Finally, we can see that this triggered in the Alerts section. If you drill into this, it will helpfully tell you why it has triggered:

Once the period has passed, and the exceptions have dropped below 4, you’ll get another mail informing you that the danger has passed.

Summary

We’ve just scratched the surface of Azure Alerts here, but even this gives us a taste for how useful these things are. In future posts, I’m going to drill into this a bit further.

Azure Automation Runbooks – Setup

I’ve recently been investigating Azure Automation runbooks. Essentially, these give you a way to execute some code (currently PowerShell or Python) to perform basic tasks against your infrastructure.

For this post, we’ll focus on setting up a (default) runbook, and just making it run. Let’s start by creating an automation account:

From here, you can create your automation account:

Once this has been created, it gives you a couple of example runbooks:

If we have a look at the tutorial with identity, we see the following PowerShell script:

<#
    .DESCRIPTION
        An example runbook which gets all the ARM resources using the Managed Identity
    .NOTES
        AUTHOR: Azure Automation Team
        LASTEDIT: Oct 26, 2021
#>
"Please enable appropriate RBAC permissions to the system identity of this automation account. Otherwise, the runbook may fail..."
try
{
    "Logging in to Azure..."
    Connect-AzAccount -Identity
}
catch {
    Write-Error -Message $_.Exception
    throw $_.Exception
}
#Get all ARM resources from all resource groups
$ResourceGroups = Get-AzResourceGroup
foreach ($ResourceGroup in $ResourceGroups)
{    
    Write-Output ("Showing resources in resource group " + $ResourceGroup.ResourceGroupName)
    $Resources = Get-AzResource -ResourceGroupName $ResourceGroup.ResourceGroupName
    foreach ($Resource in $Resources)
    {
        Write-Output ($Resource.Name + " of type " +  $Resource.ResourceType)
    }
    Write-Output ("")
}

Looking at this script, it only really does two things: connects to Azure using the managed identity, and then loops through all the resource groups in the subscription, printing out the resources in each.

If you run this:

Then you’ll see the following warning in the output (basically saying that you should set up the permissions, or things won’t work):

If you now switch to Errors, you’ll see a confusing error (caused by the fact that we haven’t set up the permissions, and so things don’t work):

In order to correct this, you need to give the runbook appropriate permissions. Head over to the automation account resource, and select Identity:

Select Add role assignments.

Because this script is listing the resources in the subscription, you’ll need to be generous with the permissions:

If you run that now, it should display all the resource groups fine:

Composable Delegates

I’ve been playing around with delegates recently, and came across something that was new to me. I was familiar with the concept of assigning a delegate; for example:

delegate void delegate1();

private static void UseDelegate()
{
    delegate1 mydelegate;
    mydelegate = func1;

    mydelegate();
}

static void func1() =>
        Console.WriteLine("func1");

I hadn’t realised that it was possible to create a composed delegate; that is, you can simply do the following:

delegate void myDelegate();

private static void ComposableDelegate()
{
    myDelegate del1 = M1;
    myDelegate del2 = M2;
    myDelegate composedDel = del1 + del2;
        
    composedDel();
}

private static void M1() => Console.WriteLine("M1");
private static void M2() => Console.WriteLine("M2");

You can do the exact same thing with Action or Func delegates; for example:

Action action = () => Console.WriteLine("M1");
Action action2 = () => Console.WriteLine("M2");
Action composedDel = action + action2;

composedDel();
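
You can also remove a delegate from a composition using the - or -= operator, which removes the last occurrence of that delegate from the invocation list; for example:

Action a = () => Console.WriteLine("M1");
Action b = () => Console.WriteLine("M2");

Action composed = a + b;
composed -= b;   // removes b from the invocation list

composed();      // prints only "M1"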

References

Combining Delegates

MySql Auto-Increment

While playing around with MySql recently, I discovered a strange little quirk: an auto-increment field cannot be reset – that is, it cannot be reset to a lower value than it currently holds. For example, say that you insert a lot of records into a table, or that you manually add a key that’s very high; subsequent inserts can then fail with:

MySql.Data.MySqlClient.MySqlException: 'Duplicate entry '2147483647' for key 'test_table.PRIMARY''

In my case, I was simply playing around with some settings – 2147483647 is the maximum value of a signed 32-bit INT, so once the auto-increment counter reaches it, every subsequent insert tries to reuse that same key (and if you genuinely have that many records, you probably have bigger problems than this). My initial idea was to reset the counter:

ALTER TABLE myschema.test_table AUTO_INCREMENT = 1

It turns out that, whilst you can run this, it only takes effect for values higher than the current one – presumably to prevent key conflicts. The way around it for me was to insert a new record, but to override the auto-increment value explicitly:

INSERT INTO `myschema`.`test_table`
(`firstname`,
`surname`,
`key`)
VALUES
('',
'',
1);

You would then need to remove this record. Not ideal, but the only way I could get this to work.

datagen

A few weeks ago, I was looking into making a change to a project in work that uses DbUp. For some reason, I took away from that the overwhelming urge to write my own data generator. It’s far from finished, but I came up with datagen. This currently only comes in a MySql flavour, but my plan is to add a few more database engines.

The idea behind this is that you can generate pseudo data in your database. It’s not a standalone tool, because I wanted to allow it to be customisable. To install it, simply reference the package:

<PackageReference Include="datagen.MySql" Version="1.0.0" />

You can then populate the data in an entire schema. Just create a console app (this works with any app type that can physically access the database):

using datagen.Core;
using datagen.MySql;
using datagen.MySql.MySql;

var valueGenerator = new ValueGenerator(
    true,
    DateTime.Now,
    DateTime.Now.AddDays(-100),
    DateTime.Now.AddDays(10));

string connectionString = "Server=127.0.0.1;Port=3306;Database=datagentest;Uid=root;Pwd=password;AllowUserVariables=True";

var mySqlDefaults = new MySqlDefaults(connectionString);

var generate = new Generate(
    connectionString,
    valueGenerator,
    mySqlDefaults.DataTypeParser,
    mySqlDefaults.UniqueKeyGenerator);
await generate.FillSchema(20, "datagentest");

The code above creates a ValueGenerator – there is a default one in the package, but you can easily write your own. FillSchema then adds 20 rows to every table in the schema.

Limitations

There are currently a few limitations – the main two being that it only works with MySql, and that it does not deal with foreign keys (it will just omit that data).

Feel free to contribute, or offer suggestions.

Terraform – Adding Control of a Resource Not Created Through Terraform

In recent investigations into Terraform, I came across a situation where I’d created a resource outside of Terraform, but which I now wished to manage through it. Initially, I simply declared the resource and tried to apply it. Interestingly, this gets through the plan stage, but errors at the apply stage with something similar to the following:

│ Error: A resource with the ID "/subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_logic_app_workflow" for more information.

│ with azurerm_logic_app_workflow.logicapp,
│ on basic.tf line 22, in resource "azurerm_logic_app_workflow" "logicapp":
│ 22: resource "azurerm_logic_app_workflow" "logicapp" {

This is a really useful error – if you do, in fact, look at the docs for that resource, you’ll see that it gives the following way of importing at the end:

terraform import azurerm_logic_app_workflow.workflow1 /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/Microsoft.Logic/workflows/workflow1

So, simply swap in the values from the error message, and you end up with:

terraform import azurerm_logic_app_workflow.logicapp /subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter

The result of running this is:

PS C:\Users\pcmic\source\repos\tf-logicapp-test> terraform import azurerm_logic_app_workflow.logicapp /subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter
azurerm_logic_app_workflow.logicapp: Importing from ID "/subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter"...
azurerm_logic_app_workflow.logicapp: Import prepared!
Prepared azurerm_logic_app_workflow for import
azurerm_logic_app_workflow.logicapp: Refreshing state... [id=/subscriptions/34a7cc79-9bea-4d04-86a1-01f3b260bdb0/resourceGroups/ServiceBusTest/providers/Microsoft.Logic/workflows/NotifyDeadLetter]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

This now adds that resource into your state file (.tfstate):

    {
      "mode": "managed",
      "type": "azurerm_logic_app_workflow",
      "name": "logicapp",
. . .

Reading an Azure Dead Letter Queue with Azure.Messaging.ServiceBus

Some time ago, I wrote about how you can read the dead letter queue. I also wrote this post on the evolution of Azure Service Bus libraries.

By way of a recap: a dead letter queue is a sub-queue that allows a message that would otherwise clog up the main queue (for example, because it can’t be processed for some reason) to be moved out of the way and removed from processing.

In this post, I’ll cover how you can read a dead letter queue using the new Azure.Messaging.ServiceBus library.

Force a Dead Letter Message

There are basically two ways that a message can end up in a dead letter queue: either it breaks the rules (it exceeds the maximum delivery count, its time-to-live expires, etc.), or it is explicitly placed in the dead letter queue. To do the latter, the process is as follows:

var serviceBusClient = new ServiceBusClient(connectionString);
var messageReceiver = serviceBusClient.CreateReceiver(QUEUE_NAME);
var message = await messageReceiver.ReceiveMessageAsync();

string messageBody = Encoding.UTF8.GetString(message.Body);

await messageReceiver.DeadLetterMessageAsync(message, "Really bad message");

The main difference here (other than that the previous method was DeadLetterAsync) is that you pass the entire message, rather than just the lock token.
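
For comparison, the equivalent call in the older Microsoft.Azure.ServiceBus library looked something along these lines (quoted from memory, so treat the exact signature as approximate):

// Older library: the lock token is passed, rather than the message itself
await messageReceiver.DeadLetterAsync(message.SystemProperties.LockToken, "Really bad message");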

Reading a Dead Letter Message

There are a few quirks here – firstly, the dead letter reason, delivery count, etc. used to live in a collection of system properties, whereas they are now just properties on the message – which makes them far more accessible and discoverable. Here’s the code to read the dead letter queue:

var serviceBusClient = new ServiceBusClient(connectionString);
var deadLetterReceiver = serviceBusClient.CreateReceiver(FormatDeadLetterPath());

var message = await deadLetterReceiver.ReceiveMessageAsync();

string messageBody = Encoding.UTF8.GetString(message.Body);

Console.WriteLine("Message received: {0}", messageBody);

// In previous library versions, these lived in the message's system properties
// https://www.pmichaels.net/2021/01/23/read-the-dead-letter-queue/
if (!string.IsNullOrWhiteSpace(message.DeadLetterReason))
{
    Console.WriteLine("Reason: {0} ", message.DeadLetterReason);
}
if (!string.IsNullOrWhiteSpace(message.DeadLetterErrorDescription))
{
    Console.WriteLine("Description: {0} ", message.DeadLetterErrorDescription);
}

Console.WriteLine($"Message {message.MessageId} ({messageBody}) had a delivery count of {message.DeliveryCount}");

Again, most of the changes are simply naming. It’s worth mentioning the FormatDeadLetterPath() function. This was previously part of a static helper class EntityNameHelper; here, I’ve tried to replicate that behaviour locally (as it seems to have been removed):

private static string QUEUE_NAME = "dead-letter-demo";
private static string DEAD_LETTER_PATH = "$deadletterqueue";

static string FormatDeadLetterPath() =>
    $"{QUEUE_NAME}/{DEAD_LETTER_PATH}";
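
As an aside, more recent versions of Azure.Messaging.ServiceBus can resolve the dead letter sub-queue for you, so you don’t need to build the path by hand – something along these lines (hedging slightly, as the options may vary by version):

// Alternative: let the library address the dead letter sub-queue directly
var deadLetterReceiver = serviceBusClient.CreateReceiver(
    QUEUE_NAME,
    new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });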

Resubmitting a Dead Letter Message

This is something that I covered in my original post on this. It’s not built-in behaviour – you basically copy the message and re-submit it. In fact, this is much, much easier now:

var serviceBusClient = new ServiceBusClient(connectionString);

var deadLetterReceiver = serviceBusClient.CreateReceiver(FormatDeadLetterPath());
var sender = serviceBusClient.CreateSender(QUEUE_NAME);

var deadLetterMessage = await deadLetterReceiver.ReceiveMessageAsync();

using var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled);

var resubmitMessage = new ServiceBusMessage(deadLetterMessage);
await sender.SendMessageAsync(resubmitMessage);
//throw new Exception("aa"); // - to prove the transaction
await deadLetterReceiver.CompleteMessageAsync(deadLetterMessage);

scope.Complete();

Most of what’s here has previously been covered; the old Message.Clone is now much neater (but slightly less obvious) in that you simply pass the old message in as a constructor parameter. Because the dead letter reason, et al., are now properties, there’s no longer a need to manually deal with them not getting copied across.

The transaction ensures that either the dead letter message is successfully re-submitted and completed, or it remains in the dead letter queue.

Summary

The new library makes the code much more concise and discoverable. We’ve seen how to force a message onto the dead letter queue; how to receive and view the contents of the dead letter queue; and finally, how to resubmit a dead-lettered message.