Category Archives: C#

Playing with Azure Event Hub

I’ve recently been playing with the Azure Event Hub. This is basically a way of transmitting large amounts* of data between systems. In a later post, I may try and test these limits by designing some kind of game based on this.

As a quick disclaimer, it’s worth bearing in mind that I am playing with this technology; much of the content here can be found in the links at the bottom of the post – you won’t find anything original, just a record of my findings – and you may well find more (and more accurate) information in those links.

Event Hub Namespace

The first step, as with many Azure services, is to create a namespace:

For a healthy amount of data transference, you’ll pay around £10 per month.

Finally, we’ll create an event hub within the namespace:

When you create the event hub, it asks how many partitions you need. Partitions basically split the message delivery; and the hub is clever enough to work out that, if you have three partitions and two listeners, one listener should be assigned two partitions, and the other just one:

We’ll need an access policy so that we have permission to listen:

New Console Apps

We’ll need to create two applications: a producer and a consumer.

Let’s start with a producer. Create a new console app and add the Event Hubs client NuGet package.
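
The code below uses types from the Microsoft.Azure.EventHubs package, so that’s the one to install:

PM> Install-Package Microsoft.Azure.EventHubs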

Here’s the code:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

class Program
{
    private static EventHubClient eventHubClient;
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Publisher;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
 
    public static async Task Main(string[] args)
    {
        EventHubsConnectionStringBuilder connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
        {
            EntityPath = EhEntityPath
        };
 
        eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
 
        while (true)
        {
            Console.Write("Please enter message to send: ");
            string message = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(message)) break;
 
            await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
        }
 
        await eventHubClient.CloseAsync();
 
        Console.WriteLine("Press ENTER to exit.");
        Console.ReadLine();
    }
}

Consumer

Next we’ll create a consumer; so the first thing we’ll need is to grant permissions for listening:

We’ll create a second console application, referencing the same client library, plus the processor library.
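
For the code below, that means Microsoft.Azure.EventHubs again, plus Microsoft.Azure.EventHubs.Processor (which provides the EventProcessorHost):

PM> Install-Package Microsoft.Azure.EventHubs
PM> Install-Package Microsoft.Azure.EventHubs.Processor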

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

class Program
{
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Listener;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
    private const string StorageContainerName = "eventhub";
    private const string StorageAccountName = "pcmeventhubstorage";
    private const string StorageAccountKey = "key";
 
    private static readonly string StorageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", StorageAccountName, StorageAccountKey);
 
    static async Task Main(string[] args)
    {
        Console.WriteLine("Registering EventProcessor...");
 
        var eventProcessorHost = new EventProcessorHost(
            EhEntityPath,
            PartitionReceiver.DefaultConsumerGroupName,
            EhConnectionString,
            StorageConnectionString,
            StorageContainerName);
 
        // Registers the Event Processor Host and starts receiving messages
        await eventProcessorHost.RegisterEventProcessorAsync<EventsProcessor>();
 
        Console.WriteLine("Receiving. Press ENTER to stop worker.");
        Console.ReadLine();
 
        // Disposes of the Event Processor Host
        await eventProcessorHost.UnregisterEventProcessorAsync();
    }
}

class EventsProcessor : IEventProcessor
{
    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }
 
    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }
 
    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
        return Task.CompletedTask;
    }
 
    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }
 
        return context.CheckpointAsync();
    }
}

As you can see, we can now transmit data through the Event Hub into client applications:

Footnotes

*Large, in terms of frequency, rather than volume – for example, transmitting a small message twice a second, rather than uploading a petabyte of data

References

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-receive-eph

What can you do with a logic app? Part three – Creating a Logic App Client

One of the things missing from Azure Logic Apps is the ability to integrate human interaction. Microsoft do have their own version of an interactive workflow (PowerApps), which is (obviously) far better than anything you can produce by following this post.

In this post, we’ll create a very basic client for a logic app. Obviously, with some thought, this could easily be extended to allow a fully functional, interactive, workflow system.

Basic Logic App

Let’s start by designing our logic app; the app in question is going to be a very simple one. It will add a message to a logging queue (just so it has something to do), then ask the user a question – left or right? – by putting a message onto a topic. Based on the user’s response, we’ll write a message to the queue saying either left or right. Let’s have a look at our Logic App design:

It’s worth pointing out a few things about this design:
1. The condition uses the expression base64ToString() to convert the encoded message into plain text (see the example after this list).
2. Where the workflow picks up, it uses a peek-lock, and then completes the message at the end. It looks like it’s a ‘feature’ of logic apps that an automatic complete on this trigger will not actually complete the message (plus, this is actually a better design).
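
For reference, the condition expression looks something like this (treat it as a sketch – the exact property path depends on your trigger’s output):

base64ToString(triggerBody()?['ContentData'])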

Queues and Topics

The “Log to message queue” action above is putting an entry into a queue; so, a quick note on why we’re using a queue for logging, but a topic for the interaction with the user. In a real-life version of this system, we might have many users, but they might all want to perform the same action. Let’s say that they are all part of a sales process, and the actions are actually steps along that process; adding these to a queue maintains their sequence. Here’s the queue and topic layout that I’m using for this post:

Multiple Triggers

As you can see, we actually have two triggers in this workflow. The first starts the workflow (so we’ll drop a message into the topic to start it), and the second waits for a second message to go into the topic.

To add a trigger part way through the workflow, simply add an action, then search for and select “Triggers”:

Because we have a trigger part way through the workflow, what we have effectively issued here is an await statement. Once a message appears in the subscription, the workflow will continue where it left off:

As soon as a message is posted, the workflow carries on:

Client Application

For the client application, we could simply use the Service Bus Explorer (in fact, the screenshots above were taken from using this to simulate messages in the topic). However, the point of this post is to create a client, and so we will… although we’ll just create a basic console app for now.

We need the client to do two things: read from a topic subscription, and write to a topic. I haven’t exactly been here before, but I will be heavily plagiarising from here, here, and here.

Let’s create a console application:

Once that’s done, we’ll need the Service Bus client NuGet package:
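
The code below uses types from the Microsoft.Azure.ServiceBus package:

PM> Install-Package Microsoft.Azure.ServiceBus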

The code is generally quite straightforward, and looks a lot like the code to read and write to queues. The big difference is that you don’t read from a topic, but from a subscription to a topic (a topic can have many subscriptions):

using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class Program
{
    
    static async Task Main(string[] args)
    {
        MessageHandler messageHandler = new MessageHandler();
        messageHandler.RegisterToRead("secondstage", "sub1");
 
        await WaitForever();
    }
 
    private static async Task WaitForever()
    {
        while (true) await Task.Delay(5000);
    }
}
public class MessageHandler
{
    private string _connectionString = "service bus connection string details";
    private ISubscriptionClient _subscriptionClient;
    public void RegisterToRead(string topicName, string subscriptionName)
    {            
        _subscriptionClient = new SubscriptionClient(_connectionString, topicName, subscriptionName);
 
        // AutoComplete is off: we complete each message explicitly once it has been processed
        MessageHandlerOptions messageHandlerOptions = new MessageHandlerOptions(ExceptionReceived)
        {
            AutoComplete = false,
            MaxAutoRenewDuration = new TimeSpan(1, 0, 0)
        };
 
        _subscriptionClient.RegisterMessageHandler(ProcessMessage, messageHandlerOptions);
 
    }
 
    private async Task ProcessMessage(Message message, CancellationToken cancellationToken)
    {
        string messageText = Encoding.UTF8.GetString(message.Body);
 
        Console.WriteLine(messageText);
        string leftOrRight = Console.ReadLine();
 
        await _subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
 
        await SendResponse(leftOrRight, "userinput");
    }
 
    private async Task SendResponse(string leftOrRight, string topicName)
    {
        TopicClient topicClient = new TopicClient(_connectionString, topicName);
        Message message = new Message(Encoding.UTF8.GetBytes(leftOrRight));
        await topicClient.SendAsync(message);
    }
 
    private Task ExceptionReceived(ExceptionReceivedEventArgs arg)
    {
        Console.WriteLine(arg.Exception.ToString());
        return Task.CompletedTask;
    }
}

If we run it, then when the logic app reaches the second trigger, we’ll get a message from the subscription and ask directions:

Based on the response, the logic app will execute either the right or left branch of code.

Summary

Having worked with workflow systems in the past, one recurring feature of them is that they start to get used for things that don’t fit into a workflow, resulting in a needlessly over-complex system. I imagine that Logic Apps are no exception to this rule, and in 10 years’ time, people will roll their eyes at how Logic Apps have been used where a simple web service would have done the whole job.

The saving grace here is source control. The workflow inside a Logic App is simply a JSON file, and so it can be source controlled, added to a CI pipeline, and all the good things that you might expect. Whether or not a more refined version of what I have described here makes any sense is another question.

There are many downsides to this approach: firstly, you are fighting against the Service Bus by asking it to wait for input (that part is a very fixable problem with a bit of an adjustment to the messages); secondly, you would presumably need some form of timeout (again, a fixable problem that will probably feature in a future post). The biggest issue here is that you are likely introducing complex conditional logic with no way to unit test; this isn’t, per se, fixable; however, you can introduce some canary logic (again, this will probably be the feature of a future post).

References

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config

https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions

https://stackoverflow.com/questions/28127001/the-lock-supplied-is-invalid-either-the-lock-expired-or-the-message-has-alread

Short Walks – XUnit Warning

As with many of these posts – this is more of a “note to self”.

Say you have an assertion that looks something like this in your xUnit test:

Assert.True(myEnumerable.Any(a => a.MyValue == "1234"));

In later versions (I’m not sure exactly which version introduced it), you’ll get the following warning:

warning xUnit2012: Do not use Enumerable.Any() to check if a value exists in a collection.

So, xUnit has a nice little feature where you can use the following syntax instead:

Assert.Contains(myEnumerable, a => a.MyValue == "1234");
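
In context, a complete test might look something like this (the collection and MyValue are placeholders, obviously):

[Fact]
public void Collection_ContainsExpectedValue()
{
    // Hypothetical data - stands in for whatever your test arranges
    var myEnumerable = new[]
    {
        new { MyValue = "1234" },
        new { MyValue = "5678" }
    };

    Assert.Contains(myEnumerable, a => a.MyValue == "1234");
}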

Using NSubstitute for partial mocks

I have previously written about how to, effectively, subclass using NSubstitute; in this post, I’ll cover how to partially mock out that class.

Before I get into the solution; what follows is a workaround to allow badly written, or legacy code to be tested without refactoring. If you’re reading this and thinking you need this solution then my suggestion would be to refactor and use some form of dependency injection. However, for various reasons, that’s not always possible (hence this post).

Here’s our class to test:

public class MyFunkyClass
{
    public virtual void MethodOne()
    {        
        throw new Exception("I do some direct DB access");
    }
 
    public virtual int MethodTwo()
    {
        throw new Exception("I do some direct DB access and return a number");

        return new Random().Next(5); // unreachable - just here to illustrate the intended return value
    }
 
    public virtual int MethodThree()
    {
        MethodOne();
        if (MethodTwo() <= 3)
        {
            return 1;
        }
 
        return 2;
    }
}

The problem

Okay, so let’s write our first test:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = new MyFunkyClass();
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

So, what’s wrong with that?

Well, we have some (simulated) DB access, so the code will error.

Not the solution, but a solution

The first thing to do here is to mock out MethodOne(), as it has (pseudo) DB access:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

Running this test now will fail with:

Message: System.Exception : I do some direct DB access and return a number

We’re past the first hurdle. We can presumably do the same thing for MethodTwo:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
    myFunkyClass.When(a => a.MethodTwo()).DoNotCallBase();
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

Now when we run the code, the test still fails, but it no longer accesses the DB:

Message: Assert.Equal() Failure
Expected: 2
Actual: 1

The problem here is that, even though we don’t want MethodTwo to execute, we do want it to return a predefined result. Once we’ve told it not to call the base method, we can then tell it to return whatever we choose (these are separate events – see the bottom of this post for a more detailed explanation of why); for example:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
    myFunkyClass.When(a => a.MethodTwo()).DoNotCallBase();
    myFunkyClass.MethodTwo().Returns(5);
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

And now the test passes.

TLDR – What is this actually doing?

To understand this better, we could do this entire process manually. Only when you’ve felt the pain of a manual mock can you really see what mocking frameworks such as NSubstitute are doing for us.

Let’s assume that we don’t have a mocking framework at all, but that we still want to test MethodThree() above. One approach that we could take is to subclass MyFunkyClass, and then test that subclass:

Here’s what that might look like:

class MyFunkyClassTest : MyFunkyClass
{
    public override void MethodOne()
    {
        //base.MethodOne();
    }
 
    public override int MethodTwo()
    {
        //return base.MethodTwo();
        return 5;
    }
}

As you can see, now that we’ve subclassed MyFunkyClass, we can override the behaviour of the relevant virtual methods.

In the case of MethodOne, we’ve effectively issued a DoNotCallBase() (by not calling base!).

For MethodTwo, we’ve issued a DoNotCallBase, and then a Returns statement.

Let’s add a new test to use this new, manual method:

[Fact]
public void Test2()
{
    // Arrange 
    MyFunkyClassTest myFunkyClassTest = new MyFunkyClassTest();
 
    // Act
    int result = myFunkyClassTest.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

That’s much cleaner – why not always use manual mocks?

It is much cleaner if you always want MethodTwo to return 5. Once you need it to return 2, you have two choices: either you create a new mock class, or you start putting logic into your mock. The latter, if done wrongly, can end up with code that is unreadable and difficult to maintain; and, if done correctly, will end up as a mini version of NSubstitute.
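
To illustrate the point, a manual mock that needs to serve several tests might start to accumulate settings like this (a sketch – MethodTwoResult is my own invention):

class ConfigurableFunkyClassTest : MyFunkyClass
{
    // Each test sets this to control what the mocked MethodTwo returns
    public int MethodTwoResult { get; set; } = 5;

    public override void MethodOne()
    {
        // Still deliberately not calling base - no DB access
    }

    public override int MethodTwo() => MethodTwoResult;
}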

Finally, however well you write the mocks, as soon as you have more than one for a single class then every change to the class (for example, changing a method’s parameters or return type) results in a change to more than one test class.

It’s also worth mentioning again that this problem is one that has already been solved, cleanly, by dependency injection.

What can you do with a logic app? Part One – Send tweets at random intervals based on a defined data set

I thought I’d start another of my patented series. This one is about finding interesting things that can be done with Azure Logic Apps.

Let’s say, for example, that you have something that you want to say; for example, if you were Richard Dawkins or Ricky Gervais, you might want to repeatedly tell everyone that there is no God; or if you were Google, you might want to tell everyone how .Net runs on your platform; or if you were Microsoft, you might want to tell people how it’s a “Different Microsoft” these days.

The thing that I want to repeatedly tell everyone is that I’ve written some blog posts. For this purpose, I’m going to set up a logic app that, at random intervals, sends a tweet from my account (https://twitter.com/paul_michaels), informing people of one of my posts. It will get this information from a simple Azure storage table; let’s start there: first, we’ll need a storage account:

Then a table:

We’ll enter some data using Storage Explorer:

That gives us a few records (three in this case – the train journey would need to be across Russia or something for me to fill my entire back catalogue in manually; I might come back and see about automatically scraping this data from WordPress one day).

In order to create our logic app, we need a single piece of custom logic. As you might expect, there’s no built-in randomise action or result, so we’ll have to create that ourselves as a function:

For Logic App integration, a Generic WebHook seems to work best:

Here’s the code:

#r "Newtonsoft.Json"
using System;
using System.Net;
using Newtonsoft.Json;
static Random _rnd;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
    if (_rnd == null) _rnd = new Random();
    string rangeStr = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "range", true) == 0)
        .Value;
    int range = int.Parse(rangeStr);
    int num = _rnd.Next(range) + 1; // a number between 1 and range, inclusive
    var response = req.CreateResponse(HttpStatusCode.OK, new
    {
        number = num
    });
    return response;
}

Okay – back to the logic app. Let’s create one:

The logic app that we want will be (currently) a recurrence; let’s start with every hour (if you’re following along then you might need to adjust this while testing – be careful, though, as it will send a tweet every second if you tell it to):

Add the function:

Add the input variables (remember that the parameters read by the function above are passed in via the query):
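
In other words, when the Logic App invokes the function, the request will look something like this (hypothetical host and function names; a real call would also need the function key):

https://myfunctionsapp.azurewebsites.net/api/RandomNumberHook?range=10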

One thing to realise about Azure Functions is that they rely heavily on passing JSON around. For this purpose, you’ll use the JSON Parser action a lot. My advice would be to name them sensibly, and not “Parse JSON” and “Parse JSON 2” as I’ve done here:

The JSON Parser action requires a schema – that’s how it knows what your data looks like. You can get the schema by selecting the option to use a sample payload, and just grabbing the output from above (when you tested the function – if you didn’t test the function then you obviously trust me far more than you should and, as a reward, you can simply copy the output from below):
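
For what it’s worth, the shape of that output follows directly from the function code above – a single number property (the value will obviously vary):

{
  "number": 3
}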

That will then generate a schema for you:

Note: if you get the schema wrong then the run report will error, but it will give you a dump of the JSON that it had – so another approach would be to enter anything and then take the actual JSON from the logs.

Next, we’ll add a condition based on the output. Now that we’ve parsed the JSON, “number” (the output from the previous step) is available:

So, we’ll check if the number is 1 – meaning there’s a 1 in 10 chance that the condition will be true. We don’t care if it’s false, but if it’s true then we’ll send a tweet. Before we do that, though, we need to go to the data table and find out what to send. Inside the “true” branch, we’ll add an “Azure Table Storage – Get Entities” call:

This asks you for a storage connection (the name is just for you to name the connection to the storage account). Typically, after getting this data, you would use a for-each to run through the entries. Because there is currently no way to count the entries in the table, we’ll iterate through each entry, but we’ll do it slowly, and we’ll use our random function to ensure that not all of them are sent.

Let’s start with not sending all items:

All the subsequent logic is inside the true branch. The next thing is to work out how long to delay:

Now that we have a number between 1 and 60, we can wait for that length of time:

The next step is to send the tweet; but, because we need specific data from the table, it’s back to our old friend: Parse JSON (it looks like every workflow will consist of around 50% of these parse tasks – although, obviously, you could bake this sort of thing into a function).

To get the data for the tweet, we’ll need to parse the JSON for the current item:

Once you’ve done this, you’ll have access to the parts of the record and can add the Tweet action:

And we have a successful run… and some tweets:

Setting up Entity Framework Core for a Console Application – One Error at a Time

Entity Framework can be difficult to get started with: especially if you come from a background of accessing the database directly, it can seem like there’s an endless stream of meaningless errors. In this post, I try to set up EF Core using a .Net Core console application. In order to better understand the errors, we’ll do just the minimum at each step, and be guided by the errors.

The first step is to create a .Net Core Console Application.

NuGet Packages

To use Entity Framework, you’ll first need to install the NuGet packages; to follow this post, you’ll (initially) need these two:

PM> Install-Package Microsoft.EntityFrameworkCore
PM> Install-Package Microsoft.EntityFrameworkCore.Tools

Model / Entities

The idea behind Entity Framework is that you represent database entities, or tables as they used to be known, with in memory objects. So the first step is to create a model:

namespace ConsoleApp1.Model
{
    public class MyData
    {
        public string FieldOne { get; set; }
 
    }
}

We’ve created the model, so the next thing is to create the DB:

PM> Update-Database

In the package manager console.

First Error – DbContext

The first error you get is:

No DbContext was found in assembly ‘ConsoleApp1’. Ensure that you’re using the correct assembly and that the type is neither abstract nor generic.

Okay, so let’s create a DbContext. The recommended pattern (as described here) is to inherit from DbContext:

namespace ConsoleApp1
{
    public class MyDbContext : DbContext
    {
    }
}

Okay, we’ve created a DbContext – let’s go again:

PM> Update-Database

Second Error – Database Provider

The next error is:

System.InvalidOperationException: No database provider has been configured for this DbContext. A provider can be configured by overriding the DbContext.OnConfiguring method or by using AddDbContext on the application service provider.

So we’ve moved on a little. The next thing we need to do is to configure a provider. Because in this case, I’m using SQL Server, I’ll need another NuGet package:

PM> Install-Package Microsoft.EntityFrameworkCore.SqlServer

Then configure the DbContext to use it:

public class MyDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        string cn = @"Server=.\SQLEXPRESS;Database=test-db;User Id= . . .";
        optionsBuilder.UseSqlServer(cn);
 
        base.OnConfiguring(optionsBuilder);
    }
}

And again:

PM> Update-Database

Third Error – No Migrations

Strictly speaking this isn’t an actual error. It’s more a sign that nothing has happened:

No migrations were applied. The database is already up to date.
Done.

A quick look in SSMS shows that, whilst it has created the DB, it hasn’t created the table:

So we need to add a migration? Well, if we call Add-Migration here, we’ll get this:

That’s because we need to tell EF what data we care about. So, in the DbContext, we can let it know that we’re interested in a data set (or, if you like, table) called MyData:

public class MyDbContext : DbContext
{
    public DbSet<MyData> MyData { get; set; }

Right – now we can call:

PM> Add-Migration FirstMigration

Fourth Error – Primary Key

The next error is more about EF’s inner workings:

System.InvalidOperationException: The entity type ‘MyData’ requires a primary key to be defined.

Definitely progress. Now we’re being moaned at because EF wants to know the primary key for the table, and we haven’t told it (Entity Framework, unlike SQL Server, insists on a primary key). That requires a small change to the model:

using System.ComponentModel.DataAnnotations;
 
namespace ConsoleApp1.Model
{
    public class MyData
    {
        [Key]
        public string FieldOne { get; set; }
 
    }
}

This time,

PM> Add-Migration FirstMigration

Produces this:

    public partial class FirstMigration : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.CreateTable(
                name: "MyData",
                columns: table => new
                {
                    FieldOne = table.Column<string>(nullable: false)
                },
                constraints: table =>
                {
                    table.PrimaryKey("PK_MyData", x => x.FieldOne);
                });
        }
 
        protected override void Down(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.DropTable(
                name: "MyData");
        }
    }

Which looks much more like we’ll get a table – let’s try:

PM> update-database
Applying migration '20180224075857_FirstMigration'.
Done.
PM> 

Success

And it has, indeed, created a table!

References

https://docs.microsoft.com/en-us/ef/core/miscellaneous/cli/powershell

https://docs.microsoft.com/en-us/ef/core/miscellaneous/configuring-dbcontext

https://www.learnentityframeworkcore.com/walkthroughs/console-application


Short Walks – C# Pattern Matching to Match Ranges

Back in 2010, working at the time in a variety of languages, including VB, I asked this question on StackOverflow. In VB, you could put a range inside a switch statement, and I wanted to know how you could do that in C#. The (correct) answer at the time was that you can’t.

Fast forward just eight short years, and suddenly, it’s possible. The new feature of pattern matching in C# 7.0 has made this possible.

You can now write something like this (this is C# 7.1 because of Async Main):

static async Task Main(string[] args)
{            
    for (int i = 0; i <= 20; i++)
    {
        switch (i)
        {
            case var test when test <= 2:
                Console.WriteLine("Less than 2");
                break;
 
            case var test when test > 2 && test < 10:
                Console.WriteLine("Between 2 and 10");
                break;
 
            case var test when test >= 10:
                Console.WriteLine("10 or more");
                break;
        }
 
        await Task.Delay(500);
    }
 
    Console.ReadLine();
}

References

https://docs.microsoft.com/en-us/dotnet/csharp/pattern-matching

https://visualstudiomagazine.com/articles/2017/02/01/pattern-matching.aspx

Creating a Scheduled Azure Function

I’ve previously written about creating Azure functions. I’ve also written about how to react to service bus queues. In this post, I wanted to cover creating a scheduled function. Basically, these allow you to create a scheduled task that executes at a given interval, or at a given time.

Timer Trigger

The first thing to do is create a function with a type of Timer Trigger:

Schedule / CRON format

The next thing is to understand the schedule, or CRON, format. The format is:

{second} {minute} {hour} {day} {month} {day-of-week}

Scheduled Intervals

The example you’ll see when you create this looks like this:

0 */5 * * * *

The notation */[number] means once every [number]; so */5 would mean once every 5… and you then look at the placeholder to work out 5 what; in this case, it means once every 5 minutes. So, for example:

*/10 * * * * *

Would be once every 10 seconds.

Scheduled Times

Specifying numbers means the schedule will execute at that time; so:

0 0 0 * * *

Would execute every time the hour, minute and second all hit zero – so once per day at midnight; and:

0 * * * * *

Would execute every time the second hits zero – so once per minute; and:

0 0 * * * 1

Would execute once per hour on a Monday (as the last placeholder is the day of the week).

Time constraints

These can be specified in any column in the format [lower bound]-[upper bound], and they restrict the timer to a range of values; for example:

0 */20 5-10 * * *

Means every 20 minutes between 5 and 10am (as you can see, the different types can be used in conjunction).

Asterisks (*)

You’ll notice above that there are asterisks in every placeholder where a value has not been specified. The asterisk signifies that the schedule will execute at every interval within that placeholder; for example:

* * * * * *

Means every second; and:

0 * * * * *

Means every minute.

Back to the function

Upon starting, the function will detail when the next several executions will take place:

But what if you don’t know what the schedule will be at compile time? As with many of the variables in an Azure Function, you can simply substitute the value for a placeholder:

[FunctionName("MyFunc")]
public static void Run([TimerTrigger("%schedule%")]TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}

This value can then be provided inside the local.settings.json:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProto . . .",
    "AzureWebJobsDashboard": "DefaultEndpointsProto . . .",
    "schedule": "0 * * * * *"
  }
}

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer

http://cronexpressiondescriptor.azurewebsites.net/?expression=1+*+*+*+*+*&locale=en

Using Unity With Azure Functions

Azure Functions are a relatively new kid on the block when it comes to the Microsoft Azure stack. I’ve previously written about them, and their limitations. One such limitation seems to be that they don’t lend themselves very well to using dependency injection. However, it is certainly not impossible to make them do so.

In this particular post, we’ll have a look at how you might use an IoC container (Unity in this case) in order to leverage DI inside an Azure function.

New Azure Functions Project

I’ve covered this before in previous posts: in Visual Studio, you can now create a new Azure Functions project:

That done, you should have a project that looks something like this:

As you can see, the elephant in the room here is that there are no functions; let’s correct that:

Be sure to call your function something descriptive… like “Function1”. For the purposes of this post, it doesn’t matter what kind of function you create, but I’m going to create a “Generic Web Hook”.

Install Unity

The next step is to install Unity (version 5.5.6 was the latest at the time of writing):

Install-Package Unity -Version 5.5.6

Static Variables Inside Functions

It’s worth bearing in mind that a static variable works the way you would expect, were the function a locally hosted process. That is, if you write a function such as this:

[FunctionName("Function1")]
public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
    
    System.Threading.Thread.Sleep(10000);
    log.Info($"Index is {test}");
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        greeting = $"Hello {test++}!"
    });
}

If you access it from a web browser, or Postman, or both at the same time, you’ll get incrementing numbers:

Whilst the values are shared across the instances, you can’t cause a conflict by updating something in one function while reading it in another (I tried pretty hard to cause this to break). What this means, then, is that we can store an IoC container that will maintain state across function calls. Obviously, this is not intended for persisting state, so you should assume your state could be lost at any time (as indeed it can).

Registering the Unity Container

One method of doing this is to use the Lazy<T> object. This pretty much passed me by in .Net 4 (which is, apparently, when it came out). It basically provides a slightly neater way of doing this kind of thing:

private List<string> _myList;
public List<string> MyList
{
    get
    {
        if (_myList == null)
        {
            _myList = new List<string>();
        }
        return _myList;
    }
}

The “lazy” method would be:

public Lazy<List<string>> MyList = new Lazy<List<string>>(() =>
{
    List<string> newList = new List<string>();
    return newList;
});

With that in mind, we can do something like this:

public static class Function1
{
     private static Lazy<IUnityContainer> _container =
         new Lazy<IUnityContainer>(() =>
         {
             IUnityContainer container = InitialiseUnityContainer();
             return container;
         });

InitialiseUnityContainer needs to return a new instance of the container:

public static IUnityContainer InitialiseUnityContainer()
{
    UnityContainer container = new UnityContainer();
    container.RegisterType<IMyClass1, MyClass1>();
    container.RegisterType<IMyClass2, MyClass2>();
    return container;
}

After that, you’ll need to resolve the parent dependency, then you can use standard constructor injection; for example, if MyClass1 orchestrates your functionality; you could use:

_container.Value.Resolve<IMyClass1>().DoStuff();

In Practice

Let’s apply all of that to our Functions App. Here are two new classes (and their interfaces):

public interface IMyClass1
{
    string GetOutput();
}
 
public interface IMyClass2
{
    void AddString(List<string> strings);
}
public class MyClass1 : IMyClass1
{
    private readonly IMyClass2 _myClass2;
 
    public MyClass1(IMyClass2 myClass2)
    {
        _myClass2 = myClass2;
    }
 
    public string GetOutput()
    {
        List<string> teststrings = new List<string>();
 
        for (int i = 0; i <= 10; i++)
        {
            _myClass2.AddString(teststrings);
        }
 
        return string.Join(",", teststrings);
    }
}
public class MyClass2 : IMyClass2
{
    public void AddString(List<string> strings)
    {
        Thread.Sleep(1000);
        strings.Add($"{DateTime.Now}");
    }
}

And the calling code looks like this:

[FunctionName("Function1")]
public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
 
    string output = _container.Value.Resolve<IMyClass1>().GetOutput();
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        output
    });
}

Running it, we get an output that we might expect:

References

https://github.com/Azure/azure-webjobs-sdk/issues/1206

Using the Builder Pattern for Validation

When doing validation, there are a number of options for how you approach it: you could simply have a series of conditional statements testing logical criteria, you could follow the chain of responsibility pattern, use some form of polymorphism with the strategy pattern, or even, as I outline below, try using the builder pattern.

Let’s first break down the options. We’ll start with the strategy pattern, because that’s where I started when I was looking into this. It’s a bit like a screwdriver – it’s usually the first thing you reach for and, if you encounter a nail, you might just tap it with the blunt end.

Strategy Pattern

The strategy pattern is just a way of implementing polymorphism: the idea being that you implement some form of logic, and then override key parts of it; for example, in the validation case, you might come up with an abstract base class such as this:

public abstract class ValidatorBase<T>
{        
    public ValidationResult Validate(T validateElement)
    {
        ValidationResult result = new ValidationResult();
 
        if (CheckIsValid(validateElement))
        {
            result = OnIsValid();                
        }
        else
        {
            result = OnIsNotValid();
        }
 
        return result;
    }
 
    protected virtual ValidationResult OnIsValid()
    {
        return null;
    }

    . . .

You can inherit from this for each type of validation and then override key parts of the class (such as `CheckIsValid`).
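
For example, a concrete validator might look something like this (a sketch – it assumes the elided part of the base class declares an abstract CheckIsValid and a virtual OnIsNotValid, and Customer is a made-up type):

public class CustomerNameValidator : ValidatorBase<Customer>
{
    // The actual check: valid when a name has been supplied
    protected override bool CheckIsValid(Customer validateElement) =>
        !string.IsNullOrWhiteSpace(validateElement.Name);

    protected override ValidationResult OnIsNotValid() =>
        new ValidationResult()
        {
            IsValid = false,
            Errors = new List<string> { "The customer name is required." }
        };
}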

Finally, you can call all the validation in a single function such as this:

public bool Validate()
{
    bool isValid = true;
 
    // (uses a non-generic ValidatorBase, for brevity)
    IEnumerable<ValidatorBase> validators = typeof(ValidatorBase)
        .Assembly.GetTypes()
        .Where(t => t.IsSubclassOf(typeof(ValidatorBase)) && !t.IsAbstract)
        .Select(t => (ValidatorBase)Activator.CreateInstance(t));
 
    foreach (ValidatorBase validator in validators)
    {
        ValidationResult result = validator.Validate();
        if (!result.IsValid)
        {
            isValid = false;
            Errors.AddRange(result.Errors);
        }
 
        if (result.StopValidation)
        {
            break;
        }
    }
 
    return isValid;
}

There are good and bad sides to this pattern: it’s familiar and well tried; unfortunately, it results in a potential explosion of code volume (if you have 30 validation checks, you’ll need 30 classes), which makes it difficult to read. It also doesn’t deal with the scenario whereby one validation condition depends on the success of another.

So what about the chain of responsibility that we mentioned earlier?

Chain of responsibility

This pattern, as described in the linked article above, works by implementing a link between a class that validates your data, and the class that will validate it next: in other words, a linked list.

This idea does work well, and is relatively easy to implement; however, it can become unwieldy to use; for example, you might have a method like this:

private static bool InvokeValidation(ValidationRule rule)
{
    bool result = rule.ValidationFunction.Invoke();
    if (result && rule.Successor != null)
    {
        return InvokeValidation(rule.Successor);
    }
 
    return result;
}

But to build up the rules, you might have a series of calls such as this:

ValidationRule rule2 = new ValidationRule();
rule2.ValidationFunction = () => MyTest2();

ValidationRule rule1 = new ValidationRule();
rule1.ValidationFunction = () => MyTest();
rule1.Successor = rule2;

As you can see, it doesn’t make for very readable code. Admittedly, with a little imagination, you could probably re-order it a little. What you could also do is use the Builder Pattern…

Builder Pattern

The builder pattern came to fame with ASP.NET Core, where, during configuration, you could write something like:

services
    .AddMvc()
    .AddTagHelpersAsServices()
    .AddSessionStateTempDataProvider();

So, the idea behind it is that you call a method that returns an instance of itself, allowing you to repeatedly call methods to build a state. This concept overlays quite neatly onto the concept of validation.

You might have something along the lines of:

public class Validator
{
    private readonly List<ValidationRule> _rules = new List<ValidationRule>();

    public Validator AddRule(Func<bool> validationRule)
    {
        ValidationRule rule = new ValidationRule()
        {
            ValidationFunction = validationRule
        };
        _rules.Add(rule);

        return this;
    }

So now, you can call:

myValidator
    .AddRule(() => MyTest())
    .AddRule(() => MyTest2())
    …

I think you’ll agree, this makes the code much neater.
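
To complete the picture, a Validate method on the same class might simply run the rules in the order they were added (a minimal sketch – the error collection and StopValidation handling from the strategy example could be layered on in the same way):

public bool Validate()
{
    // Run each rule in registration order; fail fast on the first rule that returns false
    foreach (ValidationRule rule in _rules)
    {
        if (!rule.ValidationFunction.Invoke())
        {
            return false;
        }
    }

    return true;
}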

References

http://blogs.tedneward.com/patterns/Builder-CSharp/

http://piotrluksza.com/2016/04/19/chain-of-responsibility-elegant-way-to-handle-complex-validation/