Category Archives: Azure

Playing with Azure Event Hub

I’ve recently been playing with the Azure Event Hub. This is basically a way of transmitting large amounts* of data between systems. In a later post, I may try and test these limits by designing some kind of game based on this.

As a quick disclaimer, it’s worth bearing in mind that I am playing with this technology, so much of the content here can be found in the links at the bottom of the post – you won’t find anything original, just a record of my findings. You may well find more (and more accurate) information in those links.

Event Hub Namespace

The first step, as with many Azure services, is to create a namespace:

For a healthy amount of data transfer, you’ll pay around £10 per month.

Finally, we’ll create an event hub within the namespace:

When you create the event hub, it asks how many partitions you need. This basically splits the message delivery, and it’s clever enough to balance the partitions across listeners: if you have three partitions and two listeners, one listener is given two partitions and the other is given one:

We’ll need an access policy so that we have permission to listen:

New Console Apps

We’ll need to create two applications: a producer and a consumer.

Let’s start with a producer. Create a new console app and add the Event Hubs NuGet package (the code below uses Microsoft.Azure.EventHubs).

Here’s the code:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

class Program
{
    private static EventHubClient eventHubClient;
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Publisher;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
 
    public static async Task Main(string[] args)
    {
        EventHubsConnectionStringBuilder connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
        {
            EntityPath = EhEntityPath
        };
 
        eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
 
        while (true)
        {
            Console.Write("Please enter message to send: ");
            string message = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(message)) break;
 
            await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
        }
 
        await eventHubClient.CloseAsync();
 
        Console.WriteLine("Press ENTER to exit.");
        Console.ReadLine();
    }
}

Consumer

Next we’ll create a consumer; the first thing we’ll need to do is grant permission to listen:

We’ll create a second console application with the same library, plus the processor library (Microsoft.Azure.EventHubs.Processor).

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

class Program
{
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Listener;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
    private const string StorageContainerName = "eventhub";
    private const string StorageAccountName = "pcmeventhubstorage";
    private const string StorageAccountKey = "key";

    private static readonly string StorageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", StorageAccountName, StorageAccountKey);
 
    static async Task Main(string[] args)
    {
        Console.WriteLine("Registering EventProcessor...");
 
        var eventProcessorHost = new EventProcessorHost(
            EhEntityPath,
            PartitionReceiver.DefaultConsumerGroupName,
            EhConnectionString,
            StorageConnectionString,
            StorageContainerName);
 
        // Registers the Event Processor Host and starts receiving messages
        await eventProcessorHost.RegisterEventProcessorAsync<EventsProcessor>();
 
        Console.WriteLine("Receiving. Press ENTER to stop worker.");
        Console.ReadLine();
 
        // Disposes of the Event Processor Host
        await eventProcessorHost.UnregisterEventProcessorAsync();
    }
}

class EventsProcessor : IEventProcessor
{
    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }
 
    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }
 
    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
        return Task.CompletedTask;
    }
 
    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }
 
        return context.CheckpointAsync();
    }
}

As you can see, we can now transmit data through the Event Hub into client applications:

Footnotes

*Large, in terms of frequency, rather than volume – for example, transmitting a small message twice a second, rather than uploading a petabyte of data

References

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-receive-eph

What can you do with a logic app? Part three – Creating a Logic App Client

One of the things that is missing from Azure Logic Apps is the ability to integrate human interaction. Microsoft do have their own version of an interactive workflow (PowerApps), which is (obviously) far better than anything you could produce by following this post.

In this post, we’ll create a very basic client for a logic app. Obviously, with some thought, this could easily be extended into a fully functional, interactive workflow system.

Basic Logic App

Let’s start by designing our logic app. The app in question is going to be a very simple one: it will add a message to a logging queue (just so it has something to do), and then ask the user a question – left or right – by putting a message onto a topic. Based on the user’s response, we’ll write a message to the queue saying either left or right. Let’s have a look at our Logic App design:

It’s worth pointing out a few things about this design:
1. The condition uses the expression base64ToString() to convert the encoded message into plain text.
2. Where the workflow picks up, it uses a peek-lock, and then completes the message at the end. It looks like it’s a ‘feature’ of logic apps that an automatic complete on this trigger will not actually complete the message (plus, this is actually a better design).

Queues and Topics

The "Log to message queue" action above is putting an entry into a queue; so, a quick note on why we’re using a queue for logging and a topic for the interaction with the user. In a real-life version of this system, we might have many users, all of whom might want to perform the same action. Let’s say they are all part of a sales process, and the actions are steps along that process; adding these to a queue maintains their sequence. Here’s the queue and topic layout that I’m using for this post:

Multiple Triggers

As you can see, we actually have two triggers in this workflow. The first starts the workflow (so we’ll drop a message into the topic to start it), and the second waits for a second message to go into the topic.

To add a trigger part way through the workflow, simply add an action, search and select “Triggers”:

Because we have a trigger part way through the workflow, what we have effectively issued here is an await statement. Once a message appears in the subscription, the workflow will continue where it left off:

As soon as a message is posted, the workflow carries on:

Client Application

For the client application, we could simply use the Service Bus Explorer (in fact, the screenshots above were taken from using this to simulate messages in the topic). However, the point of this post is to create a client, and so we will… although we’ll just create a basic console app for now.

We need the client to do two things: read from a topic subscription, and write to a topic. I haven’t exactly been here before, but I will be heavily plagiarising from here, here, and here.

Let’s create a console application:

Once that’s done, we’ll need the Service Bus client library (Microsoft.Azure.ServiceBus): install it from NuGet.

The code is generally quite straight-forward, and looks a lot like the code to read and write to queues. The big difference is that you don’t read from a topic, but from a subscription to a topic (a topic can have many subscriptions):

using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class Program
{
    static async Task Main(string[] args)
    {
        MessageHandler messageHandler = new MessageHandler();
        messageHandler.RegisterToRead("secondstage", "sub1");
 
        await WaitForever();
    }
 
    private static async Task WaitForever()
    {
        while (true) await Task.Delay(5000);
    }
}
public class MessageHandler
{
    private string _connectionString = "service bus connection string details";
    private ISubscriptionClient _subscriptionClient;
    public void RegisterToRead(string topicName, string subscriptionName)
    {            
        _subscriptionClient = new SubscriptionClient(_connectionString, topicName, subscriptionName);
 
        MessageHandlerOptions messageHandlerOptions = new MessageHandlerOptions(ExceptionReceived)
        {
            AutoComplete = false,
            MaxAutoRenewDuration = new TimeSpan(1, 0, 0)
        };
 
        _subscriptionClient.RegisterMessageHandler(ProcessMessage, messageHandlerOptions);
 
    }
 
    private async Task ProcessMessage(Message message, CancellationToken cancellationToken)
    {
        string messageText = Encoding.UTF8.GetString(message.Body);
 
        Console.WriteLine(messageText);
        string leftOrRight = Console.ReadLine();
 
        await _subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
 
        await SendResponse(leftOrRight, "userinput");
    }
 
    private async Task SendResponse(string leftOrRight, string topicName)
    {
        TopicClient topicClient = new TopicClient(_connectionString, topicName);
        Message message = new Message(Encoding.UTF8.GetBytes(leftOrRight));
        await topicClient.SendAsync(message);
    }
 
    private Task ExceptionReceived(ExceptionReceivedEventArgs arg)
    {
        Console.WriteLine(arg.Exception.ToString());
        return Task.CompletedTask;
    }
}

If we run it, then when the logic app reaches the second trigger, we’ll get a message from the subscription and ask directions:

Based on the response, the logic app will execute either the right or left branch of code.

Summary

Having worked with workflow systems in the past, one recurring feature of them is that they start to get used for things that don’t fit into a workflow, resulting in a needlessly over-complex system. I imagine that Logic Apps are no exception to this rule, and in 10 years’ time people will roll their eyes at how Logic Apps have been used where a simple web service would have done the whole job.

The saving grace here is source control. The workflow inside a Logic App is simply a JSON file, and so it can be source controlled, added to a CI pipeline, and all the good things that you might expect. Whether or not a more refined version of what I have described here makes any sense is another question.
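To give a flavour of what actually ends up in source control, a trimmed-down definition looks something like this (a sketch only – the real file contains the full trigger and action definitions, plus the schema URL that the designer generates):

{
  "definition": {
    "$schema": "<workflow definition schema url>",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "triggers": {
      "When_a_message_is_received_in_a_topic_subscription": { }
    },
    "actions": {
      "Log_to_message_queue": { },
      "Condition": { }
    },
    "outputs": {}
  }
}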

There are many downsides to this approach: firstly, you are fighting against the Service Bus by asking it to wait for input (that part is a very fixable problem, with a bit of an adjustment to the messages); secondly, you would presumably need some form of timeout (again, a fixable problem that will probably feature in a future post). The biggest issue here is that you are likely introducing complex conditional logic with no way to unit test it; this isn’t, per se, fixable, but you can introduce some canary logic (again, this will probably be the subject of a future post).

References

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config

https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions

https://stackoverflow.com/questions/28127001/the-lock-supplied-is-invalid-either-the-lock-expired-or-the-message-has-alread

What can you do with a logic app? Part Two – Use Excel to Manage an E-mail Notification System

In this post I started a series covering different scenarios in which you might use an Azure Logic App, and how you might go about it. In this, the second post, we’re going to set up an Excel spreadsheet that allows you to simply add a row to an Excel table and have a logic app act on that row.

So, we’ll set up a basic spreadsheet with an e-mail address, subject, text and a date on which we want it to be sent; then we’ll have the logic app send the next eligible e-mail in the list and mark it as sent.

Spreadsheet

I’ll first state that I do not have an Office 365 subscription, and nothing that I do here will require one. We’ll create the spreadsheet in Office Online. Head over to OneDrive (if you don’t have a OneDrive account, they are free) and create a new spreadsheet:

In the spreadsheet, create a new table – just enter some headers (like below) and then highlight the columns and “Insert Table”:

Remember to check “My Table Has Headers”.

Now enter some data:

Create the Logic App

In this post I showed how you can use Visual Studio to create and deploy a logic app; we’ll do that here:

Once we’ve created the logic app, we’ll need to create an action that reads the Excel file we created; in this case, "List rows present in a table":

This also requires that we specify the table (if you’re using the free online version of Excel then you’ll have to live with the table name you’re given):

Loop

This retrieves a list of rows, and so the next step is to iterate through them one-by-one. We’ll use a For-Each:

Conditions

Okay, so we’re now looking at every row in the table; but we don’t want every row – we only want the ones that have not already been sent and that are due to be sent (that is, the date is today or earlier). We can use a conditional statement for this:

But we have two problems:

  • Azure Logic Apps are very bad at handling dates – that is to say, they don’t
  • There is currently no way in an Azure Logic App to update an Excel spreadsheet row (you can add and delete only)

The former is easily solved, and the way I elected to solve the latter is to simply delete the row instead of updating it. It is possible to simply delete the current row, and add it back with new values; however, we won’t bother with that here.

Back to the date problem; what we need here is an Azure function…

Creating an Azure Function

Here is the code for our function (see here for details of how to create one):

[FunctionName("DatesCompare")]
public static IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequest req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    string requestBody = new StreamReader(req.Body).ReadToEnd();
    return ParseDates(requestBody);
}

public static IActionResult ParseDates(string requestBody)
{
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    DateTime date1 = (DateTime)data.date1;
    DateTime date2 = DateTime.FromOADate((double)data.date2);

    int returnFlagIndicator = 0;
    if (date1 > date2)
    {
        returnFlagIndicator = 1;
    }
    else if (date1 < date2)
    {
        returnFlagIndicator = -1;
    }

    return (ActionResult)new OkObjectResult(new
    {
        returnFlag = returnFlagIndicator
    });
}

There are a few points to note about this code:
1. The date coming from Excel arrives as a double (an OLE Automation date), which is why we need to use FromOADate.
2. The reason for splitting the function up is so that the main logic can be more easily unit tested (a sketch of such a test follows below). If you ever need a reason for unit testing, try working out why an Azure function isn’t working inside a logic app!
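As an aside, a test for ParseDates might look something like the following (a rough sketch using xUnit – the test class, and the assumption that ParseDates lives in a static class called DatesCompareFunction, are mine):

using System;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using Xunit;

public class ParseDatesTests
{
    [Fact]
    public void ParseDates_FirstDateLater_ReturnsOne()
    {
        // date2 is an OLE Automation date, which is how the value arrives from Excel
        string requestBody = JsonConvert.SerializeObject(new
        {
            date1 = new DateTime(2018, 6, 1),
            date2 = new DateTime(2018, 5, 1).ToOADate()
        });

        var result = (OkObjectResult)DatesCompareFunction.ParseDates(requestBody);

        // The function returns an anonymous object, so read it back via JSON
        JObject value = JObject.FromObject(result.Value);
        Assert.Equal(1, (int)value["returnFlag"]);
    }
}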

The logic around this function looks like this:

We build up the request body with the information that we have, and then parse the output. Finally, we can check if the date is in the past and then send the e-mail:

Lastly, as we said earlier, we’ll delete the row to ensure that the e-mail is only sent once:

The eagle-eyed and sane amongst you will notice that I’ve used the subject as a key. Don’t do this – it’s very bad practice!

References

https://github.com/Azure/azure-functions-host/wiki/Azure-Functions-runtime-2.0-known-issues

What can you do with a logic app? Part One – Send tweets at random intervals based on a defined data set

I thought I’d start another of my patented series. This one is about finding interesting things that can be done with Azure Logic Apps.

Let’s say, for example, that you have something that you want to say; for example, if you were Richard Dawkins or Ricky Gervais, you might want to repeatedly tell everyone that there is no God; or if you were Google, you might want to tell everyone how .Net runs on your platform; or if you were Microsoft, you might want to tell people how it’s a “Different Microsoft” these days.

The thing that I want to repeatedly tell everyone is that I’ve written some blog posts. For this purpose, I’m going to set up a logic app that, at random intervals, sends a tweet from my account (https://twitter.com/paul_michaels) informing people of one of my posts. It will get this information from a simple Azure storage table; let’s start there: first, we’ll need a storage account:

Then a table:

We’ll enter some data using Storage Explorer:

I entered a few records (three in this case – the train journey would need to be across Russia or something for me to fill in my entire back catalogue manually; I might come back and look at automatically scraping this data from WordPress one day).

Before we create our logic app, we need a single piece of custom logic. As you might expect, there’s no built-in randomised action or result, so we’ll have to create that as a function:

For Logic App integration, a Generic WebHook seems to work best:

Here’s the code:

#r "Newtonsoft.Json"
using System;
using System.Net;
using Newtonsoft.Json;
static Random _rnd;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
    if (_rnd == null) _rnd = new Random();
    string rangeStr = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "range", true) == 0)
        .Value;
    int range = int.Parse(rangeStr);
    int num = _rnd.Next(range) + 1; // a random number between 1 and range, inclusive
    var response = req.CreateResponse(HttpStatusCode.OK, new
    {
        number = num
    });
    return response;
}

Okay – back to the logic app. Let’s create one:

The logic app that we want will be (currently) a recurrence; let’s start with every hour (if you’re following along then you might need to adjust this while testing – be careful, though, as it will send a tweet every second if you tell it to):

Add the function:

Add the input variables (remember that the parameters read by the function above are passed in via the query):

One thing to realise about calling Azure functions from a logic app is that they rely heavily on passing JSON around. For this purpose, you’ll use the JSON Parser action a lot. My advice would be to name them sensibly, and not "Parse JSON" and "Parse JSON 2" as I’ve done here:

The JSON Parser action requires a schema – that’s how it knows what your data looks like. You can get the schema by selecting the option to use a sample payload, and just grabbing the output from above (when you tested the function – if you didn’t test the function then you obviously trust me far more than you should and, as a reward, you can simply copy the output from below):

That will then generate a schema for you:
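If you’d rather just type the schema in, then for a payload such as { "number": 3 } it should come out looking roughly like this (typed from memory rather than copied from the designer, so treat it as a guide):

{
  "type": "object",
  "properties": {
    "number": {
      "type": "integer"
    }
  }
}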

Note: if you get the schema wrong then the run report will error, but it will give you a dump of the JSON that it had – so another approach would be to enter anything and then take the actual JSON from the logs.

Next, we’ll add a condition based on the output. Now that we’ve parsed the JSON, "number" (the output from the previous step) is available:

So, we’ll check whether the number is 1 – meaning there’s a 1 in 10 chance that the condition will be true. We don’t care if it’s false, but if it’s true then we’ll send a tweet. Before we do that, though, we need to go to the data table and find out what to send. Inside the "true" branch, we’ll add an "Azure Table Storage – Get Entities" call:

This asks you for a storage connection (the name is just for you to identify the connection to the storage account). Typically, after getting this data, you would use a for each to run through the entries. Because there is currently no way to count the entries in the table, we’ll still iterate through each entry, but we’ll do it slowly, and we’ll use our random function to ensure that they are not all sent.

Let’s start with not sending all items:

All the subsequent logic is inside the true branch. The next thing is to work out how long to delay:

Now that we have a number between 1 and 60, we can wait for that length of time:

The next step is to send the tweet, but because we need specific data from the table, it’s back to our old friend: Parse JSON (it looks like around half of every workflow ends up being these parse tasks – although, obviously, you could bake this sort of thing into a function).

To get the data for the tweet, we’ll need to parse the JSON for the current item:

Once you’ve done this, you’ll have access to the parts of the record and can add the Tweet action:

And we have a successful run… and some tweets:

Reading Azure Service Bus Queue Names from the Config File

In this post, I wrote about how you might read a message from a service bus queue. However, with Azure Functions (and WebJobs) comes the ability to have Microsoft do some of this plumbing code for you.

I have a queue here (taken from the service bus explorer):

I can read this in an Azure function; let’s create a new Azure Functions App:

This time, we’ll create a Service Bus Queue Triggered function:

Out of the box, that will give you this:

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([ServiceBusTrigger("testqueue", AccessRights.Listen, Connection = "")]string myQueueItem, TraceWriter log)
    {
        log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}

There are a few things that we’ll probably want to change here. The first is "Connection". We can remove that parameter altogether, and then add a row to the local.settings.json file (which can be overridden later inside Azure). Out of the box, you get AzureWebJobsStorage and AzureWebJobsDashboard, which both accept a connection string to an Azure Storage Account. You can also add AzureWebJobsServiceBus, which accepts a connection string to the service bus:

"Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=teststorage1…",
    "AzureWebJobsDashboard": "DefaultEndpointsProtocol=https;AccountName=teststorage1…",
    "AzureWebJobsServiceBus": "Endpoint=sb://pcm-servicebustest.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=…"
  }

If you run the job, it will now pick up any outstanding entries in that queue. But what if you don’t know the queue name, or you find out that the queue name is different? To illustrate the point, here I’m looking for "testqueue1", but the queue name (as you saw earlier) is "testqueue":

public static class Function1
{
    [FunctionName("Function1")]
    public static void Run([ServiceBusTrigger("testqueue1", AccessRights.Listen)]string myQueueItem, TraceWriter log)
    {
        log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}

Obviously, if you’re looking for a queue that doesn’t exist, bad things happen:

To fix this, I have to change the code… which is, broadly speaking, a bad thing. What we can do instead is configure the queue name in the config file, like this:

"Values": {
    "AzureWebJobsStorage": " . . . ",
    . . .,
    "queue-name":  "testqueue"
  }

And we can have the function look in the config file by changing the queue name:

[FunctionName("Function1")]
public static void Run([ServiceBusTrigger("%queue-name%", AccessRights.Listen)]string myQueueItem, TraceWriter log)
{
    log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}

The pattern of supplying a variable name in the format “%variable-name%” seems to work across other triggers and bindings for Azure Functions.
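For example, the same approach appears to work for a topic trigger (a sketch – I haven’t tested every binding, and the setting names topic-name and subscription-name are just ones I’ve made up for illustration):

[FunctionName("Function2")]
public static void Run([ServiceBusTrigger("%topic-name%", "%subscription-name%", AccessRights.Listen)]string myTopicItem, TraceWriter log)
{
    log.Info($"C# ServiceBus topic trigger function processed message: {myTopicItem}");
}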

Deployment

That’s now looking much better, but what happens when the function gets deployed? Let’s see:

We can now see that the function is deployed:

At the minute, it won’t do anything, because it’s looking for a queue name in a setting that only exists locally. Let’s fix that:

Remember to save the changes.

Looking at the logs confirms that this now runs correctly.

Creating a Scheduled Azure Function

I’ve previously written about creating Azure functions. I’ve also written about how to react to service bus queues. In this post, I wanted to cover creating a scheduled function. Basically, these allow you to create a scheduled task that executes at a given interval, or at a given time.

Timer Trigger

The first thing to do is create a function with a type of Timer Trigger:

Schedule / CRON format

The next thing is to understand the schedule, or CRON, format. The format is:

{second} {minute} {hour} {day} {month} {day-of-week}

Scheduled Intervals

The example you’ll see when you create this looks like this:

0 */5 * * * *

The notation */[number] means "once every [number]"; so */5 means once every 5… and you then look at which placeholder it’s in to work out 5 what – in this case, it means once every 5 minutes. So, for example:

*/10 * * * * *

Would be once every 10 seconds.

Scheduled Times

Specifying numbers means the schedule will execute at that time; so:

0 0 0 * * *

Would execute every time the hour, minute and second all hit zero – so once per day at midnight; and:

0 * * * * *

Would execute every time the second hits zero – so once per minute; and:

0 0 * * * 1

Would execute once per hour on a Monday (as the last placeholder is the day of the week).

Time constraints

These can be specified in any column in the format [lower bound]-[upper bound], and they restrict the timer to a range of values; for example:

0 */20 5-10 * * *

Means every 20 minutes between 5 and 10am (as you can see, the different types can be used in conjunction).

Asterisks (*)

You’ll notice above that there are asterisks in every placeholder where a value has not been specified. The asterisk signifies that the schedule will execute at every interval within that placeholder; for example:

* * * * * *

Means every second; and:

0 * * * * *

Means every minute.

Back to the function

Upon starting, the function will detail when the next several executions will take place:

But what if you don’t know what the schedule will be at compile time? As with many of the variables in an Azure Function, you can simply substitute a placeholder for the value:

[FunctionName("MyFunc")]
public static void Run([TimerTrigger("%schedule%")]TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}

This value can then be provided inside the local.settings.json:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProto . . .",
    "AzureWebJobsDashboard": "DefaultEndpointsProto . . .",
    "schedule": "0 * * * * *"
  }
}

References

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer

http://cronexpressiondescriptor.azurewebsites.net/?expression=1+*+*+*+*+*&locale=en

Using Unity With Azure Functions

Azure Functions are a relatively new kid on the block when it comes to the Microsoft Azure stack. I’ve previously written about them, and their limitations. One such limitation seems to be that they don’t lend themselves very well to using dependency injection. However, it is certainly not impossible to make them do so.

In this particular post, we’ll have a look at how you might use an IoC container (Unity in this case) in order to leverage DI inside an Azure function.

New Azure Functions Project

I’ve covered this before in previous posts: in Visual Studio, you can now create a new Azure Functions project:

That done, you should have a project that looks something like this:

As you can see, the elephant in the room here is that there are no functions; let’s correct that:

Be sure to call your function something descriptive… like “Function1”. For the purposes of this post, it doesn’t matter what kind of function you create, but I’m going to create a “Generic Web Hook”.

Install Unity

The next step is to install Unity (this was the latest version at the time of writing):

Install-Package Unity -Version 5.5.6

Static Variables Inside Functions

It’s worth bearing in mind that a static variable works the way you would expect if the function were a locally hosted process. That is, if you write a function such as this:

private static int test; // the static variable in question

[FunctionName("Function1")]
public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
    
    System.Threading.Thread.Sleep(10000);
    log.Info($"Index is {test}");
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        greeting = $"Hello {test++}!"
    });
}

If you access it from a web browser, or Postman, or both at the same time, you’ll get incrementing numbers:

Whilst the values are shared across the instances, you can’t cause a conflict by updating something in one function while reading it in another (I tried pretty hard to cause this to break). What this means, then, is that we can store an IoC container that will maintain state across function calls. Obviously, this is not intended for persisting state, so you should assume your state could be lost at any time (as indeed it can).

Registering the Unity Container

One method of doing this is to use the Lazy object. This pretty much passed me by in .Net 4 (which is, apparently, when it came out). It basically provides a slightly neater way of doing this kind of thing:

private List<string> _myList;
public List<string> MyList
{
    get
    {
        if (_myList == null)
        {
            _myList = new List<string>();
        }
        return _myList;
    }
}

The “lazy” method would be:

public Lazy<List<string>> MyList = new Lazy<List<string>>(() =>
{
    List<string> newList = new List<string>();
    return newList;
});

With that in mind, we can do something like this (a nice bonus is that Lazy<T> is thread-safe by default, so the container will only ever be created once, even if two requests arrive at the same time):

public static class Function1
{
     private static Lazy<IUnityContainer> _container =
         new Lazy<IUnityContainer>(() =>
         {
             IUnityContainer container = InitialiseUnityContainer();
             return container;
         });

InitialiseUnityContainer needs to return a new instance of the container:

public static IUnityContainer InitialiseUnityContainer()
{
    UnityContainer container = new UnityContainer();
    container.RegisterType<IMyClass1, MyClass1>();
    container.RegisterType<IMyClass2, MyClass2>();
    return container;
}

After that, you’ll need to resolve the parent dependency, and then you can use standard constructor injection; for example, if MyClass1 orchestrates your functionality, you could use:

_container.Value.Resolve<IMyClass1>().DoStuff();

In Practice

Let’s apply all of that to our Functions App. Here are two new classes:

public interface IMyClass1
{
    string GetOutput();
}
 
public interface IMyClass2
{
    void AddString(List<string> strings);
}
public class MyClass1 : IMyClass1
{
    private readonly IMyClass2 _myClass2;
 
    public MyClass1(IMyClass2 myClass2)
    {
        _myClass2 = myClass2;
    }
 
    public string GetOutput()
    {
        List<string> teststrings = new List<string>();
 
        for (int i = 0; i <= 10; i++)
        {
            _myClass2.AddString(teststrings);
        }
 
        return string.Join(",", teststrings);
    }
}
public class MyClass2 : IMyClass2
{
    public void AddString(List<string> strings)
    {
        Thread.Sleep(1000);
        strings.Add($"{DateTime.Now}");
    }
}

And the calling code looks like this:

[FunctionName("Function1")]
public static object Run([HttpTrigger(WebHookType = "genericJson")]HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
 
    string output = _container.Value.Resolve<IMyClass1>().GetOutput();
    return req.CreateResponse(HttpStatusCode.OK, new
    {
        output
    });
}

Running it, we get an output that we might expect:

References

https://github.com/Azure/azure-webjobs-sdk/issues/1206

Serverless Computing – A Paradigm Shift

In the beginning

When I first started programming (on the ZX Spectrum), you would typically write a program such as this (apologies if it’s syntactically incorrect, but I don’t have a Speccy interpreter to hand):

10 PRINT "Hello World"
20 GOTO 10

You could type that into a computer at Tandys and chuckle as the shop assistants tried to work out how to stop the program (sometimes, you might not even use "Hello World", but something more profane).

However, no matter how long it took them to work out how to stop the program, they only paid for the electricity the computer used while it was on. Further, there would only ever be a finite and, presumably (I never tried), predictable number of "Hello World" messages produced in an hour.

The N-Tier Revolution

Fast forward a few years, and everyone’s talking about N-Tier computing. We’re writing programs that run on servers. Some of those servers are big and expensive, but pretty much the same statements hold true. No matter how big and complex the program that you run on the server, it’s your server (well, actually, it probably belongs to a company that you work for in some capacity). For example, if you have a poorly written SQL Server procedure that scans an entire table, the same two statements are still true: no matter how long it takes to run, the price for execution is consistent, and the amount of time it takes to run is predictable (although you may decide that, if it’s slow, speeding it up is a better use of your time than calculating exactly how slow it is).

Using Other People’s Computers

And now we come to cloud computing… or do we? The chronology is a little off here, and that’s mainly because everyone keeps forgetting what cloud computing actually is: renting time on somebody else’s computer. If I were twenty years older, I might have started this post by saying that "this was what was done back in the 70s and 80s", in a manner of speaking. But I’m not, so we’ll jump back to the mid-to-late 90s: and SETI. Anyone who had a computer back in those days of dial-up connections and 14.4K modems will remember that SETI (the search for extra-terrestrial intelligence) were distributing a program for everyone to run on their computer instead of toaster screensavers*.

Wait – isn’t that what cloud computing is?

Yes – but SETI did it in reverse. You would dial up, download a chunk of data, and your machine would process it as a screensaver.

Everyone was happy to do that for free, because we all want to find aliens. But what if there had been a cost associated with each byte of data processed? Clearly something similar was in the minds of Amazon when they started with this idea.

Back to the Paradigm Shift

So, the title and gist of this post is that the two statements that have held true right through from programs written on the ZX Spectrum, to programs written in Turbo C and Pascal, to programs written in C# running on a dedicated server, have now changed. So, here are the four big changes brought about by the cloud**:

1. Development and Debugging

If you write a program, you can no longer be sure of the cost, or the execution time. This isn’t a scare post: both are broadly predictable; but the paradigm has now changed. If you’re in the middle of writing an Azure function, or a GCP BigQuery query, and it doesn’t work, you can’t just close your laptop and go for dinner while you have a think, because while you do, nodes are lighting up all over the world trying to complete your task. The lights are dimming in Seattle while your Azure function crashes again and again.

2. Scale and Accessibility

The second big change is the way that your code is scaled. For example, you might be used to parallelising slow code so that you can make use of all available threads; however, in our new world, if you do that, you may actually be making it harder for your cloud platform of choice to scale your code.

Because you pay per minute of computing time (typically this is more expensive than storage), code that is unnecessarily slow or inefficient may not cause your system to slow down – what it will probably do is cost you more money to run at the same speed.

In some cases, you might find it’s more efficient to do the opposite of what you have thus far believed to be accepted industry best practice. For example, it may prove more cost-efficient to de-normalise data that you need to access.

3. Logging

This isn’t exactly new: you should always have been logging in your programs – it’s just common sense. However, there’s a new emphasis here: you’re not running this on your own server (you’re not even running it on a customer’s server) – it’s somebody else’s. That means that if it crashes, there’s a limited amount of investigation that you can do. As a result, you need to log profusely.

4. Microservices and Message Busses

IMHO, there are two reasons why the microservice architecture has become so popular in the cloud world. The first is that it’s easy – spinning up a new Azure function endpoint takes minutes.

Secondly, it’s more scalable. Microservices make your code more scalable because it’s easy for the cloud provider to instantiate two instances of your program, and then three, and then a million. If your program does one small thing, then only that small thing needs to be instantiated. If your program does twenty different things, it can still scale, but it’ll cost more, because it will require more processing power.

Finally, instead of simply calling the service that you need, there is now the option to place a message on a queue; apart from separating your program into definable responsibility sectors, this means that, when your cloud provider of choice scales your service out, all the instances can pick up a message and deal with it.

Summary

So, what’s the conclusion: is cloud computing good, or bad? In five years’ time, the idea that someone in your organisation will know the spec of the machine your business-critical application is running on seems a little far-fetched. The days of having a trusted server that has a load of hugely important stuff on it – but nobody really knows what – and has been running since 1973, are numbered.

Obviously, there’s a price to pay for everything. In the case of the cloud, it’s complexity – not of the code, and not really of the combined system, but of the number of moving parts you introduce: typically dozens. If you decide to segregate your database, you might find that you have several databases, and tens or even hundreds of independent processes and endpoints; you could even spread these across multiple cloud providers. It’s not that hard to lose track of where these pieces are living and what they are doing.

Footnotes

* If you don’t get this reference then you’re probably under 30.

** Clearly this is an opinion piece.

Setting up an e-mail Notification System using Logic Apps

One of the newer features of Microsoft’s Azure offering is Logic Apps: these are basically a workflow system, not totally dissimilar to Windows Workflow (WF, so as not to get sued by panda bears). I’ve worked with a number of workflow systems in the past, from standard offerings to completely bespoke versions. The problem always seems to be that, once people start using them, they become the first thing you reach for to solve every problem. That’s not to say that you can’t solve every problem using a workflow (obviously, it depends which workflow and what you’re doing), but they are not always the best solution. In fact, they tend to be at their best when they are small and simple.

With that in mind, I thought I’d start with a very straightforward e-mail alert system. In fact, all this is going to do is read a service bus queue and send an e-mail. I discussed a similar idea here, but that was using a custom-written function.

Create a Logic App

The first step is to create a new Logic App project:

There are three options here: create a blank logic app, choose from a template (for example, process a service bus message), or define your own with a given trigger. We’ll start from a blank app:

Trigger

Obviously, for a workflow to make sense, it has to start on an event or a schedule. In our case, we are going to run from a service bus entry, so let’s pick that from the menu that appears:

In this case, we’ll choose Peek-Lock, so that we don’t lose our message if something fails. I can now provide the connection details, or simply pick the service bus from a list that it already knows about:

It’s not immediately obvious, but you have to provide a connection name here:

If you choose Peek-Lock, you’ll be presented with an explanation of what that means, and then a screen such as the following:

In addition to picking the queue name, you can also choose the queue type (as well as listening to the queue itself, you can run your workflow from the dead-letter queue – which is very useful in its own right, and may even be a better use case for this type of workflow). Finally, you can choose how frequently to poll the queue.

If you now pick “New step”, you should be faced with an option:

In our case, let’s provide a condition (so that only queue messages with “e-mail” in the message result in an e-mail):

Before progressing to the next stage – let’s have a look at the output of this (you can do this by running the workflow and viewing the “raw output”):

Clearly the content data here is not what was entered. A quick search revealed that the data is Base64 encoded, so we have to make a small tweak in advanced mode:
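For reference, the condition I ended up with looks something like this in advanced mode (a sketch from memory – the exact property name depends on the trigger, but the Service Bus content arrives in ContentData):

@contains(base64ToString(triggerBody()?['ContentData']), 'e-mail')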

Okay – finally, we can add the step that actually sends the e-mail. In this instance, I simply picked Outlook.com, and allowed Azure access to my account:

The last step is to complete the message. Because we only took a “peek-lock”, we now need to manually complete the message. In the designer, we just need to add an action:

Then tell it that we want to use the service bus again. As you can see – that’s one of the options in the list:

Finally, it wants the name of the queue, and asks for the lock token – which it helpfully offers from dynamic content:

Testing

To test this, we can add a message to our test queue using the Service Bus Explorer:

I won’t bother with a screenshot of the e-mail, but I will show this:

Which provides a detailed overview of exactly what has happened in the run.

Summary

Having a workflow system built into Azure seems like a double-edged sword. On the one hand, you could potentially use it to easily augment functionality and quickly plug holes; on the other hand, you might find very complex workflows popping up all over the system, creating an indecipherable architecture.