Category Archives: Azure

Add Storage Queue Message

I’ve written quite extensively in the past about Azure, and Azure Storage. I recently needed to add a message to an Azure storage queue, and realised that I had never written a post about that, specifically. As with many Azure-focused .NET activities, it’s not too complex; but I do like to have my own notes on things.

If you’ve arrived at this post, you may find it’s very similar to the Microsoft documentation.

How to add a message

The first step is to install a couple of NuGet packages:

Install-Package Microsoft.Azure.Storage.Common
Install-Package Microsoft.Azure.Storage.Queue

My preference for these kinds of things is to create a helper: largely so that I can mock it out for testing; however, even if you fundamentally object to the concept of testing, you may find such a class helpful, as it keeps all your code in one place.

public class StorageQueueHelper
{
    private readonly string _connectionString;
    private readonly string _queueName;

    public StorageQueueHelper(string connectionString, string queueName)
    {
        _connectionString = connectionString;
        _queueName = queueName;
    }

    public async Task AddNewMessage(string messageBody)
    {
        var queue = await GetQueue();

        CloudQueueMessage message = new CloudQueueMessage(messageBody);
        await queue.AddMessageAsync(message);
    }

    private async Task<CloudQueue> GetQueue()
    {
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(_connectionString);
        CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference(_queueName);
        await queue.CreateIfNotExistsAsync();

        return queue;
    }
}

The class above works for a single queue, and storage account. Depending on your use case, this might not be appropriate.

The GetQueue() method here is a bit naughty, as it potentially changes something. Essentially, all it’s doing is connecting to a cloud storage account, and then getting a reference to a queue. We know that the queue will exist, because we’re forcing it to (CreateIfNotExistsAsync()).

Back in AddNewMessage(), once we have the queue, it’s trivial to simply create the message and add it.
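Putting the two together, usage might look something like this (the connection string and queue name below are placeholders; “UseDevelopmentStorage=true” targets the local storage emulator rather than a real account):

```csharp
// Hypothetical usage of the helper class above.
var helper = new StorageQueueHelper("UseDevelopmentStorage=true", "test-queue");
await helper.AddNewMessage("Hello, queue!");
```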

References

https://docs.microsoft.com/en-us/azure/storage/queues/storage-tutorial-queues

Using Kudu to Edit a Deployed app.settings file

When you deploy an Azure App Service, there are occasions when you may need to change the running values. Exactly how you may do this depends heavily on how the service was deployed, and how you are managing your variables. You can typically overwrite variables inside the app service; however, the running appsettings.json and web.config will be deployed with the app, and you can edit these directly (whether you should or not is a different question).

It’s your foot

These instructions let you change the deployed files on your App Service. Doing so may result in the behaviour of the site changing.

On with the show

Launch Kudu

When you select this, you’ll be given the option to select “Go”, or … not. Select “Go”.

This will take you to the Kudu console.

From here, select “CMD”; this will take you to a hybrid screen with a command console and a file navigator:

Navigate to d:\home\site\wwwroot:

Too Many Files

Initially, you may get the following error:

There are n items in this directory, but maxViewItems is set to 99. You can increase maxViewItems by setting it to a larger value in localStorage.

To get around this, select F12 and in the console window type:

window.localStorage['maxViewItems'] = 1000

After you’ve changed this, refresh the page (F5).

App Settings

To change the variables, you’ll need to locate the appsettings.json in the list (it’s alpha-numerically sorted, so it should be near the top). (Unfortunately, you can’t edit this from the command line).

When you find the file, click the edit button:

And then change the file:
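For illustration, the edited file might end up looking something like the below (the keys here are entirely hypothetical – yours will reflect your own application):

```json
{
  "ConnectionStrings": {
    "Storage": "UseDevelopmentStorage=true"
  },
  "Features": {
    "ShowBetaScreens": true
  }
}
```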

When done, select Save and then restart the app service.

References

https://www.poppastring.com/blog/kudu-error-with-maxviewitems-in-localstorage

Calling an Azure SignalR Instance from an Azure Function

I’ve been playing around with the Azure SignalR Service. I’m particularly interested in how you can bind to this from an Azure function. Imagine the following scenario:

You’re sat there on your web application, and I press a button on my console application and you suddenly get notified. It’s actually remarkably easy to set up (although there are definitely a few little things that can trip you up – many thanks to Anthony Chu for his help with some of those!)

If you want to see the code for this, it’s here.

Create an Azure SignalR Service

Let’s start by setting up an Azure SignalR service:

You’ll need to configure a few things:

The pricing tier is your call, but obviously, free is less money than … well, not free! The region should be wherever you plan to deploy your function / app service to, although I won’t actually deploy either of those in this post, and the ServiceMode should be Serverless.

Once you’ve created that, make a note of the connection string (accessed from Keys).

Create a Web App

Follow this post to create a basic web application. You’ll need to change the startup.cs as follows:

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSignalR().AddAzureSignalR();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            //app.UseDefaultFiles();
            //app.UseStaticFiles();

            app.UseFileServer();
            app.UseRouting();
            app.UseAuthorization();

            app.UseEndpoints(routes =>
            {
                routes.MapHub<InfoRelay>("/InfoRelay");
            });
        }
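The InfoRelay hub that’s mapped above isn’t shown in this post; since the messages will be pushed from the Azure function rather than from this app, the hub itself can be a minimal (empty) class – something like this:

```csharp
using Microsoft.AspNetCore.SignalR;

// The class name must match the MapHub<InfoRelay>("/InfoRelay") registration
// in Startup. There are no server-side hub methods, because the function
// does the sending.
public class InfoRelay : Hub
{
}
```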

Next, we’ll need to change index.html:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title></title>
    
    <script src="lib/@microsoft/signalr/dist/browser/signalr.js"></script>
    <script src="getmessages.js" type="text/javascript"></script>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css">

</head>
<body>
    <div class="container">
        <div class="row">
            <div class="col-2">
                <h1><span class="label label-default">Message</span></h1>
            </div>
            <div class="col-4">
                <h1><span id="messageInput" class="label label-default"></span></h1>
            </div>
        </div>
        <div class="row">&nbsp;</div>
    </div>
    <div class="row">
        <div class="col-12">
            <hr />
        </div>
    </div>
</body>

</html>

The signalr package that’s referenced is an npm package:

npm install @microsoft/signalr

Next, we need the getmessages.js:

function bindConnectionMessage(connection) {
    var messageCallback = function (name, message) {
        if (!message) return;

        console.log("message received:" + message.Value);

        const msg = document.getElementById("messageInput");
        msg.textContent = message.Value;
    };
    // Create a function that the hub can call to broadcast messages.
    connection.on('broadcastMessage', messageCallback);
    connection.on('echo', messageCallback);
    connection.on('receive', messageCallback);
}

function onConnected(connection) {
    console.log("onConnected called");
}

var connection = new signalR.HubConnectionBuilder()
    .withUrl('/InfoRelay')
    .withAutomaticReconnect()
    .configureLogging(signalR.LogLevel.Debug)
    .build();

bindConnectionMessage(connection);
connection.start()
    .then(function () {
        onConnected(connection);
    })
    .catch(function (error) {
        console.error(error.message);
    });

The automatic reconnect and logging are optional (although, at least while you’re developing this, I would strongly recommend the logging).

Functions App

Oddly, this is the simplest of all:

    public static class Function1
    {       
        [FunctionName("messages")]
        public static Task SendMessage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post")] object message,
            [SignalR(HubName = "InfoRelay")] IAsyncCollector<SignalRMessage> signalRMessages)
        {
            return signalRMessages.AddAsync(
                new SignalRMessage
                {
                    Target = "broadcastMessage",
                    Arguments = new[] { "test", message }
                });
        }
    }

The big thing here is the binding – the SignalR output binding allows the function to send the message back to the hub (specified in HubName). Also, pay attention to the Target – this needs to match up with the event that the JS code is listening for (in this case: “broadcastMessage”).
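One thing that can trip you up here: the binding needs to know where the SignalR service is. By default, it looks for a setting named AzureSignalRConnectionString; when running locally, that goes in local.settings.json (the values below are placeholders):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureSignalRConnectionString": "Endpoint=https://<your-instance>.service.signalr.net;AccessKey=<your-key>;Version=1.0;"
  }
}
```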

Console App

Finally, we can send the initial message to set the whole chain off – the console app code looks like this:

        static async Task Main(string[] args)
        {
            Console.WriteLine($"Press any key to send a message");
            Console.ReadLine();

            HttpClient client = new HttpClient();
            string url = "http://localhost:7071/api/messages";
            
            HttpContent content = new StringContent("{\"Value\": \"Hello\"}", Encoding.UTF8, "application/json");

            HttpResponseMessage response = await client.PostAsync(url, content);
            string results = await response.Content.ReadAsStringAsync();

            Console.WriteLine($"results: {results}");
            Console.ReadLine();
        }

So, all we’re doing here is invoking the function.

Now when you run this (remember that you’ll need to run all three projects), press enter in the console app, and you should see the “Hello” message pop up on the web app.

References

https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-3.1

https://docs.microsoft.com/en-us/aspnet/core/signalr/dotnet-client?view=aspnetcore-3.1&tabs=visual-studio

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-signalr-service?tabs=csharp

Add Application Insights to an Azure Resource

Application Insights provides a set of metric tools to analyse the performance and behaviour of various Azure services. For example, you can see how many calls you have to your Azure Web site, or you can see how many errors your service has generated.

This post is concerned with the scenario where you would want to manually log to application insights. The idea being that, in addition to the above metrics, you can output specific log messages in a central location. You might just want to log some debug information (“Code reached here”, “now here” – don’t try and say you’ve never been there!) Or you might find that there is a particular event in your program that you want to track, or maybe you’ve got two different resources, and you’re trying to work out how quick or frequent the communication between them is.

Set-up

The first step is to set up a new App Insights service in the Azure Portal (you can also use the recently released Azure Portal App).

Select to create a new resource, and pick Application Insights:

When you create the resource, you’ll be asked for some basic details (try to keep the location in the same region as the app(s) you’ll be monitoring):

The instrumentation key is shown in the overview, and you will need this later:

You should be able to see things like failed requests, response time, etc. However, we’ve just configured this, so it’ll be quiet for now:

Check the “Search” window (which is where log entries will appear):

The other place you can see the output is “Logs (Analytics)”.

Create Web Job

The next thing we need is something to trace; let’s go back to a web job.

Once you’ve set up your web job, add App Insights from NuGet:

Install-Package ApplicationInsights.Helpers.WebJobs

The class that we’re principally interested in here is the TelemetryClient. You’ll need to instantiate this class; there are two ways to do this:

var config = Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.CreateDefault();
 
var tc = new TelemetryClient(config);

This works if you link the App Insights to the resource that you’re tracking in Azure; you’ll usually do that here:

Once you’ve switched it on, you can link your resource; for example:

The other way to link them, without telling Azure that they are linked, is this:

TelemetryConfiguration.Active.InstrumentationKey = InstrumentationKey;

(You can use the instrumentation key that you noted earlier.)

Tracing

Now you’ve configured the telemetry client, let’s say you want to track an exception:

    var ai = new TelemetryClient(config);
    ai.TrackException(exception, properties);
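The properties argument here is a dictionary of custom values that get attached to the logged exception. As an illustration (the keys below are made up – use whatever is useful to your diagnosis):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

var config = TelemetryConfiguration.CreateDefault();
var ai = new TelemetryClient(config);

// Hypothetical custom dimensions to attach to the exception entry.
var properties = new Dictionary<string, string>
{
    ["Machine"] = Environment.MachineName,
    ["Stage"] = "Import"
};

try
{
    throw new InvalidOperationException("Something went wrong");
}
catch (Exception exception)
{
    ai.TrackException(exception, properties);
}
```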

Or you just want to trace something happening:

    var ai = new TelemetryClient(config);
    ai.TrackTrace(text);

Side note

The following code will result in a warning, telling you that it’s deprecated:

    var ai = new TelemetryClient();
    ai.TrackTrace(text);

References

http://pmichaels.net/2017/08/13/creating-basic-azure-web-job/

https://blogs.msdn.microsoft.com/devops/2015/01/07/application-insights-support-for-multiple-environments-stamps-and-app-versions/

https://docs.microsoft.com/en-us/azure/application-insights/app-insights-cloudservices

https://docs.microsoft.com/en-us/azure/application-insights/app-insights-windows-desktop

https://github.com/MicrosoftDocs/azure-docs/issues/40482

Microsoft Chat Bot Framework – Up And Running

Some time ago (think early to mid-nineties), I used to run a BBS. One Christmas, I logged onto another BBS based in Manchester, and they had a “Santa Chat”. I tried it for a while, and was so impressed by it, that I decided to write my own; it was basically the gift that kept giving: you put this thing on your BBS (basically an Eliza clone) and it records the responses of the unsuspecting users to a text file (or log).

These days there are laws against recording such things, but those were simpler times, and once they realised the joke, everyone was happy, and life went on (albeit at 14.4k bps).

A few years ago, I decided to relive my youth, and wrote an app for the Windows Store – this one didn’t keep logs, although I imagine, had I added a “Post Log to Facebook” button, it probably would have got some use. It has since been removed by MS in their wisdom. There was very little difference between it and Eliza.

Now, Microsoft seem to have jumped on this bandwagon, and they have released a framework for developing such apps. Clearly their efforts were just a copy of mine… well, anyway, this is a quick foray into the world of a chat bot.

Your first step, in Azure, is to set up a Web App Bot:

This will actually create a fully working bot in two or three clicks; select “Test in Web Chat” if you don’t believe me:

Okay – it doesn’t do much – but what it does do is fully working! You can download the code for this if you like:

The code doesn’t look too daunting when you download it:

Looking inside MessagesController, it appears to be a simple API controller: the controller selects a dialog to use, and the dialog is the magic class that essentially controls all the… err… dialog. The default is called “EchoDialog”.

For the purposes of this demo, we can change the part we want using the web browser; select Online Code Editor:

The bit we’re interested in for the purpose of this is the EchoDialog. Let’s try changing the text that gets sent back a little; replace the code in MessageReceivedAsync with this:

public async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument)
{
    var message = await argument;
 
    if (message.Text == "reset")
    {
        PromptDialog.Confirm(
            context,
            AfterResetAsync,
            "Are you sure you want to reset the count?",
            "Didn't get that!",
            promptStyle: PromptStyle.Auto);
    }
    else
    {
        if (message.Text == "test")
        {
            await context.PostAsync("Testing 1, 2, 3...");
        }
        else
        {
            await context.PostAsync($"{this.count++}: You said {message.Text}");
        }
        context.Wait(MessageReceivedAsync);
    }
}

So, we are checking the input, and where it’s “test”, we’ll return a slightly different response. You’ll need to build this; select the “Open Console” button down the left hand side of the screen and type “build”:

When it’s done, open up your test again and see what happens:

Remember that the bot itself is exposed as an API, so you can put this directly into your own code.

References

https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/09/08/how-to-build-a-chat-bot-using-azure-bot-service-and-train-it-with-luis/

https://github.com/Microsoft/BotFramework-Emulator

Setting up an Azure B2C Tenant

B2C is (one of) Microsoft’s offerings to allow us programmers to pass the business of managing log-ins and users over to people who want to be bothered with such things. This post contains very little code, but lots of pictures of configuration screens, that will probably be out of date by the time you read it.

A B2C set-up starts with a tenant. So the first step is to create one:

Select “Create a resource” and search for B2C:

Then select “Create”:

Now you can tell Azure what to call your B2C tenant:

It takes a while to create this, so probably go and get a brew at this stage. When this tenant gets created, it gets created outside of your Azure subscription; the next step is to link it to your subscription:

Once you have a tenant, and you’ve linked it to your subscription, you can switch to it:

If you haven’t done all of the above, but you’re scrolling down to see what the score is for an existing, linked subscription, remember that you need to be a Global Administrator for that tenant to do anything useful.

Once you’ve switched to your new tenant, navigate to the B2C:

Your first step is to tell the B2C tenant which application(s) will be using it. Select “Add” in “Applications”:

This also allows you to tell B2C where to send the user after they have logged in. In this case, we’re just using a local instance, so we’ll send them to localhost:

It doesn’t matter what you call the application; but you will need the Application ID and the key (secret), so keep a note of that:

You’ll need to generate the secret:

Policies

Policies allow you to tell B2C exactly how the user will register and log-in: do they just need an e-mail address, or their name, or other information; what information should be available to the app after a successful log-in; and whether to use multi-factor authentication.

Add a policy:

Next, set-up the claims (these are the fields that you will be able to access from the application once you have a successful log-in):

Summary

That’s it – you now have a B2C tenant that will provide log-in capabilities. The next step is to add that to a web application.

References

https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-how-to-enable-billing

https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-tutorials-web-app

https://joonasw.net/view/aspnet-core-2-azure-ad-authentication

Playing with Azure Event Hub

I’ve recently been playing with the Azure Event Hub. This is basically a way of transmitting large amounts* of data between systems. In a later post, I may try and test these limits by designing some kind of game based on this.

As a quick disclaimer, it’s worth bearing in mind that I am playing with this technology, and so much of the content of this post can be found in the links at the bottom of this post – you won’t find anything original here – just a record of my findings. You may find more (and more accurate) information in those.

Event Hub Namespace

The first step, as with many Azure services, is to create a namespace:

For a healthy amount of data transference, you’ll pay around £10 per month.

Finally, we’ll create an event hub within the namespace:

When you create the event hub, it asks how many partitions you need. This basically splits the message delivery; and it’s clever enough to work out that, if you have three partitions and two listeners, one listener should get two partitions, and the other, one:

We’ll need an access policy so that we have permission to listen:

New Console Apps

We’ll need to create two applications: a producer and a consumer.

Let’s start with a producer. Create a new console app and add this NuGet library.

Here’s the code:

class Program
{
    private static EventHubClient eventHubClient;
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Publisher;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
 
    public static async Task Main(string[] args)
    {
        EventHubsConnectionStringBuilder connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
        {
            EntityPath = EhEntityPath
        };
 
        eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
 
        while (true)
        {
            Console.Write("Please enter message to send: ");
            string message = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(message)) break;
 
            await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
        }
 
        await eventHubClient.CloseAsync();
 
        Console.WriteLine("Press ENTER to exit.");
        Console.ReadLine();
    }
}

Consumer

Next we’ll create a consumer; so the first thing we’ll need is to grant permissions for listening:

We’ll create a second new console application with this same library and the processor library, too.

class Program
{
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Listener;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
    private const string StorageContainerName = "eventhub";
    private const string StorageAccountName = "pcmeventhubstorage";
    private const string StorageAccountKey = "key";
 
    private static readonly string StorageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", StorageAccountName, StorageAccountKey);
 
    static async Task Main(string[] args)
    {
        Console.WriteLine("Registering EventProcessor...");
 
        var eventProcessorHost = new EventProcessorHost(
            EhEntityPath,
            PartitionReceiver.DefaultConsumerGroupName,
            EhConnectionString,
            StorageConnectionString,
            StorageContainerName);
 
        // Registers the Event Processor Host and starts receiving messages
        await eventProcessorHost.RegisterEventProcessorAsync<EventsProcessor>();
 
        Console.WriteLine("Receiving. Press ENTER to stop worker.");
        Console.ReadLine();
 
        // Disposes of the Event Processor Host
        await eventProcessorHost.UnregisterEventProcessorAsync();
    }
}

class EventsProcessor : IEventProcessor
{
    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }
 
    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }
 
    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
        return Task.CompletedTask;
    }
 
    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }
 
        return context.CheckpointAsync();
    }
}

As you can see, we can now transmit data through the Event Hub into client applications:

Footnotes

*Large, in terms of frequency, rather than volume – for example, transmitting a small message twice a second, rather than uploading a petabyte of data

References

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-receive-eph

What can you do with a logic app? Part three – Creating a Logic App Client

One of the things missing from Azure Logic Apps is the ability to integrate human interaction. Microsoft do have their own version of an interactive workflow (PowerApps), which is (obviously) far better than what you can produce by following this post.

In this post, we’ll create a very basic client for a logic app. Obviously, with some thought, this could easily be extended to allow a fully functional, interactive, workflow system.

Basic Logic App

Let’s start by designing our logic app. The app in question is going to be a very simple one: it will add a message to a logging queue (just so it has something to do); then we’ll ask the user a question, by putting a message onto a topic: left or right. Based on the user’s response, we’ll write a message to the queue saying either left or right. Let’s have a look at our Logic App design:

It’s worth pointing out a few things about this design:
1. The condition uses the expression base64ToString() to convert the encoded message into plain text.
2. Where the workflow picks up, it uses a peek-lock, and then completes the message at the end. It looks like it’s a ‘feature’ of logic apps that an automatic complete on this trigger will not actually complete the message (plus, this is actually a better design).
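As a sketch (the exact property names depend on your trigger), a condition over a Service Bus message body might use an expression along these lines – ContentData arrives base64-encoded, hence the conversion:

```
@equals(base64ToString(triggerBody()?['ContentData']), 'left')
```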

Queues and Topics

The “Log to message queue” action above is putting an entry into a queue; so a quick note about why we’re using a queue for logging, and a topic for the interaction with the user. In a real life version of this system, we might have many users, but they might all want to perform the same action. Let’s say that they all are part of a sales process, and the actions are actually actions along that process; adding these to a queue maintains their sequence. Here’s the queue and topic layout that I’m using for this post:

Multiple Triggers

As you can see, we actually have two triggers in this workflow. The first starts the workflow (so we’ll drop a message into the topic to start it), and the second waits for a second message to go into the topic.

To add a trigger part way through the workflow, simply add an action, search and select “Triggers”:

Because we have a trigger part way through the workflow, what we have effectively issued here is an await statement. Once a message appears in the subscription, the workflow will continue where it left off:

As soon as a message is posted, the workflow carries on:

Client Application

For the client application, we could simply use the Service Bus Explorer (in fact, the screenshots above were taken from using this to simulate messages in the topic). However, the point of this post is to create a client, and so we will… although we’ll just create a basic console app for now.

We need the client to do two things: read from a topic subscription, and write to a topic. I haven’t exactly been here before, but I will be heavily plagiarising from here, here, and here.

Let’s create a console application:

Once that’s done, we’ll need the service bus client library: Install it from here.

The code is generally quite straightforward, and looks a lot like the code to read and write to queues. The big difference is that you don’t read from a topic, but from a subscription to a topic (a topic can have many subscriptions):

class Program
{
    
    static async Task Main(string[] args)
    {
        MessageHandler messageHandler = new MessageHandler();
        messageHandler.RegisterToRead("secondstage", "sub1");
 
        await WaitForever();
    }
 
    private static async Task WaitForever()
    {
        while (true) await Task.Delay(5000);
    }
}
public class MessageHandler
{
    private string _connectionString = "service bus connection string details";
    private ISubscriptionClient _subscriptionClient;
    public void RegisterToRead(string topicName, string subscriptionName)
    {            
        _subscriptionClient = new SubscriptionClient(_connectionString, topicName, subscriptionName);
 
        MessageHandlerOptions messageHandlerOptions = new MessageHandlerOptions(ExceptionReceived)
        {
            AutoComplete = false,
            MaxAutoRenewDuration = new TimeSpan(1, 0, 0)
        };
 
        _subscriptionClient.RegisterMessageHandler(ProcessMessage, messageHandlerOptions);
 
    }
 
    private async Task ProcessMessage(Message message, CancellationToken cancellationToken)
    {
        string messageText = Encoding.UTF8.GetString(message.Body);
 
        Console.WriteLine(messageText);
        string leftOrRight = Console.ReadLine();
 
        await _subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
 
        await SendResponse(leftOrRight, "userinput");
    }
 
    private async Task SendResponse(string leftOrRight, string topicName)
    {
        TopicClient topicClient = new TopicClient(_connectionString, topicName);
        Message message = new Message(Encoding.UTF8.GetBytes(leftOrRight));
        await topicClient.SendAsync(message);
    }
 
    private Task ExceptionReceived(ExceptionReceivedEventArgs arg)
    {
        Console.WriteLine(arg.Exception.ToString());
        return Task.CompletedTask;
    }
}

If we run it, then when the logic app reaches the second trigger, we’ll get a message from the subscription and ask directions:

Based on the response, the logic app will execute either the right or left branch of code.

Summary

Having worked with workflow systems in the past, one recurring feature of them is that they start to get used for things that don’t fit into a workflow, resulting in a needlessly over-complex system. I imagine that Logic Apps are no exception to this rule, and in 10 years time, people will roll their eyes at how Logic Apps have been used where a simple web service would have done the whole job.

The saving grace here is source control. The workflow inside a Logic App is simply a JSON file, and so it can be source controlled, added to a CI pipeline, and all the good things that you might expect. Whether or not a more refined version of what I have described here makes any sense is another question.

There are many downsides to this approach: firstly, you are fighting against the Service Bus by asking it to wait for input (that part is a very fixable problem with a bit of an adjustment to the messages); secondly, you would presumably need some form of timeout (again, a fixable problem that will probably feature in a future post). The biggest issue here is that you are likely introducing complex conditional logic with no way to unit test; this isn’t, per se, fixable; however, you can introduce some canary logic (again, this will probably be the feature of a future post).

References

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config

https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions

https://stackoverflow.com/questions/28127001/the-lock-supplied-is-invalid-either-the-lock-expired-or-the-message-has-alread

What can you do with a logic app? Part Two – Use Excel to Manage an E-mail Notification System

In this post I started a series of posts covering different scenarios in which you might use an Azure Logic App, and how you might go about it. In this, the second post, we're going to set up an Excel spreadsheet that allows you to simply add a row to an Excel table and have a logic app act on that row.

So, we'll set up a basic spreadsheet with an e-mail address, subject, text, and the date we want it to be sent; then we'll have the logic app send the next eligible mail in the list and mark it as sent.

Spreadsheet

I'll first state that I do not have an Office 365 subscription, and nothing that I do here will require one. We'll create the spreadsheet in Office Online. Head over to OneDrive (if you don't have a OneDrive account, they are free) and create a new spreadsheet:

In the spreadsheet, create a new table – just enter some headers (like below) and then highlight the columns and “Insert Table”:

Remember to check “My Table Has Headers”.

Now enter some data:

Create the Logic App

In this post I showed how you can use Visual Studio to create and deploy a logic app; we’ll do that here:

Once we've created the logic app, we'll need to add an action that reads the Excel file that we created; in this case, "List rows present in a table":

This also requires that we specify the table (if you’re using the free online version of Excel then you’ll have to live with the table name you’re given):

Loop

This retrieves a list of rows, and so the next step is to iterate through them one-by-one. We’ll use a For-Each:

Conditions

Okay, so we're now looking at every row in the table; but we don't want every row – we only want the ones that have not already been sent, and that are due to be sent (that is, where the date is either today or earlier). We can use a conditional statement for this:
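Under the hood, this loop-and-condition pair is just part of the workflow definition JSON. The following is a rough, hand-written sketch rather than the exact generated definition – the action names and the Sent column are assumptions:

```json
{
  "For_each": {
    "type": "Foreach",
    "foreach": "@body('List_rows_present_in_a_table')?['value']",
    "actions": {
      "Condition": {
        "type": "If",
        "expression": {
          "and": [
            { "equals": [ "@items('For_each')?['Sent']", "" ] }
          ]
        },
        "actions": {},
        "else": { "actions": {} }
      }
    }
  }
}
```

The date comparison is the part that Logic Apps struggle with, which is where the Azure function below comes in.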

But we have two problems:

  • Azure Logic Apps are very bad at handling dates – that is to say, they don’t
  • There is currently no way in an Azure Logic App to update an Excel spreadsheet row (you can add and delete only)

The former is easily solved; for the latter, I elected simply to delete the row instead of updating it. It would be possible to delete the current row and add it back with new values, but we won't bother with that here.

Back to the date problem; what we need here is an Azure function…

Creating an Azure Function

Here is the code for our function (see here for details of how to create one):

// Requires: System, System.IO, Microsoft.AspNetCore.Http, Microsoft.AspNetCore.Mvc,
// Microsoft.Azure.WebJobs, Microsoft.Azure.WebJobs.Host, Newtonsoft.Json

[FunctionName("DatesCompare")]
public static IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequest req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    string requestBody = new StreamReader(req.Body).ReadToEnd();
    return ParseDates(requestBody);
}

public static IActionResult ParseDates(string requestBody)
{
    dynamic data = JsonConvert.DeserializeObject(requestBody);

    // date1 arrives as a parseable date string; date2 is the raw Excel
    // serial (a double), hence the FromOADate conversion
    DateTime date1 = (DateTime)data.date1;
    DateTime date2 = DateTime.FromOADate((double)data.date2);

    int returnFlagIndicator = 0;
    if (date1 > date2)
    {
        returnFlagIndicator = 1;
    }
    else if (date1 < date2)
    {
        returnFlagIndicator = -1;
    }

    return new OkObjectResult(new
    {
        returnFlag = returnFlagIndicator
    });
}

There are a few points to note about this code:
1. The date coming from Excel extracts as a double (an OLE Automation date serial), which is why we need to use FromOADate.
2. The reason for splitting the function up is so that the main logic can be more easily unit tested. If you ever need a reason for unit testing, try to work out why an Azure function isn't working inside a logic app!
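To illustrate the second point, here's a sketch of a quick local check of ParseDates, with no Functions host needed. It assumes access to the method above and to Newtonsoft.Json; the sample Excel serial 43831 corresponds to 1st January 2020:

```csharp
// Sketch only: calls the ParseDates method above directly.
// date2 is the raw Excel serial for 1st January 2020; date1 is a day
// later, so we expect a returnFlag of 1.
string body = "{ \"date1\": \"2020-01-02\", \"date2\": 43831.0 }";

var result = (OkObjectResult)ParseDates(body);
dynamic payload = result.Value;

Console.WriteLine((int)payload.returnFlag); // 1
```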

The logic around this function looks like this:

We build up the request body with the information that we have, and then parse the output. Finally, we can check if the date is in the past and then send the e-mail:
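For reference, the request body is built up along these lines (the SendDate column name is an assumption). Note the bare @ expression for date2: using @{...} interpolation would turn the Excel serial into a string, whereas a whole-value expression keeps it as a number, matching the (double) cast in ParseDates:

```json
{
  "date1": "@{utcNow()}",
  "date2": "@items('For_each')?['SendDate']"
}
```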

Lastly, as we said earlier, we’ll delete the row to ensure that the e-mail is only sent once:

The eagle-eyed and sane amongst you will notice that I've used the subject as a key. Don't do this – it's very bad practice!

References

https://github.com/Azure/azure-functions-host/wiki/Azure-Functions-runtime-2.0-known-issues