Tag Archives: Azure

Calling an Azure SignalR Instance from an Azure Function

I’ve been playing around with the Azure SignalR Service. I’m particularly interested in how you can bind to this from an Azure function. Imagine the following scenario:

You’re sat there on your web application, and I press a button on my console application, and you suddenly get notified. It’s actually remarkably easy to set up (although there are definitely a few little things that can trip you up – many thanks to Anthony Chu for his help with some of those!)

If you want to see the code for this, it’s here.

Create an Azure SignalR Service

Let’s start by setting up an Azure SignalR service:

You’ll need to configure a few things:

The pricing tier is your call, but obviously, free is less money than … well, not free! The region should be wherever you plan to deploy your function / app service to, although I won’t actually deploy either of those in this post, and the ServiceMode should be Serverless.

Once you’ve created that, make a note of the connection string (accessed from Keys).

Create a Web App

Follow this post to create a basic web application. You’ll need to change Startup.cs as follows:

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSignalR().AddAzureSignalR();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            //app.UseDefaultFiles();
            //app.UseStaticFiles();

            app.UseFileServer();
            app.UseRouting();
            app.UseAuthorization();

            app.UseEndpoints(routes =>
            {
                routes.MapHub<InfoRelay>("/InfoRelay");
            });
        }
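
One thing that can trip you up here: AddAzureSignalR() needs the connection string that you noted earlier. By default, it reads this from configuration under the key Azure:SignalR:ConnectionString, but you can also pass it in explicitly. A quick sketch of the latter (assuming the connection string has been added to configuration, e.g. via user secrets, rather than hard-coded):

        public void ConfigureServices(IServiceCollection services)
        {
            // Assumption: the connection string is held in configuration rather
            // than hard-coded (it's a secret, so treat it like one)
            string connectionString = Configuration["Azure:SignalR:ConnectionString"];
            services.AddSignalR().AddAzureSignalR(connectionString);
        }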

Next, we’ll need to change index.html:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title></title>
    
    <script src="lib/@microsoft/signalr/dist/browser/signalr.js"></script>
    <script src="getmessages.js" type="text/javascript"></script>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css">

</head>
<body>
    <div class="container">
        <div class="row">
            <div class="col-2">
                <h1><span class="label label-default">Message</span></h1>
            </div>
            <div class="col-4">
                <h1><span id="messageInput" class="label label-default"></span></h1>
            </div>
        </div>
        <div class="row">&nbsp;</div>
    </div>
    <div class="row">
        <div class="col-12">
            <hr />
        </div>
    </div>
</body>

</html>

The SignalR package that’s referenced is an npm package:

npm install @microsoft/signalr

Next, we need the getmessages.js:

function bindConnectionMessage(connection) {
    var messageCallback = function (name, message) {
        if (!message) return;

        console.log("message received:" + message.Value);

        const msg = document.getElementById("messageInput");
        msg.textContent = message.Value;
    };
    // Create a function that the hub can call to broadcast messages.
    connection.on('broadcastMessage', messageCallback);
    connection.on('echo', messageCallback);
    connection.on('receive', messageCallback);
}

function onConnected(connection) {
    console.log("onConnected called");
}

var connection = new signalR.HubConnectionBuilder()
    .withUrl('/InfoRelay')
    .withAutomaticReconnect()
    .configureLogging(signalR.LogLevel.Debug)
    .build();

bindConnectionMessage(connection);
connection.start()
    .then(function () {
        onConnected(connection);
    })
    .catch(function (error) {
        console.error(error.message);
    });

The automatic reconnect and logging are optional (although at least while you’re writing this, I would strongly recommend the logging).

Functions App

Oddly, this is the simplest of all:

    public static class Function1
    {       
        [FunctionName("messages")]
        public static Task SendMessage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post")] object message,
            [SignalR(HubName = "InfoRelay")] IAsyncCollector<SignalRMessage> signalRMessages)
        {
            return signalRMessages.AddAsync(
                new SignalRMessage
                {
                    Target = "broadcastMessage",
                    Arguments = new[] { "test", message }
                });
        }
    }

The big thing here is the binding – the SignalR output binding sends the message to the hub (specified in HubName); it picks up the service’s connection string from the AzureSignalRConnectionString app setting. Also, pay attention to the Target – this needs to match up with the event that the JS code is listening for (in this case: “broadcastMessage”).

Console App

Finally, we can send the initial message to set the whole chain off – the console app code looks like this:

        static async Task Main(string[] args)
        {
            Console.WriteLine($"Press any key to send a message");
            Console.ReadLine();

            HttpClient client = new HttpClient();
            string url = "http://localhost:7071/api/messages";
            
            HttpContent content = new StringContent("{\"Value\": \"Hello\"}", Encoding.UTF8, "application/json");

            HttpResponseMessage response = await client.PostAsync(url, content);
            string results = await response.Content.ReadAsStringAsync();

            Console.WriteLine($"results: {results}");
            Console.ReadLine();
        }

So, all we’re doing here is invoking the function.

Now when you run this (remember that you’ll need to run all three projects), press enter in the console app, and you should see the “Hello” message pop up on the web app.

References

https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-3.1

https://docs.microsoft.com/en-us/aspnet/core/signalr/dotnet-client?view=aspnetcore-3.1&tabs=visual-studio

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-signalr-service?tabs=csharp

Add Application Insights to an Azure Resource

Application Insights provides a set of metric tools to analyse the performance and behaviour of various Azure services. For example, you can see how many calls you have to your Azure Web site, or you can see how many errors your service has generated.

This post is concerned with the scenario where you would want to manually log to application insights. The idea being that, in addition to the above metrics, you can output specific log messages in a central location. You might just want to log some debug information (“Code reached here”, “now here” – don’t try and say you’ve never been there!) Or you might find that there is a particular event in your program that you want to track, or maybe you’ve got two different resources, and you’re trying to work out how quick or frequent the communication between them is.

Set-up

The first step is to set-up a new App Insights service in the Azure Portal (you can also use the recently released Azure Portal App).

Select to create a new resource, and pick Application Insights:

When you create the resource, you’ll be asked for some basic details (try to keep the location in the same region as the app(s) you’ll be monitoring):

The instrumentation key is shown in the overview, and you will need this later:

You should be able to see things like failed requests, response time, etc. However, we’ve just configured this, so it’ll be quiet for now:

Check the “Search” window (which is where log entries will appear):

The other place you can see the output is “Logs (Analytics)”.

Create Web Job

The next thing we need is something to trace; let’s go back to a web job.

Once you’ve set up your web job, add App Insights from NuGet:

Install-Package ApplicationInsights.Helpers.WebJobs

The class that we’re principally interested in here is the TelemetryClient. You’ll need to instantiate this class; there are two ways to do this:

var config = Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration.CreateDefault();
 
var tc = new TelemetryClient(config);

This works if you link the App Insights to the resource that you’re tracking in Azure; you’ll usually do that here:

Once you’ve switched it on, you can link your resource; for example:

The other way to link them, without telling Azure that they are linked, is this:

TelemetryConfiguration.Active.InstrumentationKey = InstrumentationKey;

(You can use the instrumentation key that you noted earlier.)

Tracing

Now that you’ve configured the telemetry client, let’s say you want to track an exception:

    var ai = new TelemetryClient(config);
    ai.TrackException(exception, properties);

Or you just want to trace something happening:

    var ai = new TelemetryClient(config);
    ai.TrackTrace(text);
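
You can also track a specific event, rather than just a trace. A minimal sketch – the event name and properties here are made up for illustration; note the Flush() call, which matters in short-lived processes like web jobs, because telemetry is batched in memory:

    var ai = new TelemetryClient(config);

    // Track a custom event with searchable properties (names are hypothetical)
    ai.TrackEvent("JobCompleted", new Dictionary<string, string>
    {
        { "JobName", "NightlyImport" },
        { "Outcome", "Success" }
    });

    // Telemetry is batched in memory; flush before the process exits
    ai.Flush();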

Side note

The following code will result in a warning, telling you that it’s deprecated:

    var ai = new TelemetryClient();
    ai.TrackTrace(text);

References

http://pmichaels.net/2017/08/13/creating-basic-azure-web-job/

https://blogs.msdn.microsoft.com/devops/2015/01/07/application-insights-support-for-multiple-environments-stamps-and-app-versions/

https://docs.microsoft.com/en-us/azure/application-insights/app-insights-cloudservices

https://docs.microsoft.com/en-us/azure/application-insights/app-insights-windows-desktop

https://github.com/MicrosoftDocs/azure-docs/issues/40482

Download file from Azure storage using JavaScript

.NET is an excellent framework – if you want proof of that, try to do even very simple things in JavaScript. It feels a bit like getting out of a Tesla and travelling back in time to drive a Reliant Robin (I’ve never actually driven either of these cars, so I don’t really know if it feels like that or not!)

If, for example, you want to download a file from a Blob Storage container, in .NET you’re looking at about four lines of strongly typed code. There’s basically nothing to do, and it consistently works. If you want to do that in JavaScript, there’s a Microsoft JavaScript library.
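
For comparison, those few lines of .NET look something like this (a sketch using the WindowsAzure.Storage package; the variable names are placeholders):

var storageAccount = CloudStorageAccount.Parse(connectionString);
var client = storageAccount.CreateCloudBlobClient();
var container = client.GetContainerReference(containerName);
var blob = container.GetBlockBlobReference(fileId);

// Download the blob straight to a local file
await blob.DownloadToFileAsync(destinationFileName, FileMode.Create);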

In said library, there is a function that should get a download URL for you; it’s named getUrl:

const downloadLink = blobService.getUrl(containerName, fileId, sasKey);            

When I used this, it gave me the following error:

Signature did not match

To get around this, you can build the download link manually like this:

const downloadLink = blobUri + '/' + containerName + '/' + fileId + sasKey;

Comparing the two, the former appears to escape the question mark in the SAS.

To actually download the file, you can use this:

        // https://stackoverflow.com/questions/3749231/download-file-using-javascript-jquery
        function downloadURI(uri, name) 
        {
            var link = document.createElement("a");
            link.download = name;
            link.href = uri;
            link.click();
        }

And the final download function looks like this:

        function downloadFile(sas, storageUri,
            containerName, fileId, destinationFileName) {

            // Build the link manually (see above), rather than using getUrl
            const downloadLink = storageUri + '/' + containerName + '/' + fileId + sas;

            downloadURI(downloadLink, destinationFileName);
        }

Creating a Car Game in React – Part 6 – Adding High Scores

This is the sixth post of a series that starts here.

As with previous posts, if you wish to download the code, it’s here; and, as with previous posts, I won’t cover all the code changes here, so if you’re interested, then you should download the code.

In this post, we’re going to create a High Score table. We’ll create an Azure function as the server, and we’ll store the scores themselves in Azure Tables.

Let’s start with the table.

Create a new storage account in Azure, then add an Azure Table to it:

You’ll see a sign trying to persuade you to use Cosmos DB here. At the time of writing, using Cosmos was considerably more expensive than Table Storage. Obviously, you get increased throughput, distributed storage, etc with Cosmos. For this, we don’t need any of that.

Create a new table:

An Azure table is, in fact, a NoSQL offering: you have a key, and then an attribute – the attribute can be a JSON document, or whatever you choose. In our case, we’ll set the key to the user name, and the score as the attribute.

Once you’ve created your table storage, you may wish to use the Storage Explorer to create the tables, although that isn’t necessary.

Finally, you’ll need to add a CORS rule:

Obviously, this should actually point to the domain that you’re using, rather than a blanket ‘allow’, but it’ll do for testing.

Adding a username

Before we can store a high score, the user needs a username. Let’s add one first.

In GameStatus, we’ll add a text box:

<div style={containerStyle}>
	<input type='text' value={props.Username}
	onChange={props.onChangeUsername} />
</div>

The state is lifted up to the main Game.jsx:

<GameStatus Lives={this.state.playerLives} 
	Message={this.state.message} 
	Score={this.state.score} 
	RemainingTime={this.state.remainingTime}
	Level={this.state.level}
	Username={this.state.username} 
	onChangeUsername={this.onChangeUsername.bind(this)} 
/>

And onChangeUsername is here:

onChangeUsername(e) {
	this.updateUserName(e.target.value);
}

updateUserName(newUserName) {
	this.setState({
		username: newUserName
	});
}

Update High Score

We’ll create an Azure Function to update the table. In Visual Studio, create a new Azure Functions App (you will need to install the Azure workload if you haven’t already):

You’ll be asked what the trigger should be for the function: we’ll go with HttpTrigger. This allows us to call our function whenever we please (rather than, say, having the function run on a schedule). Next, we’ll need to install a NuGet package into our project to let us use the Azure Storage Client:

Install-Package WindowsAzure.Storage

We need some access details from Azure:

Creating the Functions

We’re actually going to need two functions: update and retrieve (we won’t be using the retrieve in this post, but we’ll create it anyway). Let’s start with a helper method:

    public static class StorageAccountHelper
    {
        public static CloudStorageAccount Connect()
        {
            string accountName = Environment.GetEnvironmentVariable("StorageAccountName");
            string accountKey = Environment.GetEnvironmentVariable("StorageAccountKey");

            var storageAccount = new CloudStorageAccount(
                new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials(
                    accountName, accountKey), true);
            return storageAccount;
        }
    }

For testing purposes, add the account name and key into the local.settings.json:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "StorageAccountName": "pcmtest2",
    "StorageAccountKey": "C05h2SJNQOXE9xYRObGP5sMi2owfDy7EkaouClfeOSKRdijyTQPh1PIJgHS//kOJPK+Nl9v/9BlH4rleJ4UJ7A=="
  }
}

The values here are taken from above – where we copied the access keys from Azure (whilst these keys are genuine keys, they will have been changed by the time this post is published – so don’t get any ideas!)

First, let’s create a function to add a new high score:

        [FunctionName("AddHighScores")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            var newScore = new HighScore(req.Query["name"], int.Parse(req.Query["score"]));            

            var storageAccount = StorageAccountHelper.Connect();

            CloudTableClient client = storageAccount.CreateCloudTableClient();
            var table = client.GetTableReference("HighScore");

            await table.ExecuteAsync(TableOperation.InsertOrReplace(newScore));

            return new OkResult();
        }

If you’ve seen the default example of this function, it’s actually not that different: it’s a POST method, we take the name and score parameters from the query string, build up a record, and add the score. The function isn’t perfect: any conflicting names will result in an overwritten score, but this is a copy of a Spectrum game – so maybe that’s authentic!
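
The HighScore entity itself isn’t shown above; because it’s passed to TableOperation.InsertOrReplace, it needs to derive from TableEntity. A minimal sketch of what it might look like (the partition key value is an assumption):

    public class HighScore : TableEntity
    {
        // Parameterless constructor - required for deserialisation
        public HighScore() { }

        public HighScore(string name, int score)
        {
            PartitionKey = "HighScore"; // a single partition is fine at this scale
            RowKey = name;              // the user name acts as the key
            Score = score;
        }

        public int Score { get; set; }
    }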

The second function is to read them:

        [FunctionName("GetHighScores")]
        public static async Task<IList<HighScore>> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            var storageAccount = StorageAccountHelper.Connect();

            CloudTableClient client = storageAccount.CreateCloudTableClient();
            var table = client.GetTableReference("HighScore");
            var tq = new TableQuery<HighScore>();
            var continuationToken = new TableContinuationToken();
            var result = await table.ExecuteQuerySegmentedAsync(tq, continuationToken);
            
            return result.Results;
        }

All we’re really doing here is reading whatever’s in the table. This might not scale hugely well but, again, for testing, it’s fine. The one thing to note here is ExecuteQuerySegmentedAsync: there seems to be very little documentation on it, and what there is seems to refer to ExecuteQueryAsync (which, as far as I can tell, doesn’t exist – or, at least, no longer exists).
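One caveat: a single call to ExecuteQuerySegmentedAsync returns one segment (up to 1,000 entities). If the table ever grew beyond that, you’d need to loop on the continuation token; a sketch of what that might look like:

            var tq = new TableQuery<HighScore>();
            var allScores = new List<HighScore>();
            TableContinuationToken continuationToken = null;

            do
            {
                // Each call returns one segment, plus a token for the next (or null)
                var segment = await table.ExecuteQuerySegmentedAsync(tq, continuationToken);
                allScores.AddRange(segment.Results);
                continuationToken = segment.ContinuationToken;
            } while (continuationToken != null);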

Let’s run the Azure function locally and see what happens:

As you can see, Azure helpfully gives us some endpoints that we can use for testing. If you don’t have a copy already, then download Postman. Here you can create a request that calls the function.

I won’t go into the exact details of how Postman works, but the requests might look something like this:

http://localhost:7071/api/AddHighScores?name=test2&score=19
http://localhost:7071/api/GetHighScores?10

To prove to yourself that they are actually working, have a look in the table.

There is now an online Storage Explorer in the Azure Portal. Details of the desktop version can be found in this post.

Update High Score from the Application

Let’s call the method to add the high score when the player dies (as that’s the only time we know what the final score is):

playerDies() { 
    this.setState({
        playerLives: this.state.playerLives - 1,
        gameLoopActive: false
    });

    if (this.state.playerLives <= 0) {
        this.updateHighScore();
        this.initiateNewGame();
    } else {
        this.startLevel(this.state.level);
    }

    this.repositionPlayer();
    this.setState({ 
        playerCrashed: false,
        gameLoopActive: true
    });
}

The updateHighScore function looks like this:

updateHighScore() {
	fetch('http://localhost:7071/api/AddHighScores?name=' + this.state.username + '&score=' + this.state.score, {
		method: 'POST'
	}); 
}

Note (obviously) that here I’m updating using my locally running instance of the Azure Function.

And that’s it – we now have a score updating when the player dies. Next we need to display the high scores – that’ll be the next post.

References

https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch

https://facebook.github.io/react-native/docs/network

Setting up an Azure B2C Tenant

B2C is (one of) Microsoft’s offerings to allow us programmers to pass the business of managing log-ins and users over to people who want to be bothered with such things. This post contains very little code, but lots of pictures of configuration screens, which will probably be out of date by the time you read it.

A B2C set-up starts with a tenant. So the first step is to create one:

Select “Create a resource” and search for B2C:

Then select “Create”:

Now you can tell Azure what to call your B2C tenant:

It takes a while to create this, so probably go and get a brew at this stage. When this tenant gets created, it gets created outside of your Azure subscription; the next step is to link it to your subscription:

Once you have a tenant, and you’ve linked it to your subscription, you can switch to it:

If you haven’t done all of the above, but you’re scrolling down to see what the score is for an existing, linked subscription, remember that you need to be a Global Administrator for that tenant to do anything useful.

Once you’ve switched to your new tenant, navigate to the B2C:

Your first step is to tell the B2C tenant which application(s) will be using it. Select “Add” in “Applications”:

This also allows you to tell B2C where to send the user after they have logged in. In this case, we’re just using a local instance, so we’ll send them to localhost:

It doesn’t matter what you call the application, but you will need the Application ID and the key (secret), so keep a note of those:

You’ll need to generate the secret:

Policies

Policies allow you to tell B2C exactly how the user will register and log-in: do they just need an e-mail, or their name, or other information, what information should be available to the app after a successful log-in, and whether to use multi-factor authentication.

Add a policy:

Next, set-up the claims (these are the fields that you will be able to access from the application once you have a successful log-in):

Summary

That’s it – you now have a B2C tenant that will provide log-in capabilities. The next step is to add that to a web application.

References

https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-how-to-enable-billing

https://docs.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-tutorials-web-app

https://joonasw.net/view/aspnet-core-2-azure-ad-authentication

Playing with Azure Event Hub

I’ve recently been playing with the Azure Event Hub. This is basically a way of transmitting large amounts* of data between systems. In a later post, I may try and test these limits by designing some kind of game based on this.

As a quick disclaimer, it’s worth bearing in mind that I am playing with this technology, so much of the content of this post can be found in the links at the bottom – you won’t find anything original here, just a record of my findings. You may find more (and more accurate) information in those links.

Event Hub Namespace

The first step, as with many Azure services, is to create a namespace:

For a healthy amount of data transference, you’ll pay around £10 per month.

Finally, we’ll create an event hub within the namespace:

When you create the event hub, it asks how many partitions you need. This basically splits the message delivery, and it’s clever enough to work out that, if you have three partitions and two listeners, one listener should get two partitions, and the other, one:

We’ll need an access policy so that we have permission to listen:

New Console Apps

We’ll need to create two applications: a producer and a consumer.

Let’s start with a producer. Create a new console app and add the Microsoft.Azure.EventHubs NuGet package.

Here’s the code:

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

class Program
{
    private static EventHubClient eventHubClient;
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Publisher;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
 
    public static async Task Main(string[] args)
    {
        EventHubsConnectionStringBuilder connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
        {
            EntityPath = EhEntityPath
        };
 
        eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
 
        while (true)
        {
            Console.Write("Please enter message to send: ");
            string message = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(message)) break;
 
            await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
        }
 
        await eventHubClient.CloseAsync();
 
        Console.WriteLine("Press ENTER to exit.");
        Console.ReadLine();
    }
}
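
If you’re sending at high frequency, it’s worth knowing that SendAsync will also accept a collection of events, so several messages can be batched into a single call – a quick sketch:

// Batching several messages into one send reduces round-trips
var batch = new List<EventData>
{
    new EventData(Encoding.UTF8.GetBytes("message one")),
    new EventData(Encoding.UTF8.GetBytes("message two"))
};
await eventHubClient.SendAsync(batch);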

Consumer

Next we’ll create a consumer; so the first thing we’ll need is to grant permissions for listening:

We’ll create a second console application, with the same library, plus the processor library (Microsoft.Azure.EventHubs.Processor).

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

class Program
{
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Listener;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
    private const string StorageContainerName = "eventhub";
    private const string StorageAccountName = "pcmeventhubstorage";
    private const string StorageAccountKey = "key";
 
    private static readonly string StorageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", StorageAccountName, StorageAccountKey);
 
    static async Task Main(string[] args)
    {
        Console.WriteLine("Registering EventProcessor...");
 
        var eventProcessorHost = new EventProcessorHost(
            EhEntityPath,
            PartitionReceiver.DefaultConsumerGroupName,
            EhConnectionString,
            StorageConnectionString,
            StorageContainerName);
 
        // Registers the Event Processor Host and starts receiving messages
        await eventProcessorHost.RegisterEventProcessorAsync<EventsProcessor>();
 
        Console.WriteLine("Receiving. Press ENTER to stop worker.");
        Console.ReadLine();
 
        // Disposes of the Event Processor Host
        await eventProcessorHost.UnregisterEventProcessorAsync();
    }
}

class EventsProcessor : IEventProcessor
{
    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }
 
    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }
 
    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
        return Task.CompletedTask;
    }
 
    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }
 
        return context.CheckpointAsync();
    }
}

As you can see, we can now transmit data through the Event Hub into client applications:

Footnotes

*Large, in terms of frequency, rather than volume – for example, transmitting a small message twice a second, rather than uploading a petabyte of data.

References

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-receive-eph

What can you do with a logic app? Part three – Creating a Logic App Client

One of the things that is missing from Azure Logic Apps is the ability to integrate human interaction. Microsoft do have their own version of an interactive workflow (PowerApps), which is (obviously) far better than what you can produce by following this post.

In this post, we’ll create a very basic client for a logic app. Obviously, with some thought, this could easily be extended to allow a fully functional, interactive, workflow system.

Basic Logic App

Let’s start by designing our logic app. The app in question is going to be a very simple one: it will add a message to a logging queue (just so it has something to do), then ask the user a question – left or right – by putting a message onto a topic. Based on the user’s response, we’ll write a message to the queue saying either left or right. Let’s have a look at our Logic App design:

It’s worth pointing out a few things about this design:
1. The condition uses the expression base64ToString() to convert the encoded message into plain text.
2. Where the workflow picks up, it uses a peek-lock, and then completes the message at the end. It looks like it’s a ‘feature’ of logic apps that an automatic complete on this trigger will not actually complete the message (plus, this is actually a better design).

Queues and Topics

The “Log to message queue” action above is putting an entry into a queue; so a quick note about why we’re using a queue for logging, and a topic for the interaction with the user. In a real-life version of this system, we might have many users, but they might all want to perform the same action. Let’s say that they are all part of a sales process, and the actions are actually actions along that process; adding these to a queue maintains their sequence. Here’s the queue and topic layout that I’m using for this post:

Multiple Triggers

As you can see, we actually have two triggers in this workflow. The first starts the workflow (so we’ll drop a message into the topic to start it), and the second waits for a second message to go into the topic.

To add a trigger part way through the workflow, simply add an action, search and select “Triggers”:

Because we have a trigger part way through the workflow, what we have effectively issued here is an await statement. Once a message appears in the subscription, the workflow will continue where it left off:

As soon as a message is posted, the workflow carries on:

Client Application

For the client application, we could simply use the Service Bus Explorer (in fact, the screenshots above were taken from using this to simulate messages in the topic). However, the point of this post is to create a client, and so we will… although we’ll just create a basic console app for now.

We need the client to do two things: read from a topic subscription, and write to a topic. I haven’t exactly been here before, but I will be heavily plagiarising from here, here, and here.

Let’s create a console application:

Once that’s done, we’ll need the Service Bus client library (Microsoft.Azure.ServiceBus): install it from here.

The code is generally quite straightforward, and looks a lot like the code to read and write to queues. The big difference is that you don’t read from a topic, but from a subscription to a topic (a topic can have many subscriptions):

using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

class Program
{
    
    static async Task Main(string[] args)
    {
        MessageHandler messageHandler = new MessageHandler();
        messageHandler.RegisterToRead("secondstage", "sub1");
 
        await WaitForever();
    }
 
    private static async Task WaitForever()
    {
        while (true) await Task.Delay(5000);
    }
}
public class MessageHandler
{
    private string _connectionString = "service bus connection string details";
    private ISubscriptionClient _subscriptionClient;
    public void RegisterToRead(string topicName, string subscriptionName)
    {            
        _subscriptionClient = new SubscriptionClient(_connectionString, topicName, subscriptionName);
 
        MessageHandlerOptions messageHandlerOptions = new MessageHandlerOptions(ExceptionReceived)
        {
            AutoComplete = false,
            MaxAutoRenewDuration = new TimeSpan(1, 0, 0)
        };
 
        _subscriptionClient.RegisterMessageHandler(ProcessMessage, messageHandlerOptions);
 
    }
 
    private async Task ProcessMessage(Message message, CancellationToken cancellationToken)
    {
        string messageText = Encoding.UTF8.GetString(message.Body);
 
        Console.WriteLine(messageText);
        string leftOrRight = Console.ReadLine();
 
        await _subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
 
        await SendResponse(leftOrRight, "userinput");
    }
 
    private async Task SendResponse(string leftOrRight, string topicName)
    {
        TopicClient topicClient = new TopicClient(_connectionString, topicName);
        Message message = new Message(Encoding.UTF8.GetBytes(leftOrRight));
        await topicClient.SendAsync(message);
    }
 
    private Task ExceptionReceived(ExceptionReceivedEventArgs arg)
    {
        Console.WriteLine(arg.Exception.ToString());
        return Task.CompletedTask;
    }
}

If we run it, then when the logic app reaches the second trigger, we’ll get a message from the subscription and be asked for directions:

Based on the response, the logic app will execute either the right or left branch of code.

Summary

Having worked with workflow systems in the past, one recurring feature of them is that they start to get used for things that don’t fit into a workflow, resulting in a needlessly over-complex system. I imagine that Logic Apps are no exception to this rule, and in 10 years’ time, people will roll their eyes at how Logic Apps have been used where a simple web service would have done the whole job.

The saving grace here is source control. The workflow inside a Logic App is simply a JSON file, and so it can be source controlled, added to a CI pipeline, and all the good things that you might expect. Whether or not a more refined version of what I have described here makes any sense is another question.

There are many downsides to this approach: firstly, you are fighting against the Service Bus by asking it to wait for input (that part is a very fixable problem, with a bit of an adjustment to the messages); secondly, you would presumably need some form of timeout (again, a fixable problem that will probably feature in a future post). The biggest issue here is that you are likely introducing complex conditional logic with no way to unit test; this isn’t, per se, fixable; however, you can introduce some canary logic (again, this will probably be the subject of a future post).

References

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config

https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions

https://stackoverflow.com/questions/28127001/the-lock-supplied-is-invalid-either-the-lock-expired-or-the-message-has-alread

What can you do with a logic app? Part Two – Use Excel to Manage an E-mail Notification System

In this post I started a series covering different scenarios in which you might use an Azure Logic App, and how you might go about it. In this, the second post, we’re going to set up an Excel spreadsheet that allows you to simply add a row to an Excel table and have a logic app act on that row.

So, we’ll set up a basic spreadsheet with an e-mail address, subject, text, and a date we want it to send; then we’ll have the logic app send the next eligible mail in the list, and mark it as sent.

Spreadsheet

I’ll first state that I do not have an Office 365 subscription, and nothing that I do here will require one. We’ll create the spreadsheet in Office Online. Head over to OneDrive (if you don’t have a OneDrive account, then they are free) and create a new spreadsheet:

In the spreadsheet, create a new table – just enter some headers (like below) and then highlight the columns and “Insert Table”:

Remember to check “My Table Has Headers”.

Now enter some data:

Create the Logic App

In this post I showed how you can use Visual Studio to create and deploy a logic app; we’ll do that here:

Once we’ve created the logic app, we’ll need to create an action that will get the Excel file that we created; in this case, “List rows present in a table”:

This also requires that we specify the table (if you’re using the free online version of Excel then you’ll have to live with the table name you’re given):

Loop

This retrieves a list of rows, and so the next step is to iterate through them one-by-one. We’ll use a For-Each:

Conditions

Okay, so we’re now looking at every row in the table; but we don’t want every row – we only want the ones that have not already been sent, and that are due to be sent (so the date is either today, or earlier). We can use a conditional statement for this:

But we have two problems:

  • Azure Logic Apps are very bad at handling dates – that is to say, they don’t
  • There is currently no way in an Azure Logic App to update an Excel spreadsheet row (you can add and delete only)

The former is easily solved, and the way I elected to solve the latter is to simply delete the row instead of updating it. It is possible to simply delete the current row, and add it back with new values; however, we won’t bother with that here.

Back to the date problem; what we need here is an Azure function…

Creating an Azure Function

Here is the code for our function (see here for details of how to create one):

        [FunctionName("DatesCompare")]
        public static IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequest req, TraceWriter log)
        {
            log.Info("C# HTTP trigger function processed a request.");

            string requestBody = new StreamReader(req.Body).ReadToEnd();
            return ParseDates(requestBody);

        }

        public static IActionResult ParseDates(string requestBody)
        {
            dynamic data = JsonConvert.DeserializeObject(requestBody);

            DateTime date1 = (DateTime)data.date1;
            DateTime date2 = DateTime.FromOADate((double)data.date2);

            int returnFlagIndicator = 0;
            if (date1 > date2)
            {
                returnFlagIndicator = 1;
            }
            else if (date1 < date2)
            {
                returnFlagIndicator = -1;
            }

            return (ActionResult)new OkObjectResult(new
            {
                returnFlag = returnFlagIndicator
            });
        }

There’s a few points to note about this code:
1. The date coming from Excel extracts as a double, which is why we need to use FromOADate.
2. The reason to split the function up is so that the main logic can be more easily unit tested (see the sketch below). If you ever need a reason for unit testing, then try to work out why an Azure function isn’t working inside a logic app!
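
As an illustration, a test for ParseDates might look something like this (xUnit; the containing class name DateFunctions is an assumption, and 43831 is the OADate for 01/01/2020):

        [Fact]
        public void ParseDates_FirstDateLater_ReturnsOne()
        {
            // date2 arrives as an OADate double, exactly as Excel supplies it
            string requestBody = "{ \"date1\": \"2020-01-02\", \"date2\": 43831.0 }";

            var result = (OkObjectResult)DateFunctions.ParseDates(requestBody);

            // The anonymous result object exposes a single returnFlag property
            var returnFlag = (int)result.Value.GetType()
                .GetProperty("returnFlag").GetValue(result.Value);
            Assert.Equal(1, returnFlag);
        }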

The logic around this function looks like this:

We build up the request body with the information that we have, and then parse the output. Finally, we can check if the date is in the past and then send the e-mail:

Lastly, as we said earlier, we’ll delete the row to ensure that the e-mail is only sent once:

The eagle-eyed and sane amongst you will notice that I’ve used the subject as a key. Don’t do this – it’s very bad practice!

References

https://github.com/Azure/azure-functions-host/wiki/Azure-Functions-runtime-2.0-known-issues

What can you do with a logic app? Part One – Send tweets at random intervals based on a defined data set

I thought I’d start another of my patented series. This one is about finding interesting things that can be done with Azure Logic Apps.

Let’s say, for example, that you have something that you want to say; for example, if you were Richard Dawkins or Ricky Gervais, you might want to repeatedly tell everyone that there is no God; or if you were Google, you might want to tell everyone how .Net runs on your platform; or if you were Microsoft, you might want to tell people how it’s a “Different Microsoft” these days.

The thing that I want to repeatedly tell everyone is that I’ve written some blog posts. For this purpose, I’m going to set up a logic app that, based on a random interval, sends a tweet from my account (https://twitter.com/paul_michaels), informing people of one of my posts. It will get this information from a simple Azure storage table; let’s start there: first, we’ll need a storage account:

Then a table:

We’ll enter some data using Storage Explorer:

I entered a few records (three in this case – the train journey would need to be across Russia or something for me to fill my entire back catalogue in manually; I might come back and see about automatically scraping this data from WordPress one day).

In order to create our logic app, we need a single piece of custom logic. As you might expect, there’s no randomised action or result, so we’ll have to create that as a function:

For Logic App integration, a Generic WebHook seems to work best:

Here’s the code:

#r "Newtonsoft.Json"
using System;
using System.Net;
using Newtonsoft.Json;
static Random _rnd;

public static async Task<object> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info($"Webhook was triggered!");
    if (_rnd == null) _rnd = new Random();
    string rangeStr = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "range", true) == 0)
        .Value;
    int range = int.Parse(rangeStr);
    int num = _rnd.Next(1, range + 1); // a number between 1 and range, inclusive
    var response = req.CreateResponse(HttpStatusCode.OK, new
    {
        number = num
    });
    return response;
}

Okay – back to the logic app. Let’s create one:

The logic app that we want will be (currently) a recurrence; let’s start with every hour (if you’re following along then you might need to adjust this while testing – be careful, though, as it will send a tweet every second if you tell it to):

Add the function:

Add the input variables (remember that the parameters read by the function above are passed in via the query):

One thing to realise about Azure functions is that they rely heavily on passing JSON around. For this purpose, you’ll use the JSON Parser action a lot. My advice would be to name them sensibly, and not “Parse JSON” and “Parse JSON 2” as I’ve done here:

The JSON Parser action requires a schema – that’s how it knows what your data looks like. You can get the schema by selecting the option to use a sample payload, and just grabbing the output from above (when you tested the function – if you didn’t test the function then you obviously trust me far more than you should and, as a reward, you can simply copy the output from below):

That will then generate a schema for you:

Note: if you get the schema wrong then the run report will error, but it will give you a dump of the JSON that it had – so another approach would be to enter anything and then take the actual JSON from the logs.

Next, we’ll add a condition based on the output. Now that we’ve parsed the JSON, “number” (the output from the previous step) is available:

So, we’ll check if the number is 1 – meaning there’s a 1 in 10 chance that the condition will be true. We don’t care if it’s false, but if it’s true then we’ll send a tweet. Before we do that, though, we need to go to the data table and find out what to send. Inside the “true” branch, we’ll add an “Azure Table Storage – Get Entities” call:

This asks you for a storage connection (the name is just for you to name the connection to the storage account). Typically, after getting this data, you would use a For Each to run through the entries. Because there is currently no way to count the entries in the table, we’ll iterate through each entry, but we’ll do it slowly, and we’ll use our random function to ensure that not all of them are sent.

Let’s start with not sending all items:

All the subsequent logic is inside the true branch. The next thing is to work out how long to delay:

Now that we have a number between 1 and 60, we can wait for that length of time:

The next step is to send the tweet, but because we need specific data from the table, it’s back to our old friend: Parse JSON (it looks like every Workflow will contain around 50% of these parse tasks – although, obviously, you could bake this sort of thing into a function).

To get the data for the tweet, we’ll need to parse the JSON for the current item:

Once you’ve done this, you’ll have access to the parts of the record and can add the Tweet action:

And we have a successful run… and some tweets: