Tag Archives: c#

Predicting Football Results Using ML.Net

Have you ever wondered why milk is where it is in the supermarket? At least in the UK, the supermarkets sell milk either at cost, or even below cost, in order to attract people into the store: they don’t want you to just buy milk, because they lose money on it. You’ll know the stuff they want you to buy, because it’s at eye level, and it’s right in front of you when you walk in the door.

ML.NET is an open source machine learning platform. As with many things that Microsoft are the guardian of, they want to sell you Azure time, and so this is just another pint of milk at the back of the shop. Having said that – it’s pretty good milk!

In this post, I’m going to set up a very simple test. I’ll be using this file, which shows the results of the English Premier League from 2018-2019. I’m not a huge football fan myself, but it was the only data I could find at short notice.

Add ML.NET

ML.NET is in preview, so the first step is to add the feature. Oddly, it’s under the “Cross Platform Development” workload:

Once you’ve added this, you may reasonably expect something to change. Something does: you’ll see a new context menu when you right-click a project, but it won’t do anything yet. This is, bizarrely, because you also need to explicitly enable preview features; under Tools -> Options, you’ll find this menu:

Let’s create a new console application; then right click on the project:

You’re now given a list of “scenarios”:

For our purpose, let’s select “Value prediction”. We’re going to try to predict the number of goals for the home team, based on the shots on goal. Just select the file as input data and the column to predict as home_team_goal_count:

For the input, just select home_team_shots, and then Train:

It asks you for a time here. The longer you give it, the better the model – although there will be a point at which additional time won’t make any difference. You should be able to get a reasonable prediction after 10 seconds, but I’ve picked 60 to see how good a prediction it can make. As someone who knows nothing about football, I would expect these figures to be an almost direct correlation.

Once you’ve finished training the model, you can Evaluate it:

So, it would appear that with 9 shots at goal, I can expect that a team will score between 1 and 2. If I now click the code button, ML.NET will create two new projects for me, including a new Console application; which looks like this:

        static void Main(string[] args)
        {
            // Create single instance of sample data from first line of dataset for model input
            ModelInput sampleData = new ModelInput()
            {
                Home_team_shots = 9F,
            };

            // Make a single prediction on the sample data and print results
            var predictionResult = ConsumeModel.Predict(sampleData);

            Console.WriteLine("Using model to make single prediction -- Comparing actual Home_team_goal_count with predicted Home_team_goal_count from sample data...\n\n");
            Console.WriteLine($"Home_team_shots: {sampleData.Home_team_shots}");
            Console.WriteLine($"\n\nPredicted Home_team_goal_count: {predictionResult.Score}\n\n");
            Console.WriteLine("=============== End of process, hit any key to finish ===============");
            Console.ReadKey();
        }

Let’s modify this slightly, so that we can simply ask it to predict the goal count:

        static void Main(string[] args)
        {
            Console.WriteLine("Enter shots at goal: ");
            string shots = Console.ReadLine();
            if (int.TryParse(shots, out int shotsNum))
            {
                PredictGoals(shotsNum);
            }
        }

        private static void PredictGoals(int shots)
        {
            // Create single instance of sample data from first line of dataset for model input
            ModelInput sampleData = new ModelInput()
            {
                Home_team_shots = shots,
            };

            // Make a single prediction on the sample data and print results
            var predictionResult = ConsumeModel.Predict(sampleData);

            Console.WriteLine("Using model to make single prediction -- Comparing actual Home_team_goal_count with predicted Home_team_goal_count from sample data...\n\n");
            Console.WriteLine($"Home_team_shots: {sampleData.Home_team_shots}");
            Console.WriteLine($"\n\nPredicted Home_team_goal_count: {predictionResult.Score}\n\n");
            Console.WriteLine("=============== End of process, hit any key to finish ===============");
            Console.ReadKey();
        }

And now, we can get a prediction from the app:

29 shots at goal result in only 2 – 3 goals. We can glance at the spreadsheet to see how accurate this is:

It appears it is actually quite accurate!

Debugging a Failed API Request, and Defining an Authorization Header Using Fiddler Everywhere

Postman is a really great tool, but I’ve recently been playing with Telerik’s new version of Fiddler – Fiddler Everywhere. You’re probably thinking that these are separate tools, that do separate jobs… and before Fiddler Everywhere, you’d have been right. However, have a look at this screen:

…It looks like Postman; in fact, it’s not. The previous version of this tool (called Compose) in Fiddler 4 was pretty clunky – but now you can simply right-click on a request and select “Edit in Compose”.

Use Case

Let’s imagine, for a minute, that you’ve made a request and got a 401 because the authentication header wasn’t set. You can open that request in Fiddler:

In fact, this returns a nice little web snippet, which we can see by selecting the Web tab:

The error:

The request requires HTTP authentication

This error means that authentication details have not been passed to the API; typically, these can be passed in the header, in the form:

Authorization: Basic EncodedUsernamePassword

So, let’s try re-issuing that call in Fiddler – starting with the encoding. Visit this site and, in the Encode section, enter your username and password in the form:

Username:password

For example:

In Fiddler, right click on the request in question, and select to Edit in Compose. You should now see the full request, and be able to edit any part of it; for example, you can add an Authorization header:

Now that you’ve proved that works, you can make the appropriate change in the code – here’s what that looks like in C#:

            byte[] bytes = Encoding.UTF8.GetBytes($"{username}:{password}");
            var auth = Convert.ToBase64String(bytes);

            var client = _httpClientFactory.CreateClient();
            client.DefaultRequestHeaders.Add("Authorization", $"Basic {auth}");

Downloading from an SFTP site using SSH.Net

I’ve done this a few times, and have failed to document it, and so each time it’s a pain. To be clear, if you’re downloading from FTP, you should have a look here: it’s an excellent, and simple code snippet that will do the job for you.

However, this won’t work with SFTP. Having looked into this, it seems there are basically two options: Chilkat, if you have some money to spend, and SSH.NET if you don’t. I actually implemented Chilkat before realising it was commercial – it’s a much easier experience, and it’s commercially supported. I’m not being paid by them to say this, and you can easily get by with SSH.NET (in fact, that’s the subject of this post); but there are advantages to going with a commercial option.

Using SSH.NET

The latest version of SSH.NET was released in 2016. There does appear to be an update being worked on, but the NuGet package (at the time of writing) is from 2016:

Install-Package SSH.NET

There’s some pretty good documentation on the GitHub site, and the two links in the references offer wrapper implementations. What’s here is not really any better than what’s there, but I hadn’t seen a post with the code in (plus, I like to have these things documented in my own words).

Client

The first thing you’ll need for each call is a client; I’ve separated mine into a method:

SftpClient GetClient()
{
    var connectionInfo = new PasswordConnectionInfo(url, port, username, password);

    var client = new SftpClient(connectionInfo);
    client.Connect();
    return client;
}

If you’re not sure what your port is, it’s probably 22, although I can’t help with the rest. We’re going to cover 5 basic methods here: List, Upload, Download, Read and Delete.

List

        IEnumerable<SftpFile> ListFiles(string directory)
        {
            using var client = GetClient();
            try
            {                
                return client.ListDirectory(directory);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

There’s not much to explain here – ListDirectory returns a list of SftpFiles. The parameter directory is the directory on the remote server; if you want to access the base directory, then directory = “.”. It’s worth looking at the finally block, though. You should disconnect the client when you’re done.

Upload

        void UploadFile(string localPath, string remotePath)
        {
            using var client = GetClient();
            try
            {
                using var s = File.OpenRead(localPath);
                client.UploadFile(s, remotePath);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

Again, not much here: simply creating a stream, and passing it to client.UploadFile().

Download

This is basically the reverse of UploadFile. In this case, we create the stream locally and download to it:

        void DownloadFile(string remotePath, string localPath)
        {
            using var client = GetClient();
            try
            {
                using var s = File.Create(localPath);
                client.DownloadFile(remotePath, s);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

Read

The Read functionality is, perhaps, the most trivial, and the most useful:

        string ReadFile(string remotePath)
        {
            using var client = GetClient();
            try
            {                
                return client.ReadAllText(remotePath);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

Depending on your use case, this might be all you need.
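For example, if the remote file is a small CSV, the string that comes back can be parsed directly, with no temporary file involved (the file content below is made up, standing in for the result of ReadFile):

```csharp
using System;
using System.Linq;

// Hypothetical content - in practice this would be: var text = ReadFile("export.csv");
string text = "date,amount\n2020-01-01,10\n2020-01-02,25";

// Skip the header row and split each line into fields
var rows = text
    .Split('\n', StringSplitOptions.RemoveEmptyEntries)
    .Skip(1)
    .Select(line => line.Split(','))
    .ToArray();

Console.WriteLine(rows.Length);   // 2
Console.WriteLine(rows[1][1]);    // 25
```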

Delete

Finally, the Delete method:

        void DeleteFile(string remotePath)
        {
            using var client = GetClient();
            try
            {
                client.DeleteFile(remotePath);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

Summary

You might be wondering what the purpose of these wrapper functions is: they do little more than call the underlying SSH.NET library. The only reason I can give, other than that they provide some reusable documentation, is that one day a new version of SSH.NET might be out (or you may choose to switch to Chilkat). Having already done the opposite, I can attest to how much easier that is if you’re not picking through the main code trying to extract pieces of SSH.NET.
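If you do want to insulate the rest of the code, one option is to put the wrapper methods behind an interface (this is a sketch – the interface name and shape here are my own, not part of SSH.NET), so that a Chilkat implementation, or a future SSH.NET version, can be swapped in without touching the callers:

```csharp
using System.Collections.Generic;

// Hypothetical abstraction over whichever SFTP library is in use
public interface ISftpService
{
    IEnumerable<string> ListFiles(string directory);
    void UploadFile(string localPath, string remotePath);
    void DownloadFile(string remotePath, string localPath);
    string ReadFile(string remotePath);
    void DeleteFile(string remotePath);
}

// An in-memory fake - callers only ever see the interface
public class InMemorySftpService : ISftpService
{
    private readonly Dictionary<string, string> _files = new Dictionary<string, string>();

    public IEnumerable<string> ListFiles(string directory) => _files.Keys;

    // The fake stores the local path as the "content" - enough for wiring tests
    public void UploadFile(string localPath, string remotePath) => _files[remotePath] = localPath;

    public void DownloadFile(string remotePath, string localPath) { /* no-op in the fake */ }

    public string ReadFile(string remotePath) => _files[remotePath];

    public void DeleteFile(string remotePath) => _files.Remove(remotePath);
}
```

A pleasant side effect is that the in-memory fake doubles as a test stand-in for anything that consumes the SFTP service.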

References

https://github.com/dotnet-labs/SftpService/blob/master/SFTPService/SftpService.cs

https://github.com/jorgepsmatos/SftpClientDemo/blob/master/Sftp.cs

Manually Creating a Test Harness in .Net

Let me start by saying that this post isn’t intended to try to replace Selenium, or Cypress, or whatever UI testing tools you may choose to use. In fact, it’s something that I did for manual testing, although it’s not difficult to imagine introducing some minor automation.

The Problem

Imagine that you have a solution that requires some data – perhaps it requires a lot of data, because you’re testing some specific performance issue, or perhaps you just want to see what the screen looks like when you have a lot of data. Let’s also imagine that you’re repeatedly running your project for one reason or another, and adding data, or whatever.

My idea here was that I could create a C# application that scripts this process, but because it’s an internal application, I could give it access to the data layer directly.

The Solution

Basically, the solution to this (and to many things) was a console app. Let’s take a solution that implements a very basic service / repository pattern:

From this, we can see that we have a pretty standard layout, and essentially, what we’re trying to do is insert some data into the database. It’s a bonus if we can add some test coverage while we’re at it (manual test coverage is still test coverage – it just doesn’t show up on your stats). So, if you’re using a REST type pattern, you might want to use the controller endpoints to add the data; but for my purpose, I’m just going to add the data directly into the data access layer.

Let’s see what the console app looks like:

        static async Task Main(string[] args)
        {
            IConfiguration configuration = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json", true, true)
                .Build();

            // Ensure the DB is populated
            var dataAccess = new TestDataAccess(configuration.GetConnectionString("ConnectionString"));
            if (dataAccess.GetDataCount() == 0)
            {
                var data = new List<MyData>();

                // Generate 100 items of data
                for (int j = 0; j < 100; j++)
                {
                    var dataItem = CreateTestItem();
                    data.Add(dataItem);
                }
                dataAccess.AddDataRange(data);
            }

            // Launch the site            
            await RunCommand("dotnet", "build");
            await RunCommand("dotnet", "run", 5);

            System.Diagnostics.Process.Start(@"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe", @"https://localhost:5001");
        }

Okay, so let’s break this down: there are essentially three sections to this: configuration, adding the data, and running the app.

Configuration

We’ll start with the configuration:

        IConfiguration configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", true, true)
            .Build();

        // Ensure the DB is populated
        var dataAccess = new TestDataAccess(configuration.GetConnectionString("ConnectionString"));

Because we’re using a console app, we’ll need to get the configuration; you could copy the appsettings.json, but my suggestion would be to add a link; that is, add an existing item, and select that item from the main project, then choose to “Add As Link” (this is not a new feature):

This means that you’ll be able to change the config file, and it will be reflected in the test harness.
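In the test harness’s csproj, the linked file ends up looking something like this (the relative path is hypothetical – it depends on your solution layout):

```xml
<ItemGroup>
  <!-- Linked, not copied: edits to the main project's appsettings.json are picked up -->
  <Content Include="..\MyApp.App\appsettings.json" Link="appsettings.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```

The PreserveNewest entry ensures the linked file is copied to the output folder on build, so the ConfigurationBuilder can find it at runtime.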

Creating the data

There’s not too much point in me covering what’s behind TestDataAccess – suffice to say that it encapsulates the data access layer; which, as a minimum, requires the connection string.

It’s also worth pointing out that we check whether there is any data there before running it. Depending on your specific use-case, you may choose to remove this.

Building, running, and launching the app

Okay, so we’ve now added our data, we now want to build the main application – thanks to the command line capabilities of .Net Core, this is much simpler than it was when we used to have to try and wrangle with MSBuild!

    // Launch the site            
    await RunCommand("dotnet", "build");
    await RunCommand("dotnet", "run", 5);

    await RunCommand(@"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe", @"https://localhost:5001");

RunCommand probably needs a little more attention, but before we look at that, let’s think about what we’re trying to do:

1. Build the application
2. When the application has built, run the application
3. Once the application is running, navigate to the site

RunCommand looks like this:

        private static async Task RunCommand(string command, string? args = null, int? waitSecs = -1)
        {
            Console.WriteLine($"Executing command: {command} (args: {args})");

            var proc = new System.Diagnostics.Process();
            proc.StartInfo.WorkingDirectory = @"..\..\..\..\MyApp.App";
            proc.StartInfo.FileName = command;
            proc.StartInfo.Arguments = args ?? string.Empty;

            proc.Start();

            if ((waitSecs ?? -1) == -1)
            {
                // No timeout specified - block until the process exits
                proc.WaitForExit();
            }
            else
            {
                // Zero means fire-and-forget; otherwise wait the requested number of seconds
                if (waitSecs.Value == 0) return;
                await Task.Delay(waitSecs.Value * 1000);
            }
        }

“But this looks inordinately complicated for a simple wrapper for running a process!” I hear you say.

It is a bit, but the essence is this: when running the build command, we need to wait for it to complete; when running the run command, we can’t wait for it to complete (because it never will), but we do need to give it time to start before moving on; and when we launch the site, we don’t really care whether it waits or not.

Summary

I appreciate that some of you may be regretting spending the time reading through this, as all I’ve essentially done is script some data creation and run an application; but I imagine there are some people out there, like me, that want to see (visually) what their app looks like with different data shapes.

Using HttpClientFactory

In .Net Framework (prior to .Net Core), HttpClient was something of a pain: you essentially kept it around in a static variable. If you didn’t, and used too many of them, you could end up with issues such as socket exhaustion.

I think most people got around this by implementing their own version of a HttpClientFactory… but now you don’t need to.

In this post, I’m covering the simplest possible use case inside an Asp.Net Core application. The articles linked at the bottom of this post go into much more detail.

Usage

The first step to using the HttpClientFactory is to add it to the IoC:

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllersWithViews();
            services.AddHttpClient();
        }

Then, simply inject it into your controller; for example:

        public HomeController(ILogger<HomeController> logger,
            IHttpClientFactory clientFactory)
        {
            _logger = logger;
            _clientFactory = clientFactory;
        }

Finally, you request a client like this:

var client = _clientFactory.CreateClient();

Then you can use it as a normal HttpClient:

var result = await client.GetAsync($"https://localhost:22222/endpoint");
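The factory gives you a fresh HttpClient each time, so mutating DefaultRequestHeaders is safe; but where a header belongs to a single call only, it can be cleaner to build an HttpRequestMessage and send that instead (the endpoint and credentials below are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;

var request = new HttpRequestMessage(HttpMethod.Get, "https://localhost:22222/endpoint");

// Scoped to this request only, rather than to the client instance
request.Headers.Authorization = new AuthenticationHeaderValue("Basic", "dXNlcjpwYXNz");

Console.WriteLine(request.Headers.Authorization);   // Basic dXNlcjpwYXNz
// then: var result = await client.SendAsync(request);
```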

References

https://docs.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-3.1

https://www.stevejgordon.co.uk/introduction-to-httpclientfactory-aspnetcore

Calling an Azure Signalr Instance from an Azure function

I’ve been playing around with the Azure Signalr Service. I’m particularly interested in how you can bind to this from an Azure function. Imagine the following scenario:

You’re sat there on your web application, and I press a button on my console application, and you suddenly get notified. It’s actually remarkably easy to set up (although there are definitely a few little things that can trip you up – many thanks to Anthony Chu for his help with some of those!)

If you want to see the code for this, it’s here.

Create an Azure Signalr Service

Let’s start by setting up an Azure Signalr service:

You’ll need to configure a few things:

The pricing tier is your call, but obviously, free is less money than … well, not free! The region should be wherever you plan to deploy your function / app service to, although I won’t actually deploy either of those in this post, and the ServiceMode should be Serverless.

Once you’ve created that, make a note of the connection string (accessed from Keys).
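The function app (created below) will read this connection string from configuration; by default, the SignalR binding looks for a setting named AzureSignalRConnectionString, so for local development it goes in local.settings.json (the values here are placeholders):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "AzureSignalRConnectionString": "<connection string from Keys>"
  }
}
```

The web app, meanwhile, reads its connection string from configuration under Azure:SignalR:ConnectionString by default when you call AddAzureSignalR() with no arguments.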

Create a Web App

Follow this post to create a basic web application. You’ll need to change the startup.cs as follows:

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddSignalR().AddAzureSignalR();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }

            //app.UseDefaultFiles();
            //app.UseStaticFiles();

            app.UseFileServer();
            app.UseRouting();
            app.UseAuthorization();

            app.UseEndpoints(routes =>
            {
                routes.MapHub<InfoRelay>("/InfoRelay");
            });
        }

Next, we’ll need to change index.html:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title></title>
    
    <script src="lib/@microsoft/signalr/dist/browser/signalr.js"></script>
    <script src="getmessages.js" type="text/javascript"></script>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css">

</head>
<body>
    <div class="container">
        <div class="row">
            <div class="col-2">
                <h1><span class="label label-default">Message</span></h1>
            </div>
            <div class="col-4">
                <h1><span id="messageInput" class="label label-default"></span></h1>
            </div>
        </div>
        <div class="row">&nbsp;</div>
    </div>
    <div class="row">
        <div class="col-12">
            <hr />
        </div>
    </div>
</body>

</html>

The signalr package that’s referenced is an npm package:

npm install @microsoft/signalr

Next, we need the getmessages.js:

function bindConnectionMessage(connection) {
    var messageCallback = function (name, message) {
        if (!message) return;

        console.log("message received:" + message.Value);

        const msg = document.getElementById("messageInput");
        msg.textContent = message.Value;
    };
    // Create a function that the hub can call to broadcast messages.
    connection.on('broadcastMessage', messageCallback);
    connection.on('echo', messageCallback);
    connection.on('receive', messageCallback);
}

function onConnected(connection) {
    console.log("onConnected called");
}

var connection = new signalR.HubConnectionBuilder()
    .withUrl('/InfoRelay')
    .withAutomaticReconnect()
    .configureLogging(signalR.LogLevel.Debug)
    .build();

bindConnectionMessage(connection);
connection.start()
    .then(function () {
        onConnected(connection);
    })
    .catch(function (error) {
        console.error(error.message);
    });

The automatic reconnect and logging are optional (although, at least while you’re developing, I would strongly recommend the logging).

Functions App

Oddly, this is the simplest of all:

    public static class Function1
    {       
        [FunctionName("messages")]
        public static Task SendMessage(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post")] object message,
            [SignalR(HubName = "InfoRelay")] IAsyncCollector<SignalRMessage> signalRMessages)
        {
            return signalRMessages.AddAsync(
                new SignalRMessage
                {
                    Target = "broadcastMessage",
                    Arguments = new[] { "test", message }
                });
        }
    }

The big thing here is the binding – the SignalR output binding allows the function to send the message to the hub (specified in HubName). Also, pay attention to the Target – this needs to match the event that the JS code is listening for (in this case: “broadcastMessage”).

Console App

Finally, we can send the initial message to set the whole chain off – the console app code looks like this:

        static async Task Main(string[] args)
        {
            Console.WriteLine($"Press any key to send a message");
            Console.ReadLine();

            HttpClient client = new HttpClient();
            string url = "http://localhost:7071/api/messages";
            
            HttpContent content = new StringContent("{\"Value\": \"Hello\"}", Encoding.UTF8, "application/json");

            HttpResponseMessage response = await client.PostAsync(url, content);
            string results = await response.Content.ReadAsStringAsync();

            Console.WriteLine($"results: {results}");
            Console.ReadLine();
        }

So, all we’re doing here is invoking the function.

Now when you run this (remember that you’ll need to run all three projects), press enter in the console app, and you should see the “Hello” message pop up on the web app.

References

https://docs.microsoft.com/en-us/aspnet/core/signalr/javascript-client?view=aspnetcore-3.1

https://docs.microsoft.com/en-us/aspnet/core/signalr/dotnet-client?view=aspnetcore-3.1&tabs=visual-studio

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-signalr-service?tabs=csharp

Policy Server with Asp.Net Core

It took me a while to collate my notes for this; hopefully this will be the first in a vaguely related series on using Policy Server.

Policy Server is a product from a company called Solliance, allowing you to control authorisation within your application (as opposed to authentication – the same company produces a sister product, called Identity Server). The idea is that the user is authenticated to determine they are who they say they are, but then authorised to use certain parts of the application, or to do certain things.

This post discusses setting up the open source version of Policy Server. There is a commercial version of this.

The ReadMe on the above GitHub page is quite extensive; however, when I was trying this out, there were some areas that I thought weren’t too clear. This post will explain them to me in the future, should I become unclear again (a common occurrence when you get to a certain age!).

It’s worth bearing in mind that authorisation and authentication need to be done on the server; whilst you may choose to also do that on the client, the code on the client is, well, on the client! If you have some Javascript code sat in the web browser that’s stopping me from seeing the defence plans, then that’s just not secure enough.

Installing and Configuring Policy Server

The first step here is to install the NuGet package:

Install-Package PolicyServer.Local

Next, add Policy Server in startup.cs:

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);

        // In production, the React files will be served from this directory
        services.AddSpaStaticFiles(configuration =>
        {
            configuration.RootPath = "ClientApp/build";
        });

        services.AddPolicyServerClient(Configuration.GetSection("Policy"));
    }

Amazingly, at this stage, you’re basically done. The next step is to add the rules to appsettings.json. There is an example on the page above, but here’s a much simpler one:

  "Policy": {
    "roles": [
      {
        "name": "salesmanager",
        "subjects": [ "1" ]
      },
      {
        "name": "driver",
        "subjects": [ "2" ]
      }
    ],
    "permissions": [
      {
        "name": "ViewRoutes",
        "roles": [ "driver" ]
      },
      {
        "name": "CreateSalesOrder",
        "roles": [ "salesmanager" ]
      },
      {
        "name": "Timesheet",
        "roles": [ "salesmanager", "driver" ]
      }
    ]
  }

This is, in fact, the part that I didn’t feel was clear enough in the GitHub readme. What, exactly, are “subjects”, and how do I associate them with anything useful? I’ll come back to this shortly, but first, let’s see the rules that we have set up here:

The Sales Manager can create a sales order, and complete their timesheet.

The Driver can view routes and complete their timesheet.

The Sales Manager cannot view routes; nor can the Driver create a sales order.
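To make the subject-to-permission resolution concrete, here’s the lookup that PolicyServer is effectively doing, sketched as plain C# (the dictionaries mirror the JSON above; this is an illustration of the model, not the library’s actual implementation):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Mirrors the "Policy" section of appsettings.json above
var roles = new Dictionary<string, string[]>
{
    ["salesmanager"] = new[] { "1" },   // role -> subjects
    ["driver"] = new[] { "2" }
};
var permissions = new Dictionary<string, string[]>
{
    ["ViewRoutes"] = new[] { "driver" },   // permission -> roles
    ["CreateSalesOrder"] = new[] { "salesmanager" },
    ["Timesheet"] = new[] { "salesmanager", "driver" }
};

// Resolve subject -> roles -> permissions
string[] PermissionsFor(string subject)
{
    var subjectRoles = roles
        .Where(r => r.Value.Contains(subject))
        .Select(r => r.Key)
        .ToArray();

    return permissions
        .Where(p => p.Value.Intersect(subjectRoles).Any())
        .Select(p => p.Key)
        .OrderBy(p => p)
        .ToArray();
}

Console.WriteLine(string.Join(", ", PermissionsFor("2")));   // Timesheet, ViewRoutes
```

Subject “2” (the driver) resolves to ViewRoutes and Timesheet, but not CreateSalesOrder – exactly the rules described above.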

Add Policy Server to a Controller

The next stage is to inject Policy Server into our controller. For the home controller, all you need to do is inject IPolicyServerRuntimeClient:

        private readonly IPolicyServerRuntimeClient _policyServerRuntimeClient;

        public HomeController(IPolicyServerRuntimeClient policyServerRuntimeClient)
        {
            _policyServerRuntimeClient = policyServerRuntimeClient;
        }

This can now be used anywhere in the controller. However, for testing purposes, let’s just circle back round to the subject question. The subject is the identity of the user. You may have noticed that I’ve started this post with PolicyServer, so I don’t have any authentication in place here. However, for testing purposes, we can force our user to a specific identity.

Let’s override the Index method of the HomeController:

        public async Task<IActionResult> Index()
        {
            var identity = new ClaimsIdentity(new Claim[]
            {
                new Claim("sub", "2"),
                new Claim("name", "TestUser")
            }, "test");

            User.AddIdentity(identity);

            var homeViewModel = await SecureViewModel();
            return View(homeViewModel);
        }

We’ll come back to the ViewModel business shortly, but let’s focus on the new ClaimsIdentity; notice that the “sub” (or subject) claim refers to the subject ID of the driver above. So what we’ve done is create a driver!

The SecureViewModel method returns a ViewModel (which, as you can see, is passed to the view); let’s see what that might look like:

        public async Task<HomeViewModel> SecureViewModel()
        {
            var homeViewModel = new HomeViewModel();            
            homeViewModel.ViewRoutes = await _policyServerRuntimeClient.HasPermissionAsync(User, "ViewRoutes");
            homeViewModel.CreateSalesOrder = await _policyServerRuntimeClient.HasPermissionAsync(User, "CreateSalesOrder");
            homeViewModel.Timesheet = await _policyServerRuntimeClient.HasPermissionAsync(User, "Timesheet");

            return homeViewModel;
        }

You can play about with how this works by altering the “sub” claim.

How does this fit into Identity Server (or an identity server such as Azure B2C)?

The identity result of the authentication should map to the “sub” claim. In some cases, it’s “nameidentifier”. Once you’ve captured that, you’ll need to store that reference against the user record.

That’s all very well for testing, but when I use this for real, I want to plug my data in from a database

The first thing you’ll need to do is to ‘flatten’ the data. This StackOverflow question was about the best resource I could find for this, and it gets you about 90% of the way there. I hope to come back to that, and to linking this in with Asp.Net Core policies, in the next post.

References

https://auth0.com/blog/role-based-access-control-rbac-and-react-apps/

https://github.com/IdentityServer/IdentityServer4/issues/2968

https://stackoverflow.com/questions/55450393/persist-entitys-in-database-instead-of-appsettings-json

Simple binding in Blazor

A while back, I asked on Dev about moving my blog from WordPress to … well, not WordPress anymore. My main criterion was to use Markdown instead of whatever the WordPress format is called. The main issue is that you need to replace the code tags.

I resolved to create a tool to do it for me. I’m doing so in Blazor.

The first task was to create a very simple data binding scenario, so I could handle all the view logic in the View Model. I’ve previously written about using view models. This post covers the scenario where you want to bind text boxes and a button.

Let’s see the View Model first:

    public class MainViewModel
    {
        public string WpText { get; set; }
        public string MdText { get; set; }

        public void ConvertText()
        {
            MdText = $"Converted {WpText}";
        }
    }
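Since the conversion logic is plain C#, you can exercise it without any Blazor plumbing; a quick sketch (the class is repeated so the snippet runs standalone):

```csharp
using System;

var vm = new MainViewModel { WpText = "Hello" };
vm.ConvertText();
Console.WriteLine(vm.MdText); // prints "Converted Hello"

// Repeated from above so this compiles as a standalone program
public class MainViewModel
{
    public string WpText { get; set; }
    public string MdText { get; set; }

    public void ConvertText()
    {
        MdText = $"Converted {WpText}";
    }
}
```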

Okay, so we have two strings, and a method that populates the second string with the first. As per the linked post, the plumbing for the view model is very simple; first, it’s registered in ConfigureServices:

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddTransient<MainViewModel, MainViewModel>();
        }

Then it’s injected into the Razor view:

@page "/"
@inject ViewModels.MainViewModel MainViewModel

In fact, the next part is much simpler than I thought it would be. To bind a view model property to the view, you just use the @bind syntax:

<div class="container">
    <div class="row">
        <div class="form-group col-md-6">
            <label for="WpText">Wordpress Text</label>
            <input type="text" @bind=@MainViewModel.WpText class="form-control" id="WpText" name="WpText"/>
        </div>

        <div class="form-group col-md-6">
            <label for="MdText">Markdown</label>
            <input type="text" @bind=@MainViewModel.MdText class="form-control" id="MdText" name="MdText"/>
        </div>
    </div>
    <div class="row">
        <div class="form-group col-md-12">
            <input type="button" value="Convert" @onclick=@(() => MainViewModel.ConvertText()) class="form-control" id="Submit" />
        </div>
    </div>
</div>

Just to point out one thing that tripped me up before I leave this: the event handlers that relate to Blazor must be prefixed with an at symbol (@).

References

https://www.nativoplus.studio/blog/blazor-data-binding-2/

Adding Logging to Client Side Blazor

Whilst there are some pre-built libraries for this, I seemed to be getting Mono linking errors with them. What finally worked for me was to install the pre-release versions of these two packages:

Install-Package Microsoft.Extensions.Logging -Version 3.0.0-preview6.19304.6
Install-Package Microsoft.Extensions.Logging.Console -Version 3.0.0-preview6.19304.6

Now, in your View Model, accept the logger in the constructor and keep a reference to it:

private readonly ILogger<MyViewModel> _logger;

public MyViewModel(ILogger<MyViewModel> logger)
{
    _logger = logger;
}

Then you can log as normal:

_logger.LogInformation("Hello, here's a log message");

You should now see the debug message in the F12 console.

You might be wondering why you don’t need to explicitly inject the logging capability; the reason is that:

BlazorWebAssemblyHost.CreateDefaultBuilder()            

does that for you.

Casting a C# Object From its Parent

Have you ever tried to do something akin to the following:

[Fact]
public void ConvertClassToSubClass_Converts()
{
    // Arrange
    var parentClass = new SimpleTestClass();
    parentClass.Property1 = "test";
 
    // Act
    var childClass = parentClass as SimpleTestSubClass;
 
    // Assert
    Assert.Equal("test", childClass.Property1);
}

This is a simple (failing) xUnit test. The reason it fails is that you (or I) are trying to cast a general type to a more specific one, and C# is complaining that this may not be possible; consequently, you get null (or, for a hard cast, an InvalidCastException).
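To see both failure modes side by side, here’s a small standalone sketch (the classes mirror the ones used in the test above):

```csharp
using System;

var parentClass = new SimpleTestClass { Property1 = "test" };

// The soft cast yields null: the runtime type really is SimpleTestClass
var childClass = parentClass as SimpleTestSubClass;
Console.WriteLine(childClass == null); // prints True

// The hard cast compiles, but fails at runtime for the same reason
try
{
    var forced = (SimpleTestSubClass)parentClass;
}
catch (InvalidCastException)
{
    Console.WriteLine("InvalidCastException thrown");
}

public class SimpleTestClass { public string Property1 { get; set; } }
public class SimpleTestSubClass : SimpleTestClass { }
```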

Okay, that makes sense. After all, parentClass could actually be a SimpleTestSubClass2 and, as a result, C# is being safe because there are (presumably; I don’t work for Microsoft) too many possible edge cases.

This is, however, a solvable problem; there are a few ways to do it, but you can simply use reflection. Here’s the method, hosted in a small generic wrapper class that holds the instance being converted (which is where TExistingClass and _existingClass come from):

public class ObjectConverter<TExistingClass>
{
    private readonly TExistingClass _existingClass;

    public ObjectConverter(TExistingClass existingClass)
    {
        _existingClass = existingClass;
    }

    public TNewClass CastAsClass<TNewClass>() where TNewClass : class
    {
        var newObject = Activator.CreateInstance<TNewClass>();
        var newProps = typeof(TNewClass).GetProperties();

        foreach (var prop in newProps)
        {
            // Only copy into writable target properties
            if (!prop.CanWrite) continue;

            // Match by name on the source type; skip missing or unreadable properties
            var existingPropertyInfo = typeof(TExistingClass).GetProperty(prop.Name);
            if (existingPropertyInfo == null || !existingPropertyInfo.CanRead) continue;
            var value = existingPropertyInfo.GetValue(_existingClass);

            prop.SetValue(newObject, value, null);
        }

        return newObject;
    }
}

This code will effectively copy the matching properties of any class over to any other class.
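Putting it together, here’s a self-contained sketch of how the method might be hosted and called (the wrapper class name, ObjectConverter, is my own invention, and the definitions are repeated so the snippet runs standalone):

```csharp
using System;

var parent = new SimpleTestClass { Property1 = "test" };
var child = new ObjectConverter<SimpleTestClass>(parent).CastAsClass<SimpleTestSubClass>();
Console.WriteLine(child.Property1); // prints "test"

public class SimpleTestClass { public string Property1 { get; set; } }
public class SimpleTestSubClass : SimpleTestClass { }

// Hypothetical wrapper hosting the reflection method from the post
public class ObjectConverter<TExistingClass>
{
    private readonly TExistingClass _existingClass;
    public ObjectConverter(TExistingClass existingClass) => _existingClass = existingClass;

    public TNewClass CastAsClass<TNewClass>() where TNewClass : class
    {
        var newObject = Activator.CreateInstance<TNewClass>();
        foreach (var prop in typeof(TNewClass).GetProperties())
        {
            if (!prop.CanWrite) continue;
            var existing = typeof(TExistingClass).GetProperty(prop.Name);
            if (existing == null || !existing.CanRead) continue;
            // Copy the value from the source instance into the new object
            prop.SetValue(newObject, existing.GetValue(_existingClass), null);
        }
        return newObject;
    }
}
```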

If you’d rather use an existing library, you can always use this one. It’s also open source on GitHub.