Category Archives: .Net Core

Predicting Football Results Using ML.Net

Have you ever wondered why milk is where it is in the supermarket? At least in the UK, the supermarkets sell milk either at cost, or even below cost, in order to attract people into the store: they don’t want you to just buy milk, because they lose money on it. You’ll know the stuff they want you to buy, because it’s at eye level, and it’s right in front of you when you walk in the door.

ML.NET is an open source machine learning platform. As with many things that Microsoft are the guardian of, they want to sell you Azure time, and so this is just another pint of milk at the back of the shop. Having said that – it’s pretty good milk!

In this post, I’m going to set up a very simple test. I’ll be using this file. It shows the results of the English Premier League from 2018-2019. I’m not a huge football fan myself, but it was the only data I could find at short notice.

Add ML.NET

ML.NET is in preview, so the first step is to add the feature. Oddly, it’s under the “Cross Platform Development” workload:

Once you’ve added this, you may reasonably expect something to change; it does, but only slightly: you’ll see a new context menu when you right-click a project, although it won’t do anything yet. This is, bizarrely, because you need to explicitly enable preview features; under Tools -> Options, you’ll find this menu:

Let’s create a new console application; then right click on the project:

You’re now given a list of “scenarios”:

For our purpose, let’s select “Value prediction”. We’re going to try to predict the number of goals for the home team, based on the shots on goal. Just select the file as input data and the column to predict as home_team_goal_count:

For the input columns, just select home_team_shots, and then Train:

It asks you for a training time here. The longer you give it, the better the model – although there will be a point at which additional time won’t make any difference. You should be able to get a reasonable prediction after 10 seconds, but I’ve picked 60 to see how good a prediction it can make. As someone who knows nothing about football, I would expect these figures to be almost directly correlated.

Once you’ve finished training the model, you can Evaluate it:

So, it would appear that with 9 shots at goal, I can expect that a team will score between 1 and 2. If I now click the code button, ML.NET will create two new projects for me, including a new Console application; which looks like this:

        static void Main(string[] args)
        {
            // Create single instance of sample data from first line of dataset for model input
            ModelInput sampleData = new ModelInput()
            {
                Home_team_shots = 9F,
            };

            // Make a single prediction on the sample data and print results
            var predictionResult = ConsumeModel.Predict(sampleData);

            Console.WriteLine("Using model to make single prediction -- Comparing actual Home_team_goal_count with predicted Home_team_goal_count from sample data...\n\n");
            Console.WriteLine($"Home_team_shots: {sampleData.Home_team_shots}");
            Console.WriteLine($"\n\nPredicted Home_team_goal_count: {predictionResult.Score}\n\n");
            Console.WriteLine("=============== End of process, hit any key to finish ===============");
            Console.ReadKey();
        }

Let’s modify this slightly, so that we can simply ask it to predict the goal count:

        static void Main(string[] args)
        {
            Console.WriteLine("Enter shots at goal: ");
            string shots = Console.ReadLine();
            if (int.TryParse(shots, out int shotsNum))
            {
                PredictGoals(shotsNum);
            }
        }

        private static void PredictGoals(int shots)
        {
            // Create single instance of sample data from first line of dataset for model input
            ModelInput sampleData = new ModelInput()
            {
                Home_team_shots = shots,
            };

            // Make a single prediction on the sample data and print results
            var predictionResult = ConsumeModel.Predict(sampleData);

            Console.WriteLine("Using model to make single prediction -- Comparing actual Home_team_goal_count with predicted Home_team_goal_count from sample data...\n\n");
            Console.WriteLine($"Home_team_shots: {sampleData.Home_team_shots}");
            Console.WriteLine($"\n\nPredicted Home_team_goal_count: {predictionResult.Score}\n\n");
            Console.WriteLine("=============== End of process, hit any key to finish ===============");
            Console.ReadKey();
        }

And now, we can get a prediction from the app:

29 shots at goal result in only 2 – 3 goals. We can glance at the spreadsheet to see how accurate this is:

It appears it is actually quite accurate!

Executing Dynamically Generated SQL in EF Core

Entity Framework Core is primarily concerned with defining and executing pre-defined queries on a DB table, or executing a simple join on two or more tables. You can do more, but that’s its sweet spot – and for good reason. Have a think about the last project you worked on: I reckon 95% of you will be thinking about a forms-over-data application. Get a list of orders, update the product price, create a new user: really basic CRUD operations. So it makes sense that a framework like EF Core should make the 95% as easy as possible.

But what if you’re in the 5%? What if you’re working on a project where you have a query with 5 or 6 tables? Maybe you don’t even know which fields you’ll need to filter on. Well, for those users, EF Core provides two methods:

FromSqlRaw

And

FromSqlInterpolated

Both methods basically allow you to build your own SQL string, and execute it against the DB Context. It’s worth remembering that, unlike ADO.NET, you can’t just parse the output, you need to be ready for it; but that’s not the subject of this post. Here, we’re talking about a dynamically built query that returns a known type.
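For orientation, here’s a quick sketch of the difference between the two (the Order entity, MyDbContext, and customerId below are all made up for the example):

using System.Collections.Generic;
using System.Linq;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;

public static class OrderQueries
{
    public static List<Order> GetOrders(MyDbContext context, int customerId)
    {
        // FromSqlInterpolated: the interpolated value is converted into a parameter for you
        var viaInterpolated = context.Orders
            .FromSqlInterpolated($"select * from Orders where CustomerId = {customerId}")
            .ToList();

        // FromSqlRaw: you build the SQL yourself and supply the parameters explicitly
        var viaRaw = context.Orders
            .FromSqlRaw("select * from Orders where CustomerId = @customerId",
                new SqlParameter("@customerId", customerId))
            .ToList();

        return viaRaw;
    }
}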

So, what are we trying to do?

The Problem

Let’s imagine that we have a table, called MyTable for the sake of argument, and MyTable has five columns:

MyTable
-- Field1
-- Field2
-- Field3
-- Field4
-- Field5

Now, let’s imagine that we have an app that allows the user to pick one or more fields to filter on (obviously, exposing the DB structure to the user is a bad idea unless you’re writing an SSMS clone, but bear with me here). When this comes through to EF, you’ve basically got three ways to implement this:

1. Dynamically build the query string and execute it directly.
2. Use (either in raw SQL, a stored procedure, or in Linq) the "Field1 = filter OR filter = ''" method.
3. Bring the data down to the client, and filter it there.

For the purpose of this post, we’re going to discuss option (1). All the options have merit, depending on the use case.

Let’s talk about building dynamic SQL, and some of the pitfalls.

Dynamic SQL

Building dynamic SQL is easy, right? You could just do this:

string sql = "select * from MyTable ";

if (!string.IsNullOrWhiteSpace(filter1))
{
    sql += $"where Field1 = {filter1}";
}

// Add additional fields, and deal with the WHERE / AND problem

var result = _myDbContext.MyTable.FromSqlInterpolated(sql);

So, this code is bad for several reasons. Let’s run through them.

1. It doesn’t compile

The first thing (although by far not the worst), is that this code won’t compile. The reason this won’t compile is that FromSqlInterpolated takes a FormattableString. Of course, this is easily correctable:

var result = _myDbContext.MyTable.FromSqlInterpolated($"{sql}");

Now the code compiles, and it will probably even work at this point; but it still doesn’t do what you want.

The next issue is one of security.

2. SQL Injection

If the field above (filter1) is set to: ‘1’; DROP TABLE MyTable; (or something equivalent), your app will execute it. This is because we’re not using placeholders. What does this mean:

1. FromSqlInterpolated accepts an interpolated string, but what we’re passing here is a pre-built string. The code being passed into FromSqlInterpolated needs to be interpolated at the time; e.g.:
– _myDbContext.MyTable.FromSqlInterpolated($"select * from MyTable where field1 = {filter1}");
2. Since this won’t work in our case, we’ll need to build up the query using FromSqlRaw, and pass in parameters.

3. Caching

The way that most (at least relational) databases work is that they try to cache the most frequently used queries. The problem is that, if you do something like the query above: "select * from MyTable where Field1 = 'myvalue'", that gets cached. If you run that again, but pass 'myvalue2', then that gets cached. Run it 1000 times with different values, and other queries start to get pushed out of the cache.

So, how can we build up a dynamic SQL string, without leaving ourselves open to SQL injection, or flooding the cache?

A Solution

This is a solution, it is not the solution. In it, we’re reverting a little to an ADO.NET style of doing things, by providing SqlParameters. Let’s see what that might look like:

            string sql =
                "select * " +
                "from MyTable ";

            var parameters = new List<SqlParameter>();

            int i = 1;
            foreach (var filter in filters)
            {
                if (i == 1)
                {
                    sql += $"where Field{i} = @filter{i} ";                    
                }
                else
                {
                    sql += $"and Field{i} = @filter{i} ";
                }
                parameters.Add(new SqlParameter($"@filter{i++}", filter));
            }

            var result = _myDbContext.MyTable
                .FromSqlRaw(sql, parameters.ToArray())
                .ToList();

We’re assuming that we have an array / list of filters here, and we just create a query that looks something like this:

select *
from MyTable
where Field1 = @filter1
and Field3 = @filter3

Because these are placeholders, you’re protected against SQL injection, and the DB engine will cache this query (so changing the values themselves doesn’t affect the cache). It’s worth bearing in mind that if we run this again, and end up with the following:

select *
from MyTable
where Field1 = @filter1
and Field4 = @filter4
and Field5 = @filter5

This will be separately cached, so you’d need to make a decision as to whether you are likely to have few enough queries that it doesn’t matter.

Summary

Quite often, people use EF as though the data was all local. It’s always worth remembering that each time you make a call, you are accessing the DB – despite the fact that Microsoft have gone to great lengths to make you think you are not. Each time you touch the DBMS, you change something – or, rather, something is changed as a result of you touching the DB. This might be obvious, like you insert a record, or it might be hidden, like the cache is updated. Nevertheless, the DB is a service, and it is probably the most important service in your system.

This post is based on my knowledge of relational databases, so the same may not be completely true of NoSQL databases.

IConfiguration does not contain a definition for GetValue

When I search for something, I typically start with pmichaels.net [tab] [search term] – especially if I know I’ve come across a problem before. This post covers one such problem: it’s not hard to find the solution, but it is hard to find the solution on this site (because until now, it wasn’t here).

The error (which is also in the title):

IConfiguration does not contain a definition for GetValue

Typically appears when you’re using IConfiguration outside of an Asp.Net Core app. In fact, GetValue is an extension method, so the solution is to simply add the following package:

Install-Package Microsoft.Extensions.Configuration.Binder
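Once the package is installed, GetValue works as you’d expect; a minimal sketch (the RetryCount setting is made up, and AddJsonFile comes from the Microsoft.Extensions.Configuration.Json package):

using Microsoft.Extensions.Configuration;

IConfiguration configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
    .Build();

// GetValue<T> is the extension method supplied by Microsoft.Extensions.Configuration.Binder
int retryCount = configuration.GetValue<int>("RetryCount", 3);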

References

https://stackoverflow.com/questions/54767718/iconfiguration-does-not-contain-a-definition-for-getvalue

GitHub Actions – Debugging Techniques and Common Issues

In this post I covered how to debug a failing build. This is more of the same, really; a sort of hodgepodge of bits that have cropped up while discovering issues with various projects.

If you’re interested, the library that caused all this was this one.

.Net Version

Let’s start with the version number that you’re building for. When you create the initial build you get a targeted .Net version; and by default, it’s very specific (and the latest version):

dotnet-version: 3.1.101

There are two things worth noting here. The first is that if you intend to release this library on NuGet or somewhere else that it can be consumed, then the lower the target version, the better. A .Net Core app can consume a library of the same version or lower. This sounds obvious, but it’s not without cost: some of the GitHub actions depend on later versions. For example, Publish Nuget uses the switch --skip-duplicate, which is a .Net 3.1 thing. If you try to use this targeting a previous version of .Net, you’ll get the following error:

error: Unrecognized option '--skip-duplicate'

The second thing of note is the specific version; it’s not as well documented as it should be, but you can simply use something like this:

dotnet-version: '3.1.x'

And your build will work with any version of 3.1.

Cross Platform, Verbosity and Tracing

As with the post mentioned above, I had an issue with a specific library, whereby two of the tests were failing. The first test in question called a line of code that compared two strings:

if (String.Compare(string1, string2, StringComparison.OrdinalIgnoreCase) == 0)  

It turns out that this doesn’t work cross platform, and was failing because it was running on Ubuntu.

The second issue was slightly more nuanced, and relates to the dates (1/3 was being read as 3/1); this isn’t specifically a Linux / Windows issue, but it is something that’s different between testing on your local environment, and on the build server. This might not be as much of an issue if you’re in the U.S. (or any country that formats its dates with the month first).
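As an aside, one way to sidestep that particular ambiguity in the code under test is to parse dates with an explicit format and culture, rather than relying on the machine’s locale; a quick sketch:

using System;
using System.Globalization;

// "01/03/2020" is 1st March regardless of whether the machine is set to en-GB or en-US
DateTime date = DateTime.ParseExact(
    "01/03/2020",
    "dd/MM/yyyy",
    CultureInfo.InvariantCulture);

Console.WriteLine(date.ToString("yyyy-MM-dd"));   // 2020-03-01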

Although I initially suspected the cause, I began by changing the log level on the build:

run: dotnet test --no-restore --verbosity normal

To:

run: dotnet test --no-restore --verbosity detailed

Unfortunately, this didn’t really give me anything; and I’m sad to say that I next resorted to inserting lines like this into the code to try and determine what was going on:

Console.WriteLine("pcm-test-1");

I’m not saying this is the best method of debugging (especially in a situation like this), but it’s where we all go to when nothing else works.

A Better Way – Debugging on Linux Locally

In this post I covered how you can install Ubuntu and run it from your Windows Terminal, and in this post, I covered how you can install .Net on that instance. This means that we can run the tests directly from Linux and see what’s going on locally.

Simply cd to the directory that you’ve pulled the code down to, and run dotnet test. You may need to run it with elevated privileges, but it should run, and fail with the same error that you’re getting from GitHub:

Summary

I’ve used GitHub Actions a few times now, and this issue of the code running on a different platform is by far the most challenging thing about using them. Given that this is running on a Windows machine, being able to run (and debug) on a Linux platform is a huge step forward.

Installing .Net on Ubuntu… on Windows

With the new Windows Subsystem for Linux, and the Windows Terminal, comes the ability to run .Net programs on Linux from your Windows machine.

Here I wrote briefly about how you can install and run Linux on your Windows machine. In this post, I’ll cover how to install .Net.

If you don’t have the Windows Terminal, then you can install it here.

The installation process is pretty straightforward, and the first step is to launch the Windows Terminal. Once that’s running, open a Linux tab, and run the following two scripts (if you’re interested in where these came from, follow the link in the References section below):

wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

Then run:

sudo apt-get update; \
  sudo apt-get install -y apt-transport-https && \
  sudo apt-get update && \
  sudo apt-get install -y dotnet-sdk-3.1

That should do it. To verify that it has:

dotnet --version

And you should see:

References

https://docs.microsoft.com/en-us/dotnet/core/install/linux-ubuntu

Change the Default Asp.Net Core Layout to Use Feature Folders

One of the irritating things about the Asp.Net Core default project is that the various parts of your system are arranged by type, as opposed to function. For example, if you’re working on the Accounts page, you’re likely going to want to change the view, the controller and, perhaps, the model; you are, however, unlikely to want to change the Sales Order controller as a result of your change: so why have the AccountsController and SalesOrderController in the same place, but away from the AccountsView?

If you create a new Asp.Net Core MVC Web App:

Then you’ll get a layout like this (in fact, exactly like this):

If your web app has two or three controllers, and maybe five or six views, then this works fine. When you start getting a larger, more complex app, you’ll find that you’re scrolling through your solution trying to find the SalesOrderController, or the AccountsView.

One way to alleviate this is to re-organise your project to reference features in vertical slices. For example:

There’s not much to either of these, but let’s just put them in for the sake of completeness; the View:

@{
    ViewData["Title"] = "Wibble Page";
}

<div class="text-center">
    <h1 class="display-4">Wibble</h1>    
</div>

And the controller:

using Microsoft.AspNetCore.Mvc;

namespace WebApplication1.Wibble
{
    public class WibbleController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }
}

The problem here is that the engine won’t know where to look for the views. We can change that by changing the ConfigureServices method in Startup.cs:

        public void ConfigureServices(IServiceCollection services)
        {
            . . . 
            services.Configure<RazorViewEngineOptions>(options =>
            {
                options.ViewLocationFormats.Clear();
                options.ViewLocationFormats.Add($"/Wibble/{{0}}{RazorViewEngine.ViewExtension}");
                options.ViewLocationFormats.Add($"/Views/Shared/{{0}}{RazorViewEngine.ViewExtension}");
            });
        }
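The snippet above hard-codes the Wibble folder; if you want every feature folder picked up by convention, the view engine supports a {1} placeholder for the controller name (and {0} for the view name), so something like the following should work:

            services.Configure<RazorViewEngineOptions>(options =>
            {
                options.ViewLocationFormats.Clear();
                // {1} is replaced with the controller name, {0} with the view name
                options.ViewLocationFormats.Add($"/{{1}}/{{0}}{RazorViewEngine.ViewExtension}");
                options.ViewLocationFormats.Add($"/Views/Shared/{{0}}{RazorViewEngine.ViewExtension}");
            });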

Let’s also change the default controller action (in the Configure method of Startup.cs):

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
                // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
                app.UseHsts();
            }

            . . . 
            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllerRoute(
                    name: "default",
                    pattern: "{controller=Wibble}/{action=Index}/{id?}");
            });
        }

There’s more than a few libraries that will handle this for you (here’s one by the late Scott Allen), but it’s always nice to be able to do such things manually before you resort to a third-party library.

Manually Creating a Test Harness in .Net

Let me start by saying that this post isn’t intended to try to replace Selenium, or Cypress, or whatever UI testing tools you may choose to use. In fact, it’s something that I did for manual testing, although it’s not difficult to imagine introducing some minor automation.

The Problem

Imagine that you have a solution that requires some data – perhaps it requires a lot of data, because you’re testing some specific performance issue, or perhaps you just want to see what the screen looks like when you have a lot of data. Let’s also imagine that you’re repeatedly running your project for one reason or another, and adding data, or whatever.

My idea here was that I could create a C# application that scripts this process, but because it’s an internal application, I could give it access to the data layer directly.

The Solution

Basically, the solution to this (and to many things) was a console app. Let’s take a solution that implements a very basic service / repository pattern:

From this, we can see that we have a pretty standard layout, and essentially, what we’re trying to do is insert some data into the database. It’s a bonus if we can add some test coverage while we’re at it (manual test coverage is still test coverage – it just doesn’t show up on your stats). So, if you’re using a REST type pattern, you might want to use the controller endpoints to add the data; but for my purpose, I’m just going to add the data directly into the data access layer.

Let’s see what the console app looks like:

        static async Task Main(string[] args)
        {
            IConfiguration configuration = new ConfigurationBuilder()
                  .AddJsonFile("appsettings.json", true, true)
                  .Build();

            // Ensure the DB is populated
            var dataAccess = new TestDataAccess(configuration.GetConnectionString("ConnectionString"));
            if (dataAccess.GetDataCount() == 0)
            {
                var data = new List<MyData>();

                // Generate 100 items of data
                for (int j = 0; j < 100; j++)
                {
                    var dataItem = CreateTestItem();
                    data.Add(dataItem);
                }
                dataAccess.AddDataRange(data);
            }

            // Launch the site            
            await RunCommand("dotnet", "build");
            await RunCommand("dotnet", "run", 5);

            System.Diagnostics.Process.Start(@"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe", @"https://localhost:5001");
        }

Okay, so let’s break this down; there are essentially three sections: configuration, adding the data, and running the app.

Configuration

We’ll start with the configuration:

        IConfiguration configuration = new ConfigurationBuilder()
              .AddJsonFile("appsettings.json", true, true)
              .Build();

        // Ensure the DB is populated
        var dataAccess = new TestDataAccess(configuration.GetConnectionString("ConnectionString"));

Because we’re using a console app, we’ll need to get the configuration; you could copy the appsettings.json, but my suggestion would be to add a link; that is, add an existing item, and select that item from the main project, then choose to “Add As Link” (this is not a new feature):

This means that you’ll be able to change the config file, and it will be reflected in the test harness.
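If you prefer to edit the csproj directly, the link ends up looking something like this (the ..\MyApp.App path is just whatever your main project is called):

  <ItemGroup>
    <Content Include="..\MyApp.App\appsettings.json" Link="appsettings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </Content>
  </ItemGroup>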

Creating the data

There’s not too much point in me covering what’s behind TestDataAccess – suffice to say that it encapsulates the data access layer; which, as a minimum, requires the connection string.

It’s also worth pointing out that we check whether there is any data there before running it. Depending on your specific use-case, you may choose to remove this.
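CreateTestItem isn’t shown above; it simply news up a MyData object with some values. A minimal sketch, with entirely made-up properties (substitute whatever MyData actually contains):

        private static MyData CreateTestItem()
        {
            // Hypothetical properties - the point is just to generate plausible data in bulk
            return new MyData()
            {
                Name = $"Test item {Guid.NewGuid()}",
                Created = DateTime.Now,
                Amount = new Random().Next(1, 100)
            };
        }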

Building, running, and launching the app

Okay, so we’ve now added our data, we now want to build the main application – thanks to the command line capabilities of .Net Core, this is much simpler than it was when we used to have to try and wrangle with MSBuild!

    // Launch the site            
    await RunCommand("dotnet", "build");
    await RunCommand("dotnet", "run", 5);

    await RunCommand(@"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe", @"https://localhost:5001");

RunCommand probably needs a little more attention, but before we look at that, let’s think about what we’re trying to do:

1. Build the application
2. When the application has built, run the application
3. Once the application is running, navigate to the site

RunCommand looks like this:

        private static async Task RunCommand(string command, string? args = null, int? waitSecs = -1)
        {
            Console.WriteLine($"Executing command: {command} (args: {args})");

            Process proc = new System.Diagnostics.Process();
            proc.StartInfo.WorkingDirectory = @"..\..\..\..\MyApp.App";
            proc.StartInfo.FileName = command;
            proc.StartInfo.Arguments = args ?? string.Empty;

            proc.Start();

            if ((waitSecs ?? -1) == -1)
            {
                proc.WaitForExit();
            }
            else
            {
                if (waitSecs == 0) return;
                await Task.Delay(waitSecs.Value * 1000);
            }
        }

“But this looks inordinately complicated for a simple wrapper for running a process!” I hear you say.

It is a bit, but the essence is this: when running the build command, we need to wait for it to complete, when running the run command, we can’t wait for it to complete, because it never will; but we do need to move to the next thing; and when we launch the site, we don’t really care whether it waits or not after that.

Summary

I appreciate that some of you may be regretting spending the time reading through this, as all I’ve essentially done is script some data creation and run an application; but I imagine there are some people out there, like me, that want to see (visually) what their app looks like with different data shapes.

Feature Flags in Asp.Net Core – Advanced Features – Targeting a Specific Audience using Feature Filters

In this post, I introduced the concept, and Microsoft’s implementation, of Feature Flags. In fact, there’s a lot more to both than I covered in this initial post. In this post, I want to cover how you could use this to turn a particular feature on for a single user, or a group of users. You can even specify groups for those users, and allow all, or some of those users to see the feature.

It’s worth noting that, at the time of writing, this functionality is currently only in preview; you’ll need the following NuGet package (or later) for it to work:

<PackageReference Include="Microsoft.FeatureManagement.AspNetCore" Version="2.2.0-preview" />

Before we get into how to use this, let’s have a quick look at the source. The interesting method is EvaluateAsync. Essentially this method returns a boolean indicating whether or not the feature is available; you could simply return true and the feature would always be enabled; but let’s see an abridged version of this method:

        public Task<bool> EvaluateAsync(FeatureFilterEvaluationContext context, ITargetingContext targetingContext)
        {
            . . .

            //
            // Check if the user is being targeted directly
            if (targetingContext.UserId != null &&
                settings.Audience.Users != null &&
                settings.Audience.Users.Any(user => targetingContext.UserId.Equals(user, ComparisonType)))
            {
                return Task.FromResult(true);
            }

            //
            // Check if the user is in a group that is being targeted
            if (targetingContext.Groups != null &&
                settings.Audience.Groups != null)
            {
                foreach (string group in targetingContext.Groups)
                {
                    GroupRollout groupRollout = settings.Audience.Groups.FirstOrDefault(g => g.Name.Equals(group, ComparisonType));

                    if (groupRollout != null)
                    {
                        string audienceContextId = $"{targetingContext.UserId}\n{context.FeatureName}\n{group}";

                        if (IsTargeted(audienceContextId, groupRollout.RolloutPercentage))
                        {
                            return Task.FromResult(true);
                        }
                    }
                }
            }

            //
            // Check if the user is being targeted by a default rollout percentage
            string defaultContextId = $"{targetingContext.UserId}\n{context.FeatureName}";

            return Task.FromResult(IsTargeted(defaultContextId, settings.Audience.DefaultRolloutPercentage));
        }

So, the process is that it looks for specific users and, if it finds them, they can see the feature; if it cannot find them then it looks through the groups (we’ll come back to IsTargeted later), and finally reverts to the DefaultRolloutPercentage – again, we’ll look into what that is later on.

Let’s start with a single user

If you have a look at the previous post, you’ll see that the Feature Management system is being added using the following syntax:

services.AddFeatureManagement();

In order to add one of the pre-defined filters, we’ll need to add to this like so:

            services.AddFeatureManagement()
                .AddFeatureFilter<TargetingFilter>();

You’ll also need to import the following namespace:

using Microsoft.FeatureManagement.FeatureFilters;

If you run this now, you’ll get the following error:

System.InvalidOperationException: ‘Unable to resolve service for type ‘Microsoft.FeatureManagement.FeatureFilters.ITargetingContextAccessor’ while attempting to activate ‘Microsoft.FeatureManagement.FeatureFilters.TargetingFilter’.’

The reason is that you need to provide an implementation of ITargetingContextAccessor. What exactly this looks like is up to the implementer, but you’ll find an example implementation here.
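Assuming you’ve created (or borrowed) an implementation (for example, the HttpContextTargetingContextAccessor from the linked sample), the registration looks something like this:

            services.AddHttpContextAccessor();
            services.AddSingleton<ITargetingContextAccessor, HttpContextTargetingContextAccessor>();

            services.AddFeatureManagement()
                .AddFeatureFilter<TargetingFilter>();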

We’ll come back to this shortly in the groups section.

Let’s now have a look at what our appsettings.json might look like:

  "FeatureManagement": {
    "MyFeature": {
      "EnabledFor": [
        {
          "Name": "Targeting",
          "Parameters": {
            "Audience": {
              "Users": [
                "[email protected]"
              ]
            }
          }
        }
      ]      
    }
  }

If we have a look at the default HttpContextTargetingContextAccessor (see the link above for the ITargetingContextAccessor), we’ll see that the UserId is being set there:

            TargetingContext targetingContext = new TargetingContext
            {
                UserId = user.Identity.Name,
                Groups = groups
            };

This isn’t particularly controversial – at least the User Id part isn’t; however, it doesn’t have to be this; for example, you could get the family_name claim, and return that – and then you could target your feature to anyone named Smith. It’s a bit of a silly example, but the point is that you can customise how this works (in fact, you can write a completely custom filter, which I’ll probably cover in a later post).
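As a rough illustration of that kind of customisation (this is a hypothetical accessor, not part of the library), targeting by the family_name claim might look something like this:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.FeatureManagement.FeatureFilters;

public class FamilyNameTargetingContextAccessor : ITargetingContextAccessor
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public FamilyNameTargetingContextAccessor(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public ValueTask<TargetingContext> GetContextAsync()
    {
        // Use the family_name claim, rather than the user name, as the targeting id
        string familyName = _httpContextAccessor.HttpContext?
            .User?.FindFirst("family_name")?.Value;

        var targetingContext = new TargetingContext
        {
            UserId = familyName,
            Groups = new List<string>()
        };

        return new ValueTask<TargetingContext>(targetingContext);
    }
}

With this in place, adding "Smith" to the Users array in the configuration would enable the feature for anyone with that family name.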

This part of the Feature Management is not to be underestimated: you could release a feature, in live, to only one or two Beta Testers. However, it is quite specific; that’s where Groups come in.

Groups

Groups are slightly more interesting. You can specify which groups are in and out using the following configuration:

  "FeatureManagement": {

    "MyFeature": {
      "EnabledFor": [
        {
          "Name": "Targeting",
          "Parameters": { 
            "Audience": {
              "Users": [],
              "Groups": [
                {
                  "Name": "Group1",
                  "RolloutPercentage": 80
                },
                {
                  "Name": "Group2",
                  "RolloutPercentage": 40
                }
              ],
              "DefaultRolloutPercentage": 20
            }
          }
        }
      ]
    },

The RolloutPercentage here indicates what proportion of the group you wish to be included.

Do you remember the IsTargeted method from earlier? This is what it looks like:

        private bool IsTargeted(string contextId, double percentage)
        {
            byte[] hash;

            using (HashAlgorithm hashAlgorithm = SHA256.Create())
            {
                hash = hashAlgorithm.ComputeHash(Encoding.UTF8.GetBytes(contextId));
            }

            //
            // Use first 4 bytes for percentage calculation
            // Cryptographic hashing algorithms ensure adequate entropy across hash
            uint contextMarker = BitConverter.ToUInt32(hash, 0);

            double contextPercentage = (contextMarker / (double)uint.MaxValue) * 100;

            return contextPercentage < percentage;
        }

To an extent, this is a random selection, but it uses the User ID and feature name to calculate the hash for that selection (that’s what gets passed into contextId), meaning that the same user will see the same thing each time. You may also find when playing with this that for small numbers, it doesn’t really match the expectation; for example, at 40%, you would expect around two out of five users to see the feature, but when I ran my test, all the five users could see the feature. Larger numbers work better, although the fact that this is tied to the User Id makes it a little tricky to test (you can’t simply launch the site and press Ctrl-F5 until it switches over).

Again, it’s worth pointing out that what a group is, is determined by you (or at least the creator of the HttpContextTargetingContextAccessor). This means that you can base this on a claim, on the first letter of the username, the time of day, anything you like. I haven’t tried it, but I suspect you could put a DB query in here, too. That’s probably not the best idea, because it gets called a lot, but I believe it’s possible.

Default Rollout Percentage

Here we have a catch-all – if the user is not in the group, and not identified as a user, this will allow you to expose your feature to a percentage of the user base. Again, this isn’t something you can easily check by refreshing your page, as it’s based on a hash of the user, group, and feature name. In fact, this won’t work very well at all if you’re not using any kind of identity.
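However you slice the audience, consuming the flag is the same as in the previous post; for completeness, here’s a minimal sketch of checking it in a controller (the view name is made up):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement;

public class HomeController : Controller
{
    private readonly IFeatureManager _featureManager;

    public HomeController(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    public async Task<IActionResult> Index()
    {
        // For the Targeting filter, this evaluates the user / group / default
        // rollout rules described above
        if (await _featureManager.IsEnabledAsync("MyFeature"))
        {
            return View("IndexWithNewFeature");
        }

        return View();
    }
}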

References

http://dontcodetired.com/blog/post/Using-the-Microsoft-Feature-Toggle-Library-in-ASPNET-Core-(MicrosoftFeatureManagement)

https://github.com/microsoft/FeatureManagement-Dotnet

Calling a Web API from a Console App and Creating a Performance Test

While working on a proof of concept, I needed to create a dummy API, and then run a stress test on it. Whilst the activity itself may seem pointless (creating a templated API and stress testing it), I felt the process might be worth documenting.

Create the API

We’ll start with creating the API:

dotnet new webapi

If you just run this, you should see the following:

The next step is to call that from a console app.

Create a console app to call the API

Add a new console app project to your solution, and replace the code in Program.cs with the following:

    class Program
    {
        static HttpClient client = new HttpClient();
        static string path = "https://localhost:44356/weatherforecast";


        static async Task Main(string[] args)
        {
            Console.WriteLine("Press enter to start test");
            Console.ReadLine();
            string? data = await CallWeatherForecast();

            Console.WriteLine(data ?? "No data returned");
        }

        private static async Task<string?> CallWeatherForecast()
        {            
            HttpResponseMessage response = await client.GetAsync(path);
            if (response.IsSuccessStatusCode)
            {
                string data = await response.Content.ReadAsStringAsync();
                return data;
            }

            return null;
        }
    }

It’s worth noting that I switched on nullable reference types here (in the csproj file):

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <Nullable>enable</Nullable>
  </PropertyGroup>

To test this, you’ll need to start both the projects (either select Set Startup Projects… from the solution context menu, or run the API, and then right-click the console app and select Debug -> Start New Instance).

Once you’ve checked that this works, build the solution in release mode (remember, we’re going to be running a stress test, so debug mode will show skewed results).

Call the console app from JMeter

For the stress test, I’m going to use JMeter. You’ll need to download it from here.

I won’t go into too much detail about how to set this up, but briefly, extract the downloaded zip somewhere, and run the jmeter.bat file. You should be presented with the following screen:

Add a thread group, so we can simulate multiple users:

Then add an OS Process Sampler to run the console app:

Remember to run the API first, then click the green play arrow. You’ll see the users ramp up:

We don’t have any listeners, so the results are, unfortunately, lost. Let’s add a couple:

As you can see, we now have some information. How long these calls are taking on average, the error count, etc. The throughput we’re getting is around 3/second… In fact, running a stress test locally on the same machine, it’s difficult to break it, because as the resources get used up, the JMeter process itself suffers, too. This is a good reason to run JMeter from a VM in the cloud.

Whilst it’s quite difficult to kill the service, I certainly managed to slow it down considerably:

These figures are in milliseconds, which means that 90% of calls are taking over 2 minutes. This entire test took around 15 minutes, and around 10 requests per second was about the best it got to (I ran 10 loops of 1000 concurrent users).

There are a few things you can do to identify where the performance starts to degrade, but I’m more interested in what happens to these figures if I add DB access.

Note: when you’re playing with this, the reports don’t automatically clear each run, so you have to select each, right-click and clear.

Add calls to a DB

Let’s add Entity Framework to our project.
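I won’t cover the EF Core setup in detail; as a rough sketch (the entity, context, and connection string name below are assumptions based on the controller code that follows), you’d add the Microsoft.EntityFrameworkCore.SqlServer package, then create an entity and a context:

using System;
using Microsoft.EntityFrameworkCore;

public class DailyForecast
{
    public int Id { get; set; }
    public DateTime Date { get; set; }
    public int TemperatureC { get; set; }
    public string Summary { get; set; }
}

public class StressTestDbContext : DbContext
{
    public StressTestDbContext(DbContextOptions<StressTestDbContext> options)
        : base(options) { }

    public DbSet<DailyForecast> Forecast { get; set; }
}

And register the context in ConfigureServices:

        services.AddDbContext<StressTestDbContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("ConnectionString")));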

We can then change the controller to allow adding new records and retrieval by date:

        public WeatherForecastController(
            ILogger<WeatherForecastController> logger,
            StressTestDbContext stressTestDbContext)
        {
            _logger = logger;
            _stressTestDbContext = stressTestDbContext;
        }

        [HttpGet]
        public IEnumerable<WeatherForecast> GetNew()
        {
            var rng = new Random();
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateTime.Now.AddDays(index),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            })
            .ToArray();
        }
        
        [HttpGet("/[controller]/GetByDate/{dateTime}")]
        public IEnumerable<WeatherForecast> GetByDate(DateTime dateTime)
        {
            var forecasts = _stressTestDbContext.Forecast.Where(a => a.Date.Date == dateTime.Date);
            return forecasts;
        }

        [HttpPost]
        public IActionResult Post(WeatherForecast weatherForecast)
        {
            DailyForecast forecast = new DailyForecast()
            {
                Date = weatherForecast.Date,
                Summary = weatherForecast.Summary,
                TemperatureC = weatherForecast.TemperatureC
            };

            _stressTestDbContext.Add(forecast);
            if (_stressTestDbContext.SaveChanges() != 0)
            {
                return Ok();
            }
            return BadRequest();
        }

Finally, we can change the main program to call those functions:

        // Assumed supporting fields (not shown in the original excerpt)
        static readonly Random _rnd = new Random();
        static readonly string[] _summaries = new[]
            { "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching" };

        static async Task Main(string[] args)
        {
            Console.WriteLine("Press enter to start test");
            Console.ReadLine();

            ConsoleTitle("CallGetNewWeatherForecast");
            string? dataGet = await CallGetNewWeatherForecast();
            Console.WriteLine(dataGet ?? "No data returned");

            ConsoleTitle("AddForecast");
            string? dataAdd = await AddForecast(_rnd.Next(30),
                DateTime.Now.AddDays(_rnd.Next(10)), _summaries[_rnd.Next(_summaries.Length)]);
            Console.WriteLine(dataAdd ?? "No data returned");

            ConsoleTitle("CallGetWeatherForecast");
            string? dataGetDate = await CallGetWeatherForecast(DateTime.Now.AddDays(_rnd.Next(10)));
            Console.WriteLine(dataGetDate ?? "No data returned");
        }

        private static void ConsoleTitle(string title)
        {
            Console.ForegroundColor = ConsoleColor.Blue;
            Console.WriteLine(title);
            Console.ResetColor();
        }

        private static async Task<string?> CallGetNewWeatherForecast()
        {            
            HttpResponseMessage response = await client.GetAsync($"{path}");
            if (response.IsSuccessStatusCode)
            {
                string data = await response.Content.ReadAsStringAsync();
                return data;
            }

            return null;
        }

        private static async Task<string?> CallGetWeatherForecast(DateTime dateTime)
        {
            string dateString = dateTime.ToString("yyyy-MM-dd");

            HttpResponseMessage response = await client.GetAsync($"{path}/GetByDate/{dateString}");
            if (response.IsSuccessStatusCode)
            {
                string data = await response.Content.ReadAsStringAsync();
                return data;
            }

            return null;
        }

        private static async Task<string?> AddForecast(int temperature, DateTime date, string summary)
        {
            var forecast = new WeatherForecast()
            {
                TemperatureC = temperature,
                Date = date,
                Summary = summary
            };

            HttpResponseMessage response = await client.PostAsJsonAsync($"{path}", forecast);
            if (response.IsSuccessStatusCode)
            {
                string data = await response.Content.ReadAsStringAsync();
                return data;
            }

            return null;
        }

To get a sensible reading, you’ll need to do this from an empty database:

For the first run, we’ll do 500 users, and 3 iterations:

The output from the first run is:

And let’s just check that that’s created the records we expected:

EF async calls vs non-async

To satisfy a curiosity that I’ve had for a while, I’m now going to change the update API method to async:

        [HttpPost]
        public async Task<IActionResult> Post(WeatherForecast weatherForecast)
        {
            DailyForecast forecast = new DailyForecast()
            {
                Date = weatherForecast.Date,
                Summary = weatherForecast.Summary,
                TemperatureC = weatherForecast.TemperatureC
            };

            _stressTestDbContext.Add(forecast);
            if (await _stressTestDbContext.SaveChangesAsync() != 0)
            {
                return Ok();
            }
            return BadRequest();
        }

Again, 1500 records were created:

Here’s the report:

What an interesting result. Making the update async seems to have slightly reduced the throughput. This is running locally, and I only have a 4 core machine, but I would have expected throughput to slightly increase here, rather than decrease.