
GitHub Actions – Debugging Techniques and Common Issues

In this post I covered how to debug a failing build. This is more of the same, really; a sort of hodgepodge of bits that have cropped up while discovering issues with various projects.

If you’re interested, the library that caused all this was this one.

.Net Version

Let’s start with the version number that you’re building for. When you create the initial build, you get a targeted .Net version; and by default, it’s very specific (and the latest version):

dotnet-version: 3.1.101

There are two things worth noting here. The first is that if you intend to release this library on NuGet, or somewhere else that it can be consumed, then the lower the target version, the better: a .Net Core app can consume a library of the same version or lower. This sounds obvious, but it’s not without cost: some of the GitHub actions depend on later versions. For example, Publish Nuget uses the switch --skip-duplicate, which is a .Net Core 3.1 feature. If you try to use this while targeting a previous version of .Net, you’ll get the following error:

error: Unrecognized option '--skip-duplicate'

The second thing of note is the specific version; it’s not as well documented as it should be, but you can simply use something like this:

dotnet-version: '3.1.x'

And your build will work with any version of 3.1.

Cross Platform, Verbosity and Tracing

As with the post mentioned above, I had an issue with a specific library, whereby two of the tests were failing. The first test in question called a line of code that compared two strings:

if (String.Compare(string1, string2, StringComparison.OrdinalIgnoreCase) == 0)  

It turns out that this doesn’t work cross-platform, and was failing because it was running on Ubuntu.

The second issue was slightly more nuanced, and relates to the dates (1/3 was being read as 3/1); this isn’t specifically a Linux / Windows issue, but it is something that’s different between testing on your local environment, and on the build server. This might not be as much of an issue if you’re in the U.S. (or any country that formats its dates with the month first).
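To make the date issue concrete, here’s a minimal, self-contained illustration (the date string is made up for the example): the same input parses differently depending on the culture in effect, and pinning the format and culture makes the code behave the same on any build agent:

using System;
using System.Globalization;

class DateParsingDemo
{
    static void Main()
    {
        string input = "01/03/2020";

        // The same string yields different dates depending on the culture
        DateTime ukDate = DateTime.Parse(input, new CultureInfo("en-GB")); // 1 March
        DateTime usDate = DateTime.Parse(input, new CultureInfo("en-US")); // 3 January
        Console.WriteLine($"en-GB: {ukDate:yyyy-MM-dd}, en-US: {usDate:yyyy-MM-dd}");

        // Pinning the format and culture removes the ambiguity entirely
        DateTime pinned = DateTime.ParseExact(input, "dd/MM/yyyy", CultureInfo.InvariantCulture);
        Console.WriteLine($"Pinned: {pinned:yyyy-MM-dd}");
    }
}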

Although I initially suspected the cause, I began by changing the log level on the build:

run: dotnet test --no-restore --verbosity normal

To:

run: dotnet test --no-restore --verbosity detailed

Unfortunately, this didn’t really give me anything; and I’m sad to say that I next resorted to inserting lines like this into the code to try and determine what was going on:

Console.WriteLine("pcm-test-1");

I’m not saying this is the best method of debugging (especially in a situation like this), but it’s where we all go to when nothing else works.
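If your tests happen to be written in xUnit (an assumption on my part), a slightly tidier alternative is to take an ITestOutputHelper in the test’s constructor; the output is then captured per test, and shows up in the test results:

using System;
using Xunit;
using Xunit.Abstractions;

public class StringCompareTests
{
    private readonly ITestOutputHelper _output;

    public StringCompareTests(ITestOutputHelper output)
    {
        _output = output;
    }

    [Fact]
    public void Compare_IgnoringCase_TreatsStringsAsEqual()
    {
        // Captured by the test runner and attached to this specific test
        _output.WriteLine("pcm-test-1");

        Assert.Equal(0, string.Compare("hello", "HELLO", StringComparison.OrdinalIgnoreCase));
    }
}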

A Better Way – Debugging on Linux Locally

In this post I covered how you can install Ubuntu and run it from your Windows Terminal, and in this post, I covered how you can install .Net on that instance. This means that we can run the tests directly from Linux and see what’s going on locally.

Simply cd to the directory that you’ve pulled the code down to, and run dotnet test. You may need to run it with elevated privileges, but it should run, and fail with the same error that you’re getting from GitHub.

Summary

I’ve used GitHub Actions a few times now, and this issue of the code running on a different platform is by far the most challenging thing about using them. Given that development typically happens on a Windows machine, being able to run (and debug) the same code on a Linux platform locally is a huge step forward.

Installing .Net on Ubuntu… on Windows

With the new Windows Subsystem for Linux, and the Windows Terminal, comes the ability to run .Net programs on Linux from your Windows machine.

Here I wrote briefly about how you can install and run Linux on your Windows machine. In this post, I’ll cover how to install .Net.

If you don’t have the Windows Terminal, then you can install it here.

The installation process is pretty straightforward, and the first step is to launch the Windows Terminal. Once that’s running, open a Linux tab, and run the following two scripts (if you’re interested in where these came from, follow the link in the References section below):

wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

Then run:

sudo apt-get update; \
  sudo apt-get install -y apt-transport-https && \
  sudo apt-get update && \
  sudo apt-get install -y dotnet-sdk-3.1

That should do it. To verify that it has:

dotnet --version

And you should see the version number that you’ve just installed.

References

https://docs.microsoft.com/en-us/dotnet/core/install/linux-ubuntu

Change the Default Asp.Net Core Layout to Use Feature Folders

One of the irritating things about the Asp.Net Core default project is that the various parts of your system are arranged by type, as opposed to function. For example, if you’re working on the Accounts page, you’re likely going to want to change the view, the controller and, perhaps, the model; you are, however, unlikely to want to change the Sales Order controller as a result of your change: so why have the AccountsController and SalesOrderController in the same place, but away from the AccountsView?

If you create a new Asp.Net Core MVC Web App, you’ll get the familiar default layout: Controllers, Models and Views folders at the root of the project, with every controller in one folder and every view in another.

If your web app has two or three controllers, and maybe five or six views, then this works fine. When you start getting a larger, more complex app, you’ll find that you’re scrolling through your solution trying to find the SalesOrderController, or the AccountsView.

One way to alleviate this is to re-organise your project to reference features in vertical slices; for example, a Wibble feature folder containing both the controller (WibbleController.cs) and its view (Index.cshtml).

There’s not much to either of these, but let’s just put them in for the sake of completeness; the View:

@{
    ViewData["Title"] = "Wibble Page";
}

<div class="text-center">
    <h1 class="display-4">Wibble</h1>    
</div>

And the controller:

using Microsoft.AspNetCore.Mvc;

namespace WebApplication1.Wibble
{
    public class WibbleController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }
}

The problem here is that the engine won’t know where to look for the views. We can change that by changing the ConfigureServices method in Startup.cs:

        public void ConfigureServices(IServiceCollection services)
        {
            . . . 
            services.Configure<RazorViewEngineOptions>(options =>
            {
                options.ViewLocationFormats.Clear();
                options.ViewLocationFormats.Add($"/Wibble/{{0}}{RazorViewEngine.ViewExtension}");
                options.ViewLocationFormats.Add($"/Views/Shared/{{0}}{RazorViewEngine.ViewExtension}");
            });
        }
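As a side note: rather than adding a line per feature, you could generalise this using the view engine’s built-in placeholders ({1} is replaced with the controller name, {0} with the view name), so that any feature folder named after its controller resolves automatically; a sketch:

            services.Configure<RazorViewEngineOptions>(options =>
            {
                options.ViewLocationFormats.Clear();

                // /Wibble/Index.cshtml is found for WibbleController.Index()
                options.ViewLocationFormats.Add("/{1}/{0}" + RazorViewEngine.ViewExtension);
                options.ViewLocationFormats.Add("/Views/Shared/{0}" + RazorViewEngine.ViewExtension);
            });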

Let’s also change the default controller action (in the Configure method of Startup.cs):

        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
                // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
                app.UseHsts();
            }

            . . . 
            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllerRoute(
                    name: "default",
                    pattern: "{controller=Wibble}/{action=Index}/{id?}");
            });
        }

There are more than a few libraries that will handle this for you (here’s one by the late Scott Allen), but it’s always nice to be able to do such things manually before you resort to a third-party library.

Manually Creating a Test Harness in .Net

Let me start by saying that this post isn’t intended to try to replace Selenium, or Cypress, or whatever UI testing tools you may choose to use. In fact, it’s something that I did for manual testing, although it’s not difficult to imagine introducing some minor automation.

The Problem

Imagine that you have a solution that requires some data – perhaps it requires a lot of data, because you’re testing some specific performance issue, or perhaps you just want to see what the screen looks like when you have a lot of data. Let’s also imagine that you’re repeatedly running your project for one reason or another, and re-adding that data each time.

My idea here was that I could create a C# application that scripts this process, but because it’s an internal application, I could give it access to the data layer directly.

The Solution

Basically, the solution to this (and to many things) was a console app. Let’s take a solution that implements a very basic service / repository pattern.

From this, we can see that we have a pretty standard layout, and essentially, what we’re trying to do is insert some data into the database. It’s a bonus if we can add some test coverage while we’re at it (manual test coverage is still test coverage – it just doesn’t show up on your stats). So, if you’re using a REST type pattern, you might want to use the controller endpoints to add the data; but for my purpose, I’m just going to add the data directly into the data access layer.

Let’s see what the console app looks like:

        static async Task Main(string[] args)
        {
            IConfiguration configuration = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json", true, true)
                .Build();

            // Ensure the DB is populated
            var dataAccess = new TestDataAccess(configuration.GetConnectionString("ConnectionString"));
            if (dataAccess.GetDataCount() == 0)
            {
                var data = new List<MyData>();

                // Generate 100 items of data
                for (int j = 0; j < 100; j++)
                {
                    var dataItem = CreateTestItem();
                    data.Add(dataItem);
                }
                dataAccess.AddDataRange(data);
            }

            // Launch the site
            await RunCommand("dotnet", "build");
            await RunCommand("dotnet", "run", 5);

            System.Diagnostics.Process.Start(@"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe", @"https://localhost:5001");
        }

Okay, so let’s break this down: there are essentially three sections to this: configuration, adding the data, and running the app.

Configuration

We’ll start with the configuration:

        IConfiguration configuration = new ConfigurationBuilder()
            .AddJsonFile("appsettings.json", true, true)
            .Build();

        // Ensure the DB is populated
        var dataAccess = new TestDataAccess(configuration.GetConnectionString("ConnectionString"));

Because we’re using a console app, we’ll need to get the configuration; you could copy the appsettings.json, but my suggestion would be to add a link; that is, add an existing item, select that item from the main project, and then choose “Add As Link” (this is not a new feature).

This means that you’ll be able to change the config file, and it will be reflected in the test harness.

Creating the data

There’s not too much point in me covering what’s behind TestDataAccess – suffice it to say that it encapsulates the data access layer; which, as a minimum, requires the connection string.

It’s also worth pointing out that we check whether there is any data there before running it. Depending on your specific use-case, you may choose to remove this.
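For illustration only, here’s a rough sketch of the shape of such a class; the table, columns, and the MyData type are all invented for this example, and yours will depend entirely on your own data layer:

using System.Collections.Generic;
using Microsoft.Data.SqlClient;

// The shape of MyData is assumed for this sketch
public class MyData
{
    public string Name { get; set; }
}

public class TestDataAccess
{
    private readonly string _connectionString;

    public TestDataAccess(string connectionString)
    {
        _connectionString = connectionString;
    }

    public int GetDataCount()
    {
        using var connection = new SqlConnection(_connectionString);
        using var command = new SqlCommand("SELECT COUNT(*) FROM MyData", connection);
        connection.Open();
        return (int)command.ExecuteScalar();
    }

    public void AddDataRange(IEnumerable<MyData> data)
    {
        using var connection = new SqlConnection(_connectionString);
        connection.Open();

        foreach (MyData item in data)
        {
            using var command = new SqlCommand("INSERT INTO MyData (Name) VALUES (@name)", connection);
            command.Parameters.AddWithValue("@name", item.Name);
            command.ExecuteNonQuery();
        }
    }
}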

Building, running, and launching the app

Okay, so we’ve now added our data; we now want to build the main application – thanks to the command-line capabilities of .Net Core, this is much simpler than it was when we used to have to wrangle with MSBuild!

    // Launch the site            
    await RunCommand("dotnet", "build");
    await RunCommand("dotnet", "run", 5);

    await RunCommand(@"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe", @"https://localhost:5001");

RunCommand probably needs a little more attention, but before we look at that, let’s think about what we’re trying to do:

1. Build the application
2. When the application has built, run the application
3. Once the application is running, navigate to the site

RunCommand looks like this:

        private static async Task RunCommand(string command, string? args = null, int? waitSecs = -1)
        {
            Console.WriteLine($"Executing command: {command} (args: {args})");

            Process proc = new Process();
            proc.StartInfo.WorkingDirectory = @"..\..\..\..\MyApp.App";
            proc.StartInfo.FileName = command;
            proc.StartInfo.Arguments = args ?? string.Empty;

            proc.Start();

            if ((waitSecs ?? -1) == -1)
            {
                // No wait period specified: block until the process exits (the build case)
                proc.WaitForExit();
            }
            else if (waitSecs.Value > 0)
            {
                // A wait period specified: give the process a head start, then move on (the run case)
                await Task.Delay(waitSecs.Value * 1000);
            }
        }

“But this looks inordinately complicated for a simple wrapper for running a process!” I hear you say.

It is a bit, but the essence is this: when running the build command, we need to wait for it to complete, when running the run command, we can’t wait for it to complete, because it never will; but we do need to move to the next thing; and when we launch the site, we don’t really care whether it waits or not after that.

Summary

I appreciate that some of you may be regretting spending the time reading through this, as all I’ve essentially done is script some data creation and run an application; but I imagine there are some people out there, like me, that want to see (visually) what their app looks like with different data shapes.

Feature Flags in Asp.Net Core – Advanced Features – Targeting a Specific Audience using Feature Filters

In this post, I introduced the concept, and Microsoft’s implementation, of Feature Flags. In fact, there’s a lot more to both than I covered in this initial post. In this post, I want to cover how you could use this to turn a particular feature on for a single user, or a group of users. You can even specify groups for those users, and allow all, or some of those users to see the feature.

It’s worth noting that, at the time of writing, this functionality is currently only in preview; you’ll need the following NuGet package (or later) for it to work:

<PackageReference Include="Microsoft.FeatureManagement.AspNetCore" Version="2.2.0-preview" />

Before we get into how to use this, let’s have a quick look at the source. The interesting method is EvaluateAsync. Essentially this method returns a boolean indicating whether or not the feature is available; you could simply return true and the feature would always be enabled; but let’s see an abridged version of this method:

        public Task<bool> EvaluateAsync(FeatureFilterEvaluationContext context, ITargetingContext targetingContext)
        {
            . . .

            //
            // Check if the user is being targeted directly
            if (targetingContext.UserId != null &&
                settings.Audience.Users != null &&
                settings.Audience.Users.Any(user => targetingContext.UserId.Equals(user, ComparisonType)))
            {
                return Task.FromResult(true);
            }

            //
            // Check if the user is in a group that is being targeted
            if (targetingContext.Groups != null &&
                settings.Audience.Groups != null)
            {
                foreach (string group in targetingContext.Groups)
                {
                    GroupRollout groupRollout = settings.Audience.Groups.FirstOrDefault(g => g.Name.Equals(group, ComparisonType));

                    if (groupRollout != null)
                    {
                        string audienceContextId = $"{targetingContext.UserId}\n{context.FeatureName}\n{group}";

                        if (IsTargeted(audienceContextId, groupRollout.RolloutPercentage))
                        {
                            return Task.FromResult(true);
                        }
                    }
                }
            }

            //
            // Check if the user is being targeted by a default rollout percentage
            string defaultContextId = $"{targetingContext.UserId}\n{context.FeatureName}";

            return Task.FromResult(IsTargeted(defaultContextId, settings.Audience.DefaultRolloutPercentage));
        }

So, the process is that it looks for specific users and, if it finds them, they can see the feature; if it cannot find them then it looks through the groups (we’ll come back to IsTargeted later), and finally reverts to the DefaultRolloutPercentage – again, we’ll look into what that is later on.

Let’s start with a single user

If you have a look at the previous post, you’ll see that the Feature Management system is being added using the following syntax:

services.AddFeatureManagement();

In order to add one of the pre-defined filters, we’ll need to add to this like so:

            services.AddFeatureManagement()
                .AddFeatureFilter<TargetingFilter>();

You’ll also need the following using directive:

using Microsoft.FeatureManagement.FeatureFilters;

If you run this now, you’ll get the following error:

System.InvalidOperationException: ‘Unable to resolve service for type ‘Microsoft.FeatureManagement.FeatureFilters.ITargetingContextAccessor’ while attempting to activate ‘Microsoft.FeatureManagement.FeatureFilters.TargetingFilter’.’

The reason being that you need to provide an implementation of ITargetingContextAccessor. What exactly this looks like is up to the implementer, but you’ll find an example implementation here.
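To give a flavour, here’s a minimal sketch of such an accessor, along the lines of the example linked above; the “group” claim type is an assumption, and you’d use whatever your identity provider actually issues:

using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.FeatureManagement.FeatureFilters;

public class HttpContextTargetingContextAccessor : ITargetingContextAccessor
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public HttpContextTargetingContextAccessor(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public ValueTask<TargetingContext> GetContextAsync()
    {
        var user = _httpContextAccessor.HttpContext.User;

        var targetingContext = new TargetingContext
        {
            UserId = user.Identity?.Name,
            // "group" is an assumed claim type
            Groups = user.Claims.Where(c => c.Type == "group").Select(c => c.Value).ToList()
        };

        return new ValueTask<TargetingContext>(targetingContext);
    }
}

This would then be registered alongside the feature management services:

            services.AddHttpContextAccessor();
            services.AddSingleton<ITargetingContextAccessor, HttpContextTargetingContextAccessor>();
            services.AddFeatureManagement()
                .AddFeatureFilter<TargetingFilter>();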

We’ll come back to this shortly in the groups section.

Let’s now have a look at what our appsettings.json might look like:

  "FeatureManagement": {
    "MyFeature": {
      "EnabledFor": [
        {
          "Name": "Targeting",
          "Parameters": {
            "Audience": {
              "Users": [
                "[email protected]"
              ]
            }
          }
        }
      ]
    }
  }

If we have a look at the default HttpContextTargetingContextAccessor (see the link above for the ITargetingContextAccessor), we’ll see that the UserId is being set there:

            TargetingContext targetingContext = new TargetingContext
            {
                UserId = user.Identity.Name,
                Groups = groups
            };

This isn’t particularly controversial – at least the User Id part isn’t; however, it doesn’t have to be this; for example, you could get the family_name claim, and return that – and then you could target your feature to anyone named Smith. It’s a bit of a silly example, but the point is that you can customise how this works (in fact, you can write a completely custom filter, which I’ll probably cover in a later post).

This part of the Feature Management is not to be underestimated: you could release a feature, in live, to only one or two Beta Testers. However, it is quite specific; that’s where Groups come in.

Groups

Groups are slightly more interesting. You can specify which groups are in and out using the following configuration:

  "FeatureManagement": {
    "MyFeature": {
      "EnabledFor": [
        {
          "Name": "Targeting",
          "Parameters": {
            "Audience": {
              "Users": [],
              "Groups": [
                {
                  "Name": "Group1",
                  "RolloutPercentage": 80
                },
                {
                  "Name": "Group2",
                  "RolloutPercentage": 40
                }
              ],
              "DefaultRolloutPercentage": 20
            }
          }
        }
      ]
    }
  }

The RolloutPercentage here indicates what proportion of the group you wish to be included.

Do you remember the IsTargeted method from earlier? This is what it looks like:

        private bool IsTargeted(string contextId, double percentage)
        {
            byte[] hash;

            using (HashAlgorithm hashAlgorithm = SHA256.Create())
            {
                hash = hashAlgorithm.ComputeHash(Encoding.UTF8.GetBytes(contextId));
            }

            //
            // Use first 4 bytes for percentage calculation
            // Cryptographic hashing algorithms ensure adequate entropy across hash
            uint contextMarker = BitConverter.ToUInt32(hash, 0);

            double contextPercentage = (contextMarker / (double)uint.MaxValue) * 100;

            return contextPercentage < percentage;
        }

To an extent, this is a random selection, but it uses the User ID and feature name to calculate the hash for that selection (that’s what gets passed into contextId), meaning that the same user will see the same thing each time. You may also find, when playing with this, that small numbers don’t really match expectations; for example, at 40%, you would expect around two out of five users to see the feature, but when I ran my test, all five users could see it. Larger numbers work better, although the fact that this is tied to the User Id makes it a little tricky to test (you can’t simply launch the site and press Ctrl-F5 until it switches over).

Again, it’s worth pointing out that what a group is, is determined by you (or at least the creator of the HttpContextTargetingContextAccessor). This means that you can base this on a claim, on the first letter of the username, the time of day, anything you like. I haven’t tried it, but I suspect you could put a DB query in here, too. That’s probably not the best idea, because it gets called a lot, but I believe it’s possible.

Default Rollout Percentage

Here we have a catch-all – if the user is not in the group, and not identified as a user, this will allow you to expose your feature to a percentage of the user base. Again, this isn’t something you can easily check by refreshing your page, as it’s based on a hash of the user, group, and feature name. In fact, this won’t work very well at all if you’re not using any kind of identity.
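Whichever of these audience mechanisms applies, consuming the flag is unchanged from the previous post; as a quick reminder, here’s a sketch of a controller using the feature (the action names are invented, and the feature name matches the config above):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement;
using Microsoft.FeatureManagement.Mvc;

public class HomeController : Controller
{
    private readonly IFeatureManager _featureManager;

    public HomeController(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    // Anyone outside the targeted audience gets a 404 for this action
    [FeatureGate("MyFeature")]
    public IActionResult NewShinyThing() => View();

    // Or you can branch manually within an action
    public async Task<IActionResult> Index()
    {
        ViewData["ShowNewFeature"] = await _featureManager.IsEnabledAsync("MyFeature");
        return View();
    }
}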

References

http://dontcodetired.com/blog/post/Using-the-Microsoft-Feature-Toggle-Library-in-ASPNET-Core-(MicrosoftFeatureManagement)

https://github.com/microsoft/FeatureManagement-Dotnet

Calling a Web API from a Console App and Creating a Performance Test

While working on a proof of concept, I needed to create a dummy API, and then run a stress test on it. Whilst the activity itself may seem pointless (creating a templated API and stress testing it), I felt the process might be worth documenting.

Create the API

We’ll start with creating the API:

dotnet new webapi

If you just run this, you should see the default weather forecast endpoint return some random forecast data.

The next step is to call that from a console app.

Create a console app to call the API

Add a new console app project to your solution, and replace the code in Program.cs with the following:

    class Program
    {
        static HttpClient client = new HttpClient();
        static string path = "https://localhost:44356/weatherforecast";


        static async Task Main(string[] args)
        {
            Console.WriteLine("Press enter to start test");
            Console.ReadLine();
            string? data = await CallWeatherForecast();

            Console.WriteLine(data ?? "No data returned");
        }

        private static async Task<string?> CallWeatherForecast()
        {            
            HttpResponseMessage response = await client.GetAsync(path);
            if (response.IsSuccessStatusCode)
            {
                string data = await response.Content.ReadAsStringAsync();
                return data;
            }

            return null;
        }
    }

It’s worth noting that I switched on nullable reference types here (in the csproj file):

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <Nullable>enable</Nullable>
  </PropertyGroup>

To test this, you’ll need to start both the projects (either select Set Startup Projects… from the solution context menu, or run the API, and then right-click the console app and select Debug -> Start New Instance).

Once you’ve checked that this works, build the solution in release mode (remember, we’re going to be running a stress test, so debug mode will show skewed results).

Call the console app from JMeter

For the stress test, I’m going to use JMeter. You’ll need to download it from here.

I won’t go into too much detail about how to set this up, but briefly: extract the downloaded zip somewhere, and run the jmeter.bat file; you should be presented with the JMeter GUI.

Add a thread group, so we can simulate multiple users.

Then add an OS Process Sampler to run the console app.

Remember to run the API first, then click the green play arrow, and you’ll see the users ramp up.

We don’t have any listeners, so the results are, unfortunately, lost; let’s add a couple.

With those in place, we now have some information: how long these calls are taking on average, the error count, and so on. The throughput we’re getting is around 3/second… In fact, running a stress test locally on the same machine, it’s difficult to break the service, because as the resources get used up, the JMeter process itself suffers, too. This is a good reason to run JMeter from a VM in the cloud.

Whilst it’s quite difficult to kill the service, I certainly managed to slow it down considerably:

These figures are in milliseconds, which means that 90% of calls are taking over 2 minutes. This entire test took around 15 minutes, and around 10 requests per second was about the best it got to (I ran 10 loops of 1000 concurrent users).

There are a few things you can do to identify where the performance starts to degrade, but I’m more interested in what happens to these figures if I add DB access.

Note: when you’re playing with this, the reports don’t automatically clear between runs, so you have to select each one, right-click, and clear it.

Add calls to a DB

Let’s add Entity Framework to our project.
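I won’t cover the EF setup itself, but for context, here’s roughly what the context and entity look like; the shape is inferred from the controller code below, and the SQL Server provider is an assumption:

using System;
using Microsoft.EntityFrameworkCore;

public class DailyForecast
{
    public int Id { get; set; }
    public DateTime Date { get; set; }
    public int TemperatureC { get; set; }
    public string Summary { get; set; }
}

public class StressTestDbContext : DbContext
{
    public StressTestDbContext(DbContextOptions<StressTestDbContext> options)
        : base(options)
    {
    }

    public DbSet<DailyForecast> Forecast { get; set; }
}

Registered in ConfigureServices with something like:

            services.AddDbContext<StressTestDbContext>(options =>
                options.UseSqlServer(Configuration.GetConnectionString("ConnectionString")));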

We can then change the controller to allow adding new records and retrieval by date:

        public WeatherForecastController(
            ILogger<WeatherForecastController> logger,
            StressTestDbContext stressTestDbContext)
        {
            _logger = logger;
            _stressTestDbContext = stressTestDbContext;
        }

        [HttpGet]
        public IEnumerable<WeatherForecast> GetNew()
        {
            var rng = new Random();
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateTime.Now.AddDays(index),
                TemperatureC = rng.Next(-20, 55),
                Summary = Summaries[rng.Next(Summaries.Length)]
            })
            .ToArray();
        }
        
        [HttpGet("/[controller]/GetByDate/{dateTime}")]
        public IEnumerable<WeatherForecast> GetByDate(DateTime dateTime)
        {
            var forecasts = _stressTestDbContext.Forecast.Where(a => a.Date.Date == dateTime.Date);
            return forecasts;
        }

        [HttpPost]
        public IActionResult Post(WeatherForecast weatherForecast)
        {
            DailyForecast forecast = new DailyForecast()
            {
                Date = weatherForecast.Date,
                Summary = weatherForecast.Summary,
                TemperatureC = weatherForecast.TemperatureC
            };

            _stressTestDbContext.Add(forecast);
            if (_stressTestDbContext.SaveChanges() != 0)
            {
                return Ok();
            }
            return BadRequest();
        }

Finally, we can change the main program to call those functions:

        static async Task Main(string[] args)
        {
            Console.WriteLine("Press enter to start test");
            Console.ReadLine();

            ConsoleTitle("CallGetNewWeatherForecast");
            string? dataGet = await CallGetNewWeatherForecast();
            Console.WriteLine(dataGet ?? "No data returned");

            ConsoleTitle("AddForecast");
            string? dataAdd = await AddForecast(_rnd.Next(30),
                DateTime.Now.AddDays(_rnd.Next(10)), _summaries[_rnd.Next(_summaries.Length)]);
            Console.WriteLine(dataAdd ?? "No data returned");

            ConsoleTitle("CallGetWeatherForecast");
            string? dataGetDate = await CallGetWeatherForecast(DateTime.Now.AddDays(_rnd.Next(10)));
            Console.WriteLine(dataGetDate ?? "No data returned");
        }

        private static void ConsoleTitle(string title)
        {
            Console.ForegroundColor = ConsoleColor.Blue;
            Console.WriteLine(title);
            Console.ResetColor();
        }

        private static async Task<string?> CallGetNewWeatherForecast()
        {            
            HttpResponseMessage response = await client.GetAsync($"{path}");
            if (response.IsSuccessStatusCode)
            {
                string data = await response.Content.ReadAsStringAsync();
                return data;
            }

            return null;
        }

        private static async Task<string?> CallGetWeatherForecast(DateTime dateTime)
        {
            string dateString = dateTime.ToString("yyyy-MM-dd");

            HttpResponseMessage response = await client.GetAsync($"{path}/GetByDate/{dateString}");
            if (response.IsSuccessStatusCode)
            {
                string data = await response.Content.ReadAsStringAsync();
                return data;
            }

            return null;
        }

        private static async Task<string?> AddForecast(int temperature, DateTime date, string summary)
        {
            var forecast = new WeatherForecast()
            {
                TemperatureC = temperature,
                Date = date,
                Summary = summary
            };

            HttpResponseMessage response = await client.PostAsJsonAsync($"{path}", forecast);
            if (response.IsSuccessStatusCode)
            {
                string data = await response.Content.ReadAsStringAsync();
                return data;
            }

            return null;
        }

To get a sensible reading, you’ll need to do this against an empty database. For the first run, we’ll do 500 users and 3 iterations, and then check that that’s created the 1,500 records we expected.

EF async calls vs non-async

To satisfy a curiosity that I’ve had for a while, I’m now going to change the update API method to async:

        [HttpPost]
        public async Task<IActionResult> Post(WeatherForecast weatherForecast)
        {
            DailyForecast forecast = new DailyForecast()
            {
                Date = weatherForecast.Date,
                Summary = weatherForecast.Summary,
                TemperatureC = weatherForecast.TemperatureC
            };

            _stressTestDbContext.Add(forecast);
            if (await _stressTestDbContext.SaveChangesAsync() != 0)
            {
                return Ok();
            }
            return BadRequest();
        }

Again, 1,500 records were created, and I ran the same JMeter test to compare the reports.

What an interesting result. Making the update async seems to have slightly reduced the throughput. This is running locally, and I only have a 4 core machine, but I would have expected throughput to slightly increase here, rather than decrease. (One plausible explanation: async adds a small amount of overhead per request, and its real benefit, freeing up threads while waiting on I/O, only shows once the thread pool is under real pressure.)

Debugging an Asp.Net Core React Application in Azure

I’ve recently been working with an Asp.Net Core ReactJS application. When trying to debug this remotely, I switched on Development mode in order to get a stack trace when it crashed:

Instead of the stack trace, I got this:

An unhandled exception occurred while processing the request. AggregateException: One or more errors occurred. (One or more errors occurred. (The NPM script ‘start’ exited without indicating that the create-react-app server was listening for requests. The error output was: )) System.Threading.Tasks.Task.ThrowIfExceptional(bool includeTaskCanceledExceptions)
InvalidOperationException: The NPM script ‘start’ exited without indicating that the create-react-app server was listening for requests. The error output was:

This is, in fact, caused by the following code:

            app.UseSpa(spa =>
            {
                spa.Options.SourcePath = "ClientApp";

                if (_env.IsDevelopment())
                {
                    spa.UseReactDevelopmentServer(npmScript: "start");
                }
            });

This uses the setting “Development” to determine whether to start a local React server, which will fail on a remote server. However, I also want to see a stack trace, and that behaviour is controlled here:

            if (_env.IsDevelopment())            
            {
                app.UseDeveloperExceptionPage();
            }

The problem here is that “Development” has two functions: it displays a stack trace, and it manages all these variables that should only be set on your machine. What we need are two settings that both mean “Development”: one that means we’re running locally, and one that means we’re trying to debug. Start by adding a new environment variable.

You can set this to anything you choose… but I’ve gone with “LocalDevelopment”.

The next step is to find all the places that check IsDevelopment, and replace them. What we essentially want is this:

                //if (_env.IsDevelopment())
                if (_env.IsEnvironment("LocalDevelopment"))
                {

However, we can create our own extension method, so that the code looks a lot neater:

        public static bool IsLocalDevelopment(this IWebHostEnvironment env)
        {
            return (env.IsEnvironment("LocalDevelopment"));
        }

Remember that IsEnvironment() is actually an extension method itself, so you would need to include:

using Microsoft.Extensions.Hosting;

in your extension class.

What to change

The following places will, at a minimum, need replacing for a standard web app. The stack trace should be displayed in either situation:

        public void Configure(IApplicationBuilder app)
        {
            if (_env.IsDevelopment() || _env.IsLocalDevelopment())                  
            {
                app.UseDeveloperExceptionPage();
            }

The React check that started all this:

            app.UseSpa(spa =>
            {
                spa.Options.SourcePath = "ClientApp";

                if (_env.IsLocalDevelopment())
                {
                    spa.UseReactDevelopmentServer(npmScript: "start");
                }
            });

Also, if you’re using local secrets, you’ll need this in Program.cs:

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder
                        .ConfigureAppConfiguration((hostingContext, config) =>
                        {
                            if (hostingContext.HostingEnvironment.IsEnvironment("LocalDevelopment"))
                            {
                                config.AddUserSecrets<Program>();
                            }

Because, by default, local secrets are only added for the Development environment.

Summary

That’s it, you can now set the ASPNETCORE_ENVIRONMENT to Development on a remote server, and you should get a stack trace.

Add Storage Queue Message

I’ve written quite extensively in the past about Azure, and Azure Storage. I recently needed to add a message to an Azure storage queue, and realised that I had never written a post about that, specifically. As with many Azure focused .Net activities, it’s not too complex; but I do like to have my own notes on things.

If you’ve arrived at this post, you may find it’s very similar to the Microsoft documentation.

How to add a message

The first step is to install a couple of NuGet packages:

Install-Package Microsoft.Azure.Storage.Common
Install-Package Microsoft.Azure.Storage.Queue

My preference for these kinds of things is to create a helper: largely so that I can mock it out for testing; however, even if you fundamentally object to the concept of testing, you may find such a class helpful, as it keeps all your code in one place.

using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Queue;

public class StorageQueueHelper
{
    private readonly string _connectionString;
    private readonly string _queueName;

    public StorageQueueHelper(string connectionString, string queueName)
    {
        _connectionString = connectionString;
        _queueName = queueName;
    }

    public async Task AddNewMessage(string messageBody)
    {
        var queue = await GetQueue();

        CloudQueueMessage message = new CloudQueueMessage(messageBody);
        await queue.AddMessageAsync(message);
    }

    private async Task<CloudQueue> GetQueue()
    {
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(_connectionString);
        CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference(_queueName);
        await queue.CreateIfNotExistsAsync();

        return queue;
    }
}

The class above works for a single queue, and storage account. Depending on your use case, this might not be appropriate.

The GetQueue() method here is a bit naughty, as it actually changes something (or potentially changes something). Essentially, all it’s doing is connecting to a cloud storage account, and then getting a reference to a queue. We know that the queue will exist, because we’re forcing it to (CreateIfNotExistsAsync()).

Back in AddNewMessage(), once we have the queue, it’s trivial to simply create the message and add it.
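Usage is then a one-liner; the connection string and queue name here are placeholders:

var helper = new StorageQueueHelper(
    "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey",
    "my-queue");

await helper.AddNewMessage("{ \"orderId\": 42 }");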

References

https://docs.microsoft.com/en-us/azure/storage/queues/storage-tutorial-queues

Debugging GitHub Actions

I’ve recently been playing with GitHub actions. Having been around the block a few times, I’ve seen a fair few methods of building and deploying software, and of those, a fair few that are automated in some way. Oddly, debugging these things seems to be in the same place it was around 10 years ago: you see an error, try to work out what caused it, fix it, and run the build / deploy, rinse and repeat.

In some respects, this process may have actually become harder to debug since the days of TFS (at least with TFS, you could connect to the server and see why the software wasn’t building).

Anyway, onto GitHub actions

I’ve been trying to set up an automated CI/CD pipeline for a new utility that I’ve been playing with.

After I’d configured a basic build, I started getting the following error:

MSBUILD : error MSB1003: Specify a project or solution file. The current working directory does not contain a project or solution file.

So, I thought future me (and perhaps one or two other people) may like to see the process that I went through to resolve (or at least to diagnose) this.

1. Git Bash

Your build is very likely trying to run on a Linux platform. If you have a look at your build file, it will tell you (or, more accurately, you will tell it) where it’s building; look for the runs-on value (for example, runs-on: ubuntu-latest).

So, the first step is to load up bash and manually type in the commands that the build is executing, one at a time; again, these are all in the build file.

Make sure that you do this from the root directory, in case your problem relates to the path.

2. Add Debug Steps

Assuming that you’ve done step one and everything is working, the next stage is to start adding some logging. I strongly suspect that someone reading this will tell me there’s an easier way to go about this (please tell me there’s an easier way to go about this!) but this is how I went about adding tracing:

steps:
  - uses: actions/checkout@v2
  - name: Setup .NET Core
    uses: actions/setup-dotnet@v1
    with:
      dotnet-version: 3.1.101
  - name: where are we
    run: pwd
  - name: list some key files
    run: ls -lrt
  - name: try a different dir
    run: ls -lrt websitemeta
  - name: Install dependencies
    run: dotnet restore ./websitemeta/

As you can see, I was working under the assumption that my build was failing because the directory paths were incorrect. In fact, the paths were fine.

With the logging stage, there are two ways to look at this:
1. You're closely following the scientific method of establishing a hypothesis and testing it; or
2. You're blindly logging as much information as you can to try and extract a clue.

You’d be surprised how quickly 1 turns into 2!

3. Remember the platform that you’re running on

Okay, so in Step 1, I stated that you should try running the build in bash; but remember that, despite the Unix-like interface, you’re still on Windows, with a case-insensitive file system. As you can see from my build file, my build is on Ubuntu. In my particular case, this held the key: my build was failing because I’d used the wrong case for the directory path, which works fine on Windows, but not on Linux.

This is also true for your tests; for example, if you’ve hard-coded a directory path like this in your test:

string path = @"tmp\myfile.txt";

It will fail, because in Unix, it would be:

string path = "tmp/myfile.txt";
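Where possible, the safest option is not to hard-code the separator at all, and let the framework pick the right one for whichever platform the tests happen to run on:

using System.IO;

// Produces tmp\myfile.txt on Windows and tmp/myfile.txt on Linux
string path = Path.Combine("tmp", "myfile.txt");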