Handling Events Inside an Azure Function

While writing an Azure function, I established a simple repository pattern using the standard IoC for Functions.

However, when I ran the project up, I started getting the following error:

System.ObjectDisposedException: ‘Cannot access a disposed object. A common cause of this error is disposing a context that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling Dispose() on the context, or wrapping the context in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances.’

Odd!

This was very unexpected. The path of code that led to this was a very basic Add, so I was confident that the object had not been intentionally disposed.

If a write caused this, then what about a read?

I tried reading from the context at the same point, and saw the same effect. Finally, I injected the DB Context into the function itself and used it on the first line of the function, and it worked fine.

In fact, the issue was that I was trying to read from an Azure Service Bus here:

var client = new QueueClient(connectionStringBuilder, ReceiveMode.PeekLock); 
client.RegisterMessageHandler(handler, new MessageHandlerOptions(onException));

Before we continue, I realise that someone reading this will probably announce loudly: “You’re doing it wrong! You should use the Function Binding to read from a Service Bus Queue!” To that person, I reply loudly, and proudly: “Yes, you’re right – I’m doing it wrong!”

Back to the story. So, after I left, I saw … oops – wrong story.

The issue here is that the event is losing the context of the function, because the function can complete before the event fires.

As it happens, I’ve solved this problem before. To fix it, you can use a TaskCompletionSource to set up something similar to the following:

private static TaskCompletionSource<bool> _tcs;

[FunctionName("MyFunction")]
public async Task Run([TimerTrigger("0 */1 * * * *")]TimerInfo myTimer)
{
    _tcs = new TaskCompletionSource<bool>();

    var client = new QueueClient(connectionStringBuilder, ReceiveMode.PeekLock); 

    client.RegisterMessageHandler(handler, new MessageHandlerOptions(onException));

    await _tcs.Task;
}

private Task handler(Message message, CancellationToken cancellationToken)
{            
    _myService.AddData(someData);

    _tcs.SetResult(true);

    . . . 
}

Summary

As stated, if your problem is that you’re trying to use the service bus client, then you are probably doing it wrong. It’s something of a shame that the new Service Bus SDK doesn’t support a GetNextMessage() style syntax, but it doesn’t, and so you get forced into the Function Bindings (not that I have any specific issue with them).
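For reference, the binding approach looks something like this – just a sketch, where the queue name (“myqueue”) and the connection setting name are placeholders rather than real values:

```csharp
// Hypothetical binding sketch - "myqueue" and "ServiceBusConnection" are
// placeholders. The Functions runtime receives, locks, and completes the
// message for you, so no QueueClient or event handler is needed.
[FunctionName("MyQueueFunction")]
public void Run(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string myQueueItem,
    ILogger log)
{
    log.LogInformation($"Received: {myQueueItem}");
}
```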

However, there are many reasons why you may need or choose to use an event inside a function, and if you do, this is one possible way to do so.

It’s worth pointing out that the above solution will only work for the first call of the event. If you have multiple calls then you’ll need to somehow determine when the last one has occurred.
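One way of handling multiple calls might look like the following – a runnable sketch (outside of Azure) that counts messages down and only completes the task on the last one. The expected count is an assumption for illustration: in practice you’d need some way of deciding how many messages constitute “done”.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class CountdownCompletion
{
    public static async Task<int> HandleAll(int expectedMessages)
    {
        int remaining = expectedMessages;
        int handled = 0;
        var tcs = new TaskCompletionSource<bool>();

        // Stand-in for the Service Bus message handler: in the real function,
        // this is what RegisterMessageHandler would invoke for each message
        Task Handler(int messageId)
        {
            handled++;

            // Only complete the awaited task when the last expected message arrives
            if (Interlocked.Decrement(ref remaining) == 0)
            {
                tcs.TrySetResult(true);
            }
            return Task.CompletedTask;
        }

        for (int i = 1; i <= expectedMessages; i++)
        {
            await Handler(i);
        }

        await tcs.Task; // as in the function above, this holds until we're done
        return handled;
    }

    public static void Main() => Console.WriteLine(HandleAll(3).Result); // prints 3
}
```

TrySetResult is used rather than SetResult so that a stray extra message can’t throw.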

References

https://www.pmichaels.net/2020/09/19/creating-and-using-an-azure-service-bus-in-net-core/

Add Evaluation to ML.NET Model

I’ve recently been playing around with ML.NET. I’ve documented some of my escapades here.

One thing that I found was that, when trying to work out how effective the model created from the data was, I found myself manually rifling through the data I had, essentially comparing each result by hand. As a result, I created EvaluateMLNet – a small NuGet package that does this for you.

Step 1 – Import the package

If you follow the previous post, you’ll find yourself in a position where you have a Model project and a ConsoleApp project:

In order to use the package, start by importing the NuGet package into the ConsoleApp project:

Install-Package EvaluateMLNet

Step 2 – Add the data

The next stage is to have some data to test your model against. Add this to your ConsoleApp project, and remember to set the Copy if Newer or Copy Always on the file.

Step 3 – Code

The main program will look something like this to begin with:

        static void Main(string[] args)
        {
            // Create single instance of sample data from first line of dataset for model input
            ModelInput sampleData = new ModelInput()
            {
                Season = @"2019-2020",
                Time = @"7:00pm",
                Home_team_name = @"Liverpool",
                Away_team_name = @"Norwich City",
                Referee = @"Michael Oliver",
                Stadium_name = @"Anfield (Liverpool)",
            };

            // Make a single prediction on the sample data and print results
            var predictionResult = ConsumeModel.Predict(sampleData);

            Console.WriteLine($"Season: {sampleData.Season}");
            Console.WriteLine($"Time: {sampleData.Time}");
            Console.WriteLine($"Home_team_name: {sampleData.Home_team_name}");
            Console.WriteLine($"Away_team_name: {sampleData.Away_team_name}");
            Console.WriteLine($"Referee: {sampleData.Referee}");
            Console.WriteLine($"Stadium_name: {sampleData.Stadium_name}");
            Console.WriteLine($"\n\nPredicted home_team_score: {predictionResult.Score}\n\n");
            Console.WriteLine("=============== End of process, hit any key to finish ===============");
            Console.ReadKey();
        }

Instead of that, start by extracting a PredictData method – that is, everything after this comment:

// Make a single prediction on the sample data and print results

This should give you:

        static void Main(string[] args)
        {
            // Create single instance of sample data from first line of dataset for model input
            ModelInput sampleData = new ModelInput()
            {
                Season = @"2019-2020",
                Time = @"7:00pm",
                Home_team_name = @"Liverpool",
                Away_team_name = @"Norwich City",
                Referee = @"Michael Oliver",
                Stadium_name = @"Anfield (Liverpool)",
            };

            PredictData(sampleData);
        }

        private static float PredictData(ModelInput sampleData)
        {
            // Make a single prediction on the sample data and print results
            var predictionResult = ConsumeModel.Predict(sampleData);

            Console.WriteLine($"Season: {sampleData.Season}");
            Console.WriteLine($"Time: {sampleData.Time}");
            Console.WriteLine($"Home_team_name: {sampleData.Home_team_name}");
            Console.WriteLine($"Away_team_name: {sampleData.Away_team_name}");
            Console.WriteLine($"Referee: {sampleData.Referee}");
            Console.WriteLine($"Stadium_name: {sampleData.Stadium_name}");

            return predictionResult.Score;
        }

Note that we’re now also returning the result of the prediction. In fact, that’s all this method needs to do – the Console.WriteLines are unnecessary.

Finally, replace the Main method with the following:

        static void Main(string[] args)
        {
            var runEvaluation = new RunEvaluation();
            var resultStats = runEvaluation.Run<ModelInput, float>("my-data-file.csv",
                "Predicted_field_name", PredictData, 0);

            Console.WriteLine("Results");
            Console.WriteLine("Total evaluated results: {0}", resultStats.EvaluatedCount);
            Console.WriteLine("Total success results: {0}", resultStats.SuccessCount);
            Console.ReadLine();            
        }

A few comments about this code:

1. The “Predicted_field_name” is the name of the field in the class ModelInput. It’s very likely to have a capitalised first letter.
2. My data is predicting a float – if yours is not then you’ll need to change this.
3. The margin of error here is 0; that means a prediction is only considered a success when the actual value falls within the integers either side of the prediction; for example, if the prediction was 1.3, then 1 and 2 would be considered a success, but 0 and 3 would not.
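To make the margin rule concrete, here’s my reading of it as a sketch – this is an illustration of the rule described above, not the package’s actual code:

```csharp
using System;

public static class MarginCheck
{
    // A prediction counts as a success when the actual value falls between
    // floor(prediction) - margin and ceiling(prediction) + margin
    public static bool IsSuccess(float predicted, int actual, int margin) =>
        actual >= Math.Floor(predicted) - margin &&
        actual <= Math.Ceiling(predicted) + margin;

    public static void Main()
    {
        // With a prediction of 1.3 and a margin of 0:
        Console.WriteLine(IsSuccess(1.3f, 1, 0)); // True
        Console.WriteLine(IsSuccess(1.3f, 2, 0)); // True
        Console.WriteLine(IsSuccess(1.3f, 0, 0)); // False
        Console.WriteLine(IsSuccess(1.3f, 3, 0)); // False
    }
}
```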

That’s it – the output will look something like this:

Summary

I realise that this is feeding a very niche crowd, but hopefully it’ll save someone a Saturday afternoon.

Predicting Football Results Using ML.Net

Have you ever wondered why milk is where it is in the supermarket? At least in the UK, the supermarkets sell milk either at cost, or even below cost, in order to attract people into the store: they don’t want you to just buy milk, because they lose money on it. You’ll know the stuff they want you to buy, because it’s at eye level, and it’s right in front of you when you walk in the door.

ML.NET is an open source machine learning platform. As with many things that Microsoft are the guardian of, they want to sell you Azure time, and so this is just another pint of milk at the back of the shop. Having said that – it’s pretty good milk!

In this post, I’m going to set-up a very simple test. I’ll be using this file. It shows the results of the English Premier League from 2018-2019. I’m not a huge football fan myself, but it was the only data I could find at short notice.

Add ML.NET

ML.NET is in preview, so the first step is to add the feature. Oddly, it’s under the “Cross Platform Development” workload:

Once you’ve added this, you may reasonably expect something to change; it does, in a way – you’ll see a new context menu when you right-click a project – but it won’t do anything. This is, bizarrely, because you need to explicitly enable preview features; under Tools -> Options, you’ll find this menu:

Let’s create a new console application; then right click on the project:

You’re now given a list of “scenarios”:

For our purpose, let’s select “Value prediction”. We’re going to try to predict the number of goals for the home team, based on the shots on goal. Just select the file as input data and the column to predict as home_team_goal_count:

For the input, just select home_team_goal_count and then Train:

It asks you for a time here. The longer you give it, the better the model – although there will be a point at which additional time won’t make any difference. You should be able to get a reasonable prediction after 10 seconds, but I’ve picked 60 to see how good a prediction it can make. As someone who knows nothing about football, I would expect these figures to be an almost direct correlation.

Once you’ve finished training the model, you can Evaluate it:

So, it would appear that with 9 shots at goal, I can expect that a team will score between 1 and 2. If I now click the code button, ML.NET will create two new projects for me, including a new Console application; which looks like this:

        static void Main(string[] args)
        {
            // Create single instance of sample data from first line of dataset for model input
            ModelInput sampleData = new ModelInput()
            {
                Home_team_shots = 9F,
            };

            // Make a single prediction on the sample data and print results
            var predictionResult = ConsumeModel.Predict(sampleData);

            Console.WriteLine("Using model to make single prediction -- Comparing actual Home_team_goal_count with predicted Home_team_goal_count from sample data...\n\n");
            Console.WriteLine($"Home_team_shots: {sampleData.Home_team_shots}");
            Console.WriteLine($"\n\nPredicted Home_team_goal_count: {predictionResult.Score}\n\n");
            Console.WriteLine("=============== End of process, hit any key to finish ===============");
            Console.ReadKey();
        }

Let’s modify this slightly, so that we can simply ask it to predict the goal count:

        static void Main(string[] args)
        {
            Console.WriteLine("Enter shots at goal: ");
            string shots = Console.ReadLine();
            if (int.TryParse(shots, out int shotsNum))
            {
                PredictGoals(shotsNum);
            }
        }

        private static void PredictGoals(int shots)
        {
            // Create single instance of sample data from first line of dataset for model input
            ModelInput sampleData = new ModelInput()
            {
                Home_team_shots = shots,
            };

            // Make a single prediction on the sample data and print results
            var predictionResult = ConsumeModel.Predict(sampleData);

            Console.WriteLine("Using model to make single prediction -- Comparing actual Home_team_goal_count with predicted Home_team_goal_count from sample data...\n\n");
            Console.WriteLine($"Home_team_shots: {sampleData.Home_team_shots}");
            Console.WriteLine($"\n\nPredicted Home_team_goal_count: {predictionResult.Score}\n\n");
            Console.WriteLine("=============== End of process, hit any key to finish ===============");
            Console.ReadKey();
        }

And now, we can get a prediction from the app:

29 shots at goal result in only 2 – 3 goals. We can glance at the spreadsheet to see how accurate this is:

It appears it is actually quite accurate!

Debugging a Failed API Request, and Defining an Authorization Header Using Fiddler Everywhere

Postman is a really great tool, but I’ve recently been playing with Telerik’s new version of Fiddler – Fiddler Everywhere. You’re probably thinking that these are separate tools, that do separate jobs… and before Fiddler Everywhere, you’d have been right. However, have a look at this screen:

…In fact it’s not Postman. The previous version of this tool (called Compose) from Fiddler 4 was pretty clunky – but now you can simply right-click on a request and select “Edit in Compose”.

Use Case

Let’s imagine for a minute, that you’ve made a request, and you got a 401 because the authentication header wasn’t set. You can open that request in Fiddler:

In fact, this returns a nice little web snippet, which we can see by selecting the Web tab:

The error:

The request requires HTTP authentication

This error means that authentication details have not been passed to the API; typically, these can be passed in the header, in the form:

Authorization: Basic EncodedUsernamePassword

So, let’s try re-issuing that call in Fiddler – let’s start with the encoding. Visit this site and, in the Encode section, enter your username and password in the form:

Username:password

For example:

In Fiddler, right click on the request in question, and select to Edit in Compose. You should now see the full request, and be able to edit any part of it; for example, you can add an Authorization header:

Now that you’ve proved that works, you can make the appropriate change in the code – here’s what that looks like in C#:

            byte[] bytes = Encoding.UTF8.GetBytes($"{username}:{password}");
            var auth = Convert.ToBase64String(bytes);

            var client = _httpClientFactory.CreateClient();
            client.DefaultRequestHeaders.Add("Authorization", $"Basic {auth}");

Tag Helper Not Working?

I recently had an issue where I was trying to use a tag helper that wasn’t correctly rendering. The HTML that I had (which was trying to call MyController.MyAction) was this:

            <form asp-controller="My" asp-action="MyAction" method="post">
                <input type="submit" value="Do Crazy Stuff!" />
            </form>

This wasn’t working. The first step when a tag helper isn’t working is always to check the rendered HTML. Tag Helpers should be converted to proper HTML – if they’re actually sat on the page as a tag helper then nothing will happen, because Chrome and Firefox don’t know what the hell a tag helper is.

The rendered HTML should look something like this:

<form method="post" action="/My/MyAction">
    <input type="submit" value="Do Crazy Stuff!">
    <input name="__RequestVerificationToken" type="hidden" value="requestverficationtokendsldjlcjsoihcnwoncwoin">
</form>

However, I was seeing the following:

<form asp-controller="My" asp-action="MyAction" method="post">
    <input type="submit" value="Do Crazy Stuff!">
</form>

Like I said – this is no good, as browsers don’t know what to do with “asp-controller” et al.

But Why?

Having done some research, it looks like (pretty much) the only reason for this is that you (or I) don’t have the following line in the _ViewImports file:

@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

Or, if you do have it, then for whatever reason, the _ViewImports file itself can’t be located.

For any Regular Readers* of this blog, you may have noticed this post regarding setting up a project to use feature folders.

One of the interesting features of _ViewImports is that it applies to the folder that it’s in, and any sub-folders; my folder structure looked like this:

- Feature1
-- Index.cshtml
-- Feature1.cshtml
- Feature2
-- Index.cshtml
-- Feature2.cshtml
- Views
-- _ViewImports
-- Shared

As a result, the _ViewImports will only apply inside the Views folder.

The Solution(s)

I can see three possibilities here. Each has benefits and drawbacks.

Create a Symbolic Link

One possibility is to create a symbolic link between the _ViewImports in the Views directory and the feature folder. Launch a command prompt (as admin):

C:\Project\src\MyApp\Feature1>mklink _ViewImports.cshtml ..\Views\_ViewImports.cshtml

symbolic link created for _ViewImports.cshtml <<===>> ..\Views\_ViewImports.cshtml

This works fine, but symbolic links can cause issues; especially if you’re copying code around (for example, with a source control system).

Just copy the file

Obviously, you could simply copy the _ViewImports file to the relevant directory. The benefit of this is that each folder only has the relevant imports for that folder; the downside being that you need to maintain all of these (in practice, the file shouldn’t be huge and complex, so this may be the easiest option).

Parent Directory

Finally, you could simply introduce a parent folder; for example:

- Features
-- _ViewImports
-- Feature1
--- Index.cshtml
--- Feature1.cshtml
-- Feature2
--- Index.cshtml
--- Feature2.cshtml
- Views
-- Shared

This is probably my favourite solution – it gives you the best of both worlds, but it does mean that you need to re-structure your solution.

References

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/layout?view=aspnetcore-3.1

Notes

* I’m curious as to whether anyone does read this on a regular basis. Most of the traffic seems to come from Google. If you are a regular reader then please drop a comment.

CSS Animations Sprite Sheet

I’ve recently been investigating the prospect of creating a web-site where an animation (admittedly a cheesy one) is played out when you load the page. My idea was that two sprites would walk to the centre of the screen – I was actually thinking of mimicking something like Manic Miner. Initially, I thought I’d probably need to do some messing around with image manipulation in Javascript, but this gave me the idea that I might be able to run the entire thing through CSS.

Sprite Sheet

The first thing that you need to get is something to animate. The principle of a sprite sheet is that you have many images in a single image file. This is simply a speed trick – it’s faster to pull down a single image, and cache it, than to constantly load separate images for each part of the animation.

If you have separate images, then for this post to work for you, you’ll need to combine them into a single sprite sheet. The tool suggested in the video above is Sprite Sheet Packer. I’ve used this before, and it does the job – you can also do this manually (although you would have to be careful about the spacing).

Now that we have a sprite sheet, we can add it to our project; in my case, I’ve put it in wwwroot\assets.

Let’s talk about how we can layout our page, and animate it; we’ll start with the HTML.

HTML

The HTML here is the simplest part: we just want two divs:

<div id="testdiv"></div>
<div id="testdiv2"></div>

That’s all the HTML; everything else is CSS; let’s start with the animation.

CSS

Onto the CSS, which is the crux of the whole thing. Let’s start with the @keyframes. This is the part where you can actually define an animation. In our case, we’ll need three of them: one to move the sprite left, one to move it right, and one to animate it.

Move Left & Right

The animation to move an element on the screen is pretty straightforward, you just tell it where to start, and where to stop:

@keyframes moverightanimation {
    from { 
        left: 10%; 
    }
    to {
        left: calc(50% - 25px);
    }
}

@keyframes moveleftanimation {
    from {
        left: 90%;
    }
    to {
        left: calc(50% + 25px);
    }
}

As you can see, you can start (or stop) at an absolute position (10px for example), or a percentage, or a calculated value.

Animate

The animation is a bit strange; here’s the code:

@keyframes move {
    100% {
        background-position: -72px 0px;
    }
}

What this is doing is setting the final position to be negative the width of the full sprite sheet (this will only work for horizontal sheets). We’ll then tell it to step through the images in the element itself. Here’s the CSS for one of the divs:

#testdiv {
    position: absolute;
    left: 20%;
    top: 300px;
    width: 24px;
    height: 30px;    
    animation: moverightanimation 4s forwards, move 1s steps(3) infinite;
    background: transparent url('../assets/spritesheet.png') 0 0 no-repeat;    
}

Here, we’re calling multiple animations (moverightanimation and move); for the move, we’re specifying a step – that is, we’re telling it that it needs to get from where it currently is, to 100% over 3 steps, and there are 3 sprites in my sprite sheet, so it will helpfully divide one by the other and come up with a value to increase by each time.

The opposite call does almost the same:

#testdiv2 {
    position: absolute;
    left: 80%;
    top: 300px;
    width: 24px;
    height: 30px;    
    animation: moveleftanimation 4s forwards, move 1s steps(3) infinite;
    background: transparent url('../assets/spritesheet2.png') 0 0 no-repeat;
}

Summary

As someone who spends as little time as they can messing with UI and CSS, I thought this was a fun little exercise.

Downloading from an SFTP site using SSH.Net

I’ve done this a few times, and have failed to document it, and so each time it’s a pain. To be clear, if you’re downloading from FTP, you should have a look here: it’s an excellent and simple code snippet that will do the job for you.

However, this won’t work with SFTP. Having looked into this, it looks like there are basically two options: Chilkat if you have some money to spend, and SSH.NET if you don’t. I actually implemented Chilkat before realising it was commercial – it’s a much easier experience, and it’s commercially supported. I’m not being paid by them to say this, and you can easily get by with SSH.NET (in fact, that’s the subject of this post); but there are advantages to going with a commercial option.

Using SSH.NET

The latest version of SSH.NET was released in 2016. There does appear to be an update being worked on, but the NuGet package (at the time of writing) is from 2016:

Install-Package SSH.NET

There’s some pretty good documentation on the GitHub site, and the two links in the references offer wrapper implementations. What’s here is not really any better than what’s there, but I hadn’t seen a post with the code in (plus, I like to have these things documented in my own words).

Client

The first thing you’ll need for each call is a client; I’ve separated mine into a method:

SftpClient GetClient()
{
    var connectionInfo = new PasswordConnectionInfo(url, port, username, password);

    var client = new SftpClient(connectionInfo);
    client.Connect();
    return client;
}

If you’re not sure what your port is, it’s probably 22, although I can’t help with the rest. We’re going to cover 5 basic methods here: List, Upload, Download, Read and Delete.

List

        IEnumerable<SftpFile> ListFiles(string directory)
        {
            using var client = GetClient();
            try
            {                
                return client.ListDirectory(directory);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

There’s not much to explain here – ListDirectory returns a list of SftpFiles. The parameter directory is the directory on the remote server; if you want to access the base directory, then directory = “.”. It’s worth looking at the finally block, though. You should disconnect the client when you’re done.

Upload

        void UploadFile(string localPath, string remotePath)
        {
            using var client = GetClient();
            try
            {
                using var s = File.OpenRead(localPath);
                client.UploadFile(s, remotePath);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

Again, not much here: simply creating a stream, and passing it to client.UploadFile().

Download

This is basically the reverse of UploadFile. In this case, we create the stream locally and download to it:

        void DownloadFile(string remotePath, string localPath)
        {
            using var client = GetClient();
            try
            {
                using var s = File.Create(localPath);
                client.DownloadFile(remotePath, s);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

Read

The Read functionality is, perhaps, the most trivial, and the most useful:

        string ReadFile(string remotePath)
        {
            using var client = GetClient();
            try
            {                
                return client.ReadAllText(remotePath);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

Depending on your use case, this might be all you need.

Delete

Finally, the Delete method:

        void DeleteFile(string remotePath)
        {
            using var client = GetClient();
            try
            {
                client.DeleteFile(remotePath);
            }
            catch (Exception exception)
            {
                // Log error
                throw;
            }
            finally
            {
                client.Disconnect();
            }
        }

Summary

You might be wondering what the purpose of these wrapper functions is: they do little more than call the underlying SSH.NET library. The only reason I can give, other than that it provides some reusable documentation, is that one day a new version of SSH.NET might be out (or you may choose to switch to Chilkat). Having already done the opposite, I can attest to how much easier that is if you’re not picking through the main code trying to extract pieces of SSH.NET.

References

https://github.com/dotnet-labs/SftpService/blob/master/SFTPService/SftpService.cs

https://github.com/jorgepsmatos/SftpClientDemo/blob/master/Sftp.cs

Executing Dynamically Generated SQL in EF Core

Entity Framework Core is primarily concerned with defining and executing pre-defined queries on a DB table, or executing a simple join on two or more tables. You can do more, but that’s its sweet spot – and for good reason. Have a think about the last project you worked on: I reckon 95% of you will be thinking about a forms-over-data application. Get a list of orders, update the product price, create a new user: really basic CRUD operations. So it makes sense that a framework like EF Core should make the 95% as easy as possible.

But what if you’re in the 5%? What if you’re working on a project where you have a query joining 5 or 6 tables? Maybe you don’t even know which fields you’ll need to filter on. Well, for those users, EF Core provides two methods:

FromSqlRaw

And

FromSqlInterpolated

Both methods basically allow you to build your own SQL string and execute it against the DB Context. It’s worth remembering that, unlike ADO.NET, you can’t just parse whatever output comes back – you need a type ready to receive it; but that’s not the subject of this post. Here, we’re talking about a dynamically built query that returns a known type.

So, what are we trying to do?

The Problem

Let’s imagine that we have a table, called MyTable for the sake of argument, and MyTable has five columns:

MyTable
-- Field1
-- Field2
-- Field3
-- Field4
-- Field5

Now, let’s imagine that we have an app that allows the user to pick one or more fields to filter on (obviously, exposing the DB structure to the user is a bad idea unless you’re writing an SSMS clone, but bear with me here). When this comes through to EF, you’ve basically got three ways to implement this:

1. Dynamically build the query string and execute it directly.
2. Use (either in raw SQL, a stored procedure, or in Linq) the Field1 = filter OR filter = "" method.
3. Bring the data down to the client, and filter it there.

For the purpose of this post, we’re going to discuss option (1). All the options have merit, depending on the use case.

Let’s talk about building dynamic SQL, and some of the pitfalls.

Dynamic SQL

Building dynamic SQL is easy, right? You could just do this:

string sql = "select * from MyTable ";

if (!string.IsNullOrWhiteSpace(filter1))
{
    sql += $"where Field1 = {filter1}";
}

// Add additional fields, and deal with the WHERE / AND problem

var result = _myDbContext.MyTable.FromSqlInterpolated(sql);

So, this code is bad for several reasons. Let’s run through them.

1. It doesn’t compile

The first thing (although by far not the worst), is that this code won’t compile. The reason this won’t compile is that FromSqlInterpolated takes a FormattableString. Of course, this is easily correctable:

var result = _myDbContext.MyTable.FromSqlInterpolated($"{sql}");

Now the code compiles, but it probably doesn’t do what you want (to be clear, it probably will work at this point).

The next issue is one of security.

2. SQL Injection

If the field above (filter1) is set to: '1'; DROP TABLE MyTable; (or something equivalent), your app will execute it. This is because we’re not using placeholders. What does this mean?

1. FromSqlInterpolated accepts an interpolated string, but what we’re passing here is a pre-built string. The code being passed into FromSqlInterpolated needs to be interpolated at the time; e.g.:
– _myDbContext.MyTable.FromSqlInterpolated($"select * from MyTable where field1 = {filter1}");
2. Since this won’t work in our case, we’ll need to build up the query using FromSqlRaw, and pass in parameters.

3. Caching

The way that most (at least relational) databases work is that they try to cache the most frequently used queries. The problem is that, if you do something like the query above – "select * from MyTable where Field1 = 'myvalue'" – that exact text gets cached. If you run it again, but pass 'myvalue2', then that gets cached too. Run it 1000 times with different values, and other queries start to get pushed out of the cache.
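The cache-flooding point can be seen just by comparing the strings involved; a quick sketch (the table and field names are, as above, purely illustrative):

```csharp
using System;

public static class CacheKeys
{
    // The literal form bakes the value into the SQL text, so every distinct
    // value produces a distinct query string (and so a distinct cache entry)
    public static string Literal(string value) =>
        $"select * from MyTable where Field1 = '{value}'";

    // The parameterised form is the same text every time; the values travel
    // separately as parameters, so only one cache entry is needed
    public static string Parameterised() =>
        "select * from MyTable where Field1 = @filter1";

    public static void Main()
    {
        Console.WriteLine(Literal("myvalue") == Literal("myvalue2")); // False - two cache entries
        Console.WriteLine(Parameterised() == Parameterised());       // True - one cache entry
    }
}
```

The database’s query cache is keyed (roughly) on the query text, which is why the parameterised version only ever occupies one slot.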

So, how can we build up a dynamic SQL string, without leaving ourselves open to SQL injection, or flooding the cache?

A Solution

This is a solution; it is not the solution. In it, we’re reverting a little to an ADO.NET style of doing things, by providing SqlParameters. Let’s see what that might look like:

string sql =
    "select * " +
    "from MyTable ";

var parameters = new List<SqlParameter>();

int i = 1;
foreach (var filter in filters)
{
    // The first predicate needs WHERE; subsequent ones need AND
    if (i == 1)
    {
        sql += $"where Field{i} = @filter{i} ";
    }
    else
    {
        sql += $"and Field{i} = @filter{i} ";
    }
    parameters.Add(new SqlParameter($"@filter{i++}", filter));
}

var result = _myDbContext.MyTable
    .FromSqlRaw(sql, parameters.ToArray())
    .ToList();

We’re assuming that we have an array / list of filters here (in a real version you would only add the filters that are actually set, which is why the examples below skip some field numbers), and we end up with a query that looks something like this:

select *
from MyTable
where Field1 = @filter1
and Field3 = @filter3

Because these are placeholders, you’re protected against SQL injection, and the DB engine will cache the plan for this query (changing the parameter values doesn’t invalidate the cache entry). It’s worth bearing in mind, though, that if we run this again with a different combination of filters and end up with the following:

select *
from MyTable
where Field1 = @filter1
and Field4 = @filter4
and Field5 = @filter5

This will be cached separately, so you’d need to make a decision as to whether you’re likely to have few enough distinct query shapes that it doesn’t matter.
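One alternative (not from the original post; a sketch of the common “optional predicates” technique) is to keep a single query shape and let NULL parameters switch individual filters off, so the cache only ever sees one query text:

```csharp
// One fixed query text: a NULL parameter disables its predicate,
// so every combination of filters reuses the same cached plan.
string sql =
    "select * from MyTable " +
    "where (@filter1 is null or Field1 = @filter1) " +
    "and (@filter2 is null or Field2 = @filter2)";

var parameters = new object[]
{
    new SqlParameter("@filter1", (object)filter1 ?? DBNull.Value),
    new SqlParameter("@filter2", (object)filter2 ?? DBNull.Value)
};

var result = _myDbContext.MyTable
    .FromSqlRaw(sql, parameters)
    .ToList();
```

The trade-off is plan quality: one plan now has to serve every combination of filters, which the optimiser may handle less well than a tailored query.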

Summary

Quite often, people use EF as though the data were all local. It’s always worth remembering that each call goes across to the DB, despite the fact that Microsoft have gone to great lengths to make you think it doesn’t. Each time you touch the DBMS, something changes as a result: it might be obvious, like a record being inserted, or hidden, like the plan cache being updated. Either way, the DB is a service, and it is probably the most important service in your system.

This post is based on my knowledge of relational databases, so the same may not be completely true of NoSQL databases.

Asp.Net Core Routing and Debugging

I recently came across an issue whereby an Asp.Net Core app was not behaving in the way I expected. In this particular case, I was getting strange errors, and began to suspect that the controller that I thought was reacting to my call, in fact, was not, and that the routing was to blame.

Having had a look around the internet, I came across some incredibly useful videos by Ryan Nowak. One of the videos is linked at the end of this article, and I would encourage anyone working in web development using Microsoft technologies to watch it.

The particularly useful thing that I found in this was that, in Asp.Net Core 3.x and onwards, there is a clearly defined “Routing Zone” (Ryan’s term – not mine). It falls here:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    …
    app.UseRouting();

    // Routing Zone

    app.UseAuthentication();
    app.UseAuthorization();            

    // End

    app.UseEndpoints(endpoints =>
    …
}

This means that middleware and services that make use of routing should sit in this zone, but also that you can intercept the routing. For example:

    app.UseRouting();

    // Routing Zone

    app.Use(next => context =>
    {
        Console.WriteLine($"Found: {context.GetEndpoint()?.DisplayName}");
        return next(context);
    });

    app.UseAuthentication();
    app.UseAuthorization();            

    // End

    app.UseEndpoints(endpoints =>

This little nugget will tell you which endpoint you’ve been directed to. There’s actually quite a lot more you can do here, too. Once you’ve got the endpoint, it has a wealth of information about the attributes, filters, and all sorts of information that makes working out why your app isn’t going where you expect much easier.
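For example, the middleware above can be extended to dump the endpoint’s metadata (a sketch; exactly what appears depends on your controllers and attributes):

```csharp
app.Use(next => context =>
{
    var endpoint = context.GetEndpoint();
    if (endpoint != null)
    {
        Console.WriteLine($"Found: {endpoint.DisplayName}");

        // Each entry might be a route pattern, an [Authorize] attribute,
        // a filter, or any other metadata attached to the endpoint
        foreach (var item in endpoint.Metadata)
        {
            Console.WriteLine($"  Metadata: {item}");
        }
    }
    return next(context);
});
```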

References

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/overview?view=aspnetcore-3.1

https://www.youtube.com/watch?v=fSSPEM3e7yY

IConfiguration does not contain a definition for GetValue

When I search for something, I typically start with pmichaels.net [tab] [search term] – especially if I know I’ve come across a problem before. This post is one such problem: it’s not hard to find the solution, but it is hard to find the solution on this site (because until now, it wasn’t here).

The error (which is also in the title):

IConfiguration does not contain a definition for GetValue

Typically appears when you’re using IConfiguration outside of an Asp.Net Core app. In fact, GetValue is an extension method, so the solution is to simply add the following package:

Install-Package Microsoft.Extensions.Configuration.Binder
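Once the package is installed, GetValue<T> works anywhere you have an IConfiguration. For example (a minimal sketch, with an illustrative key name):

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

// Requires Microsoft.Extensions.Configuration plus the Binder package
var configuration = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string>
    {
        ["MySettings:RetryCount"] = "3"
    })
    .Build();

// GetValue<T> is the extension method the Binder package provides
int retryCount = configuration.GetValue<int>("MySettings:RetryCount");
```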

References

https://stackoverflow.com/questions/54767718/iconfiguration-does-not-contain-a-definition-for-getvalue