CSS Animations Sprite Sheet

I’ve recently been investigating the prospect of creating a website where an animation (admittedly a cheesy one) plays out when you load the page. My idea was that two sprites would walk to the centre of the screen; I was actually thinking of mimicking something like Manic Miner. Initially, I thought I’d probably need to do some messing around with image manipulation in JavaScript, but this gave me the idea that I might be able to run the entire thing through CSS.

Sprite Sheet

The first thing that you need to get is something to animate. The principle of a sprite sheet is that you have many images in a single image file. This is simply a speed trick – it’s faster to pull down a single image, and cache it, than to constantly load separate images for each part of the animation.

If you have separate images, then for this post to work for you, you’ll need to combine them into a single sprite sheet. The tool suggested in the video above is Sprite Sheet Packer. I’ve used this before, and it does the job – you can also do this manually (although you would have to be careful about the spacing).

Now that we have a sprite sheet, we can add it to our project; in my case, I’ve put it in wwwroot\assets.

Let’s talk about how we can lay out our page, and animate it; we’ll start with the HTML.

HTML

The HTML here is the simplest part: we just want two divs:

<div id="testdiv"></div>
<div id="testdiv2"></div>

That’s all the HTML; everything else is CSS; let’s start with the animation.

CSS

Onto the CSS, which is the crux of the whole thing. Let’s start with the @keyframes; this is where you actually define an animation. In our case, we’ll need three: one to move the sprite left, one to move it right, and one to animate it.

Move Left & Right

The animation to move an element on the screen is pretty straightforward: you just tell it where to start, and where to stop:

@keyframes moverightanimation {
    from { 
        left: 10%; 
    }
    to {
        left: calc(50% - 25px);
    }
}

@keyframes moveleftanimation {
    from {
        left: 90%;
    }
    to {
        left: calc(50% + 25px);
    }
}

As you can see, you can start (or stop) at an absolute position (10px for example), or a percentage, or a calculated value.

Animate

The animation is a bit strange; here’s the code:

@keyframes move {
    100% {
        background-position: -72px 0px;
    }
}

What this is doing is setting the final position to minus the width of the full sprite sheet (this will only work for horizontal sheets). We’ll then tell it to step through the frames in the element itself. Here’s the CSS for one of the divs:

#testdiv {
    position: absolute;
    left: 20%;
    top: 300px;
    width: 24px;
    height: 30px;    
    animation: moverightanimation 4s forwards, move 1s steps(3) infinite;
    background: transparent url('../assets/spritesheet.png') 0 0 no-repeat;    
}

Here, we’re calling multiple animations (moverightanimation and move); for move, we’re specifying steps(3): that is, we’re telling it to get from where it currently is to 100% in 3 discrete steps. There are 3 sprites in my sprite sheet, so each step shifts the background-position by exactly one frame’s width.
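The same pattern generalises (assuming a horizontal sheet of equally sized frames): for N frames of width W, the element is W pixels wide, the keyframe ends at a background-position of minus N × W pixels, and the animation uses steps(N). For instance, a hypothetical four-frame sheet of 24px frames (the file name here is made up):

```css
/* Hypothetical four-frame sheet: 4 × 24px = 96px wide in total */
#fourframes {
    position: absolute;
    width: 24px;
    height: 30px;
    background: transparent url('../assets/fourframes.png') 0 0 no-repeat;
    animation: cycleframes 1s steps(4) infinite;
}

@keyframes cycleframes {
    100% {
        background-position: -96px 0px;
    }
}
```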

The opposite call does almost the same:

#testdiv2 {
    position: absolute;
    left: 80%;
    top: 300px;
    width: 24px;
    height: 30px;    
    animation: moveleftanimation 4s forwards, move 1s steps(3) infinite;
    background: transparent url('../assets/spritesheet2.png') 0 0 no-repeat;
}

Summary

As someone who spends as little time as they can messing with UI and CSS, I thought this was a fun little exercise.

Downloading from an SFTP site using SSH.Net

I’ve done this a few times, and have failed to document it, and so each time it’s a pain. To be clear, if you’re downloading from FTP, you should have a look here: it’s an excellent, and simple code snippet that will do the job for you.

However, this won’t work with SFTP. Having looked into this, it seems there are basically two options: Chilkat if you have some money to spend, and SSH.NET if you don’t. I actually implemented Chilkat before realising it was commercial; it’s a much easier experience, and it’s commercially supported. I’m not being paid by them to say this, and you can easily get by with SSH.NET (in fact, that’s the subject of this post), but there are advantages to going with a commercial option.

Using SSH.NET

The latest version of SSH.NET was released in 2016. There does appear to be an update being worked on, but the NuGet package (at the time of writing) is from 2016:

Install-Package SSH.NET

There’s some pretty good documentation on the GitHub site, and the two links in the references offer wrapper implementations. What’s here is not really any better than what’s there, but I hadn’t seen a post with the code in it (plus, I like to have these things documented in my own words).

Client

The first thing you’ll need for each call is a client; I’ve separated mine into a method:

SftpClient GetClient()
{
    var connectionInfo = new PasswordConnectionInfo(url, port, username, password);

    var client = new SftpClient(connectionInfo);
    client.Connect();
    return client;
}

If you’re not sure what your port is, it’s probably 22, although I can’t help with the rest. We’re going to cover 5 basic methods here: List, Upload, Download, Read and Delete.

List

IEnumerable<SftpFile> ListFiles(string directory)
{
    using var client = GetClient();
    try
    {
        return client.ListDirectory(directory);
    }
    catch (Exception exception)
    {
        // Log error
        throw;
    }
    finally
    {
        client.Disconnect();
    }
}

There’s not much to explain here: ListDirectory returns a list of SftpFiles. The parameter directory is the directory on the remote server; if you want to access the base directory, then pass directory = ".". It’s worth looking at the finally block, though: you should disconnect the client when you’re done.
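As a quick usage sketch, listing the base directory with the ListFiles method above might look like this (SftpFile exposes the file name, size, timestamps, and so on):

```csharp
foreach (var file in ListFiles("."))
{
    // Name and Length are properties of Renci.SshNet.Sftp.SftpFile
    Console.WriteLine($"{file.Name} ({file.Length} bytes)");
}
```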

Upload

void UploadFile(string localPath, string remotePath)
{
    using var client = GetClient();
    try
    {
        using var s = File.OpenRead(localPath);
        client.UploadFile(s, remotePath);
    }
    catch (Exception exception)
    {
        // Log error
        throw;
    }
    finally
    {
        client.Disconnect();
    }
}

Again, not much here: simply creating a stream, and passing it to client.UploadFile().

Download

This is basically the reverse of UploadFile. In this case, we create the stream locally and download to it:

void DownloadFile(string remotePath, string localPath)
{
    using var client = GetClient();
    try
    {
        using var s = File.Create(localPath);
        client.DownloadFile(remotePath, s);
    }
    catch (Exception exception)
    {
        // Log error
        throw;
    }
    finally
    {
        client.Disconnect();
    }
}

Read

The Read functionality is, perhaps, the most trivial, and the most useful:

string ReadFile(string remotePath)
{
    using var client = GetClient();
    try
    {
        return client.ReadAllText(remotePath);
    }
    catch (Exception exception)
    {
        // Log error
        throw;
    }
    finally
    {
        client.Disconnect();
    }
}

Depending on your use case, this might be all you need.

Delete

Finally, the Delete method:

void DeleteFile(string remotePath)
{
    using var client = GetClient();
    try
    {
        client.DeleteFile(remotePath);
    }
    catch (Exception exception)
    {
        // Log error
        throw;
    }
    finally
    {
        client.Disconnect();
    }
}

Summary

You might be wondering what the purpose of these wrapper functions is: they do little more than call the underlying SSH.NET library. The only reason I can give, other than that it provides some reusable documentation, is that one day a new version of SSH.NET might be out (or you may choose to switch to Chilkat). Having already done the opposite, I can attest to how much easier that is if you’re not picking through the main code trying to extract pieces of SSH.NET.

References

https://github.com/dotnet-labs/SftpService/blob/master/SFTPService/SftpService.cs

https://github.com/jorgepsmatos/SftpClientDemo/blob/master/Sftp.cs

Executing Dynamically Generated SQL in EF Core

Entity Framework Core is primarily concerned with defining and executing pre-defined queries on a DB table, or executing a simple join on two or more tables. You can do more, but that’s its sweet spot, and for good reason. Have a think about the last project you worked on: I reckon 95% of you will be thinking about a forms-over-data application. Get a list of orders, update the product price, create a new user: really basic CRUD operations. So it makes sense that a framework like EF Core should make the 95% as easy as possible.

But what if you’re in the 5%? What if you’re working on a project where you have a query joining 5 or 6 tables? Maybe you don’t even know which fields you’ll need to filter on. Well, for those users, EF Core provides two methods:

FromSqlRaw and FromSqlInterpolated

Both methods basically allow you to build your own SQL string and execute it against the DB Context. It’s worth remembering that, unlike ADO.NET, you can’t just parse whatever comes back: the query needs to return something that maps onto an entity type EF already knows about; but that’s not the subject of this post. Here, we’re talking about a dynamically built query that returns a known type.

So, what are we trying to do?

The Problem

Let’s imagine that we have a table, called MyTable for the sake of argument, and MyTable has five columns:

MyTable
-- Field1
-- Field2
-- Field3
-- Field4
-- Field5

Now, let’s imagine that we have an app that allows the user to pick one or more fields to filter on (obviously, exposing the DB structure to the user is a bad idea unless you’re writing an SSMS clone, but bear with me here). When this comes through to EF, you’ve basically got three ways to implement this:

1. Dynamically build the query string and execute it directly.
2. Use (either in raw SQL, a stored procedure, or in Linq) the Field1 = filter OR filter = '' method.
3. Bring the data down to the client, and filter it there.

For the purpose of this post, we’re going to discuss option (1). All the options have merit, depending on the use case.

Let’s talk about building dynamic SQL, and some of the pitfalls.

Dynamic SQL

Building dynamic SQL is easy, right? You could just do this:

string sql = "select * from MyTable ";

if (!string.IsNullOrWhiteSpace(filter1))
{
    sql += $"where Field1 = {filter1}";
}

// Add additional fields, and deal with the WHERE / AND problem

var result = _myDbContext.MyTable.FromSqlInterpolated(sql);

So, this code is bad for several reasons. Let’s run through them.

1. It doesn’t compile

The first thing (although far from the worst) is that this code won’t compile. The reason is that FromSqlInterpolated takes a FormattableString. Of course, this is easily correctable:

var result = _myDbContext.MyTable.FromSqlInterpolated($"{sql}");

Now the code compiles and, to be clear, it will probably even work at this point; but it doesn’t do what you want.

The next issue is one of security.

2. SQL Injection

If the field above (filter1) is set to: '1'; DROP TABLE MyTable; (or something equivalent), your app will execute it. This is because we’re not using placeholders. What does this mean?

1. FromSqlInterpolated accepts an interpolated string, but what we’re passing here is a pre-built string. The code being passed into FromSqlInterpolated needs to be interpolated at the time; e.g.:
– _myDbContext.MyTable.FromSqlInterpolated($"select * from MyTable where field1 = {filter1}");
2. Since this won’t work in our case, we’ll need to build up the query using FromSqlRaw, and pass in parameters.

3. Caching

The way that most (at least relational) databases work is that they try to cache the most frequently used queries. The problem is that, if you do something like the query above, "select * from MyTable where Field1 = 'myvalue'" gets cached. If you run it again with 'myvalue2', that gets cached too. Run it 1,000 times with different values, and other queries start to get pushed out of the cache.

So, how can we build up a dynamic SQL string, without leaving ourselves open to SQL injection, or flooding the cache?

A Solution

This is a solution; it is not the solution. In it, we’re reverting a little to an ADO.NET style of doing things by providing SqlParameters. Let’s see what that might look like:

string sql =
    "select * " +
    "from MyTable ";

var parameters = new List<SqlParameter>();

int i = 1;
foreach (var filter in filters)
{
    if (i == 1)
    {
        sql += $"where Field{i} = @filter{i} ";
    }
    else
    {
        sql += $"and Field{i} = @filter{i} ";
    }
    parameters.Add(new SqlParameter($"@filter{i++}", filter));
}

var result = _myDbContext.MyTable
    .FromSqlRaw(sql, parameters.ToArray())
    .ToList();

We’re assuming that we have an array / list of filters here, and we just create a query that looks something like this:

select *
from MyTable
where Field1 = @filter1
and Field3 = @filter3

Because these are placeholders, you’re protected against SQL injection, and the DB engine will cache this query (so changing the values themselves doesn’t affect the cache). It’s worth bearing in mind that if we run this again, and end up with the following:

select *
from MyTable
where Field1 = @filter1
and Field4 = @filter4
and Field5 = @filter5

This will be cached separately, so you’d need to make a decision as to whether you’re likely to have few enough distinct query shapes that it doesn’t matter.

Summary

Quite often, people use EF as though the data was all local. It’s always worth remembering that each time you make a call, you are accessing the DB – despite the fact that Microsoft have gone to great lengths to make you think you are not. Each time you touch the DBMS, you change something – or, rather, something is changed as a result of you touching the DB. This might be obvious, like you insert a record, or it might be hidden, like the cache is updated. Nevertheless, the DB is a service, and it is probably the most important service in your system.

This post is based on my knowledge of relational databases, so the same may not be completely true of No-Sql databases.

Asp.Net Core Routing and Debugging

I recently came across an issue whereby an Asp.Net Core app was not behaving in the way I expected. In this particular case, I was getting strange errors, and began to suspect that the controller that I thought was reacting to my call, in fact, was not, and that the routing was to blame.

Having had a look around the internet, I came across some incredibly useful videos by Ryan Novak. One of the videos is linked at the end of this article, and I would encourage anyone working in web development using Microsoft technologies to watch it.

The particularly useful thing that I found in this was that, in Asp.Net Core 3.x and onwards, there is a clearly defined “Routing Zone” (Ryan’s term, not mine). It falls here:

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    …
    app.UseRouting();

    // Routing Zone

    app.UseAuthentication();
    app.UseAuthorization();            

    // End

    app.UseEndpoints(endpoints =>
    …
}

This means that middleware and services that make use of routing should sit in this zone, but also that you can intercept the routing. For example:

    app.UseRouting();

    // Routing Zone

    app.Use(next => context =>
    {
        Console.WriteLine($"Found: {context.GetEndpoint()?.DisplayName}");
        return next(context);
    });

    app.UseAuthentication();
    app.UseAuthorization();            

    // End

    app.UseEndpoints(endpoints =>

This little nugget will tell you which endpoint you’ve been directed to. There’s actually quite a lot more you can do here, too. Once you’ve got the endpoint, it has a wealth of information about the attributes, filters, and all sorts of information that makes working out why your app isn’t going where you expect much easier.
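For example, the endpoint’s Metadata collection holds the attributes, filters, and route details mentioned above, and you can dump the lot from the same middleware (a sketch along the same lines as the snippet above):

```csharp
app.Use(next => context =>
{
    var endpoint = context.GetEndpoint();
    if (endpoint != null)
    {
        Console.WriteLine($"Found: {endpoint.DisplayName}");

        // Each metadata item could be an attribute, a filter, route information, etc.
        foreach (var metadata in endpoint.Metadata)
        {
            Console.WriteLine($"  {metadata}");
        }
    }
    return next(context);
});
```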

References

https://docs.microsoft.com/en-us/aspnet/core/mvc/views/overview?view=aspnetcore-3.1

https://www.youtube.com/watch?v=fSSPEM3e7yY

IConfiguration does not contain a definition for GetValue

When I search for something, I typically start with pmichaels.net [tab] [search term] – especially if I know I’ve come across a problem before. This post is one such problem: it’s not hard to find the solution, but it is hard to find the solution on this site (because until now, it wasn’t here).

The error (which is also in the title):

IConfiguration does not contain a definition for GetValue

Typically appears when you’re using IConfiguration outside of an Asp.Net Core app. In fact, GetValue is an extension method, so the solution is to simply add the following package:

Install-Package Microsoft.Extensions.Configuration.Binder
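Once the package is installed, GetValue becomes available on IConfiguration; a minimal sketch (the file name and key here are hypothetical, and AddJsonFile assumes the Microsoft.Extensions.Configuration.Json package is also referenced):

```csharp
using Microsoft.Extensions.Configuration;

var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

// GetValue is the extension method supplied by the Binder package
int retryCount = configuration.GetValue<int>("RetryCount");
```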

References

https://stackoverflow.com/questions/54767718/iconfiguration-does-not-contain-a-definition-for-getvalue

GitHub Actions – Debugging Techniques and Common Issues

In this post I covered how to debug a failing build. This is more of the same, really; a sort of hodgepodge of bits that have cropped up while discovering issues with various projects.

If you’re interested, the library that caused all this was this one.

.Net Version

Let’s start with the .Net version that you’re building for. When you create the initial build, you get a targeted .Net version; and by default, it’s very specific (and the latest version):

dotnet-version: 3.1.101

There are two things worth noting here. The first is that if you intend to release this library on NuGet, or somewhere else that it can be consumed, then the lower the target version, the better: a .Net Core app can consume a library of the same version or lower. This sounds obvious, but it’s not without cost: some of the GitHub actions depend on later versions. For example, Publish Nuget uses the switch --skip-duplicate, which is a .Net 3.1 thing. If you try to use this targeting a previous version of .Net, you’ll get the following error:

error: Unrecognized option '--skip-duplicate'

The second thing of note is the specific version; it’s not as well documented as it should be, but you can simply use something like this:

dotnet-version: '3.1.x'

And your build will work with any version of 3.1.
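In context, the setup step of the workflow looks something like this (the action version here is illustrative):

```yaml
- name: Setup .NET Core
  uses: actions/setup-dotnet@v1
  with:
    dotnet-version: '3.1.x'
```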

Cross Platform, Verbosity and Tracing

As with the post mentioned above, I had an issue with a specific library, whereby two of the tests were failing. The first test in question called a line of code that compared two strings:

if (String.Compare(string1, string2, StringComparison.OrdinalIgnoreCase) == 0)  

It turns out that this doesn’t work cross-platform, and was failing because it was running on Ubuntu.

The second issue was slightly more nuanced, and relates to dates (1/3 was being read as 3/1); this isn’t specifically a Linux / Windows issue, but it is something that differs between your local environment and the build server. This might not be as much of an issue if you’re in the U.S. (or any country that formats its dates with the month first).
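The usual fix (a sketch, not the library’s actual code) is to parse dates with an explicit format and culture, so the result doesn’t depend on the machine’s locale:

```csharp
using System;
using System.Globalization;

// With an explicit format, "01/03/2020" is 1 March on my machine and on the build server alike
var date = DateTime.ParseExact("01/03/2020", "dd/MM/yyyy", CultureInfo.InvariantCulture);
```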

Although I initially suspected the cause, I began by changing the log level on the build:

run: dotnet test --no-restore --verbosity normal

To:

run: dotnet test --no-restore --verbosity detailed

Unfortunately, this didn’t really give me anything; and I’m sad to say that I next resorted to inserting lines like this into the code to try and determine what was going on:

Console.WriteLine("pcm-test-1");

I’m not saying this is the best method of debugging (especially in a situation like this), but it’s where we all go when nothing else works.

A Better Way – Debugging on Linux Locally

In this post I covered how you can install Ubuntu and run it from your Windows Terminal, and in this post, I covered how you can install .Net on that instance. This means that we can run the tests directly from Linux and see what’s going on locally.

Simply cd to the directory that you’ve pulled the code down to, and run dotnet test. You may need to run it with elevated privileges, but it should run, and fail with the same error that you’re getting from GitHub.

Summary

I’ve used GitHub Actions a few times now, and this issue of the code running on a different platform is by far the most challenging thing about using them. Given that this is running on a Windows machine, being able to run (and debug) on a Linux platform is a huge step forward.

Installing .Net on Ubuntu… on Windows

With the new Windows Subsystem for Linux, and the Windows Terminal, comes the ability to run .Net programs on Linux from your Windows machine.

Here I wrote briefly about how you can install and run Linux on your Windows machine. In this post, I’ll cover how to install .Net.

If you don’t have the Windows Terminal, then you can install it here.

The installation process is pretty straightforward, and the first step is to launch the Windows Terminal. Once that’s running, open a Linux tab, and run the following two scripts (if you’re interested in where these came from, follow the link in the References section below):

wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

Then run:

sudo apt-get update; \
  sudo apt-get install -y apt-transport-https && \
  sudo apt-get update && \
  sudo apt-get install -y dotnet-sdk-3.1

That should do it. To verify that it has:

dotnet --version

And you should see the installed version number.

References

https://docs.microsoft.com/en-us/dotnet/core/install/linux-ubuntu

Install and Run Linux on Windows 10

Windows 10 now has a feature that allows you to install Linux. That’s right – you can install Linux on a Windows machine.

If you haven’t already, install the Windows Terminal from the store.

You’ll then need to turn on the subsystem: launch Windows Features, and enable the Windows Subsystem for Linux.

This requires a restart; in fact, this whole process requires at least two restarts. Once you’ve brought the machine back up, have a look on the Windows Store for your favourite Linux; here’s the Ubuntu one:

Once this has installed, launch the app. You’ll be prompted to select a username and password; this is the password you’ll use for sudo, so make sure you know what it is.

Now launch Windows Terminal:

And you’re away: you can rm -r to your heart’s content!

References

https://medium.com/@rkstrdee/how-to-add-ubuntu-tab-to-windows-10s-new-terminal-271eb6dfd8ee

Change the Default Asp.Net Core Layout to Use Feature Folders

One of the irritating things about the Asp.Net Core default project is that the various parts of your system are arranged by type, as opposed to function. For example, if you’re working on the Accounts page, you’re likely going to want to change the view, the controller and, perhaps, the model; you are, however, unlikely to want to change the Sales Order controller as a result of your change: so why have the AccountsController and SalesOrderController in the same place, but away from the AccountsView?

If you create a new Asp.Net Core MVC Web App:

Then you’ll get a layout like this (in fact, exactly like this):

If your web app has two or three controllers, and maybe five or six views, then this works fine. When you start getting a larger, more complex app, you’ll find that you’re scrolling through your solution trying to find the SalesOrderController, or the AccountsView.

One way to alleviate this is to re-organise your project to reference features in vertical slices; for example, a Wibble feature folder containing both the controller and the view:

There’s not much to either of these, but let’s put them in for the sake of completeness; the view:

@{
    ViewData["Title"] = "Wibble Page";
}

<div class="text-center">
    <h1 class="display-4">Wibble</h1>    
</div>

And the controller:

namespace WebApplication1.Wibble
{
    public class WibbleController : Controller
    {
        public IActionResult Index()
        {
            return View();
        }
    }
}

The problem here is that the view engine won’t know where to look for the views. We can change that by changing the ConfigureServices method in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    . . .
    services.Configure<RazorViewEngineOptions>(options =>
    {
        options.ViewLocationFormats.Clear();
        options.ViewLocationFormats.Add($"/Wibble/{{0}}{RazorViewEngine.ViewExtension}");
        options.ViewLocationFormats.Add($"/Views/Shared/{{0}}{RazorViewEngine.ViewExtension}");
    });
}
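As an aside: if you end up with several feature folders, rather than adding a line per feature, you can use the {1} placeholder, which Razor replaces with the controller name (a sketch, assuming each feature folder is named after its controller):

```csharp
services.Configure<RazorViewEngineOptions>(options =>
{
    options.ViewLocationFormats.Clear();
    // {1} is the controller name, {0} is the view name
    options.ViewLocationFormats.Add($"/{{1}}/{{0}}{RazorViewEngine.ViewExtension}");
    options.ViewLocationFormats.Add($"/Views/Shared/{{0}}{RazorViewEngine.ViewExtension}");
});
```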

Let’s also change the default controller action (in the Configure method of Startup.cs):

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    else
    {
        app.UseExceptionHandler("/Home/Error");
        // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
        app.UseHsts();
    }

    . . .
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllerRoute(
            name: "default",
            pattern: "{controller=Wibble}/{action=Index}/{id?}");
    });
}

There are more than a few libraries that will handle this for you (here’s one by the late Scott Allen), but it’s always nice to be able to do such things manually before you resort to a third-party library.