Category Archives: C#

Casting a C# Object From its Parent

Have you ever tried to do something akin to the following:

[Fact]
public void ConvertClassToSubClass_Converts()
{
    // Arrange
    var parentClass = new SimpleTestClass();
    parentClass.Property1 = "test";
 
    // Act
    var childClass = parentClass as SimpleTestSubClass;
 
    // Assert
    Assert.Equal("test", childClass.Property1);
}

This is a simple Xunit (failing) test. The reason it fails is that you (or I) are trying to cast a general type to a more specific one, and C# is complaining that this may not be possible; consequently, you will get null (or, for a hard cast, an InvalidCastException).

Okay, that makes sense. After all, parentClass could actually be a SimpleTestSubClass2 and, as a result, C# is being safe because there are (presumably – I don’t work for MS) too many possibilities for edge cases.
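
For reference, the test above assumes a couple of simple classes along these lines (a minimal sketch – the names come from the test itself):

public class SimpleTestClass
{
    public string Property1 { get; set; }
}

public class SimpleTestSubClass : SimpleTestClass
{
}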

This is, however, a solvable problem; there are a few ways to do it, but you can simply use reflection:

// The wrapper class name and constructor here are illustrative - the key part is the
// CastAsClass method, which copies properties by name via reflection.
public class ObjectCaster<TExistingClass> where TExistingClass : class
{
    private readonly TExistingClass _existingClass;

    public ObjectCaster(TExistingClass existingClass)
    {
        _existingClass = existingClass;
    }

    public TNewClass CastAsClass<TNewClass>() where TNewClass : class
    {
        // Create a new, empty instance of the target type
        var newObject = Activator.CreateInstance<TNewClass>();
        var newProps = typeof(TNewClass).GetProperties();

        foreach (var prop in newProps)
        {
            if (!prop.CanWrite) continue;

            // Copy across any readable source property with a matching name
            var existingPropertyInfo = typeof(TExistingClass).GetProperty(prop.Name);
            if (existingPropertyInfo == null || !existingPropertyInfo.CanRead) continue;
            var value = existingPropertyInfo.GetValue(_existingClass);

            prop.SetValue(newObject, value, null);
        }

        return newObject;
    }
}

This code will effectively copy any matching, writable properties from one class onto a new instance of any other class.
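
As a quick usage sketch (ObjectCaster is just the illustrative wrapper name used above):

// Copy the parent's properties onto a new instance of the subclass
var parentClass = new SimpleTestClass { Property1 = "test" };

var childClass = new ObjectCaster<SimpleTestClass>(parentClass)
    .CastAsClass<SimpleTestSubClass>();

Assert.Equal("test", childClass.Property1);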

If you’d rather use an existing library, you can always use this one. It’s also Open Source on GitHub.

Xamarin Dependencies – Android App Just Doesn’t Start After Deployment

Being relatively new to Xamarin, I naively expected any errors to just show up, you know, like when you run a console app after headbutting the keyboard, it gives you some vague indication that there’s a problem with your code.

My story starts with the default template of Xamarin, running just an Android application. I just want to mention again that this is the default template (admittedly I am running VS2019 and .Net Core 3.0 – at the time of writing, .Net Core 3.0 is still in preview).

Anyway, I start writing my app, and everything is running fine. I add a button, and it appears, I do something on button press: it does it – I’m on a roll! Then I add a chunk of code that calls an API… and suddenly the app just doesn’t run. It compiles and deploys fine, but it doesn’t run. At all.

It occurred to me that this does, potentially, make sense. The code that’s generated may now be complete garbage. In the same way as if you headbutted the keyboard in your console app, the C# compiler will simply run and JIT your C# into invalid IL… Except, that’s not what happens. No sane (statically typed) compiled system would compile a bunch of crap and deploy it… but hey ho.

So, why would my app not run?

Well, it was down to the following line:

var data = JsonConvert.DeserializeObject<List<MyData>>(contentString);

The reason is that JSON.Net is not installed in the default template; however, because it (or a version of it) is a dependency of one of the other libraries, it is accessible! Presumably there’s a conflict somewhere, or when this compiles it produces a big pile of steaming Java!

(I realise it doesn’t compile down to Java – but I think you’ll agree, steaming IL doesn’t have the same ring to it.)
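
If you do hit this, one fix (and this is just a sketch, assuming your Android project uses PackageReference) is to reference Newtonsoft.Json explicitly in the Android project, rather than relying on the transitive copy:

<ItemGroup>
  <!-- Explicit reference instead of relying on a transitive copy of JSON.Net;
       the version number below is illustrative - use whatever is current -->
  <PackageReference Include="Newtonsoft.Json" Version="12.0.2" />
</ItemGroup>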

Anyway, the moral of the story is: check your Newtonsoft dependencies, and liberally distribute try / catch blocks everywhere – it seems to be the only way to get a half sensible error from Xamarin.

Adventures in C#8 – The annotation for nullable reference types should only be used in code within a ‘#nullable’ context.

If you’re playing around with C# 8, and have been for a while, you may spot this message when using nullable reference types:

Warning CS8632 The annotation for nullable reference types should only be used in code within a ‘#nullable’ context.

There are a couple of things you can check to fix this; the first is to check that you are, in fact, using C# 8. It’s likely that you are, because there is a separate error if you are not, but just for my own benefit:

<LangVersion>preview</LangVersion>

The next, and more likely step, is that you’re using the previous declaration for Nullable Reference Types; it used to look like this:

<NullableReferenceTypes>true</NullableReferenceTypes>

But no more! Now it looks like this:

<NullableContextOptions>enable</NullableContextOptions>

And, as if by magic – the error disappears!

But why? Well, it looks like they are making the setting more nuanced than simply on or off.
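
The ‘#nullable’ context that the warning refers to can also be controlled per file (or per section of a file) with a directive, rather than project-wide; a minimal sketch (the class and property names are just for illustration):

#nullable enable   // nullable annotations and warnings apply from here

public class Person
{
    public string? MiddleName { get; set; }   // no CS8632 in this context
}

#nullable restore  // revert to the project-level setting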

Update (8/7/2019)

It appears that NullableContextOptions has now been renamed to simply Nullable; e.g.:

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <LangVersion>8.0</LangVersion>
    <Nullable>enable</Nullable>
  </PropertyGroup>

The Evolution of the Switch Statement (C#8)

Most languages have a version of the switch statement as far as I’m aware; I must admit, I don’t remember one from Spectrum Basic, but ever since then, I don’t think I’ve come across a language that doesn’t have one. The switch statement in C was interesting. For example, the following was totally valid:

switch (value)
{
    case 1:
        printf("hello ");
    case 2:
        printf("world");        
}

If you gave it a value of 1, it would print “hello world”. When C# came out, they insisted on using breaks at the end of case statements, or having no code (admittedly, there were a few bugs in C caused by accidentally leaving break statements out):

            int value = 1;
            switch (value)
            {
                case 1:
                    Console.Write("hello ");
                    break;
                case 2:
                    Console.Write("world");
                    break;
            }

Anyway, fast forward around 17 years to C# 7.x, and it basically has the same switch statement; in fact, as far as I’m aware, you could write this switch statement in C# 1.1 and it would compile fine. There’s nothing wrong with it, so I imagine MS were thinking why fix it if it’s not broken.

There are limitations, however; for example, what if I want to return the string, like this:

            int value = 1;
            string greeting = string.Empty;
            switch (value)
            {
                case 1:
                    greeting = "hello ";
                    break;
                case 2:
                    greeting = "world";
                    break;
            }

            Console.WriteLine(greeting);

Now it looks a bit cumbersome. What if we could write it like this:

            int value = 1;
            string greeting = value switch
            {
                1 => "hello ",
                2 => "world",
                _ => string.Empty
            };

            Console.WriteLine(greeting);

From C# 8, you can do just that: the switch becomes an expression that returns its value. The case syntax is disposed of, and there’s no need for a break statement (which, to be fair, can encourage people to write large swathes of code inside the switch statement – if you don’t believe me, have a look in the Asp.Net Core source!).

And that’s not all. Pattern matching has also been brought in; for example, take the following simple class structure:

    interface IAnimal
    {
        void Eat();
        void Sleep();            
        string Name { get;}
    }

    class Dog : IAnimal
    {
        public string Name { get => "Fido"; }

        public void Eat()
        {
            Console.WriteLine("Dog Eats");
        }

        public void Sleep()
        {
            Console.WriteLine("Dog Sleeps");
        }
    }

    class Cat : IAnimal
    {
        public string Name { get => "Lemmy"; }

        public void Eat()
        {
            Console.WriteLine("Cat Eats");
        }

        public void Sleep()
        {
            Console.WriteLine("Cat Sleeps");
        }
    }

We can put that into a switch statement like this:

            IAnimal animal = new Cat();
            string greeting = animal switch
            {
                Dog d => $"hello dog {d.Name}",
                Cat c => $"hello cat {c.Name}",                
                _ => string.Empty
            };

            Console.WriteLine(greeting);

We can actually do better than this (obviously better is a relative term). Let’s say that we wanted to do something specific for our particular cat:

            IAnimal animal = new Cat();
            string greeting = animal switch
            {
                Dog d => $"hello dog {d.Name}",
                Cat c when c.Name == "Lemmy" => $"Hello motorcat!",
                Cat c => $"hello cat {c.Name}",                
                _ => string.Empty
            };

            Console.WriteLine(greeting);

It’s a bit of a silly and contrived example, but it does illustrate the point; further, if you switch the case statements around for the general and specific form of Cat, you’ll get a compile error!
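
To illustrate that last point, here’s a sketch with the arms swapped (same classes as above); the compiler rejects the more specific arm because it can never be reached:

string greeting = animal switch
{
    Dog d => $"hello dog {d.Name}",
    Cat c => $"hello cat {c.Name}",
    // Compile error: this arm is unreachable - the arm above already handles every Cat
    Cat c2 when c2.Name == "Lemmy" => $"Hello motorcat!",
    _ => string.Empty
};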

Asp.Net Core 2.0 – Passing data into a Model Using DI

Imagine you have an Asp.Net Core web page, and you would like to edit some data in a form, but you’d like to default that data to something (maybe initially specified in the Web.Config).

I’m sure there are dozens of ways to achieve this, but this is one.

Let’s start with a bog standard MVC web app:

Step one is to define a model in which to hold your data; here’s mine:

public class CurrentAppData
{
    public string DataField1 { get; set; }
}

Let’s register that in the IoC container:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        // This lambda determines whether user consent for non-essential cookies is needed for a given request.
        options.CheckConsentNeeded = context => true;
        options.MinimumSameSitePolicy = SameSiteMode.None;
    });
 
 
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
 
    services.AddTransient<CurrentAppData, CurrentAppData>(
        a => new CurrentAppData()
        {
            DataField1 = "test value"
        });
}

Next thing we’ll need is a View, and a corresponding view model to edit our data; here’s the view:

@model EditDataViewModel
@{
    ViewData["Title"] = "Edit Data";
}
 
<h2>@ViewData["Title"]</h2>
 
<div>
    <label>Change data here:</label>
    <input type="text" asp-for="EditData.DataField1" />
 
</div>

And now the view model (that is, the model that is bound to the view):

public class EditDataViewModel
{
    public EditDataViewModel(CurrentAppData editData)
    {
        EditData = editData;
    }
    public CurrentAppData EditData { get; set; }
}

The final step here is to adapt the controller so that the CurrentAppData object is passed through the controller:

public class EditDataController : Controller
{
    private readonly CurrentAppData _currentAppData;
 
    public EditDataController(CurrentAppData currentAppData)
    {
        _currentAppData = currentAppData;
    }
 
    public IActionResult EditData()
    {
        return View(new EditDataViewModel(_currentAppData));
 
 
    }
}

That works as far as it goes, and we now have the data displayed on the screen:

The next step is to post the edited data back to the controller; let’s change the HTML slightly:

<form asp-action="UpdateData" asp-controller="EditData" method="post" enctype="multipart/form-data">
    <label>Change data here:</label>
    <input type="text" asp-for="EditData.DataField1" />
    <br />
    <button type="submit" class="btn-default">Submit Changes</button>
</form>

We’ve added a submit button, which should cause the surrounding form element to execute whichever “method” it’s been told to (in this case, POST). It will look for an action on the controller called UpdateData, so we should create one:

public IActionResult UpdateData(EditDataViewModel editDataViewModel)
{
    System.Diagnostics.Debug.WriteLine(editDataViewModel.EditData.DataField1);
    return View("EditData", editDataViewModel);
}

Here, we’re accepting the EditDataViewModel from the view. However, when we run this, we get the following error:

Error:

InvalidOperationException: Could not create an instance of type ‘WebApplication14.Models.EditDataViewModel’. Model bound complex types must not be abstract or value types and must have a parameterless constructor. Alternatively, give the ‘editDataViewModel’ parameter a non-null default value.

Let’s first implement a fix for this, and then go into the whys and wherefores. The fix is actually quite straightforward; simply give the view model a parameterless constructor:

public class EditDataViewModel
{
    public EditDataViewModel()
    {
        
    }
 
    public EditDataViewModel(CurrentAppData editData)
    {
        EditData = editData;
    }
    public CurrentAppData EditData { get; set; }
}

The problem that we had here is that the `EditDataViewModel` that is returned to UpdateData is a new instance of the model. We can prove this by changing our code slightly:
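
A minimal sketch of that change (TestField1 and the value assigned to it are just throwaway examples):

public class EditDataViewModel
{
    // ... constructors and EditData as before ...

    // Throwaway property, purely to prove the point
    public string TestField1 { get; set; }
}

public IActionResult EditData()
{
    // Populate the test field just before handing the model to the view...
    var viewModel = new EditDataViewModel(_currentAppData)
    {
        TestField1 = "some test value"
    };
    return View(viewModel);
}

// ...and when UpdateData receives the posted-back model, TestField1 is null again.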

Here, we’ve added a field called TestField1 to the model, and populated it just before we pass the model to the view; and on the post back, it’s gone. I’m not completely sure why the view model can’t be created by the middleware in the same way that the controller is; but that’s the subject of another post.

Finally, show the value back to the screen

To wrap up, we’ll just show the same value back to the screen; we’ll add an extra value to the model:

public class CurrentAppData
{
    public string DataField1 { get; set; }
 
    public string DisplayField1 { get; set; }
}

And we’ll just display it in the view:

<form asp-action="UpdateData" asp-controller="EditData" method="post" enctype="multipart/form-data">
    <label>Change data here:</label>
    <input type="text" asp-for="EditData.DataField1" />
    <br />
    <button type="submit" class="btn-default">Submit Changes</button>
    <br />
    <label>@Model.EditData.DisplayField1</label>
    <br />
</form>

Finally, we’ll copy that value inside the controller (obviously, this is simulating something meaningful happening), and then display the result:

public IActionResult UpdateData(EditDataViewModel editDataViewModel)
{
    editDataViewModel.EditData.DisplayField1 = editDataViewModel.EditData.DataField1;
    return View("EditData", editDataViewModel);
}

Let’s see what that looks like:

Short Walks – Object Locking in C#

While playing with Azure Event Hubs, I decided that I wanted to implement a thread locking mechanism that didn’t queue. That is, I want to try and get a lock on the resource, and if it’s currently in use, just forget it and move on. The default behaviour in C# is to wait for the resource. For example, consider my method:

static async Task MyProcedure()
{
    Console.WriteLine($"Test1 {DateTime.Now}");
    await Task.Delay(5000);
    Console.WriteLine($"Test2 {DateTime.Now}");
}

I could execute this several times in parallel like so (note that Parallel.For’s upper bound is exclusive, so this actually runs it four times):

static async Task Main(string[] args)
{
    Parallel.For(1, 5, (a) =>
    {
        MyProcedure();
    });
 
    Console.ReadLine();
}

If I wanted to lock this (just bear with me and assume that makes sense for a minute), I might do this:

private static object _lock = new object();        
 
static async Task Main(string[] args)
{
    Parallel.For(1, 5, (a) =>
    {
        //MyProcedure();
        Lock();
    });
 
    Console.ReadLine();
}
 
static void Lock()
{
    Task.Run(() =>
    {
        lock (_lock)
        {
            MyProcedure().GetAwaiter().GetResult();
        }
    });
}

I re-jigged the code a bit, because you can’t await inside a lock statement, and obviously, just making the method call synchronous would not be locking the asynchronous call.

So now, I’ve successfully made my asynchronous method synchronous. Each execution of `MyProcedure` will happen sequentially, and that’s because `lock` queues the locking calls behind one another.

However, imagine the Event Hub scenario that’s referenced in the post above. I have, for example, a game, and it’s sending a large volume of telemetry up to the cloud. In my particular case, I’m sending a player’s current position. If I have a locking mechanism whereby the locks are queued then I could potentially get behind; and if that happens then, at best, the data sent to the cloud will be outdated and, at worst, it will use up game resources, potentially causing a lag.

After a bit of research, I found an alternative:

private static object _lock = new object();        
 
static async Task Main(string[] args)
{
    Parallel.For(1, 5, (a) =>
    {
        //MyProcedure();
        //Lock();
        TestTryEnter();
    });
 
    Console.ReadLine();
}

static async Task TestTryEnter()
{
    bool lockTaken = false;
 
    try
    {
        Monitor.TryEnter(_lock, 0, ref lockTaken);
 
        if (lockTaken)
        {
            await MyProcedure();                                        
        }
        else
        {
            Console.WriteLine("Could not get lock");
        }
    }
    finally
    {
        if (lockTaken)
        {
            Monitor.Exit(_lock);
        }
    }
}

So here, I try to get the lock, and if the resource is already locked, I simply give up and go home. There are obviously a very limited number of uses for this; however, my Event Hub scenario, described above, is one of them. Depending on the type of data that you’re transmitting, it may make much more sense to have a go, and if you’re in the middle of another call, simply abandon the current one.
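
A word of caution on the above: Monitor has thread affinity, and because the continuation after an await can resume on a different thread, Monitor.Exit can end up being called by a thread that doesn’t hold the lock and throw. An async-friendly alternative (just a sketch, swapping Monitor for SemaphoreSlim) is to try to take a semaphore with a zero timeout:

private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1);

static async Task TestTryEnterAsync()
{
    // WaitAsync(0) returns immediately: true if we got the (single) slot, false otherwise
    if (!await _semaphore.WaitAsync(0))
    {
        Console.WriteLine("Could not get lock");
        return;
    }

    try
    {
        await MyProcedure();
    }
    finally
    {
        // Unlike Monitor, SemaphoreSlim doesn't care which thread releases it
        _semaphore.Release();
    }
}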

Playing with Azure Event Hub

I’ve recently been playing with the Azure Event Hub. This is basically a way of transmitting large amounts* of data between systems. In a later post, I may try and test these limits by designing some kind of game based on this.

As a quick disclaimer, it’s worth bearing in mind that I am playing with this technology, and so much of the content of this post can be found in the links at the bottom of this post – you won’t find anything original here – just a record of my findings. You may find more (and more accurate) information in those.

Event Hub Namespace

The first step, as with many Azure services, is to create a namespace:

For a healthy amount of data transference, you’ll pay around £10 per month.

Finally, we’ll create an event hub within the namespace:

When you create the event hub, it asks how many partitions you need. This basically splits the message delivery; and it’s clever enough to work out that, if you have three partitions and two listeners, one listener should get two partitions and the other should get one:

We’ll need an access policy so that we have permission to listen:

New Console Apps

We’ll need to create two applications: a producer and a consumer.

Let’s start with a producer. Create a new console app and add this NuGet library.

Here’s the code:

class Program
{
    private static EventHubClient eventHubClient;
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Publisher;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
 
    public static async Task Main(string[] args)
    {
        EventHubsConnectionStringBuilder connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
        {
            EntityPath = EhEntityPath
        };
 
        eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
 
        while (true)
        {
            Console.Write("Please enter message to send: ");
            string message = Console.ReadLine();
            if (string.IsNullOrWhiteSpace(message)) break;
 
            await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
        }
 
        await eventHubClient.CloseAsync();
 
        Console.WriteLine("Press ENTER to exit.");
        Console.ReadLine();
    }
}

Consumer

Next we’ll create a consumer; so the first thing we’ll need is to grant permissions for listening:

We’ll create a second new console application with this same library and the processor library, too.

class Program
{
    private const string EhConnectionString = "Endpoint=sb://pcm-testeventhub.servicebus.windows.net/;SharedAccessKeyName=Listener;SharedAccessKey=key;EntityPath=pcm-eventhub1";
    private const string EhEntityPath = "pcm-eventhub1";
    private const string StorageContainerName = "eventhub";
    private const string StorageAccountName = "pcmeventhubstorage";
    private const string StorageAccountKey = "key";
 
    private static readonly string StorageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", StorageAccountName, StorageAccountKey);
 
    static async Task Main(string[] args)
    {
        Console.WriteLine("Registering EventProcessor...");
 
        var eventProcessorHost = new EventProcessorHost(
            EhEntityPath,
            PartitionReceiver.DefaultConsumerGroupName,
            EhConnectionString,
            StorageConnectionString,
            StorageContainerName);
 
        // Registers the Event Processor Host and starts receiving messages
        await eventProcessorHost.RegisterEventProcessorAsync<EventsProcessor>();
 
        Console.WriteLine("Receiving. Press ENTER to stop worker.");
        Console.ReadLine();
 
        // Disposes of the Event Processor Host
        await eventProcessorHost.UnregisterEventProcessorAsync();
    }
}

class EventsProcessor : IEventProcessor
{
    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }
 
    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }
 
    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
        return Task.CompletedTask;
    }
 
    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }
 
        return context.CheckpointAsync();
    }
}

As you can see, we can now transmit data through the Event Hub into client applications:

Footnotes

*Large, in terms of frequency, rather than volume – for example, transmitting a small message twice a second, rather than uploading a petabyte of data

References

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send

https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-standard-getstarted-receive-eph

What can you do with a logic app? Part three – Creating a Logic App Client

One of the things that are missing from Azure Logic apps is the ability to integrate human interaction. Microsoft do have their own version of an interactive workflow (PowerApps), which is (obviously) far better than what you can produce by following this post.

In this post, we’ll create a very basic client for a logic app. Obviously, with some thought, this could easily be extended to allow a fully functional, interactive, workflow system.

Basic Logic App

Let’s start by designing our logic app. The app in question is going to be a very simple one. Its format is going to be this: it will add a message to a logging queue (just so it has something to do), then we’ll ask the user a question – left or right – by putting a message onto a topic. Based on the user’s response, we’ll write a message to the queue saying either left or right. Let’s have a look at our Logic App design:

It’s worth pointing out a few things about this design:
1. The condition uses the expression base64ToString() to convert the encoded message into plain text (there’s a sketch of this expression after this list).
2. Where the workflow picks up, it uses a peek-lock, and then completes the message at the end. It looks like it’s a ‘feature’ of logic apps that an automatic complete on this trigger will not actually complete the message (plus, this is actually a better design).
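
As a rough sketch of that expression (assuming the standard ContentData property of a Service Bus message in a Logic App):

base64ToString(triggerBody()?['ContentData'])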

Queues and Topics

The “Log to message queue” action above is putting an entry into a queue; so a quick note about why we’re using a queue for logging, and a topic for the interaction with the user. In a real life version of this system, we might have many users, but they might all want to perform the same action. Let’s say that they all are part of a sales process, and the actions are actually actions along that process; adding these to a queue maintains their sequence. Here’s the queue and topic layout that I’m using for this post:

Multiple Triggers

As you can see, we actually have two triggers in this workflow. The first starts the workflow (so we’ll drop a message into the topic to start it), and the second waits for a second message to go into the topic.

To add a trigger part way through the workflow, simply add an action, search and select “Triggers”:

Because we have a trigger part way through the workflow, what we have effectively issued here is an await statement. Once a message appears in the subscription, the workflow will continue where it left off:

As soon as a message is posted, the workflow carries on:

Client Application

For the client application, we could simply use the Service Bus Explorer (in fact, the screenshots above were taken from using this to simulate messages in the topic). However, the point of this post is to create a client, and so we will… although we’ll just create a basic console app for now.

We need the client to do two things: read from a topic subscription, and write to a topic. I haven’t exactly been here before, but I will be heavily plagiarising from here, here, and here.

Let’s create a console application:

Once that’s done, we’ll need the service bus client library: Install it from here.

The code is generally quite straight-forward, and looks a lot like the code to read and write to queues. The big difference is that you don’t read from a topic, but from a subscription to a topic (a topic can have many subscriptions):

class Program
{
    
    static async Task Main(string[] args)
    {
        MessageHandler messageHandler = new MessageHandler();
        messageHandler.RegisterToRead("secondstage", "sub1");
 
        await WaitForever();
    }
 
    private static async Task WaitForever()
    {
        while (true) await Task.Delay(5000);
    }
}
public class MessageHandler
{
    private string _connectionString = "service bus connection string details";
    private ISubscriptionClient _subscriptionClient;
    public void RegisterToRead(string topicName, string subscriptionName)
    {            
        _subscriptionClient = new SubscriptionClient(_connectionString, topicName, subscriptionName);
 
        MessageHandlerOptions messageHandlerOptions = new MessageHandlerOptions(ExceptionReceived)
        {
            AutoComplete = false,
            MaxAutoRenewDuration = new TimeSpan(1, 0, 0)
        };
 
        _subscriptionClient.RegisterMessageHandler(ProcessMessage, messageHandlerOptions);
 
    }
 
    private async Task ProcessMessage(Message message, CancellationToken cancellationToken)
    {
        string messageText = Encoding.UTF8.GetString(message.Body);
 
        Console.WriteLine(messageText);
        string leftOrRight = Console.ReadLine();
 
        await _subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
 
        await SendResponse(leftOrRight, "userinput");
    }
 
    private async Task SendResponse(string leftOrRight, string topicName)
    {
        TopicClient topicClient = new TopicClient(_connectionString, topicName);
        Message message = new Message(Encoding.UTF8.GetBytes(leftOrRight));
        await topicClient.SendAsync(message);
    }
 
    private Task ExceptionReceived(ExceptionReceivedEventArgs arg)
    {
        Console.WriteLine(arg.Exception.ToString());
        return Task.CompletedTask;
    }
}

If we run it, then when the logic app reaches the second trigger, we’ll get a message from the subscription and ask directions:

Based on the response, the logic app will execute either the right or left branch of code.

Summary

Having worked with workflow systems in the past, one recurring feature of them is that they start to get used for things that don’t fit into a workflow, resulting in a needlessly over-complex system. I imagine that Logic Apps are no exception to this rule, and in 10 years’ time, people will roll their eyes at how Logic Apps have been used where a simple web service would have done the whole job.

The saving grace here is source control. The workflow inside a Logic App is simply a JSON file, and so it can be source controlled, added to a CI pipeline, and all the good things that you might expect. Whether or not a more refined version of what I have described here makes any sense is another question.

There are many downsides to this approach: firstly, you are fighting against the Service Bus by asking it to wait for input (that part is a very fixable problem with a bit of an adjustment to the messages); secondly, you would presumably need some form of timeout (again, a fixable problem that will probably feature in a future post). The biggest issue here is that you are likely introducing complex conditional logic with no way to unit test; this isn’t, per se, fixable; however, you can introduce some canary logic (again, this will probably be the feature of a future post).

References

https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config

https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions

https://stackoverflow.com/questions/28127001/the-lock-supplied-is-invalid-either-the-lock-expired-or-the-message-has-alread

Short Walks – XUnit Warning

As with many of these posts – this is more of a “note to self”.

Say you have an assertion that looks something like this in your Xunit test:

Assert.True(myEnumerable.Any(a => a.MyValue == "1234"));

In later versions (I’m not sure exactly which version this was introduced in), you’ll get the following warning:

warning xUnit2012: Do not use Enumerable.Any() to check if a value exists in a collection.

So, Xunit has a nice little feature where you can use the following syntax instead:

Assert.Contains(myEnumerable, a => a.MyValue == "1234");

Using NSubstitute for partial mocks

I have previously written about how to, effectively, subclass using NSubstitute; in this post, I’ll cover how to partially mock out that class.

Before I get into the solution; what follows is a workaround to allow badly written, or legacy code to be tested without refactoring. If you’re reading this and thinking you need this solution then my suggestion would be to refactor and use some form of dependency injection. However, for various reasons, that’s not always possible (hence this post).

Here’s our class to test:

public class MyFunkyClass
{
    public virtual void MethodOne()
    {        
        throw new Exception("I do some direct DB access");
    }
 
    public virtual int MethodTwo()
    {
        throw new Exception("I do some direct DB access and return a number");

        return new Random().Next(5);
    }
 
    public virtual int MethodThree()
    {
        MethodOne();
        if (MethodTwo() <= 3)
        {
            return 1;
        }
 
        return 2;
    }
}

The problem

Okay, so let’s write our first test:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = new MyFunkyClass();
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

So, what’s wrong with that?

Well, we have some (simulated) DB access, so the code will error.

Not the solution, but a solution

The first thing to do here is to mock out MethodOne(), as it has (pseudo) DB access:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

Running this test now will fail with:

Message: System.Exception : I do some direct DB access and return a number

We’re past the first hurdle. We can presumably do the same thing for MethodTwo:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
    myFunkyClass.When(a => a.MethodTwo()).DoNotCallBase();
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

Now when we run the code, the test still fails, but it no longer accesses the DB:

Message: Assert.Equal() Failure
Expected: 2
Actual: 1

The problem here is that, even though we don’t want MethodTwo to execute, we do want it to return a predefined result. Once we’ve told it not to call the base method, we can then tell it to return whatever we choose (these are separate steps – see the bottom of this post for a more detailed explanation of why); for example:

[Fact]
public void Test1()
{
    // Arrange
    MyFunkyClass myFunkyClass = Substitute.ForPartsOf<MyFunkyClass>();
    myFunkyClass.When(a => a.MethodOne()).DoNotCallBase();
    myFunkyClass.When(a => a.MethodTwo()).DoNotCallBase();
    myFunkyClass.MethodTwo().Returns(5);
 
    // Act
    int result = myFunkyClass.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

And now the test passes.

TLDR – What is this actually doing?

To understand this better, we could do this entire process manually. Only when you’ve felt the pain of a manual mock can you really see what mocking frameworks such as NSubstitute are doing for us.

Let’s assume that we don’t have a mocking framework at all, but that we still want to test MethodThree() above. One approach that we could take is to subclass MyFunkyClass, and then test that subclass:

Here’s what that might look like:

class MyFunkyClassTest : MyFunkyClass
{
    public override void MethodOne()
    {
        //base.MethodOne();
    }
 
    public override int MethodTwo()
    {
        //return base.MethodTwo();
        return 5;
    }
}

As you can see, now that we’ve subclassed MyFunkyClass, we can override the behaviour of the relevant virtual methods.

In the case of MethodOne, we’ve effectively issued a DoNotCallBase() (by not calling base!).

For MethodTwo, we’ve issued a DoNotCallBase, and then a Returns statement.

Let’s add a new test to use this new, manual method:

[Fact]
public void Test2()
{
    // Arrange 
    MyFunkyClassTest myFunkyClassTest = new MyFunkyClassTest();
 
    // Act
    int result = myFunkyClassTest.MethodThree();
 
    // Assert
    Assert.Equal(2, result);
}

That’s much cleaner – why not always use manual mocks?

It is much cleaner if you always want MethodTwo to return 5. Once you need it to return 2 then you have two choices: either you create a new mock class, or you start putting logic into your mock. The latter, if done wrongly, can end up with code that is unreadable and difficult to maintain; and if done correctly, will end up as a mini version of NSubstitute.

Finally, however well you write the mocks, as soon as you have more than one for a single class then every change to the class (for example, changing a method’s parameters or return type) results in a change to more than one test class.

It’s also worth mentioning again that this problem is one that has already been solved, cleanly, by dependency injection.