Tag Archives: .net

Introduction to Unit Tests (with examples in .Net) – Part 4 – Mocking (Including fakes and stubs)

In this, the fourth (and probably final) post on the subject of Unit Tests, we’re going to dive a little deeper into the subject of mocking. We’ll discuss what the difference is between a mock, a stub, and a fake; we’ll also talk about mocking frameworks.

A Fake, Stubby, Mock

These terms are often used interchangeably, and that’s fine – but they can mean different things. There are a couple of sources (that I could find) that have defined the difference between these terms:

Mocks Aren’t Stubs – an article from 2007 by Martin Fowler.

xUnit Test Patterns – a book on Unit Testing.

Broadly, they both say the same, which is this:

A Stub is a replacement for functionality that will return a given value without actually executing any life-like code.

A Mock is similar to a stub, but allows for analysis of that behaviour – for example, you can determine whether or not the method was called, or how many times.

A Fake is a replacement for functionality that is intended to mimic the actual functionality of the code.

A Test Double is a generic term to encompass all three.

Let’s have a look at an example for each. We’ll stick with manual test doubles for now. Let’s consider one of the manual mocks that we created in the last post:

    public class MockInputOutputWrapper : IInputOutputWrapper
    {
        private readonly string _inputValue;

        public MockInputOutputWrapper(string inputValue) =>
            _inputValue = inputValue;        

        public string GetInput(string prompt) => _inputValue;        

        public void Output(string text) { }
    }

Stub

First up is the stub, which is the Output method in the code above. It provides a method to call, but no functionality whatsoever.

Mock

Let’s imagine that we wanted to ascertain how many times we called Output – we may do something like this:

public class MockInputOutputWrapper : IInputOutputWrapper
{
    private readonly string _inputValue;
    private int _outputCount = 0;

    public MockInputOutputWrapper(string inputValue) =>
        _inputValue = inputValue;        

    public string GetInput(string prompt) => _inputValue;        

    public void OutputCallsMustBe(int count)
    {
        if (count != _outputCount) throw new Exception("Output Calls Incorrect");
    }

    public void Output(string text) 
    {
        _outputCount++;
    }
}

Now Output is a mock, rather than a stub. For this post, I won’t go to the extent of writing a mocking framework, but I think the code above illustrates the point. That is, we can ascertain that Output has been called, say, once:

[Fact]
public void Output_ValidGuess_CalledOnce()
{
    // Arrange
    var inputOutputWrapper = new MockInputOutputWrapper("12");
    var randomNumberChooser = new MockRandomNumberChooser();
    var sut = new Game(inputOutputWrapper, randomNumberChooser);

    // Act
    string result = sut.RunMethod();

    // Assert
    Assert.Equal("Well done, you guessed!", result);
    inputOutputWrapper.OutputCallsMustBe(1);
}

Finally, we’ll discuss what a fake is.

Fake

Fakes allow for functionality to be replicated in a way that’s more conducive to the test. The stub allowed us to essentially ignore the functionality altogether; the mock allowed us to assert that, despite replacing the functionality, it had actually been invoked (or would have been); the fake allows us to substitute that functionality. A good example here is a database – in order to test the interaction with a database, you may find it necessary to actually store some data in memory. Using our example, what if we needed to ascertain that the game dealt with different random numbers; we could write this:

public class MockRandomNumberChooser : IRandomNumberChooser
{
    private int[] _numberList = new[] { 12, 3, 43 };
    private int _index = 0;

    public int Choose() => _numberList[_index++];
}
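
As a rough sketch of how this fake might be exercised (reusing the Game and MockInputOutputWrapper from earlier – the exact messages depend on that implementation, so the second assertion is deliberately loose):

[Fact]
public void Game_SubsequentGames_ReceiveDifferentRandomNumbers()
{
    // Arrange - share one fake across two games, so each game sees the next "random" number
    var randomNumberChooser = new MockRandomNumberChooser();
    var firstGame = new Game(new MockInputOutputWrapper("12"), randomNumberChooser);
    var secondGame = new Game(new MockInputOutputWrapper("12"), randomNumberChooser);

    // Act
    string firstResult = firstGame.RunMethod();   // the fake returns 12 - the guess matches
    string secondResult = secondGame.RunMethod(); // the fake returns 3 - the guess does not

    // Assert
    Assert.Equal("Well done, you guessed!", firstResult);
    Assert.NotEqual("Well done, you guessed!", secondResult);
}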

Now that we understand the difference, we’ll see that it can be very academic, especially when dealing with mocking frameworks.

There’s a lot of boilerplate code here. Manually creating these classes does the job, but imagine the following scenario: you have 5 different mock classes, and you add a method to the interface IRandomNumberChooser. You now need to go through each of those mocks manually and add the functionality necessary to mock out the new method – you very likely don’t care about the new method in most of those mocks, but nevertheless, you would need to go and honour the interface.

Mocking Frameworks

Mocking frameworks aim to solve this problem by creating a mechanism to mock or subclass an object. There are currently two main mocking frameworks for .Net: NSubstitute and Moq. There’s also Microsoft Fakes.

We won’t cover all of these, and the principle behind them is broadly the same, with a slightly different implementation bias. I’ve always found NSubstitute much more intuitive, so we’ll cover that.

We’ll start by simply deleting the MockRandomNumberChooser. Now install NSubstitute:

Install-Package NSubstitute

The next part is to simply tell NSubstitute to do the same thing that you had done using the mock class:

var randomNumberChooser = Substitute.For<IRandomNumberChooser>();
randomNumberChooser.Choose().Returns(12);

If you run the test, you’ll see absolutely no difference. Based on our discussion earlier in the post, we have created a stub, but we can create both Mocks and Fakes using the same class. If you want to create a mock, you’ll do so like this:

randomNumberChooser.Received(1).Choose();

Fakes are a little different; however, you can still replace the functionality.
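
For example, a rough equivalent of the MockRandomNumberChooser fake above, using NSubstitute’s multiple-return-values overload of Returns, might look like this:

var randomNumberChooser = Substitute.For<IRandomNumberChooser>();

// Successive calls to Choose() return 12, then 3, then 43 - much like the hand-written fake
randomNumberChooser.Choose().Returns(12, 3, 43);

// Alternatively, a callback can substitute arbitrary behaviour
// (configuring Returns again like this would replace the set-up above):
// randomNumberChooser.Choose().Returns(x => new Random().Next(1, 100));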

References

https://www.pmichaels.net/2018/03/22/using-nsubstitute-for-partial-mocks/

https://github.com/nsubstitute/NSubstitute/

https://github.com/moq

Implementing a Sidecar Pattern Without Kubernetes

A sidecar pattern essentially allows a running process to communicate with a separate process (or sidecar) in order to offload some non-essential functionality. This is typically logging, or something similarly auxiliary to the main function. You tend to find sidecars in Kubernetes clusters (in fact, you can have more than one container in a pod for exactly this reason); however, the pattern is just that: a pattern; and can therefore apply to pretty much anything.

In this post, I’m going to cover how you might set up two independent containers, and then implement the sidecar pattern using Docker Compose.

Console App

The code for this is going to be in .NET 6, so let’s create a console app:

It will be easier later on if you use the following directory structure for the projects:

sidecar-test
	console-app
	web-api

For now, just create the console app inside that structure – so the code lives in /sidecar-test/console-app (we’ll come back to the API).

We’ll need to add docker support for this, which you can do in Visual Studio (right click the project and add docker support):

Once you’ve created the docker build file, you may need to edit it slightly; here’s mine:

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src

COPY ["sidecar-test.csproj", "sidecar-test/"]
WORKDIR "/src/sidecar-test"
RUN dotnet restore "sidecar-test.csproj"
COPY . .

RUN dotnet build "sidecar-test.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "sidecar-test.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "sidecar-test.dll"]

If you find that the dockerfile doesn’t work, then you might find the following posts helpful:

Beginner’s Guide to Docker – Part 2 – Debugging a Docker Build


Beginner’s Guide to Docker – Part 3 – Debugging a Docker Build (Continued)

If you’d rather not wade through that, then the Reader’s Digest version is:

docker build -t sidecar-test .

Once the docker files are building, experiment with running them:

docker run --restart=always -d sidecar-test

Assuming this works, you should see that the output shows Hello World (as per the standard console app).

Web API

For the web API, again, we’re going to go with the default Web API – just create a new project in VS and select ASP.Net Core Web API:

Out of the box, this gives you a Weather Forecaster API, so we’ll stick with that for now, as we’re interested in the pattern itself and not the specifics.

Again, add docker support; here’s mine:

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src

COPY ["sidecar-test-api.csproj", "sidecar-test-api/"]
WORKDIR "/src/sidecar-test-api"
RUN dotnet restore "sidecar-test-api.csproj"
COPY . .

RUN dotnet build "sidecar-test-api.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "sidecar-test-api.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "sidecar-test-api.dll"]

You should now be able to build that in a similar fashion as before:

docker build -t sidecar-test-api .

You can now run this, although unlike before, you’ll need to map the ports. The port mapping works by mapping the external port to the internal port – so to connect to the mapping below, you would connect to http://localhost:5001, but that will connect to port 80 inside the container.

docker run --restart=always -d -p 5001:80 sidecar-test-api
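
Assuming the default WeatherForecast controller is still in place, a quick sanity check from the host should return some JSON:

curl http://localhost:5001/WeatherForecast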

Now that you can connect to the Web API, let’s now implement the sidecar pattern.

Docker Compose

We now have two independent containers; however, they cannot, and do not, communicate. Let’s remind ourselves of the directory structure that we decided on at the start of the post:

sidecar-test
	console-app
	web-api

We now want to open the parent sidecar-test directory. In there, create a file called docker-compose.yml. It should look like this:

version: '3'
services:
  sidecar-api:
    build: ./web-api
    ports: 
      - "5001:80"
  sidecar-program:
    build: ./console-app
    depends_on: 
      - "sidecar-api"

There’s lots to unpick here. Let’s start with services – this contains all the individual services that we’re going to run in this file; each one has a build section, which tells compose to call docker build on the path specified. You can create the image separately and replace this with an image section. The ports section maps directly to the docker -p switch.

There are two other important parts to this. The first is the service name (sidecar-api, sidecar-program) – we’ll come back to that shortly. The second is depends_on, which tells Docker Compose to ensure that the dependency is built and running first (we could switch the order of these and it would still start the API first).
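
For example, if you had already built and pushed the images elsewhere (the image names below are made up), the build sections could be swapped for image sections:

version: '3'
services:
  sidecar-api:
    image: my-registry/sidecar-test-api:latest
    ports:
      - "5001:80"
  sidecar-program:
    image: my-registry/sidecar-test:latest
    depends_on:
      - "sidecar-api"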

Running The Sidecar

Finally, we can run our sidecar. You can do this by navigating to the parent sidecar-test directory, and typing:

docker compose up

If you’re changing and running the code, and using PowerShell, you can use the following command:

docker-compose down; docker-compose build --no-cache; docker-compose up

This will force a rebuild every time.

However, if you run this now, you’ll notice that, whilst it runs, it does nothing.

Linking The Two

Docker Compose has a nifty little feature – all the services in the compose file are networked together, and can be referenced by using their service name.

In the console app, change the main method to:

        public static async Task Main(string[] args)
        {
            Console.WriteLine("Get Weather");
            var httpClient = new HttpClient();
            httpClient.BaseAddress = new Uri("http://sidecar-api");
            var result = await httpClient.GetAsync("WeatherForecast");
            result.EnsureSuccessStatusCode();
            var content = await result.Content.ReadAsStringAsync();
            Console.WriteLine(content);
        }

We won’t focus on the fact that I’m directly instantiating HttpClient here; but notice how we reference the API via http://sidecar-api – because both services are part of the same compose file, this just works.

If we now recompile the docker files:

docker-compose down; docker-compose build --no-cache; docker-compose up

We should have a running sidecar – admittedly it doesn’t behave like a typical sidecar (it isn’t offloading logging or anything similarly auxiliary), but that’s down to what we’re using it for, not the architecture.

References

https://docs.docker.com/compose/

Manually Parsing a Json String Using System.Text.Json

In this post from 2016 I gave some details as to how you could manually parse a JSON string using Newtonsoft JSON.NET. What I don’t think I covered in that post was why you might wish to do that; nor did I cover how you could do it using System.Text.Json (although, since that library was only introduced in .Net Core 3 – circa 2019 – that part couldn’t really be helped!)

Why?

Let’s start with why you would want to parse a JSON string manually – I mean, the serialisation functions in, pretty much, any JSON library are really good.

The problem is coupling: by using serialisation, you’re coupling your data to a given shape, and very tightly coupling it to that shape, too. So much so that a common pattern, if you’re passing data between two services and using serialisation, is to share the model class between the services. This sounds quite innocuous at first glance, but let’s consider a few factors (I’m assuming we’re talking exclusively about .Net, but I imagine the points are valid outside of that, too):

1. Why are you serialising and de-serialising the data to begin with?
2. Single Responsibility Principle.
3. Microservices.

Let’s start with (1). Your answer may be different, but typically, if you’re passing data as a string, it’s because you’re trying to remove a dependency to a given complex type. After all, a string can be passed anywhere: an API call, a message broker, even held in a database.

What has this got to do with the SRP (2)? Well, the SRP is typically used to describe the reason that a module has to change (admittedly it is slightly mis-named). Let’s see how the two modules may interact:

Now, let’s look at the interaction with a common module:

As you can see, they both have a dependency on a single external (external to the service) dependency. If the CustomerModel changes, then both services may also need to change; but they also need to change for alterations to business rules that relate to the service itself: so they now have two reasons to change.

Of course, you don’t have to have a common dependency like this; you could structure your system like this:

However, you don’t solve your problem – in fact, you arguably make it worse: if you change the CustomerModel referenced by Service 1, you potentially break Service 2, so you now need to change the CustomerModel referenced by Service 2, and Service 2 itself!

Just to clarify what I’m saying here: there may be valid reasons for both of these designs – but if you use them, then be aware that you’re coupling the services to each other; which brings us to point (3): if you’re trying to create a Service Oriented Architecture of any description, then this kind of coupling may hinder your efforts.

The Solution

A quick caveat here: whatever you do in a system, the parts of that system will be coupled to some extent. For example, if you have built a Microservice Architecture where your system is running a nuclear reactor, and then you decide to change one of the services from monitoring the cooling tower to, instead, mine bit-coins, you’re going to find that there is an element of coupling. Some of that will be business coupling (i.e. the cooling tower may overheat), but some will be technical – a downstream service may depend on the monitoring service to assert that something is safe.

Apologies, I know absolutely nothing about nuclear reactors; or, for that matter, about mining bit-coin!

All that said, if you manually parse the JSON that’s received, you remove some dependency on the shape of the data.

The following reads a JSON document, and iterates through an array:

            using var doc = JsonDocument.Parse(json);
            var element = doc.RootElement;

            foreach (var eachElement in element.EnumerateArray())
            {
                string name = eachElement.GetProperty("Name").GetString();
                decimal someFigure = eachElement.GetProperty("SomeFigure").GetDecimal();

                if (someFigure > 500)
                {
                    Console.WriteLine($"{name} has more than 500!");
                }
            }

As you can see, if the property name of SomeFigure changed, the code would break; however, there may be a dozen more fields in each element that could change, and we wouldn’t care.
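
For context, the code above assumes JSON shaped roughly like this (the values are invented for illustration):

            // Hypothetical input - an array of objects, each with at least Name and SomeFigure;
            // any extra properties are simply ignored by the parsing code above
            string json = @"[
                { ""Name"": ""Widgets Ltd"", ""SomeFigure"": 750.00, ""SomethingWeDontCareAbout"": true },
                { ""Name"": ""Gadgets Inc"", ""SomeFigure"": 120.50 }
            ]";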

Debugging a Failed API Request, and Defining an Authorization Header Using Fiddler Everywhere

Postman is a really great tool, but I’ve recently been playing with Telerik’s new version of Fiddler – Fiddler Everywhere. You’re probably thinking that these are separate tools, that do separate jobs… and before Fiddler Everywhere, you’d have been right. However, have a look at this screen:

…In fact it’s not Postman. The previous version of this tool (called Compose) from Fiddler 4 was pretty clunky – but now you can simply right-click on a request and select “Edit in Compose”.

Use Case

Let’s imagine for a minute, that you’ve made a request, and you got a 401 because the authentication header wasn’t set. You can open that request in Fiddler:

In fact, this returns a nice little web snippet, which we can see by selecting the Web tab:

The error:

The request requires HTTP authentication

This error means that authentication details have not been passed to the API; typically, these can be passed in the header, in the form:

Authorization: Basic EncodedUsernamePassword

So, let’s try re-issuing that call in Fiddler – let’s start with the encoding. Visit this site and enter into the Encode section your username and password:

Username:password

For example:

In Fiddler, right click on the request in question, and select to Edit in Compose. You should now see the full request, and be able to edit any part of it; for example, you can add an Authorization header:

Now that you’ve proved that works, you can make the appropriate change in the code – here’s what that looks like in C#:

            byte[] bytes = Encoding.UTF8.GetBytes($"{username}:{password}");
            var auth = Convert.ToBase64String(bytes);

            var client = _httpClientFactory.CreateClient();
            client.DefaultRequestHeaders.Add("Authorization", $"Basic {auth}");
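
For completeness, a minimal sketch of then re-issuing the call with that client (the URL is a placeholder for whatever the failing request was):

            // Hypothetical endpoint - substitute the address of the real API
            var response = await client.GetAsync("https://localhost:5001/api/values");
            response.EnsureSuccessStatusCode();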

Installing .Net on Ubuntu… on Windows

With the new Windows Subsystem for Linux, and the Windows Terminal, comes the ability to run .Net programs on Linux from your Windows machine.

Here I wrote briefly about how you can install and run Linux on your Windows machine. In this post, I’ll cover how to install .Net.

If you don’t have the Windows Terminal, then you can install it here.

The installation process is pretty straightforward, and the first step is to launch the Windows Terminal. Once that’s running, open a Linux tab, and run the following two scripts (if you’re interested in where these came from, follow the link in the References section below):

wget https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

Then run:

sudo apt-get update; \
  sudo apt-get install -y apt-transport-https && \
  sudo apt-get update && \
  sudo apt-get install -y dotnet-sdk-3.1

That should do it. To verify that it has:

dotnet --version

And you should see the version number of the installed SDK.

References

https://docs.microsoft.com/en-us/dotnet/core/install/linux-ubuntu

Short Walks – Running an Extension Method on a Null Item

I came across this issue recently, and realised that I didn’t fully understand extension methods. My previous understanding was that an extension method was simply added to the original class (possibly in the same manner that weavers work); however, a construct similar to the following code changed my opinion:

class Program
{
    static void Main(string[] args)
    {
        var myList = GetList();            
        var newList = myList.Where(
            a => a.IsKosher());
        var evaluateList = newList.ToList();
 
        foreach(var a in evaluateList)
        {
            Console.WriteLine(a.Testing);
        }
    }
 
    static IEnumerable<TestClass> GetList()
    {
        return new List<TestClass>()
        {
            new TestClass() {Testing = "123"},
            null
        };
    }
}
 
public class TestClass
{
    public string Testing { get; set; }
}
 
public static class ExtensionTest
{
    public static bool IsKosher(this TestClass testClass)
    {
        return (!string.IsNullOrWhiteSpace(testClass.Testing));
    }
}

As you can see from the code, GetList() returns a null class in the collection. If you run this code, you’ll find that it crashes inside the extension method, because testClass is null.
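
If you wanted the extension method to tolerate a null target, it would have to guard against it itself – a minimal sketch:

public static bool IsKosher(this TestClass testClass)
{
    // An extension method receives its "this" argument like any other parameter,
    // so it can (and here must) check for null itself
    if (testClass == null) return false;

    return !string.IsNullOrWhiteSpace(testClass.Testing);
}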

A note on Linq

If you’re investigating this in the wild, you might find it particularly difficult because of the way that Linq works. Even though the call to the extension method is on the line above, the code doesn’t get run until you actually use the result (in this case, via a ToList()).

New understanding

As I now understand it, extension methods are simply a nice syntactical way to use a static method. That is, had I simply declared my IsKosher method as a standard static method, it would behave exactly the same. To verify this, let’s have a look at the IL; here’s the IL for my function above:

IL Code for extension method

And here’s the IL for the same function as a standard static method:

IL code for static method

The only difference is the line at the top of the extension method calling the ExtensionAttribute constructor.

References

https://stackoverflow.com/questions/847209/in-c-what-happens-when-you-call-an-extension-method-on-a-null-object

Short Walks – Using CompilerService Arguments in an Interface

Until today, I thought that the following code would work:

class Program
{
    static void Main(string[] args)
    {
        ITest test = new Test();
        test.Log("testing");
        Console.ReadLine();
    }
}
 
interface ITest
{
    void Log(string text, string function = "");
}
 
class Test : ITest
{
    public void Log(string text, [CallerMemberName] string function = "")
    {
        Console.WriteLine($"{function} : text");
    }
}

And, by work, I mean output something along the lines of:

Main : testing

However; it actually outputs:

: testing

CompilerServices attributes need to be on the interface, and not on the implementation

class Program
{
    static void Main(string[] args)
    {
        ITest test = new Test();
        test.Log("testing");
        Console.ReadLine();
    }
}
 
interface ITest
{
    void Log(string text, [CallerMemberName] string function = "");
}
 
class Test : ITest
{
    public void Log(string text, string function = "")
    {
        Console.WriteLine($"{function} : text");
    }
}

Why?

When you think about it, it does kind of make sense. Because you’re calling against the interface, the compiler injected value needs to be there; if you took the interface out of the equation, then the attribute needs to be on the class.
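
To illustrate that last point: using the first version of Test above (with the attribute on the implementation), calling through the concrete type rather than the interface picks the attribute up:

Test test = new Test();
// The compiler now binds to Test.Log, sees [CallerMemberName], and injects the caller's name
test.Log("testing");   // outputs: Main : testing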

You live and learn!

Creating a Basic Azure Web Job

In this article, I discussed the use of Azure functions; however, Web Jobs perform a similar task. Azure Functions are effectively an abstraction on top of Web Jobs – meaning that, while you have more control when using Web Jobs, there’s a little more to do when writing them.

This article covers the basics of Web Jobs, and has a walk-through for creating a very simple task using one.

Create a new Web Job

Once you create this project, you’ll need to fill in the following values in the app.config:

<configuration>
  <connectionStrings>
    <!-- The format of the connection string is "DefaultEndpointsProtocol=https;AccountName=NAME;AccountKey=KEY" -->
    <!-- For local execution, the value can be set either in this config file or through environment variables -->
    <add name="AzureWebJobsDashboard" connectionString="" />
    <add name="AzureWebJobsStorage" connectionString="" />
  </connectionStrings>
</configuration>

These can both be the same value, but they refer to where Azure stores its data.

AzureWebJobsDashboard

This is the storage account used to store logs.

AzureWebJobsStorage

This is the storage account used to store whatever the application needs to function (for example: queues or tables). In the example below, it’s where the file will go.

Storage accounts can be set-up from the Azure dashboard (more on this later):

A Basic Application

For this example, let’s take a file from a blob storage and parse it, then write out the result in a log. Specifically, we’ll take an XML file, and write the number of nodes into a log; here’s the file:

<test>
    <myNode>
    </myNode>
    <myNode>
    </myNode>
</test>

I think we’ll probably be looking for a figure around 2.

Blob Storage

Before we can do anything with blob storage, we’ll need a new storage area; create a new storage account:

Set the storage kind to “General Storage” (because we’re working with files); other than that, go with your gut.

Uploading

Once you’ve created the account, you’ll need to add a file – otherwise nothing will happen. You can do this in the web portal, or you can do it via a desktop utility that Microsoft provide: Storage Explorer.

I kind of expected this to take me to the web page mentioned… but it doesn’t! You have to navigate there manually:

http://storageexplorer.com

Install it… unless you want to upload your file using the web portal… in which case: don’t.

We can create a new container:

Now, we can see the storage account and any containers:

Now, you can upload a file from here (remember that you can do all this inside the Portal):

Once you’ve created this, go back and update the storage connection string (described above). You may also want to repeat the process for a dashboard storage area (or, as stated above, they can be the same).

Programmatically Downloading

Now we have a file in the directory, it can be downloaded via the WebJob; here’s a function that will download a file:

        public static async Task<string> GetFileContents(string connectionString, string containerString, string fileName)
        {
            CloudStorageAccount storage = CloudStorageAccount.Parse(connectionString);
            CloudBlobClient client = storage.CreateCloudBlobClient();
            CloudBlobContainer container = client.GetContainerReference(containerString);
            CloudBlob blob = container.GetBlobReference(fileName);

            MemoryStream ms = new MemoryStream();
            await blob.DownloadToStreamAsync(ms);
            ms.Position = 0;

            StreamReader sr = new StreamReader(ms);
            string contents = sr.ReadToEnd();
            return contents;
        }

The code to call this is here (note the commented out commands from the default WebJob Template):

        static void Main()
        {
            Console.WriteLine("Starting");

            var config = new JobHostConfiguration();

            if (config.IsDevelopment)
            {
                config.UseDevelopmentSettings();
            }

            //var host = new JobHost();

            string fileContents = AzureHelpers.GetFileContents(config.StorageConnectionString, "testblob", "test.xml").Result;
            Console.WriteLine(fileContents);

            // The following code ensures that the WebJob will be running continuously
            //host.RunAndBlock();

            Console.WriteLine("Done");
        }

Although this works (sort of – it doesn’t check for new files, and it would need to be run on a scheduled basis – “On Demand” in Azure terms), you don’t need it (at least not for jobs that react to files being uploaded to storage containers). WebJobs provide this functionality out of the box! There are a number of decorators that you can use for various purposes:

  • string
  • TextReader
  • Stream
  • ICloudBlob
  • CloudBlockBlob
  • CloudPageBlob
  • CloudBlobContainer
  • CloudBlobDirectory
  • IEnumerable<CloudBlockBlob>
  • IEnumerable<CloudPageBlob>

Here, we’ll use a BlobTrigger and accept a string. Moreover, doing it this way makes the writing to the log much easier, as there’s injection of sorts (at least I’m assuming that’s what it’s doing). Here’s what the complete solution looks like in the new paradigm:

        public static void ProcessFile([BlobTrigger("testblob/{name}")] string fileContents, TextWriter log)
        {            
            XmlDocument xmlDoc = new XmlDocument();
            xmlDoc.LoadXml(fileContents);            
            log.WriteLine($"Node count: {xmlDoc.FirstChild.ChildNodes.Count}");
        }

The key thing to notice here is that the function is static and public (the class it’s in needs to be public, too – even if that’s the Program class). The WebJob framework uses reflection to work out which functions it needs to run.

The other point to note is that I’m getting the parameter as a string – the article above details what you could have it as; for example, if you wanted to delete it afterwards, you’d probably want to use an ICloudBlob or something similar.
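
For example, a rough sketch of a trigger that deletes the blob once it has been processed might look like this (using ICloudBlob instead of string):

        public static async Task ProcessAndDeleteFile(
            [BlobTrigger("testblob/{name}")] ICloudBlob blob, TextWriter log)
        {
            log.WriteLine($"Processing {blob.Name}");

            // ... process the blob contents here ...

            // Remove the blob now that we're done with it
            await blob.DeleteAsync();
        }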

Anyway, it works:

The log file

Remember the storage area that we specified for the dashboard earlier? You should now see some new containers created in that storage area:

This has created a number of directories, but the one that we’re interested in is “output-logs” in the “azure-webjobs-hosts” container:

And here’s the log itself:

References

https://docs.microsoft.com/en-us/azure/app-service-web/web-sites-create-web-jobs

https://stackoverflow.com/questions/36610952/azure-webjobs-vs-azure-functions-how-to-choose

https://stackoverflow.com/questions/27580264/where-do-i-get-the-azurewebjobsdashboard-connection-string-information

http://www.hanselman.com/blog/IntroducingWindowsAzureWebJobs.aspx

https://stackoverflow.com/questions/24286214/where-are-azure-webjobs-blobinput-and-bloboutput-classes

https://docs.microsoft.com/en-us/azure/app-service-web/websites-dotnet-webjobs-sdk-storage-blobs-how-to

NUnit TestCaseSource

While working on this project, I found a need to abstract away a base type that the unit tests use (in this instance, it was a queue type). I was only testing a single type (PriorityQueue); however, I wanted to create a new type, but all the basic tests for the new type are the same as the existing ones. This led me to investigate the TestCaseSource attribute in NUnit.

As a result, I needed a way to re-use the tests. There are definitely multiple ways to do this; the simplest one is probably to create a factory class, and pass in a string parameter. The only thing that put me off this is that you end up with the following test case:

        [TestCase("test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9"]
        [TestCase("a1", "a", "a1", "b", "c", "d", "a"]
        public void Queue_Dequeue_CheckResultOrdering(
            string first, string last, params string[] queueItems)
        {

Becoming:

        [TestCase("PriorityQueue", "test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9"]
        [TestCase("PriorityQueue2", "test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9"]
        [TestCase("PriorityQueue", "a1", "a", "a1", "b", "c", "d", "a"]
        [TestCase("PriorityQueue2", "a1", "a", "a1", "b", "c", "d", "a"]
        public void Queue_Dequeue_CheckResultOrdering(
            string queueType, string first, string last, params string[] queueItems)
        {

This isn’t very scalable when adding a third or fourth type.

TestCaseSource

It turns out that the (or at least an) answer to this is to use NUnit’s TestCaseSource attribute. The NUnit code base dogfoods quite extensively, so that is not a bad place to look for examples of how this works; however, what I couldn’t find was a way to mix and match. To better illustrate the point, here’s the first test that I changed to use TestCaseSource:

        [Test]
        public void Queue_NoEntries_CheckCount()
        {
            // Arrange
            PQueue.PriorityQueue<string> queue = new PQueue.PriorityQueue<string>();

            // Act
            int count = queue.Count();

            // Assert
            Assert.AreEqual(0, count);
        }

Which became:

        [Test, TestCaseSource(typeof(TestableQueueItemFactory), "ReturnQueueTypes")]
        public void Queue_NoEntries_CheckCount(IQueue<string> queue)
        {
            // Arrange


            // Act
            int count = queue.Count();

            // Assert
            Assert.AreEqual(0, count);
        }

(For completeness, the TestableQueueItemFactory is here):

    public static class TestableQueueItemFactory
    {
        public static IEnumerable<IQueue<string>> ReturnQueueTypes()
        {
            yield return new PQueue.PriorityQueue<string>();
        }
    }

However, when you have a TestCase like the one above, there’s a need for the equivalent of this (which doesn’t work):

        [Test, TestCaseSource(typeof(TestableQueueItemFactory), "ReturnQueueTypes")]
        [TestCase("test", "test9", "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9")]
        [TestCase("a1", "a", "a1", "b", "c", "d", "a")]
        public void Queue_Dequeue_CheckResultOrdering(string first, string last, params string[] queueItems)
        {

A quick look at the NUnit code base reveals these attributes to be mutually exclusive.

Compromise

By no means is this a perfect solution, but the one that I settled on was to create a second TestCaseSource helper method, which looks like this (along with the test):

        private static IEnumerable Queue_Dequeue_CheckResultOrdering_TestCase()
        {
            foreach(var queueType in TestableQueueItemFactory.ReturnQueueTypes())
            {
                yield return new object[] { queueType, "test", "test9", new string[] { "test", "test2", "test3", "test4", "test5", "test6", "test7", "test8", "test9" } };
                yield return new object[] { queueType, "a1", "a", new string[] { "a1", "b", "c", "d", "a" } };
            }
        }

        [Test, TestCaseSource("Queue_Dequeue_CheckResultOrdering_TestCase")]
        public void Queue_Dequeue_CheckResultOrdering(
            IQueue <string> queue, string first, string last, params string[] queueItems)
        {

As you can see, the second helper method doesn’t really help readability, so it’s certainly not a perfect solution; in fact, with a single queue type, this makes the code more complex and less readable. However, when a second and third queue type are introduced, the test suddenly becomes resilient.
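
For instance, if a (hypothetical) PriorityQueue2 were introduced, the only change needed would be an extra line in the factory – every test using the source would then run against both types:

    public static class TestableQueueItemFactory
    {
        public static IEnumerable<IQueue<string>> ReturnQueueTypes()
        {
            yield return new PQueue.PriorityQueue<string>();

            // A second (hypothetical) implementation is picked up by every test automatically
            yield return new PQueue.PriorityQueue2<string>();
        }
    }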

YAGNI

At first glance, this may appear to be an example of YAGNI. However, in this article, Martin Fowler does state:

Yagni only applies to capabilities built into the software to support a presumptive feature, it does not apply to effort to make the software easier to modify.

Which, I believe, is what we are doing here.

References

http://www.smaclellan.com/posts/parameterized-tests-made-simple/

http://stackoverflow.com/questions/16346903/how-to-use-multiple-testcasesource-attributes-for-an-n-unit-test

https://github.com/nunit/docs/wiki/TestCaseSource-Attribute

http://dotnetgeek.tumblr.com/post/2851360238/exploiting-nunit-attributes-valuesourceattribute

https://github.com/nunit/docs/wiki/TestCaseSource-Attribute

Programmatically List Existing Tags Using The TFS API

Having looked into this for some time, I came up with the following method of extracting team project tags. I’m not for a minute suggesting this is the best way of doing it – but it does work. My guess is that it’s not a very scalable solution, as it’s doing a LOT of work.

As it was, I couldn’t find a way to directly query the tags, so instead, I’m going through all the work items, and picking the tags. I couldn’t even find a way to filter the work items that actually have tags; so here’s the query that I ended up with:

private static IEnumerable<string> GetAllDistinctWorkItemTags(string uri, string projectName)
{
    TfsTeamProjectCollection tfs;
 
    tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(uri)); // https://mytfs.visualstudio.com/DefaultCollection
    tfs.Authenticate();
 
    var wis = new WorkItemStore(tfs);
 
    WorkItemCollection workItemCollection = wis.Query(
         " SELECT [System.Tags]" +
         " FROM WorkItems " +
         $" WHERE [System.TeamProject] = '{projectName}' ");                
 
    if (workItemCollection.Count == 0)
        return null;
 
    List<string> tags = new List<string>();
    foreach (WorkItem wi in workItemCollection)
    {
        if (string.IsNullOrWhiteSpace(wi.Tags)) continue;
 
        var splitTags = wi.Tags.Split(';');
        tags.AddRange(splitTags.ToList());                
    }
 
    return tags.Distinct();
}

From debugging, I strongly suspect that whatever you put in the “SELECT”, it returns the entire work item. I also, for the life of me, couldn’t work out a lambda query for parsing the tags.
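
For what it’s worth, a lambda version of the tag parsing might look something like the following (an untested sketch – WorkItemCollection only implements the non-generic IEnumerable, hence the Cast):

    var tags = workItemCollection.Cast<WorkItem>()
        .Where(wi => !string.IsNullOrWhiteSpace(wi.Tags))
        .SelectMany(wi => wi.Tags.Split(';'))
        .Distinct();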

The calling method is here:

public static IEnumerable<string> GetAllTags(string uri, string teamProject)
{
    var project = GetTeamProject(uri, teamProject);
    IEnumerable<string> tags = GetAllDistinctWorkItemTags(uri, teamProject);
 
    return tags;
}

I’ve listed GetTeamProject helper method before, but for the sake of completeness:

public static Project GetTeamProject(string uri, string name)
{
    TfsTeamProjectCollection tfs;
 
    tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(uri)); // https://mytfs.visualstudio.com/DefaultCollection
    tfs.Authenticate();
 
    var workItemStore = new WorkItemStore(tfs);
    
    var project = (from Project pr in workItemStore.Projects
                       where pr.Name == name
                       select pr).FirstOrDefault();
    if (project == null)
        throw new Exception($"Unable to find {name} in {uri}");
 
    return project;
}

Here’s the output:

tags1

Notes on Tags

A couple of points on tags: firstly, tags seem to exist in a kind of transient state; that is, while something is tagged, the tag exists, but once you remove all instances of a tag (for example, if I removed "Tagtest1" from all work items in my team project), TFS will eventually (I believe after a couple of days) just delete the tag for me. Obviously, in my example, as soon as I did this, I would no longer find it. This might leave you thinking that there should be a more efficient way of removing tags (that is, you should be able to access that transient store in some way).

The existence of this Visual Studio plug-in lends support to that idea. It allows you to maintain the tags within your team project. If you’re using tags in any kind of serious way then I’d strongly recommend that you try it.

Performance

This is doing a lot of (IMO) unnecessary work, so I tried a little performance test; using this post as a template, I created a lot of bugs:

tags2

As you can see, I created a random set of tags. One other point that I’m going to put here is that a TFS database with ~30K work items and no code whatsoever increases the size of the default collection DB to around 2GB:

tags3

Now I ran the GetAllTags with some timing on:

tags4

19 seconds, which seems like quite a reasonable speed to me for 13.5k tags.