Mutation Testing

Some time ago, I heard Dan Clarke from the Unhandled Exception podcast mention mutation testing – the latest episode on this can be found here. I thought this definitely warranted some investigation.

If you skip to the bottom of this post, you’ll see some links to the official docs for Stryker, and to a video that details exactly how to use it.

What is Mutation Testing?

The hypothesis here is that, if you’ve written a test, you can test that test by changing an element of the code under test: if the change breaks your test, then your test is valid; if it doesn’t, then your test is not.

I’m not completely sure I accept this theory, but I can see its uses. In this post, I’m experimenting with a Calculator class.

Installation

For the purpose of this, I’ll assume that you have some code to test. If you don’t, then you can download the code that I used for my tests here.

You’ll need a terminal window – you can use Windows Terminal, or any other terminal of your choice; I’ve recently started using the Developer PowerShell (it’s kind of the Visual Studio equivalent of the VS Code terminal).

The first thing you’ll need to do (unless you’re already using other .NET tools in your project) is to install the tool manifest:

dotnet new tool-manifest

To install Stryker, use the following command:

dotnet tool install dotnet-stryker

Tests and Usage

The tool cannot work without tests – remember that its purpose is to tell you whether the tests are useful, not whether the tests are there (although you do get some coverage stats from it). Here’s my code:

public static class Calculator
{
    public static decimal Add(decimal x, decimal y) =>
        x + y;

    public static decimal Subtract(decimal x, decimal y) =>
        x - y;
}

And here’s the test that I have:

[Fact]
public void Calculator_Add_ReturnsCorrect()
{
    // Arrange

    // Act
    decimal result = CalculatorApp.Calculator.Add(3, 6);

    // Assert
    Assert.Equal(9, result);
}

As you can see, we’re looking at, at most, 50% test coverage. Let’s run the mutation tool and see what happens:
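dotnet stryker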

If you open the URL that it gives you, you’ll get a coverage report, including any mutants that survived (we’ll come back to these later).

What this is telling us is that we don’t have particularly good test coverage, but the mutants in the code that we do cover have all been killed.

Let’s fill out the test coverage to 100%:

[Fact]
public void Calculator_Subtract_ReturnsCorrect()
{
    // Arrange

    // Act
    decimal result = CalculatorApp.Calculator.Subtract(1, 0);

    // Assert
    Assert.Equal(1, result);
}

Admittedly, this took some gaming of the system, but when you run this, the mutant survives.

Why did the mutant survive this test, but not the first one? And what does ‘survived’ mean? Well, you can actually get Stryker to tell you what it does during the mutation by selecting the file in question, and clicking “Expand All”.

What this tells you is that it replaced the code in the Subtract method with 1 (i.e. just return 1), and with x + y (rather than x – y). The mutation would be ‘killed’ if, upon this change, at least one test failed. All I had to do was to find a test that would still pass under both mutations (hence 1 – 0).
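To kill both of those mutants, you just need operands where each mutation changes the answer; for example (a hypothetical test, not from the code above):

[Fact]
public void Calculator_Subtract_KillsMutants()
{
    // Act
    decimal result = CalculatorApp.Calculator.Subtract(5, 2);

    // Assert - fails for the 'return 1' mutant, and for the 'x + y' mutant (7)
    Assert.Equal(3, result);
}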

Summary

Stryker looks like a really cool and useful tool, but it definitely has its limitations. It identifies test coverage, and flags any test coverage that isn’t definitive (where there is no assert statement, or where the assert is ambiguous; for example, asserting only that an exception is not thrown).
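As an illustration, a test like this (hypothetical, for the Calculator class above) covers the Add method, but asserts nothing about the result, so every mutant of Add survives:

[Fact]
public void Calculator_Add_DoesNotThrow()
{
    // Covers the line, but kills no mutants
    _ = CalculatorApp.Calculator.Add(3, 6);
}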

I’ve still to run it on a reasonably sized code-base, which I fully intend to do, but I’m not sure that I’d necessarily build this into a CI/CD pipeline (unless you genuinely fear that your developers are gaming the code-coverage stats).

I also have reservations as to whether a code base with 100% test coverage and 0 surviving mutants is a healthy one, or one in a straitjacket. Having said that, I definitely think this is a useful tool – it gives you information about your code base that you didn’t have before.

References

https://www.youtube.com/watch?v=DiIFM4Iluzw

https://stryker-mutator.io/docs/stryker-net/Introduction/

Conditionally Creating Resources in Terraform

I’ve recently been learning and blogging about Terraform (the latest of which you can find here).

In this post, I’m going to cover the conditional creation of a resource, using the count argument.

Disclaimer

As with most of the stuff that finds its way into my blog, this is from finer minds than my own. It’s also worth noting, as with all the stuff that finds its way here, the main audience is future me.

Count

All of the resources in Terraform have a built-in count property – what that means is that you can do something like this:

resource "azurerm_resource_group" "rg" {
  count = 2
  name     = "myTFResourceGroup${count.index}"
  location = "westus2"
}

This will create two resource groups, called myTFResourceGroup0 and myTFResourceGroup1. Admittedly, this has limited uses; however, it becomes much more useful when you use something like this:

resource "azurerm_resource_group" "rg" {
  count                      = var.myvar == "some_setting" ? 1 : 0 

So now, using the variables associated with the configuration that you run, you can either create the resource, or not (a count of 0 meaning that it will not be created).

Error

So that’s all well and good, but let’s imagine that you’re using this resource; for example:

resource "azurerm_app_service_plan" "app-service-plan" {
  name                = "pcm-app-service-plan"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  sku {
    tier = "Standard"
    size = "S1"
  }
}

(See this post for an explanation of this)

If you’re using the resource group, you’ll start to get an error:

Because azurerm_resource_group.rg has “count” set, its attributes must be
accessed on specific instances.

What this is basically telling you is that, because you’ve specified a count, you now have to tell it which instance of the resource you’re referring to. In our case, we know that if the resource group exists at all, there will only be one, so we can use this:

resource "azurerm_app_service_plan" "app-service-plan" {
  name                = "pcm-app-service-plan"
  location            = azurerm_resource_group.rg[0].location
  resource_group_name = azurerm_resource_group.rg[0].name
  sku {
    tier = "Standard"
    size = "S1"
  }
}

However, there’s still a problem here. Let’s imagine that our myvar setting is not set to “some_setting” – in that case, the resource group will not be created; however, the app service plan will, because no such check exists on it. The upshot of this is that anything that uses a resource that has a count must, itself, have a count (based on the same logic).
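Something like this (a sketch combining the snippets above) keeps the two resources in step:

resource "azurerm_app_service_plan" "app-service-plan" {
  count               = var.myvar == "some_setting" ? 1 : 0
  name                = "pcm-app-service-plan"
  location            = azurerm_resource_group.rg[0].location
  resource_group_name = azurerm_resource_group.rg[0].name
  sku {
    tier = "Standard"
    size = "S1"
  }
}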

Receiving a Message Using Azure.Messaging.ServiceBus

Azure.Messaging.ServiceBus is the latest SDK library that allows you to interface with Azure Service Bus.

In this post I wrote about receiving a message in Azure Service Bus using the Microsoft.Azure.ServiceBus library. Here, I’ll cover the method of receiving a message using Azure.Messaging.ServiceBus.

The first step is to create a ServiceBusClient instance:

_serviceBusClient = new ServiceBusClient(connectionString);

Once you’ve created this, the subsequent classes are created from there. This library draws a distinction between a message receiver and a message processor – the latter being event driven.

Receiving a Message

To receive a message:

var messageReceiver = _serviceBusClient.CreateReceiver(QUEUE_NAME);
var message = await messageReceiver.ReceiveMessageAsync();

// The old (Microsoft.Azure.ServiceBus) way was:
// string messageBody = Encoding.UTF8.GetString(message.Body);
string messageBody = message.Body.ToString();

It’s worth noting here that it is no longer necessary to decode the message body explicitly.
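Once you’ve dealt with the message, you’d typically also settle it so that it isn’t redelivered; for example (a minimal sketch, assuming the default PeekLock receive mode):

await messageReceiver.CompleteMessageAsync(message);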

Processing a Message

This is the new version of registering a handler for the event, and it has a few additional features. Let’s see the code:

            var processor = _serviceBusClient.CreateProcessor(QUEUE_NAME);
            processor.ProcessMessageAsync += handleMessage;
            processor.ProcessErrorAsync += ExceptionHandler;

            await processor.StartProcessingAsync();                        

            await Task.Delay(2000);
            await processor.StopProcessingAsync();

We won’t worry too much about the handlers themselves for now, but the important calls are StartProcessingAsync and StopProcessingAsync. Note that here we have a 2 second delay – this means that we will receive messages for two seconds, and then stop; obviously the start and stop don’t need to be in the same method.
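For completeness, the two handlers might look something like this (a minimal sketch – handleMessage and ExceptionHandler are just the names wired up above):

private static async Task handleMessage(ProcessMessageEventArgs args)
{
    // The body can be read straight from the event args
    string messageBody = args.Message.Body.ToString();
    Console.WriteLine(messageBody);

    // Settle the message so that it isn't redelivered
    await args.CompleteMessageAsync(args.Message);
}

private static Task ExceptionHandler(ProcessErrorEventArgs args)
{
    Console.WriteLine(args.Exception.Message);
    return Task.CompletedTask;
}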

References

https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-get-started-with-queues

Using the Mongo DB Shell to remove all elements in a collection

In my recent experiments with using MongoDB, I came across a need to remove all the records from a collection – the equivalent of DELETE FROM MYTABLE in SQL.

It turns out that, in order to do this, you need to use the MongoDB Shell. This comes with Compass, although you may miss it – it’s tucked away at the bottom of the window.

By default it’s collapsed, but once you expand it, you can do a number of things. The first thing you should do is tell it which DB you would like to use (for the purpose of this, let’s assume the database is called aardvark):

> use aardvark
< 'switched to db aardvark'

You can see which collections you have in the database, like this:

> db.getCollectionNames()
< [ 'collection1', 'collection2' ]

You can also see some information about a specific collection; for example:

> db.collection1.exists()

This should return a JSON document that details collection1.

In my case, I wanted to clear everything from the collection:

> db.collection1.deleteMany({ })

You can also pass this a specific filter; an empty filter, as above, deletes everything.
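For example, to remove only the documents that match a filter (test here is just a hypothetical field name):

> db.collection1.deleteMany({ test: "test" })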

References

https://docs.mongodb.com/mongodb-shell/run-commands/

https://www.mongodb.com/blog/post/introducing-the-new-shell

Azure Service Bus SDK Libraries

I’ve written pretty extensively on the Microsoft.Azure.ServiceBus SDK. In this post, I’m just covering the fact that this library is on its way to deprecation (don’t panic – its predecessor has been hanging around since 2011!).

Let’s see what these libraries are and some links.

WindowsAzure.ServiceBus

This library does look like it’s on its way to being deprecated. It supports .NET Framework only.

The NuGet package is here, but it’s closed source:

https://www.nuget.org/packages/WindowsAzure.ServiceBus

Microsoft.Azure.ServiceBus

This library was introduced to support .NET Core.

The NuGet package is here:

https://www.nuget.org/packages/Microsoft.Azure.ServiceBus

The code for this is open source:

https://github.com/Azure/azure-service-bus-dotnet

Azure.Messaging.ServiceBus

If you read Sean Feldman’s article here (which this post was heavily based on), you’ll see that this library seems to be due to some restructuring of teams. The code has changed, and MS say it’s more consistent (although with what, I’m unsure).

The NuGet Package is here:

https://www.nuget.org/packages/Azure.Messaging.ServiceBus

The source code for this is here:

https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/servicebus

References

https://markheath.net/post/migrating-to-new-servicebus-sdk

https://weblogs.asp.net/sfeldman/the-future-of-asb-dotnet-sdk

Cloudflare Workers – Creating a worker to standardise the request URL

Imagine a situation where your website is being linked to, or referenced, such that the case of the URL differs; or, for example, where you wish to change a given part of the URL. Let’s use the following URL from my site as an example:

One possibility in Cloudflare is that you could set up a custom redirect rule (i.e. just create a new rule that redirects from the address above to the lower-case version).

But what if you have dozens of such examples, and they all differ very slightly? I’ve been playing a little with a new(ish) Cloudflare feature called Workers.

In this post, I’ll cover getting started with Workers, how to use one to perform some custom logic on your URL, and link that to your domain.

Wrangler

The tool that you’d use to set these workers up is called Cloudflare Wrangler. To install Wrangler:

npm install -g @cloudflare/wrangler

Once installed, execute the following to ensure that you’ve correctly installed it:

wrangler --version

The next step is to log in to your Cloudflare account from Wrangler:

wrangler login

That will offer to launch the browser, and get you to log in.

Once you’ve authorised this, you can create your new Cloudflare Worker project:

wrangler generate wrangler-test-1   

This will generate a project folder in the current directory.

Once this is done, take your Account ID (you can find this in the Cloudflare dashboard), and open the wrangler.toml file that has been created within your project. Set the account_id in there:

name = "wrangler-test-1"
type = "javascript"

account_id = "f8021e056d0d5e96c9b3ba4ad054bb2c"
workers_dev = true
route = ""
zone_id = ""

You should now be able to debug this locally by typing:

wrangler dev

However, if you run that at this point, you’ll most likely get a 500 error:

Error: HTTP status server error (500 Internal Server Error) for url (https://…

Set-up Workers

The reason you’ll get this is that you have yet to set up Workers on your Cloudflare account.

(It’s worth bearing in mind that, whilst there is a free tier, Cloudflare workers can cost money after a given number of requests.)

In your Cloudflare account, select Workers and click Set up.

Now that Workers are set up, you should be able to resume:

wrangler dev

This looks like it’s running on localhost; which, in fact, it is; however, all the requests are being proxied through Cloudflare (and so the worker will execute on their servers).

You can also simply publish your worker, like so:

wrangler publish

As you can see, this gives you a specific URL to navigate to. The default example displays “Hello World” based on the request.

Write the Worker

Up until now, we’ve just taken the default worker code; but what about something such as the following:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * Redirect to the lower-case version of the URL, where the two differ
 * @param {Request} request
 */
async function handleRequest(request) {
  const destinationUrl = request.url;
  const newUrl = request.url.toLowerCase();
  if (destinationUrl === newUrl) {
    return await fetch(request)
  } else {
    return Response.redirect(newUrl, 301);
  }
}

All we’re doing here is checking whether the URL is lower-case; if it’s not, then we make it lower case. Obviously, if you execute this on its own domain, the worker will not respond, but you can then map this to a different domain; for example:

Now, all the requests coming into pmichaels.net will be manipulated, and then passed through. If we map this and re-execute the initial request, we get redirected to the lower-case version.

It’s worth bearing in mind that this executes on all requests to your site. For example, say you did something like this:

async function handleRequest(request) {
  const destinationUrl = request.url;
  if (destinationUrl.includes('2021')) {
    return await fetch(request)
  } else {
    return Response.redirect("https://www.pmichaels.net", 301);
  }
}

This would work, but the CSS, for example, may fail, because the URL for the CSS may not have 2021 in it, and so the worker would redirect the request away.

Git – What to do when things go wrong

I thought I’d detail some things that you can do when you get issues with git. Specifically, how you can deal with issues without having to resort to manually copying files around.

For the purpose of this post, I’ll assume that you have a git repository, and that some code has been pushed to it. I’ll be using Git Bash for the commands in this post.

Status and Log

Before we start working out how we can revert or fix issues that go wrong, let’s first see how we can reliably, and sensibly, establish what state we’re in.

Let’s start with having a look at the story so far. For this we can use the git log command:

git log --oneline

Our history has some commits in already – these have been pushed to the remote repository.

To demonstrate these commands, firstly, we’ll change a file:

$ echo "some test" >> test.txt

This file hasn’t been staged yet; we can prove this by using the git status command:

$ git status -s

This gives us a nice display of the state of our pending changes.

What git status -s gives you is a report of which files are staged, and which have unstaged changes, in the following format:

XY filename

Where X is a staged change, and Y is an unstaged, but tracked, change. For example, if we stage the change to test.txt, the marker moves from the Y column to the X column.

Of course, a file can have both staged and unstaged changes, in which case both columns are populated.

Finally, let’s commit and push this change, and then view the log again:

git log --oneline

When things go wrong

Okay, so what we’ll cover here is how to revert changes that are in various states. The states that a file can be in, in git, are:

1. Untracked
2. Tracked and unchanged
3. Tracked and changed

Let’s start with (1) – untracked. We’ll create a new file:

$ echo "new file 2" >> newfile2.txt
$ git status -s

This shows up like so:
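?? newfile2.txt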

The question mark here is telling us that git doesn’t know about this file – it’s untracked. In fact, if you just call git status it will tell you that the file is untracked.

Remove Untracked files

Okay, so by default, git will not remove newfile2.txt for you – if you’re working away and you type the wrong command, you don’t really want git to go and wipe your files; however, sometimes, you do.

Because we aren’t tracking the new file, we’ll need the command:

git clean

However, if you run this, you’ll get the error:

fatal: clean.requireForce defaults to true and neither -i, -n, nor -f given; refusing to clean

The reason here is that, by default, git tries to prevent you from breaking things – and removing all the files in a directory that git doesn’t know about is quite drastic. We’ll therefore need the -f parameter to force the issue:

git clean -f

This will remove any untracked files that it finds; however, it will leave directories, and also anything that you’ve explicitly told it to ignore. To remove absolutely everything, we’ll include two extra parameters:

git clean -f -d -x

That will remove anything that git doesn’t know about, whether or not you’ve told it to ignore the files. Obviously, use this with caution – it WILL delete files and you WILL NOT be able to restore them – however, it will ONLY remove local files – any remote repository will be unaffected.

Let’s now consider a situation where you have made a change to a file, but you wish to revert to the unchanged state.

Reverting a tracked change

There is more than one category of tracked change; they are:

1. Changes that are unstaged
2. Changes that are staged
3. Changes that are committed
4. Changes are committed and pushed to a remote repository

All four of these can represent different changes in the same file. Let’s change a file so that it’s different in all four stages:

echo "change type 4" >> test.txt
git add .
git commit -m "change type 4"
git push
echo "change type 3" >> test.txt
git add .
git commit -m "change type 3"
echo "change type 2" >> test.txt
git add .
echo "change type 1" >> test.txt

We can see the status of the local repository by issuing a status command:

git status -s

This will show that the file has both staged and unstaged changes (MM test.txt).

And we can see the state of the remote by viewing the tree:

git log --all --graph --decorate --oneline

So, we can see that each category of change is represented. Let’s start by reverting the unstaged change.

Unstaged

This is perhaps the easiest to revert, as there’s a command designed specifically for it:

git restore .

This will leave staged changes, but will discard any unstaged changes. If you wish to keep these changes for later, then stash might be a better answer.
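For example (stash puts the changes to one side, and pop brings them back):

git stash
git stash pop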

Staged

One way of unstaging changes is to use the git restore command with the --staged parameter:

git restore --staged .

Doing this will unstage the change that you’ve made.

Similarly, you can issue a reset command, which has the same effect:

git reset HEAD

However, if you wish to completely discard the staged change, you can issue a reset with the --hard parameter; for example:
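git reset --hard HEAD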

None of these commands will affect any of the commits made.

Let’s look at how you can revert a local commit.

Undoing a local commit

If we look at the result of git log above, we get a clue as to how this may be possible: commit 335cf90 points to the local master branch, but origin/master points to the remote master branch. So we can just point our local HEAD back to the remote. There are a few ways that this can be done; they are quite subtly different.

git revert

To move to the state of a previous commit, you can issue a git revert. However, this does not actually remove the commit; it simply creates a new commit which is the reverse of the one that’s already there.

This can result in merge conflicts, depending on the state of the local and remote branch.

Once you’ve started a git revert, you can issue a git revert --abort to abandon the revert.

git reset

A quick way to discard a local commit is to simply reset HEAD to the origin/master:

$ git reset --hard origin/master
HEAD is now at 00f3c8e change type 4

Undoing a remote change

Because you’re now manipulating the remote repository, this carries a very real risk of causing some serious damage.

The first step here is to set your HEAD, as before, to a previous commit; for example:

$ git reset --hard origin/master~1

This will set your HEAD to the commit one before the current remote master (to go back two, you would use master~2, and so on).

If you now look at your log, you’ll see that it looks a little strange.

There are a couple of things to note here – the first is that you now have a detached HEAD, meaning that your HEAD no longer refers to a branch. Secondly, your branch and the remote branch are now different. To complete this process, you can push the change; however, if you just call git push, you’ll get something like the following error:

$ git push
fatal: You are not currently on a branch.
To push the history leading to the current (detached HEAD)
state now, use

    git push origin HEAD:<name-of-remote-branch>

What’s needed here is a bit of force:

$ git push -f origin master

Looking at the history now, we can see that the latest commit is gone.

It’s worth bearing in mind that one of the great things about git is that nothing is ever really gone. If we use git reflog, we’ll see that the “lost” commit is, in fact, safe and well – just orphaned:

git reflog

You can even see this in the tree structure:

$ git log --graph --all --oneline --reflog

If we now add a new commit, we can use this command to see where we went back and forked.

Error Inserting Records into a MongoDb Database

In my recent posts I’ve been trying to learn a little about MongoDB, and the API. Whilst trying to insert data into a MongoDB database, I got an error; and, since I couldn’t really find anything about it on the internet, I thought I’d add something.

Here’s the code that I initially (and incorrectly) tried to use to insert a record:

var newData = new MyData()
{
    SomeDate = myDate,
    Text = "hello"
};
var document = BsonDocument.Create(newData);

var collection = _db.GetCollection<BsonDocument>("testcollection");
await collection.InsertOneAsync(document);

return newData.Id.ToString();

If you execute the code above, you’ll get the following error:

System.ArgumentException: ‘.NET type MyApp.Entities.MyData cannot be mapped to BsonType.Document.’

In fact, the issue is caused because you need to convert the data into a BSON document first; like so:

var newData = new MyData()
{
    SomeDate = myDate,
    Text = "hello"
};

// ToBsonDocument (from the MongoDB.Bson namespace) serialises the object
var document = newData.ToBsonDocument();

var collection = _db.GetCollection<BsonDocument>("testcollection");
await collection.InsertOneAsync(document);

return newData.Id.ToString();
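As an aside, if you use a typed collection, the driver will do the mapping for you; something like this should also work (a sketch, assuming the same MyData class):

var typedCollection = _db.GetCollection<MyData>("testcollection");
await typedCollection.InsertOneAsync(newData);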

Creating a C# Client, and Listing, and Adding Data in MongoDB

In this previous post I wrote about how you can install MongoDB, and use the built in clients to interrogate the data. In this post I’ll cover setting up a simple C# client to interrogate the database.

Bring in a NuGet Package

As with many integrations in the last 10 years, 90% of the job is installing the right NuGet package:

Install-Package MongoDB.Driver

Once that’s installed, you should have access to the SDK. You’ll need the following using statement:

using MongoDB.Driver;

Connecting to the MongoDB Instance

It actually took me a while to work out the correct format here; for the default instance, you can simply use this for the connection string:

mongodb://localhost:27017

So to connect to the DB, you would use this:

MongoClient dbClient = new MongoClient("mongodb://localhost:27017");
var db = dbClient.GetDatabase("testdb");

See the referenced previous post if you’re interested where testdb came from.

Collections

As we saw in the previous post, Mongo works around the concept of collections – a collection is roughly analogous to a table. To list the collections:

MongoClient dbClient = new MongoClient("mongodb://localhost:27017");
var db = dbClient.GetDatabase("testdb");

var collections = await db.ListCollectionsAsync();

foreach (var col in collections.ToList())
{
    Console.WriteLine(col.ToString());
}

Let’s see what inserting data would look like.

Inserting Data

In fact, inserting data is very straightforward. We need to introduce a new type here, called a BsonDocument (you’ll also need a using MongoDB.Bson; statement for this). BSON is binary JSON and, to all intents and purposes, it is JSON; this means that you can create a document such as this:

var document = new BsonDocument()
{
    { "test", "test1" },
    { "test2", "test2" },
    { "test3", "test3" }
};

Or, indeed, any valid JSON document.

To insert this into the DB, you would just call InsertOne:

var collection = db.GetCollection<BsonDocument>("testcollection");
await collection.InsertOneAsync(document);

You can insert many records by calling InsertMany:

var documents = new List<BsonDocument>()
{
    new BsonDocument()
    {
        { "test", "test1" },
        { "test2", "test2" },
        { "test3", "test3" }
    },
    new BsonDocument()
    {
        { "Date", DateTime.Now },
        { "test2", 12 },
        { "test3", "hello" }
    }
};

var collection = db.GetCollection<BsonDocument>("testcollection");
await collection.InsertManyAsync(documents);
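To check that the documents have arrived, you can read them back with an empty filter; for example (a quick sketch along the same lines):

var allDocuments = await collection.Find(new BsonDocument()).ToListAsync();
foreach (var doc in allDocuments)
{
    Console.WriteLine(doc.ToString());
}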

References

https://www.mongodb.com/blog/post/quick-start-c-sharp-and-mongodb-starting-and-setup

Installing and Running MongoDB from Scratch

I’m relatively new to Mongo, but I’ve decided to use it for a recent project, mainly because it writes quickly. I wanted to install and run it locally; however, most of the tutorials for Mongo push Atlas pretty heavily, so I thought it might be worth creating a post on installing and running Mongo from scratch. Most of what’s here is a duplication of what you can find in the official docs.

Download

Start by downloading Mongo from here.

You’re then given some options during the installation. If you decide to change any of the default directories, then make a note of what you change them to; otherwise just accept the defaults.

This process will install MongoDB Compass by default – this is Mongo’s equivalent of SQL Server Management Studio, or MySQL Workbench.

Running the Service

Mongo will install a Windows Service by default (although there’s an option during the installation to stop it doing this), and you can just connect to this; however, you can remove that service (just go into Windows -> Services and remove it).

If you choose to keep the default installation then skip down to the Compass step.

In order to run Mongo yourself, you’ll need to open a command window, and navigate to the Mongo installation directory (typically C:\Program Files\MongoDB\Server\4.4\bin).

The key files here are:
– mongo.exe which is a mongo client
– mongod.exe which is the database service

The first thing to do is to start the DB Service:

.\mongod.exe --dbpath="c:\tmp\mongodata"

This will use c:\tmp\mongodata to store the data.

Running the Client and Manipulating the Data

Now that the server is running, you can launch the client:

First, launch a command prompt as admin.

Next navigate to: C:\Program Files\MongoDB\Server\4.4\bin\

PS C:\Program Files\MongoDB\Server\4.4\bin> .\mongo.exe

This is a command line client. To create a new DB, just tell it to use a database that doesn’t exist:

> use testdb
switched to db testdb
> db.testcollection.insertOne({test: "test", test2: 1})
{
        "acknowledged" : true,
        "insertedId" : ObjectId("60830bfe34473a83b0fe56a6")
}
>

Here, we’ve inserted an object into the database; we can view that object by using something like the following:

> db.testcollection.find()
{ "_id" : ObjectId("60830bfe34473a83b0fe56a6"), "test" : "test", "test2" : 1 }
>

Finally, we can do the same thing, but in a GUI, called Compass.

Compass

After launching Compass for the first time, select New Connection, and then Fill in connection fields individually.

The following will work if you’ve just installed the default (as above) and made no config changes: hostname localhost, port 27017.

We can now browse the data that we inserted earlier.

References

https://docs.mongodb.com/manual/tutorial/install-mongodb-on-windows/