Category Archives: Docker

How to Set-up Hangfire with a Dashboard in .Net 6 Inside a Docker Container

In this earlier post I wrote about how you might set up Hangfire in .Net 6 using LiteDB storage.

In this post, we’ll talk about the Hangfire dashboard, and specifically, some challenges that may arise when trying to run that inside a container.

I won’t go into the container specifically, although if you’re interested in how the container might be set-up then see this beginner’s guide to Docker.

Let’s quickly look at the Docker Compose file, though:

services:
  my-api:
    build: .\MyApi
    ports:
      - "5010:80"
    logging:
      driver: "json-file"

Here you can see that my-api maps port 5010 on the host to port 80 inside the container.


Let’s see how we would set-up the Hangfire Dashboard:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHangfire(configuration =>
{
    // configure your storage here
});

// Add services here

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    // ...
}

var options = new DashboardOptions()
{
    Authorization = new[] { new MyAuthorizationFilter() }
};
app.UseHangfireDashboard("/hangfire", options);

app.MapPost(" . . .

public class MyAuthorizationFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context) => true;
}

This is the set-up, but there are a few bits to unpack here.


The UseHangfireDashboard call lets Hangfire know that you want the dashboard set up. However, by default, it will only allow local connections, which does not include connections mapped through Docker.


The Authorization property allows you to specify who can view the dashboard. As you can see here, I’ve passed in a custom filter that bypasses all security – probably don’t do this in production – but you can substitute the MyAuthorizationFilter for any implementation you see fit.

Note that if you don’t override this, then attempting to access the dashboard will return a 401 error if you’re running the dashboard from inside a container.
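If you need something a little safer than a pass-through filter, the filter can inspect the incoming request. As a sketch (AuthenticatedUserFilter is just a name I've picked here, and the policy is an assumption – substitute whatever suits your set-up), this only lets authenticated users through:

```csharp
public class AuthenticatedUserFilter : IDashboardAuthorizationFilter
{
    public bool Authorize(DashboardContext context)
    {
        // DashboardContext.GetHttpContext() comes from Hangfire.AspNetCore
        var httpContext = context.GetHttpContext();

        // Only allow users that have already authenticated with the app
        return httpContext.User.Identity?.IsAuthenticated == true;
    }
}
```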

Accessing the Dashboard

Navigating to localhost:5010 on the host will take you to the Hangfire dashboard.

Configuring Docker to use a Dev Cert When Calling out to the Host Machine

I’ve recently been wrestling with trying to get an ASP.Net service within a docker container to call a service running outside of that container (but on the same machine). As you’ll see as we get further into the post, this is a lot more difficult than it first appears. Let’s start with a diagram to illustrate the problem:

The diagram above illustrates, at a very basic level, what I was trying to achieve. Essentially, have the service running inside docker call to a service outside of docker. In real life, the service (2) would be remote: very likely on a different physical server, and definitely have an allocated domain address; however, for this experiment, it lives on the same physical (or virtual) machine as the docker host.

Before I continue, I must point out that the solution to this comes by way of some amazing help by Rob Richardson: he gave a talk at NDC Porto that got me about 70% of the way there, and helped me out further to actually get this working!

Referencing a Service outside of Docker from within Docker

Firstly, let’s consider a traditional docker problem: if I load Asp.Net Service (2) then I would do so in a browser referencing localhost. However, if I reference localhost from within docker, that refers to the localhost of the container, not the host machine. The way around this is with host.docker.internal: this gives you a path to the host machine of the docker container.
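As a quick sanity check, you can shell into the running container and try both addresses – the port and path here are from my set-up, so substitute your own:

```shell
# Inside the container: localhost is the container itself,
# so this fails unless something in the container is listening
curl http://localhost:5000/WeatherForecast

# Inside the container: this reaches the service on the host machine
curl http://host.docker.internal:5000/WeatherForecast
```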

Certificates – The Problem

Okay, so onto the next (and main) issue: when I try to call Asp.Net Service (2) from the docker container, I get an SSL error:

The remote certificate is invalid according to the validation procedure: RemoteCertificateNameMismatch, RemoteCertificateChainErrors


The reason has to do with the way that certificates work; and, in some cases, don’t work. Firstly, if you watch the linked video, you’ll see that the dev-cert functionality in Linux has a slight flaw – in that it doesn’t do anything*. Secondly, because you’re effectively jumping across machines here, you can’t just issue a dev cert to each anyway, as it would be a different dev cert; and thirdly, dev-certs are issued to localhost by default: but as we saw, we’re actually trying to contact host.docker.internal.

Just to elaborate on the trust chain; let’s consider the following diagram:

In this diagram, Certificate A is based on the Root Certificate – if the machine trusts the root certificate, then it will trust Certificate A – however, the same machine will only trust Certificate B if it is explicitly told to do so.

This means that the dev cert for the container will not be trusted on the host, and vice-versa – neither has any trust chain or relationship with the other. This is the problem, but it’s also the solution.

Okay, so that’s the why – onto the how…


Let’s start with introducing mkcert – this is an incredibly useful tool that hugely simplifies the whole process; it can be installed via chocolatey:

choco install mkcert

If you don’t want to use Chocolatey, then the repo is here.

Essentially, what this allows us to do is to create a trusted root certificate, on which we can base our other certificates. So, once this is installed, we can create a new trusted root certificate like this:

mkcert -install

This installs our trusted root certificate; which we can see here:

This will also generate the following files (on Windows, these will be in %localappdata%\mkcert):


These are the root certificates, so the next thing we need is a certificate that covers the specific domain. You can do that by simply calling mkcert with the appropriate domain(s):

mkcert localhost host.docker.internal

This creates a valid cert for both localhost and host.docker.internal:


You may wish to rename these to be something slightly more descriptive, but for the purpose of this post, this is sufficient.


Almost there now – we have our certificates, but we need to copy them to the correct location. Because we’ve run mkcert -install, the root certificate is already on the local machine; however, we now need that inside the docker container (Asp.Net Service (1) from the diagram above). Firstly, let’s download the mkcert binary from here for the relevant version of Linux that you’re running.

Let’s copy both the rootCA.pem and rootCA-key.pem into our Asp.Net Service (1) project and then change the dockerfile:

. . .
FROM base AS final
COPY mkcert /usr/local/bin
COPY rootCA*.pem /root/.local/share/mkcert/
RUN chmod +x /usr/local/bin/mkcert \
  && mkcert -install \
  && rm -rf /usr/local/bin/mkcert 
COPY --from=publish /app/publish .
. . .

A few things to mention here:

1. The rest of this file is from the standard Asp.Net docker file. See this post for possible modifications to that file.
2. Each time you execute a RUN command, docker makes a temporary image, which is why combining the three commands with && makes sense.
3. When you run the mkcert -install it will pick up the root certificate that you copy into the /root/.local/share/mkcert.
4. Make sure that these lines apply to the runtime version of the image, and not the SDK version (there’s absolutely no point in adding a certificate to the SDK version).
5. The last line (rm -rf /usr/local/bin/mkcert) just cleans up the mkcert files.

The Service

The final part is to copy the generated certificates (localhost.pem and localhost-key.pem) over to the service application (Asp.Net Service (2)). Finally, in the appsettings.json, we need to tell Kestrel to use that key:

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Kestrel": {
    "Certificates": {
      "Default": {
        "Path": "localhost-host.pem",
        "KeyPath": "localhost-host-key.pem"
      }
    }
  }
}

That’s it! If you open up the Asp.Net Service (2), you can check the certificate, and see that it’s based on the mkcert root:

References and Acknowledgements

As I said at the start, this video and Rob himself helped me a lot with this – so thanks to him!

It’s also worth mentioning that without mkcert this process would be considerably more difficult!


* actually, that’s not strictly true – Rob points out in his video the nuance here; but the takeaway is that it’s unlikely to be helpful

Beginner’s Guide to Docker – Part 4 – Viewing Docker Logs

In this post (part of a series around docker basics) I’ll be discussing how you can view logs for running docker containers.

As usual with my posts, there’s very little here that you can’t get from the docs – although I’d like to think these posts are slightly more accessible (especially for myself).

Viewing Logs

In fact, the process of viewing logs is remarkably easy; once you’ve launched the container, you need the ID:

docker ps

Once you have the container ID, you can feed it into the docker container logs command:

docker container logs --follow dcd204c97421

The follow argument basically leaves a session open, and updates with any new log entries (leave it out for a static view of the logs).

The docker desktop app actually provides the same functionality out of the box (but provides you with a nice search and formatting facility).
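A couple of other switches worth knowing about: --tail limits the output to the last n lines, and --since filters by time (both are in the docker documentation):

```shell
# Show only the last 50 lines, then keep following
docker container logs --tail 50 --follow dcd204c97421

# Show anything logged in the last ten minutes
docker container logs --since 10m dcd204c97421
```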

Writing Logs

Obviously, this varies from language to language, so I’ll stick to .Net here. It’s actually very straightforward: you can log in the same way as you would for any other .Net application; simply inject an ILogger into the constructor:

        private readonly ILogger<MyService> _logger;

        public MyService(
            ILogger<MyService> logger)
        {
            _logger = logger;
        }

And then log as you would normally:

_logger.LogWarning("something happened");


References:

Constructor Injection in .Net

Docker Container Logs

Running Docker Compose Interactively

I’ve recently been doing a fair bit of investigation into docker. My latest post was investigating the idea of setting up a sidecar pattern just using docker compose. In this post, I’m exploring the concept of launching an interactive shell using docker compose.

Why is this so hard?

Docker, and by extension, docker compose, are not really designed to run interactive tasks: they’re best left running ephemeral background tasks. However, in my particular use case, I needed to run a console app that accepts keyboard input.

You can create the console app, and it will build fine; but once you issue a docker compose up, you’ll get something like the following error:

Unhandled exception. System.InvalidOperationException: Cannot read keys when either application does not have a console or when console input has been redirected. Try Console.Read.

How to run Interactively

In fact, there are a few issues here; the first (as pointed out by this post) is that you need to tell docker compose to pass interactive mode through to the container:

services:
  my-app:
    build: .\My.App
    stdin_open: true
    tty: true
    depends_on:
      - "some-api"

stdin_open and tty translate to passing -it to the container. However, that doesn’t solve the problem entirely; if you issue a docker compose up you’ll still see the same issue.

The reason here is that there are (or can be) multiple containers, and so there’s no sensible way of knowing how to deal with that. In fact the solution is to simply tell docker compose which one to run; for example:

docker compose build --no-cache
docker compose run my-app

In the example above, that will also run the api some-api because we’ve declared a dependency. Note that build is optional; although while developing and debugging, I find it useful to execute both.
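One thing to watch with docker compose run is that it creates a new container each time it’s invoked; passing --rm cleans that container up when the app exits:

```shell
docker compose run --rm my-app
```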

Docker error with .Net 6 runtime image

While trying to set up a docker image for a .Net 6 console application, I found that, although it built fine, I got an error when trying to run it. Let’s imagine that I used the following commands to build and run:

docker build -t pcm-exe .
docker run pcm-exe

The result from the run command was the following:

It was not possible to find any compatible framework version
The framework ‘Microsoft.NETCore.App’, version ‘6.0.0’ (x64) was not found.
– The following frameworks were found:
6.0.0-rc.2.21480.5 at [/usr/share/dotnet/shared/Microsoft.NETCore.App]

You can resolve the problem by installing the specified framework and/or SDK.

The specified framework can be found at:

This had me stumped for a while, as I was under the impression that when images are updated, docker knows to go and download them – this is not the case. I discovered this by running an inspect on the runtime image from the dockerfile defined here:

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base

The inspect command:

docker inspect <image name>

This gave the following result:

“Env”: [

At this point there seemed to be two options – you can just remove the image – that would force a re-download; however, a quicker way is to pull the new version of the image:

docker pull <image name>

After rebuilding the image, running a subsequent inspect shows that we’re now on the correct version:

          "Env": [

Implementing a Sidecar Pattern Without Kubernetes

A sidecar pattern essentially allows a running process to communicate with a separate process (or sidecar) in order to offload some non-essential functionality. This is typically logging, or something similarly auxiliary to the main function. You tend to find sidecars in Kubernetes clusters (in fact, you can have more than one container in a pod for exactly this reason); however, the pattern is just that: a pattern; and can therefore apply to pretty much anything.

In this post, I’m going to cover how you might set-up two independent containers, and then implement the sidecar pattern using docker compose.

Console App

The code for this is going to be in .Net Core 6, so let’s create a console app:

It will be easier later on if you follow the following directory structure for the projects:


For the purpose of the console app, just put it in a parent directory – so the code lives in /sidecar-test/console-app (we’ll come back to the API).

We’ll need to add docker support for this, which you can do in Visual Studio (right click the project and add docker support):

Once you’ve created the docker build file, you may need to edit it slightly; here’s mine:

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["sidecar-test.csproj", "sidecar-test/"]
WORKDIR "/src/sidecar-test"
RUN dotnet restore "sidecar-test.csproj"
COPY . .

RUN dotnet build "sidecar-test.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "sidecar-test.csproj" -c Release -o /app/publish

FROM base AS final
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "sidecar-test.dll"]

If you find that the dockerfile doesn’t work, then you might find the following posts helpful:

Beginner’s Guide to Docker – Part 2 – Debugging a Docker Build

Beginner’s Guide to Docker – Part 3 – Debugging a Docker Build (Continued)

If you’d rather not wade through that, then the Reader’s Digest version is:

docker build -t sidecar-test .

Once the docker files are building, experiment with running them:

docker run --restart=always -d sidecar-test

Assuming this works, you should see that the output shows Hello World (as per the standard console app).


For the web API, again, we’re going to go with the default Web API – just create a new project in VS and select ASP.Net Core Web API:

Out of the box, this gives you a Weather Forecaster API, so we’ll stick with that for now, as we’re interested in the pattern itself and not the specifics.

Again, add docker support; here’s mine:

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["sidecar-test-api.csproj", "sidecar-test-api/"]
WORKDIR "/src/sidecar-test-api"
RUN dotnet restore "sidecar-test-api.csproj"
COPY . .

RUN dotnet build "sidecar-test-api.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "sidecar-test-api.csproj" -c Release -o /app/publish

FROM base AS final
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "sidecar-test-api.dll"]

You should now be able to build that in a similar fashion as before:

docker build -t sidecar-test-api .

You can now run this, although unlike before, you’ll need to map the ports. The port mapping works by mapping the external port to the internal port – so to connect to the mapping below, you would connect to http://localhost:5001, but that will connect to port 80 inside the container.

docker run --restart=always -d -p 5001:80 sidecar-test-api

Now that you can connect to the Web API, let’s now implement the sidecar pattern.

Docker Compose

We now have two independent containers; however, they cannot, and do not, communicate. Let’s remind ourselves of the directory structure that we decided on at the start of the post:


We now want to open the parent sidecar-test directory. In there, create a file called docker-compose.yml. It should look like this:

version: '3'
services:
  sidecar-api:
    build: .\sidecar-test-sidecar\web-api
    ports:
      - "5001:80"
  sidecar-program:
    build: .\sidecar-test\console-app
    depends_on:
      - "sidecar-api"

There’s lots to unpick here. Let’s start with services – this contains all the individual services that we’re going to run in this file; each one has a build section, which tells compose to call docker build on the path specified. You can create the image separately and replace this with an image section. The ports section maps directly to the docker -p switch.

There are two other important parts to this: the first is the service names (sidecar-api, sidecar-program) – we’ll come back to those shortly. The second is depends_on, which tells docker compose to ensure that the dependency is built and running first (we could switch the order of these and it would still start the API first).

Running The Sidecar

Finally, we can run our sidecar. You can do this by navigating to the parent sidecar-test directory, and typing:

docker compose up

If you’re changing and running the code, and using PowerShell, you can use the following command:

docker-compose down; docker-compose build --no-cache; docker-compose up

Which will force a rebuild every time.

However, if you run this now, you’ll notice that, whilst it runs, it does nothing.

Linking The Two

Docker Compose has a nifty little feature – all the services in the compose file are networked together, and can be referenced by using their service name.

In the console app, change the main method to:

        public static async Task Main(string[] args)
        {
            Console.WriteLine("Get Weather");
            var httpClient = new HttpClient();
            httpClient.BaseAddress = new Uri("http://sidecar-api");
            var result = await httpClient.GetAsync("WeatherForecast");
            var content = await result.Content.ReadAsStringAsync();
            Console.WriteLine(content);
        }

We won’t focus on the fact that I’m directly instantiating HttpClient here; but notice how we are referencing the API, by http://sidecar-api, and because they are both part of the same compose file, this just works.

If we now recompile the docker files:

docker-compose down; docker-compose build --no-cache; docker-compose up

We should have a running sidecar – admittedly it doesn’t behave much like a sidecar yet, but that’s down to what the services do, not the architecture.


Beginner’s Guide to Docker – Part 3 – Debugging a Docker Build (Continued)

In this post I started a series on getting started with Docker; here, I’m going to expand on that to give some tips on how to debug a docker build when even the build itself is failing.

Imagine that you’re trying to debug a docker build script, and you keep getting build errors. The script never completes, and so you can’t attach to the running container (as shown here).

In this case, there are still a few options available. Let’s consider the following build script:

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
COPY ["pcm-test.csproj", "pcm-test/"]
RUN dotnet restore "pcm-test/pcm-test.csproj"
COPY . .
WORKDIR "/src/pcm-test"

RUN dotnet build "pcm-test.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "pcm-test.csproj" -c Release -o /app/publish

FROM base AS final
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "pcm-test.dll"]

When I run this (it’s a console app, btw), I get the following build error:

 => ERROR [build 10/10] RUN dotnet build "pcm-test.csproj" -c Release -o /app/build

#18 0.790 Microsoft (R) Build Engine version 17.0.0-preview-21501-01+bbcce1dff for .NET
#18 0.790 Copyright (C) Microsoft Corporation. All rights reserved.
#18 0.790
#18 1.351   Determining projects to restore...
#18 1.623   All projects are up-to-date for restore.
#18 1.703   You are using a preview version of .NET. See:
#18 2.897 CSC : error CS5001: Program does not contain a static 'Main' method suitable for an entry point

I can absolutely assure you that the program does have a main method.


When a build goes wrong, I’ve determined two ways to debug – the first is by simply executing debug commands inside the build, and the second is to reduce the build until it works, and then inspect the image.

Executing Debug Commands

Since you can run any command inside a build file, you can simply do something like this:

RUN ls -lrt

Remember that if you do this, you’ll need to change a couple of settings.

The first is the --progress switch, which can be set to auto (the default), plain, or tty.

tty (or interactive terminal) and auto will compress the output; whereas plain will show all of the container output (including these kinds of debug messages).


The second is caching: docker tries to be clever, and caches the commands that have already executed; however, for debug statements, this means they’ll only ever fire the first time (it tells you when it’s executing these from the cache). Passing --no-cache forces them to run again:

docker build -t pcm-test --no-cache --progress=plain .

Sometimes, executing these statements is enough; however, sometimes, it helps to build a partial image.

Removing the breaking parts of the build

When I first started writing software, on a ZX Spectrum – I’d frequently copy (BASIC) code out from a magazine. I didn’t really know what I was writing, so if it didn’t work it would give me an error that I didn’t understand; however, it would tell me the offending line number, and so I’d simply remove that line. Obviously, subsequent lines would then start to fail, and typically, I’d end up with a program that ended at the first typing (or printing) error. This didn’t make for a very good game.

This isn’t true in docker – if you remove the offending code, you can still create an image, explore it, and even manually execute the rest of the build inside the container, to see what’s going wrong!

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
COPY ["pcm-test.csproj", "pcm-test/"]
RUN dotnet restore "pcm-test/pcm-test.csproj"
COPY . .
WORKDIR "/src/pcm-test"

#RUN dotnet build "pcm-test.csproj" -c Release -o /app/build
#FROM build AS publish
#RUN dotnet publish "pcm-test.csproj" -c Release -o /app/publish
#FROM base AS final
#COPY --from=publish /app/publish .
#ENTRYPOINT ["dotnet", "pcm-test.dll"]

This will now create an image, and so we can run and attach to that image:

 docker run -it pcm-test sh

I can now have a look around the image:

As you can see, this looks a bit strange – there’s no code there.


In this post, I’ve covered two techniques to work out why your docker build may be failing.

Beginner’s Guide to Docker – Part 2 – Debugging a Docker Build

In this post I covered the very basics of getting started with Docker. Once you start to experiment, you’ll need to learn how to debug and investigate some of the unexpected things that happen.


In this post, you’ll see references to WebApplication4 and WebApplication5. This is simply because, while creating the post, I switched, didn’t realise the screenshots were a mix of both, and now don’t consider it worth my time to change. Just consider the two interchangeable.

Asp.Net 5

For this example, we’ll use the default dockerfile that comes with Asp.Net 5; however, we’ll build it up ourselves. Just build a new Asp.Net project.

When setting this up, do not enable docker support:

If we had enabled docker support, we would get a docker file – so let’s build that (this is taken directly from that file). We’ll put it in the same directory as the sln file.

FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["WebApplication4/WebApplication4.csproj", "WebApplication4/"]
RUN dotnet restore "WebApplication4/WebApplication4.csproj"
COPY . .
WORKDIR "/src/WebApplication4"
RUN dotnet build "WebApplication4.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WebApplication4.csproj" -c Release -o /app/publish

FROM base AS final
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication4.dll"]

To give a quick rundown of what’s happening in this file:
– Where you can see `AS base`, `AS build`, etc, these are known as stages (we’ll come back to that).
– The `FROM` statements are telling docker to build on an existing image.
– WORKDIR changes the running directory inside the container: if this directory doesn’t exist, then the command will create it.
– COPY does what you would think: copies whatever you tell it from the first parameter (the host machine) into the second (the docker container)
– RUN executes a command in the container.
– Finally, ENTRYPOINT tells docker what to do when it starts the container – in our case, we’re running `dotnet WebApplication4.dll`.

Building and Running

To build the image, go to the application’s solution directory (where your sln file is located):

docker build -t pcm-web-app-4 . 

Once the image builds successfully, you can run it like this:

docker run -p 1234:80 pcm-web-app-4

In this post, we’re going to be more interested in debugging the build itself than running the container. Having said that, we should quickly visit the port mapping (-p); the command above maps port 80 inside the container (in the docker file, we issued an EXPOSE on that port) to port 1234 on the host (so you would navigate to localhost:1234 on the host machine to connect).

Listing Docker Containers, Stopping, Killing, and Attaching

You can list running docker containers by issuing the command:

docker ps

Once you have the container ID (highlighted in the image), you can do various things with it.


To attach to the instance:

docker attach 229

Notice that I only used the first three letters of the container instance ID? That’s because docker will let you abridge the ID to its shortest unique prefix. This will attach to the container, and you can browse around.

Stop and Kill

If you want to stop an instance, the command is:

docker stop 229

Which will stop the current instance. However, the instance is still there. You can see it by calling:

docker ps -a

To remove the instance completely, you’ll need to call:

docker rm 229

However, you will only be able to remove the container once it’s stopped.
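The heading mentions kill, so for completeness: docker stop sends a SIGTERM and gives the process a grace period to exit, whereas docker kill sends a SIGKILL immediately – useful when a container refuses to stop:

```shell
# Polite: SIGTERM first, then SIGKILL after the grace period
docker stop 229

# Immediate: SIGKILL straight away
docker kill 229
```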

Now that we’ve covered some basics, let’s get into the debugging.


The first useful tip here is to use the target parameter. To build the dockerfile above, you may do something like this:

docker build -t web-app-5 . 

That will build the entire image; but if you get an issue, it may fail at an intermediate stage; in that case, you can break down the build; for example:

docker build --target build -t pcm-web-app-5 .

You can then have a look around at the build files by attaching to the container:

docker run -it pcm-web-app-5

A similar technique can be used if you’re getting issues with the entry point not functioning as expected.


In the dockerfile, you can simply comment out the ENTRYPOINT:

#ENTRYPOINT ["dotnet", "WebApplication5.dll"]

Now, if you run the container; for example:

docker run -d -it pcm-web-app-5

-d launches the container in detached mode; you can then attach:

docker attach eb5

You can then manually run the entry point; for example:

Finally, let’s see how we can see details of a running container.

Inspecting the Container

While the container is running, there’s a set of metadata that is accessible. This contains things like the IP, ports, Mac Address, Name, etc… To view this, call:

docker inspect <container ID>

For example:

docker inspect 41b

There’s a lot of information here, and you can traverse it by using the -f parameter to specify a specific attribute. For example:

docker inspect -f '{{ .State.Status }}' 07a

This will show specific properties without the need to scroll through the full JSON.
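The -f parameter takes a Go template, so you can pull out other attributes the same way – for example, the container’s IP address on the default bridge network:

```shell
docker inspect -f '{{ .NetworkSettings.IPAddress }}' 07a
```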

Beginner’s Guide to Docker – Part 1 – Creating and Running a Python Script

Disclaimer – This is pretty much my first time playing around with Python – there’s a good chance that what I’m doing here is wrong. All I can attest to is that it works.

I recently had cause to create a small python script. Instead of installing the Python libraries, it occurred to me that an easier way to do this might be to use docker to run the script: it worked, and so I thought I’d turn the whole thing into an introduction to Docker.


There are three main concepts in docker; and they closely mirror the concepts of any programming language: you have a script (or source code), an image (or compiled file), and a container (or running process).

In the case of docker, the script in question is called dockerfile.

Download Docker and Setup

The first step is to download docker and install the docker desktop:

If you haven’t, you may need to install WSL, too.

Create the Python Program, and a Dockerfile

The easiest way to do this is to find an empty directory on your machine. Use your favourite text editor to create a file called main.py (any name will do, as long as the Dockerfile refers to the same one), with the following code:

print('Hello world python')

Now, in the same directory, create a file called Dockerfile, and add the following code to it:

FROM python
COPY main.py ./
CMD [ "python", "./main.py" ]

We can now build that using the command:

docker build . --tag pcm-test

This command builds the image that we mentioned earlier. You can see this by issuing a command to see all the images:

docker images

For example:

We can now run this:

docker run pcm-test

This will run the script inside the container, and you should see the output.

Playing with the script

Let’s enhance our Python program (in exactly the same way as you enhance any “hello world” app):

import sys
print('Hello ' + sys.argv[1])

sys.argv is a zero-based array of arguments – argument 0 is the name of the program running (hence 1 is the first real argument). If you try to access an argument that hasn’t been passed, you’ll get an IndexError.
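A common way to avoid that IndexError is to check the length of sys.argv and fall back to a default – a small sketch (the greeting function is just mine, for illustration):

```python
import sys

def greeting(argv):
    # argv[0] is the script name; real arguments start at index 1
    return 'Hello ' + (argv[1] if len(argv) > 1 else 'world')

print(greeting(sys.argv))
```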

We can change our Dockerfile to pass in the parameter:

FROM python
COPY main.py ./
CMD [ "python", "./main.py", "wibble" ]

CMD essentially chains the array of commands together; the above is the equivalent of typing:

python ./main.py wibble

Okay – well, obviously that’s pretty much the zenith of a hello world program… except, what if we want to ask the user for their name? Let’s try asking in our script:

import sys
print('Hello ' + sys.argv[1])
name = input("Yes, but what's your REAL name?")
print('Hello ' + name)

Unfortunately, if we build and run that, it’ll run, but will skip the input. To fix this, we’ll need to run the container in interactive mode:

docker run -it pcm-test


Well, that was a pleasant Saturday afternoon spent playing with Python and Docker. I’m not completely new to Docker, but I’d like to improve my knowledge, so my intention is to create several more of these, and explore some new features as I build these posts.


Set-up a new Asp.Net Core 2.1 Project with Docker

Until recently, I’ve pretty much managed to avoid the container revolution. However, as the tools are becoming much more integrated, and the technology much more pervasive, I thought I’d document my journey into the world of containers.

As far as I can tell, the battle between types of containers and orchestrators is now over, and the VHS of this particular format war has emerged as Docker and Kubernetes; at least, MS, Google and Amazon seem to think so.

Docker – from step 0

Let’s follow the process through from nothing; the first step is to visit the Docker Store here.

Once you’re there, you’ll need to pick a username and sign in. After this, you can download Docker for Windows:

Once downloaded, run it, and you’ll see this bizarre screen, that seemingly lives in your way:

You can just close this: docker actually lives in your system tray. Right click there and select Settings:

Share the drive that you’re planning to run your Asp.Net Core code from.

By default, docker installs with a Linux bias, so the next step is to switch to Windows Containers (again in the right-click context menu):

Finally, some code…

Okay, Docker is now primed and ready to run a container. Let’s create our new project:

After you okay this, you’ll notice that there’s a new “Add Docker Support” in the create Project dialogue:

Once you create this (and persuade your browser and Docker that you’re not too bothered about certificates for the minute) it should run out of the box:

To prove it’s running, open Power Shell and type:

 docker ps

Or, take the more brute-force approach: just restart docker and have a look at VS: