Most of us have heard about Docker and containerization in one way or another. Tons of conference talks have been given about it, and a lot of books and blog posts have been written on the subject. Let’s face it: Docker is taking the market by storm, becoming the de facto standard for shipping software.

Yet, in my opinion, a big portion of the dotnet world still stands by IIS or by packaging apps as Windows services. I’m not saying it’s a bad choice, but I think it’s at least worth knowing what this whole container buzz is about, so you can make your own judgement. And actually, there is a lot to be gained!

What is this series? What will I learn?

I decided to write this series to share my experiences with containerizing dotnet applications. For my clients, I went the whole way: converting .NET Framework applications to dotnet core, dockerizing them, and running them in production on both AWS ECS and Azure AKS. Even though the conversion itself was relatively easy for the most part, there were questions dangling in my head, mainly on how to get things done on top of the dotnet stack.

For example, how should one deal with appsettings.json inside the container when you have multiple environments to care about? Or, how do I gather metrics?

Most of the time, I had to drill down into books like .NET Microservices: Architecture for Containerized .NET Applications (which I recommend anyway), or seek inspiration on GitHub / YouTube / Stack Overflow. Hence, I felt the need for a quick-start guide for dotnet developers.

I wanted this to be very hands-on, while providing the necessary theoretical background at the same time, so you can develop some intuition on how different concepts play together and apply the new knowledge at work quickly, successfully running containers in production. We’ll cover the container concept itself, where it came from and how you can benefit from it. I’ll do my best to explain how it differs from a VM and whether you really need “a VM inside a VM”. Later on, we’ll get to know Docker and then dockerize an ASP.NET Core web API server, while applying industry best practices. Let’s jump right in!

So, what are containers?

First of all, let’s start by saying what a container is not, as there are a lot of misconceptions being spread over the internet. Let’s state it clearly: a container is not a VM. In fact, in the Linux kernel itself, there is no such concept as a “container”.

That’s because containerization as we know it, e.g. from Docker, is achieved using some fancy features built into the kernel, but a container is nothing more, nothing less than a plain simple process. Meaning, you can actually create containers on a bare-bones Linux machine, without using any of the “containerization” tools like Docker or rkt. In fact, there are even Windows containers available. However, in this series we’ll focus on Linux exclusively, as the containerization features in the Linux kernel are far more sophisticated and mature than those built into the Windows kernel.
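To see that no special “container” object is involved, you can create such an isolated process yourself with unshare(1), a thin command-line wrapper over the relevant kernel features. This is just a sketch, assuming a reasonably recent util-linux; it needs root:

```shell
# Start a shell in fresh PID and mount namespaces (requires root).
# --fork makes the shell become PID 1 inside its new namespace;
# --mount-proc remounts /proc so ps only sees that namespace.
sudo unshare --pid --fork --mount-proc bash -c 'echo "I am PID $$"; ps aux'
# prints: I am PID 1, followed by a ps listing of just a couple of processes
```

No Docker involved, yet the shell believes it is the first process on the machine.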

Because of this, you’ll need a Linux machine to follow along. You can either use a VirtualBox VM I prepared especially for the sake of this series (download), or use the in-browser option at Katacoda. Personally, I prefer to spin up a VM on my own, as the in-browser terminal at Katacoda sometimes causes issues.

Let’s spin up an example container, so we can play around with it for a while:

$ docker run -d --name tutorial bash:4.4 sleep 1h

This will start a container named tutorial from the image bash:4.4. Since we don’t have that image in our local cache, Docker will first pull it from Docker Hub, which is a public Docker registry. After the pull completes, sleep 1h will be executed inside the container.

Don’t worry if you don’t fully understand the commands or concepts here. We’ll elaborate on them later on, but for now I’d just like to show you something. Therefore: relax, fasten your seatbelt, and let’s run docker ps to see what we have just started:

$ docker ps | grep -E "CONTAINER|bash"
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
2b7e2d70e088        bash:4.4            "docker-entrypoint.s…"   7 seconds ago       Up 6 seconds                            tutorial

Now, let’s do:

$ ps -ejH | grep -E "containerd|docker-con" -C 3
1080  1080  1080 ?        00:00:01   NetworkManager
1232  1232  1080 ?        00:00:00     dhclient
1096  1096  1096 ?        00:00:00   cupsd
1105  1105  1105 ?        00:00:01   containerd
7276  7276  1105 ?        00:00:00     containerd-shim
7293  7293  7293 ?        00:00:00       sleep
1199  1199  1199 ?        00:00:00   wpa_supplicant
1267  1267  1267 ?        00:00:00   dockerd  

And then:

$ docker top tutorial
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                7293                7276                0                   22:58               ?                   00:00:00            sleep 1h

See that?

7293 is the PID of the sleep container we have just started! In every way, it’s just a normal process. There is no virtualization happening; the process calls the host kernel directly. That’s why Docker is so lightweight compared to, say, a VM, which virtualizes the whole machine along with the kernel and all the hardware.
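Since it really is just a process, ordinary process tools work on it. As a quick demonstration, killing the sleep from the host takes the whole container down; the PID will of course differ on your machine:

```shell
# Grab the host PID of the containerized sleep from `docker top`
# (second line, second column), then kill it from the host.
PID=$(docker top tutorial | awk 'NR==2 {print $2}')
# SIGKILL, because PID 1 of a PID namespace ignores signals
# it has no handler installed for.
sudo kill -9 "$PID"

# Shortly after, the container is reported as exited:
docker ps -a --filter name=tutorial --format '{{.Status}}'
```

No `docker stop` needed: take away the process and the “container” is gone.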

However, these processes are set up in quite a special way. For example, let’s create a file on the host machine:

$ touch /tmp/example_file && ls -lA /tmp/ | grep example_file

Now, let’s get “into” the container and execute ls -lA /tmp there:

$ docker exec -it tutorial ls -lA /tmp | grep example_file

There is no example_file there. How is that possible, if there is no virtualization happening? And what does getting “into” the container even mean; how can you get inside a process?

All of this happens because of the fancy clone system call, which allows you to create a child process in a new namespace. These namespaces are what create the illusion that a process running “within a container” is the only process living on the host, almost as if you ran it within a VM.

Namespaces isolate mount points (“files”), the network stack, PIDs, users and groups, hostnames, and IPC objects (e.g. System V message queues). On top of that, there are cgroups, which allow you to set quotas on CPU and memory usage. Still, this is far from a VM, since nothing is virtualizing hardware. There is simply no middleman between a containerized process and the host kernel.
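You can inspect namespace membership directly under /proc: every process carries a set of namespace links, and two processes share a namespace exactly when the link inodes match. Comparing our shell with the containerized sleep (PID 7293 from the output above; yours will differ) shows different mnt, pid and net inodes. Resource quotas, in turn, are exposed by Docker as plain flags:

```shell
# Namespace links of our own shell vs. the containerized process.
# Different inode numbers in the output mean different namespaces.
ls -l /proc/$$/ns
sudo ls -l /proc/7293/ns

# cgroups limit resources rather than isolate names; Docker wires
# them up for us, e.g. cap a container at half a CPU and 256 MB RAM:
docker run -d --cpus 0.5 --memory 256m --name limited bash:4.4 sleep 1h
```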

Docker’s role in this picture is just to provide a user-friendly interface to these low-level kernel features. It creates containers as child processes of containerd (via the containerd-shim we saw in the ps output) and makes them run within their own namespaces, while providing an elegant and thoroughly documented CLI.

Answering the second question: “getting into” the container just means switching namespaces.
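You can even do that switch yourself with nsenter(1), bypassing Docker entirely. A sketch, again using the example PID 7293 from earlier:

```shell
# Enter the mount and PID namespaces of the containerized process
# and run ls there, roughly what `docker exec` did for us above.
sudo nsenter --target 7293 --mount --pid ls -lA /tmp
```

Run against our tutorial container, this lists the container’s /tmp, not the host’s, for the same reason docker exec did: we are looking through the container’s mount namespace.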


In this tutorial we covered the concept of a container. We talked through some basic abstractions and developed some intuition on how to reason about them.

I showed you that a container is nothing more, nothing less than a plain old process with some juicy kernel sauce applied on top. Unlike a VM, which runs its own kernel, containers share the host kernel. The isolation is achieved using Linux namespaces. If you’d like to find out more about containers, I highly recommend following along with the Linux Container Internals Lab on YouTube, or reading the manual on namespaces.

In the following post, we’ll get to know the most popular framework for working with containers: Docker. We’ll dig into what a container image is and how to build and run one. I’ll also share some tips and tricks with you, so you can develop your own kick-ass containers. Hope to see you soon!