Now that we have some intuition on what a container is, I’d like to introduce you to Docker itself. Since we already stated that containers are implemented using fancy kernel features, you might wonder: where does Docker fit in, then?

Well, it defines a nice, user-friendly CLI for creating and running containers. It used to be involved in setting up the container itself back in the day, but by now all of that responsibility has been moved outside of Docker.

Instead, the Open Container Initiative (OCI) was formed, and its standards separate the responsibility of running a container from that of building the image. This allowed alternative interfaces like rkt to be developed, while still using the same “engine” for running containers (runc, containerd, or even Windows containers).

Although alternative solutions exist by now, Docker was the first to define a user-friendly interface for creating and running containers, and today it is surely the most popular one. No wonder developers quickly grew to love it: with just a Dockerfile and a simple docker build, you get pretty strong process isolation in a matter of minutes, all through a simple, declarative, and well-documented interface.

Without further ado, let’s get started!

Building our first Docker image

Before we move on to building the image, we first need some executable to run within the containerized environment. For this tutorial, I prepared a very minimal ASP.NET Core web server, so you can just clone it and we can focus on the Docker part itself. Let’s clone it:

$ git clone

Having the sources of our application cloned, we’ll now define a Docker build. This is done through a Dockerfile, which contains a complete recipe for the steps that need to be executed in order to define the layers, as well as the manifest of our container. This recipe will then be used to create an image.

Our recipe is as follows:

  1. Use the dotnet core SDK as the base image. This makes the compiler and the dotnet CLI available in the container.
  2. Define a WORKDIR. We need the binaries to live somewhere in the containerized filesystem, so we’ll just set it to /app.
  3. Copy the sources from the build context to /app. On docker build, the so-called build context is set (by default, to the current directory). From this build context, you can freely copy files “into the container”. Each copy creates a new layer, which is then used to build up the container filesystem. We’ll get to that in just a moment!
  4. Execute dotnet publish to turn the sources into binaries we can run.
  5. Define the ENTRYPOINT. This is just the command to execute when the container is started.
  6. Save the file as Dockerfile, right next to the .csproj (HelloAspNetCore/HelloAspNetCore/Dockerfile).

So, according to the plan, our Dockerfile (aka the recipe) should look like this:

# Base image with the .NET Core SDK; pick the tag matching your project’s target framework
FROM mcr.microsoft.com/dotnet/core/sdk:3.1

WORKDIR /app

COPY . ./

RUN dotnet publish -c Release -o out

ENTRYPOINT ["dotnet", "out/HelloAspNetCore.dll"]

Let’s save it under HelloAspNetCore/HelloAspNetCore/Dockerfile, right next to the .csproj. If you are using the VM I prepared, it has VS Code already installed, so that should do for the sake of editing a Dockerfile. Just issue code ./HelloAspNetCore after cloning.

Now that we have the recipe, we can bake the image. This is done through the docker build command. We’ll pass the -t hello_asp_net_core:latest parameter to define a user-friendly name (a tag) for the image:

$ cd HelloAspNetCore/HelloAspNetCore
$ docker build -t hello_asp_net_core:latest .

Having the image, we can use it to construct and run containers. We’ll add -p 5000:80 to make port 80, exposed by the container, accessible on port 5000 of our host machine. This is needed because containers live in a separate network namespace, so we need to explicitly map every port we’d like to access. Also, let’s pass -it so we stay attached to the console:

$ docker run -p 5000:80 -it hello_asp_net_core:latest 

Now, open any browser, go to http://localhost:5000/hello and you’ll be greeted by a message coming straight from the containerized web server!

But wait, what just happened? What is an image, actually?

What are docker images?

An image is nothing more than a tarball (archive) containing the layers and a couple of JSON files. This archive is constructed according to the Dockerfile. For example, each COPY defines a layer in the output image. The layers are incremental, so e.g. if you do:

COPY file1 .
COPY file2 .

You’ll have two layers in the output image: first, a layer with file1, and then a second layer containing only the incremental diff from its parent. In this case, that diff would be equivalent to file2 only.

From these layers you build up an image containing the dependencies necessary to run your application. In our case, we embedded the dotnet SDK and the binaries of our application into them.
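The incremental layering can be simulated with plain tarballs, no Docker required. A minimal sketch (the file and directory names are made up for illustration):

```shell
# Each "layer" is a tarball holding only the diff from its parent:
mkdir -p layer1 layer2 rootfs
echo "first" > layer1/file1
echo "second" > layer2/file2    # only the new file, not file1
tar -cf layer1.tar -C layer1 .
tar -cf layer2.tar -C layer2 .
# Unpacking the layers in order rebuilds the full filesystem:
tar -xf layer1.tar -C rootfs
tar -xf layer2.tar -C rootfs
ls rootfs                        # file1 and file2 now live side by side
```

This is roughly what happens when a container’s root filesystem is assembled from image layers, with extra bookkeeping for deletions and metadata on top.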

Let’s see it for ourselves, using the docker save command. It saves the image as a tarball, so we can extract it and see what’s inside:

$ docker save --output container.tar hello_asp_net_core:latest && mkdir container && tar -xf container.tar -C container

You need to be patient though, as it can take some time.

If you open up the container directory now, you’ll see a few directories with long, GUID-like names. Each such directory contains yet another tarball and a JSON file. These are the layers, with their contents dumped to a layer.tar file and their metadata defined in the JSON file. Try unpacking one such tarball with mkdir layer && tar -xf layer.tar -C layer and see for yourself that these are the pieces of our container filesystem.
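If you’d rather peek inside a layer without extracting it, tar -tf lists an archive’s contents. A sketch against a simulated layer tarball (the real layer.tar files sit under their hash-named directories):

```shell
# Simulate a layer tarball similar to the ones in the saved bundle:
mkdir -p demo_layer/app
echo "dll bytes" > demo_layer/app/HelloAspNetCore.dll
tar -cf demo_layer.tar -C demo_layer .
# List the entries without unpacking anything:
tar -tf demo_layer.tar
```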

Now, let’s pay some attention to the config file. If you take a look at manifest.json, you’ll see a config property there holding the filename of a JSON file. The file it points to contains all the information necessary to set up the container’s kernel namespaces and runtime: what ports to expose, whether to mount any volumes into the container, whether to apply quotas on CPU/memory, etc. Give it a look and get a feel for it! You can also see the config by firing:

$ docker inspect hello_asp_net_core:latest
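If you only need a single field out of the saved bundle’s manifest.json, basic shell tools will do. A sketch using a simulated manifest (real ones come out of docker save; the filename abc123.json is made up):

```shell
# Simulated manifest.json -- docker save produces a JSON array shaped like this:
cat > manifest.json <<'EOF'
[{"Config":"abc123.json","RepoTags":["hello_asp_net_core:latest"],"Layers":["layer1/layer.tar"]}]
EOF
# Pull out the config filename the Config property points to:
grep -o '"Config":"[^"]*"' manifest.json | cut -d'"' -f4
# -> abc123.json
```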

After you are done playing around with the bundle, it’s a good idea to clean up the directories, as that will help us in the follow-up posts. If we just leave everything there, it’ll get sent along with the build context, and that makes things slow.

$ rm -rf container/
$ rm container.tar
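Alternatively, instead of deleting the files, a .dockerignore file can keep them out of the build context. A sketch (.dockerignore takes one pattern per line, much like .gitignore):

```shell
# Exclude the extracted bundle and the tarball from the build context:
printf 'container/\ncontainer.tar\n' > .dockerignore
cat .dockerignore
```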


Today, we not only got a taste of what Docker is, but we also managed to build our first image and create a container out of it. Although the Dockerfile is not production-ready yet at this point, you saw how simple it actually is to “dockerize” an example web server. If you are interested, I strongly recommend having a look at the Dockerfile reference to get to know the syntax.

In the next post, we’ll take a second look at our Dockerfile and elaborate on where and how we can improve it. We’ll also touch on Docker’s killer features, like layer caching, so stay tuned!