Today I’d like to introduce you to the concept of container orchestration. In the previous posts, we’ve learnt how to create container images and how to turn those images into running containers. But what does it take to operate containers in production?

In the real world, we usually need to consider things like scalability, availability and resiliency. For example, if we need to scale up a service during peak hours, we’d very much prefer that it doesn’t introduce any downtime. Or, if a service dies for whatever reason, we’d probably want to try starting the same container on another machine.

Luckily, Kubernetes, being the most recognizable orchestrator, makes it really easy to address the concerns mentioned above.

So what is an orchestrator, exactly?

The container orchestration concept was introduced around 2003 at Google, with the birth of the Borg project. It came out of the observation that imperative deployments are hard, as you need to specify the exact steps to perform, usually in the form of e.g. a Bash or PowerShell script.

Scripting is not only tedious, but also error-prone and time-consuming. Because of this, especially when under time pressure, the scripts often fail and then - fingers crossed - hopefully the rollback script will work.

So, Borg came up with the concept of declarative deployments. There, just like in declarative programming languages, you describe what you’d like to achieve, not how to do it. Because of this, you can just learn the abstractions and leave the “how” to the orchestrator itself.

So, what can it do for me?

  • Load balancing
  • Horizontal scaling of containers, based on either a fixed number or a custom metric
  • Zero downtime deployments
  • Auto-restarting failed containers
  • Service discovery based on internal DNS
  • Centralized metrics and log collection
  • Auto provisioning of TLS certificates
  • Defining resource quotas (min/max CPU/memory per container)
  • A/B deployments
  • Rollout deployments
  • Canary releases
  • and much, much more

Imagine how much scripting you’d have to do in order to cover all the areas mentioned above. In Kubernetes, you can achieve most of this with a couple of lines of YAML.
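To give you a taste, horizontal autoscaling, which would take a non-trivial amount of scripting by hand, boils down to a one-liner. A quick sketch (the deployment name my-api is made up for illustration):

$ kubectl autoscale deployment my-api --min=2 --max=10 --cpu-percent=80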

Getting the cluster rolling locally

To get a feel for what orchestration is about, we’re going to get our API image running in a Kubernetes cluster. Although Kubernetes is not the only orchestration tool available, it’s the most widely adopted one, so we’ll stick with it for the sake of this series.

Before we start, we need a cluster to play around with. Lucky for us, there are a lot of easy options for running k8s locally, the biggest contenders being minikube and microk8s. The main difference is that microk8s doesn’t require a VM, unlike minikube. Unfortunately, microk8s works on Linux only, so if you want to set up your k8s environment on Windows it’s probably easier to stick with minikube.

We’ll use microk8s here, as it’s simply the most lightweight way of running a cluster locally. I already preinstalled it on the container VM, so if you are using it you may play around with the dashboard at:

https://127.0.0.1:32072/
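If you’re not using the VM, installing microk8s yourself is quick. A rough sketch for Ubuntu, assuming snap is available (add-on names and flags may differ slightly between microk8s versions):

# install and wait for the cluster to come up
$ sudo snap install microk8s --classic
$ microk8s status --wait-ready
# enable the built-in DNS and dashboard add-ons
$ microk8s enable dns dashboard
# sanity check: the node should report as Ready
$ microk8s kubectl get nodes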

Creating a Kubernetes deployment

Now that we have the tools, we can proceed with creating a deployment definition. Let’s start out by placing the following file next to the csproj and Dockerfile, saving it as deployment.yaml:

# Required Kubernetes Resource API version
apiVersion: apps/v1
# specifies Kubernetes resource type
kind: Deployment
metadata:
  name: kubernetes-hello
spec:
  # we want a single instance running at a given time
  replicas: 1
  selector:
    # the ReplicaSet will consist of pods matching this label
    matchLabels:
      app: kubernetes-hello-label
  # recipe for a pod
  template:
    metadata:
      labels:
        # let's attach the label
        # so the ReplicaSet can find the pods
        app: kubernetes-hello-label
    spec:
      containers:
        - name: kubernetes-hello
          image: hello_asp_net_core:latest
          # don't pull images from the docker registry
          # as we are operating locally and this would fail
          imagePullPolicy: Never
          ports:
            # we'd like to expose port 80 (HTTP)
            - containerPort: 80

What does this file say?

Well, it declares that you’d like to have a Deployment consisting of a single container (the replicas property) running the hello_asp_net_core:latest image. Deployment is one of the Kubernetes resources (“bricks”) available. It provides the ability to roll out/roll back changes and to scale containers horizontally.
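Those abilities translate to one-liners once the deployment exists. A quick sketch against the kubernetes-hello deployment we just defined (there’s nothing to undo yet, so the last command is purely illustrative):

# scale horizontally to three pods
$ kubectl scale deployment kubernetes-hello --replicas=3
# watch a rollout progress
$ kubectl rollout status deployment/kubernetes-hello
# roll back to the previous revision
$ kubectl rollout undo deployment/kubernetes-hello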

Then, we have the template, which is a recipe for a Pod. Are Pods equal to containers, you might ask?

While a pod often consists of a single container only, they are not the same thing. Pods are the basic, atomic units of deployment in Kubernetes. Pods run containers, but they are not containers. A pod actually defines a set of shared namespaces, and within those namespaces, containers are run. This is an important distinction to keep in mind when playing with a k8s cluster.

Since containers in a pod share the same storage and network, it’s common to run e.g. a metrics exporter as a second container in the pod. Also, within a single pod, the containers can talk to each other through localhost, which might be useful in certain scenarios.
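To make that concrete, here is a minimal sketch of such a two-container pod. The exporter image name is purely hypothetical, just to illustrate the shape of the manifest:

apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-hello-with-exporter
spec:
  containers:
    # the main application container
    - name: kubernetes-hello
      image: hello_asp_net_core:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 80
    # a sidecar sharing the pod's network namespace;
    # it could scrape the app through localhost:80
    - name: metrics-exporter
      image: example/metrics-exporter:latest  # hypothetical image
      ports:
        - containerPort: 9100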

For the sake of running an image locally, I also set imagePullPolicy to Never. Otherwise, Kubernetes would try to pull our image from Docker Hub and fail.
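One caveat: with imagePullPolicy set to Never, the image has to already exist inside the cluster’s container runtime. If you built it with a local Docker daemon, microk8s (which runs its own containerd) won’t see it, and you may need to import it along these lines:

$ docker save hello_asp_net_core:latest > hello_asp_net_core.tar
$ microk8s ctr image import hello_asp_net_core.tar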

Now, to make the cluster execute our manifest, all we have to do is use the kubectl tool. It’s a very handy CLI for calling the Kubernetes API:

$ kubectl apply -f deployment.yaml

If you bring up the dashboard now, you’ll see your first container deployed to the cluster! And actually, this is everything you need to do in order to deploy to Kubernetes. Your CI pipeline can now contain a single step, and you don’t need to write a single line of shell anymore.
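You can verify the same thing from the command line; the output below is approximate, but you should see the deployment and its pod reported as ready:

$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-hello   1/1     1            1           30s

$ kubectl get pods -l app=kubernetes-hello-label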

Moreover, if you navigate to the Pods section, you can see the service’s resource consumption, logs and tons of other handy stuff.
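The same information is available from the terminal, if you’d rather skip the dashboard; both commands below are standard kubectl:

# stream logs from the deployment's pod
$ kubectl logs deployment/kubernetes-hello
# inspect events, restarts and resource requests
$ kubectl describe pod -l app=kubernetes-hello-label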

Summary

Today, we’ve learnt what orchestration is and why it is useful. Then, we set up a development environment with microk8s and deployed our first container there! Although this is just the beginning of our getting to know k8s, thanks to its declarative nature we’ve already managed to achieve quite promising results!

However, we’re still missing some bits and pieces, for example: how do we reach this service from outside the cluster, over HTTP? Or, how can we pass some environment configuration (appsettings) into the container? This is what we’re going to focus on in the next post!

As always, stay tuned.