Let’s say you are working on an application with an external dependency on, for example, elasticsearch. Usually, you’d need to get the search engine running locally before doing any kind of development. Moreover, there are often some magic configuration tweaks to apply, so you are stuck either digging through the wiki or asking the people around.
That sucks a bit, doesn’t it? After all, we are developers with an inherent desire to automate everything we can. Yet, for some reason, we sometimes tend to forget about automating our own development environments.
Wouldn’t it be awesome if you could have elasticsearch running and configured by just pressing F5 from Visual Studio?
Luckily for us, there is a really nice docker-compose integration built into VS which allows us to do exactly this. It lets you define a set of containers that will start along with your application. You can also embed the configuration there, to produce exactly the same set of containers on each development machine. All you need is two clicks in Visual Studio and a couple of lines of YAML.
Let me show you how this works.
Integrate docker-compose with Visual Studio
First, to do anything with containers, you need to have the Docker and Docker Compose tools installed. On Windows, just grab Docker Desktop and you are good to go. If you are running Linux or Mac, follow the guide from the Docker docs.
Having the tools ready, you can proceed with integrating your Visual Studio solution with them. Once you enable this, VS will take care of firing the appropriate compose commands on Run/Rebuild/Clean. Moreover, it’ll allow you to debug your containerized application seamlessly!
Just right-click on the project in Solution Explorer and select Add -> Container Orchestrator Support. Then, choose Docker Compose as the orchestrator and Linux as the target OS.
Now, ensure you have the docker-compose project selected as the startup project. Run the solution and that’s it. You are ready to rock!
How does the integration work?
So, what has just happened? First, if you didn’t have a Dockerfile in the project, it got created for you. Most of the time, you don’t need to touch it, as it’s quite well crafted out of the box.
Second, you’ve probably noticed that a new project appeared in the solution, called docker-compose. This is a special project type, dedicated solely to integrating with orchestration tools. If you expand it in VS, you’ll see three files underneath: docker-compose.yml, docker-compose.override.yml and .dockerignore.
1. Docker-compose.yml
```yaml
version: '3.4'

services:
  composeplayground:
    image: ${DOCKER_REGISTRY-}composeplayground
    build:
      context: .
      dockerfile: ComposePlayground/Dockerfile
```
This file defines a “playlist” of docker containers, which in our case comprises a single container only. Once started, you’ll see this guy under the composeplayground name (use docker ps to see it for yourself).
Then, docker-compose will build the image using the Dockerfile specified under the build section. This build will result in a ${DOCKER_REGISTRY-}composeplayground image in your local docker registry. What’s this strange variable for, you might ask?
DOCKER_REGISTRY is just an environment variable defining the target docker registry, that is, where you would like to put the image. It defaults to an empty string, but you can, for example, set it to your Docker Hub registry in case you’d like to push the image there.
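To make the substitution concrete, here’s a minimal sketch of how the fallback behaves (the registry address below is made up purely for illustration):

```yaml
# With DOCKER_REGISTRY unset, the "-" fallback yields an empty string,
# so the image is tagged simply as:
#   image: composeplayground
# With e.g. DOCKER_REGISTRY=myregistry.example.com/ (note the trailing slash):
#   image: myregistry.example.com/composeplayground
image: ${DOCKER_REGISTRY-}composeplayground
```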
2. Docker-compose.override.yml
When it comes to the docker-compose.override.yml file, it’s used for... well, overriding settings from the base docker-compose.yml file. In this case, it’s meant to store local-environment-specific overrides. It works just like in CSS: the two files are merged together, and the override wins in case both contain the same key, as the sketch below illustrates.
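Here’s a minimal sketch of that merging behaviour (the LOG_LEVEL variable is made up purely for illustration):

```yaml
# docker-compose.yml (base) - hypothetical setting for illustration
services:
  composeplayground:
    environment:
      - LOG_LEVEL=Warning
---
# docker-compose.override.yml - same key, so this value wins
services:
  composeplayground:
    environment:
      - LOG_LEVEL=Debug
# effective configuration after the merge: LOG_LEVEL=Debug
```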
The idea is that you have a single base file and multiple per-environment files for overriding things like environment variables. So, you might want to have docker-compose.prod.yml / docker-compose.uat.yml files as well, to supply different environment-specific configurations (docker-compose lets you combine them with e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up).
In fact, Visual Studio generates such overrides on the fly to embed the debugger into your container. If you are interested in how it does this, take a look at the obj\Docker directory. There should be a docker-compose.vs.debug.g.yml file containing the overrides.
Anyway, if you press F5 now, you’ll see that Visual Studio fires the appropriate compose commands (roughly the equivalent of docker-compose up) to bring the entire stack up for you. Make sure you have docker-compose selected as the startup project.
Btw, having the Container Tools output window open can be really helpful when you are troubleshooting.
Adding elasticsearch to the mix
The coolest part is that, having this file, we can now add more containers to the “playlist”. All it takes is extending the services section in the docker-compose.yml file. Let’s work on this file a bit, so it looks more like this:
```yaml
version: '3.4'

services:
  composeplayground:
    image: ${DOCKER_REGISTRY-}composeplayground
    build:
      context: .
      dockerfile: ComposePlayground/Dockerfile
    depends_on:
      - elasticsearch

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - elastic_data:/usr/share/elasticsearch/data

  kibana:
    image: docker.elastic.co/kibana/kibana:7.3.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```
Then, we need a couple of additional settings to configure elastic to run as a single master node. Also, let’s disable authentication, since we don’t need it in local environments.
We’ll do this in the docker-compose.override.yml file, as this is the one which should be used for local-environment-specific configuration:
```yaml
version: '3.4'

services:
  composeplayground:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
      - ASPNETCORE_HTTPS_PORT=44326
    ports:
      - "57378:80"
      - "44326:443"
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro

  elasticsearch:
    environment:
      - xpack.security.enabled=false
      - xpack.monitoring.enabled=false
      - xpack.ml.enabled=false
      - xpack.graph.enabled=false
      - xpack.watcher.enabled=false
      - discovery.zen.minimum_master_nodes=1
      - discovery.type=single-node
      - bootstrap.memory_lock=true

  kibana:
    environment:
      - xpack.security.enabled=false
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"

volumes:
  elastic_data:
    driver: local
```
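One caveat worth knowing: bootstrap.memory_lock=true only takes effect if the container is actually allowed to lock memory. If elastic complains about that in its logs, the addition commonly recommended in the elastic docs for docker-compose setups (sketched here) is raising the memlock ulimit on the elasticsearch service:

```yaml
services:
  elasticsearch:
    # allow the JVM to lock its memory, as bootstrap.memory_lock requests
    ulimits:
      memlock:
        soft: -1
        hard: -1
```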
If you save those two files and press F5 now, elastic and kibana will start alongside your application.
Since all the containers are registered in docker’s internal DNS, the application can talk to elastic using the http://elasticsearch:9200 URL. Also, you can reach both elastic and kibana through localhost, since we’ve exposed all the ports. Try it out by going to http://localhost:5601. You should be greeted by the Kibana UI.
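To hand that URL over to the application itself, one option (just a sketch; the Elasticsearch__Url variable name is an assumption, following ASP.NET Core’s convention of mapping double underscores in environment variables to configuration section separators) is to pass it through docker-compose.override.yml:

```yaml
services:
  composeplayground:
    environment:
      # ASP.NET Core will surface this as the "Elasticsearch:Url" configuration key
      - Elasticsearch__Url=http://elasticsearch:9200
```

The application can then read the address through the standard configuration APIs instead of hardcoding it.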
Besides, all the elastic data will be persisted even when you kill the container. That’s because we configured the service to use the elastic_data volume, which is backed by the local driver. Of course, if you prefer your data to be ephemeral, you can always comment out the volume mount in the YAML, as sketched below.
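For completeness, a sketch of the ephemeral variant:

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
    ports:
      - "9200:9200"
      - "9300:9300"
    # no named volume mounted - the data now lives in the container's
    # writable layer and disappears together with the container
    # volumes:
    #   - elastic_data:/usr/share/elasticsearch/data
```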
How cool is that? We are spinning up the entire stack on the fly, with just a single button click!
In case you’d like to play around on your own, I’ve published all the sources of my “playground” project on GitHub.
Summary
No service lives in a void. Usually, sooner rather than later, the application starts to have dependencies on external processes. This could be a document database, or a search engine, like in the example I presented. Either way, that’s something external that every developer needs to supply on their own before even thinking about introducing changes to the application.
Not only can it be difficult to set up these dependencies on each development machine, but it can be even more painful to keep those in sync. If you don’t have this process automated, each developer might end up with slightly different settings.
Docker compose, along with the container orchestration support baked into Visual Studio, can greatly reduce the pain of operating those. With a couple of clicks and some “YAML-engineering”, you can easily define reproducible environments for developers. This way they can focus on writing code, instead of figuring out why their elastic installation didn’t work. All they ever have to do is press F5.
What are your thoughts on this? How do you approach building local development environments?