Docker and Consul and DNS, oh my

I’m still trying to wrap my head around networking when it comes to Docker and related technologies – I think because a lot of the documentation and examples out there are either not quite correct, or subtly out of date. I’ve noticed too that a lot of the writing around setting up Docker and/or Consul hand-waves away the trickiness of the networking. Particularly egregious is the blithe insistence on just specifying host networking for all containers, something that the Docker project itself frowns upon.

Yes, using --net=host lets you skate around the problems of wiring containers together, but it completely blows out of the water the notion that the collection of containers you are running is isolated from the broader environment, exposing only a very small attack surface by opening up only specifically identified ports.

I’ve found that this hand-waving is particularly prevalent in the writings around Consul – just Google it, you will find a lot, often suspiciously similar to each other – which either limit themselves to throwing up a single node in ‘dev’ mode, or talk about building a Consul cluster outside the Docker world. Neither is what I need, and I suspect other people are in the same boat.

Oh. Yes. I did not mention what I was trying to build. I’m going to be writing some code to run against Consul for a variety of reasons, and would prefer to have a cluster that is similar to the sort of environment that would be used in production (minus the mucking about with SSL at this point). Because this is for development, I want something local, and quick to throw up and tear down – in 2016 the obvious answer is ‘Docker’. To that end I’ve made a simple project that allows me to run a number of containers comprising the Consul cluster, and a container to act as a gateway into it. Ideally you would grab the latest version of the project from GitHub (https://github.com/TheBellman/consulcluster), but I’ll reproduce some of it here so that I can talk about it.

Starting the cluster is the complicated bit (note that this assumes the default Docker machine is ready, and that you are running via docker-machine):

#!/bin/bash

SERVER_COUNT=3

# point the Docker client at the default docker-machine VM, and capture
# the VM's IP address for the port bindings further down
eval $(docker-machine env default)
DOCKER_IP=$(docker-machine ip default)

echo "==== creating encryption key ===="
# generate a gossip encryption key. Note that -t is deliberately not used:
# a pseudo-TTY would leave a stray carriage return in the captured key
KEY=$(docker run --rm consul keygen)

echo "==== starting servers ===="
# the first server starts without a join target - the remaining servers
# (and the client agent) will join the cluster via its IP address
docker run -d --name=consul0 \
  -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
  consul agent \
  -server \
  -node=consul0 \
  -encrypt=$KEY \
  -bootstrap-expect=$SERVER_COUNT

CONSUL0=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' consul0)

for ((i=1; i<SERVER_COUNT; i++))
do
  docker run -d --name=consul$i \
    -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
    consul agent \
    -server \
    -node=consul$i \
    -encrypt=$KEY \
    -retry-join=$CONSUL0 \
    -bootstrap-expect=$SERVER_COUNT
done

echo "==== starting agent ===="
# the client agent is the gateway into the cluster: the HTTP API and web
# UI are exposed on 8500, and Consul's DNS service (8600) is bound to
# port 53 on the Docker machine
docker run -d --name=agent0 \
  -e 'CONSUL_LOCAL_CONFIG={"skip_leave_on_interrupt": true}' \
  -p 8500:8500 \
  -p $DOCKER_IP:53:8600/tcp \
  -p $DOCKER_IP:53:8600/udp \
  consul agent \
  -node=agent0 \
  -encrypt=$KEY \
  -client=0.0.0.0 \
  -retry-join=$CONSUL0 \
  -ui

echo "==== pausing ===="
sleep 2

echo "==== docker processes ===="
docker ps --format "table {{ .Names }}\t{{ .Status }}\t{{ .Ports }}"

echo "==== consul members ===="
docker exec -t consul0 consul members

echo "==== consul node catalog ===="
curl "$DOCKER_IP:8500/v1/catalog/nodes?pretty"

echo "==== consul agent location ===="
echo "http://$DOCKER_IP:8500"

Most of this is inspired by the documentation on the Consul Docker Hub repository; however, I’ve added support for DNS and for encrypted communication within the cluster.

By exposing port 8600 on the agent and binding it to port 53 on the Docker machine, other containers can use the Consul cluster for DNS resolution (by passing --dns $DOCKER_IP when they are run), and I can explore DNS resolution from the host operating system as well. Similarly, by exposing port 8500 and specifying -ui on the agent, I can use the agent to provide a web console at http://$DOCKER_IP:8500.
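
As a quick sanity check, and assuming the agent container from the script above is up, here is a rough sketch of putting the DNS support to work. The service name web and port 80 are purely illustrative; anything registered in the catalog is resolvable the same way.

DOCKER_IP=$(docker-machine ip default)

# register a dummy service called 'web' with the client agent
curl -X PUT -d '{"Name": "web", "Port": 80}' \
  "$DOCKER_IP:8500/v1/agent/service/register"

# resolve it from the host, via the agent's DNS interface bound to port 53
dig @$DOCKER_IP web.service.consul SRV

# other containers can resolve it too, if pointed at the same DNS server
docker run --rm --dns $DOCKER_IP alpine nslookup web.service.consul

The SRV query is the interesting one, since it carries the service port as well as the address.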

Stopping the cluster is just a matter of halting the Docker containers, although at the moment this means that any state in the cluster is lost. I will probably extend this later to provide support for preserving state, but that’s for another day.
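
For what it’s worth, a teardown sketch is just the mirror image of the startup script, assuming the container names used above:

#!/bin/bash

# stop and remove the containers created by the startup script
for name in consul0 consul1 consul2 agent0
do
  docker stop $name
  docker rm $name
done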
