Docker Machine, Compose & Swarm: How They Work Together
During the past year, Docker has been hard at work creating simple-to-use tools to set up container hosts (Machine), manage multiple linked containers (Compose), and treat your container hosts as a cluster (Swarm).
Even though they are meant to be simple, these tools are very powerful, and they require some planning before you run off and deploy tons of containers on top of your favorite Infrastructure-as-a-Service. I’ll try to shed some light on why they’re great to have in your toolbelt, where they should be used, and how to get started with them.
The three tools are now neatly packaged into what’s called the Docker Toolbox. Make sure you have that installed before you continue further.
Docker Machine
The first tool we’ll look at from the toolbox is Docker Machine, which helps you create container hosts on many of the most popular Infrastructure-as-a-Service platforms. It supports the two most popular desktop virtualization platforms, VMware Fusion and VirtualBox, as well as cloud platforms such as AWS, Azure, DigitalOcean, Exoscale, Google Compute Engine, OpenStack, Rackspace, SoftLayer, VMware vSphere, and vCloud Air.
Let’s start with getting a container host up and running on your local desktop. When you installed the Docker Toolbox you also got VirtualBox installed, so let’s use that to create a Linux VM where we can run our containers. To create a container host, just run the following command:
$ docker-machine create --driver virtualbox containerhost
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env containerhost
This command will tell Docker Machine to use the VirtualBox driver to create a container host called “containerhost.” We now have a place to run our containers! Let’s configure our terminal to connect to it by running the following:
eval "$(docker-machine env containerhost)"
If you’re used to working with Boot2Docker, the predecessor of Docker Toolbox, the above command is similar to the familiar boot2docker shellinit.
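If you’re curious what that eval actually consumes, run the env command by itself; it prints shell exports that point your Docker client at the new host (the IP and cert path below are illustrative and will differ on your machine):

$ docker-machine env containerhost
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/containerhost"
export DOCKER_MACHINE_NAME="containerhost"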
After the eval of your Docker Machine env variables, you can now run regular Docker commands, such as docker run, pull, ps, rm, etc. Try it out!
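For example, here’s a quick smoke test (hello-world is just a convenient sample image, nothing specific to this setup):

$ docker run hello-world
$ docker ps -a

The first command pulls and runs a tiny container that prints a greeting and exits; the second lists it among all containers, running or stopped.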
Docker Compose
Now that we have a container host up and running, we’ll focus on actually running some useful containers on it. We’ll use Docker Compose for this. Compose grew out of Docker’s first-ever acquisition, made in 2014: a company called Orchard, which had built a multi-container management tool called Fig. Docker and the newly joined Orchard team continued Fig’s development and renamed it Docker Compose late last year.
Docker Compose has a simple way of describing an application as several containers that work together, how they should be linked, and what ports should be exposed to the end user. The Docker container environments are defined in a “docker-compose.yml” file; let’s look at an example and break it down a bit:
web:
  build: .
  ports:
    - "5000:5000"
  volumes:
    - .:/code
  links:
    - redis
redis:
  image: redis
Here we have one application built using two containers. The first container is called “web”, and will be built from a Dockerfile that we have in the current working directory. This is great if you already have a Dockerfile for your container image but haven’t pushed it up to a registry yet. A complete Dockerfile and necessary application files for this example can be found here.
The next row shows which port will be exposed on the host and which container port the traffic will be forwarded to. The third part shows that we will mount a Docker volume into the container, containing the application code. Lastly, we link to another container that we call “redis,” which uses the standard official Redis image from Docker Hub.
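Since the web service is built from a local Dockerfile, here’s a minimal sketch of what it might contain (an assumption based on the python:2.7 base image you’ll see in the build output below; the linked example files have the definitive version):

FROM python:2.7
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD python app.py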
Now to run this, we issue the following command:
$ docker-compose up
This will read the docker-compose.yml and create the application environment we have defined in it, first building the web application from the Dockerfile as well as pulling down the redis image. Docker Compose will also link the web container to the redis container. It will look something like this:
$ docker-compose up
Pulling redis (redis:latest)...
<snip>
Creating compose_redis_1...
Building web...
Step 0 : FROM python:2.7
2.7: Pulling from python
<snip>
Successfully built b88dd767cf97
Creating compose_web_1...
Attaching to compose_redis_1, compose_web_1
<snip>
redis_1 | 1:M 11 Sep 21:21:56.463 # Server started, Redis version 3.0.3
<snip>
web_1   | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
web_1   | * Restarting with stat
There you have it! You have successfully used Docker Compose to create a containerized environment with two services: one an outward-facing web service and the other a persistence layer. Now let’s take a look at the web page to verify the application is running.
To find the IP of your container host, run the following command:
$ docker-machine ip containerhost
192.168.99.101
Since we mapped port 5000 on the container host to port 5000 in the container, you should now be able to connect to http://192.168.99.101:5000. You should see the message, “Hello World! I have been seen 1 times.” You are now hitting a website that stores a hit counter in a separate containerized key-value store, retrieving and incrementing that value on each new hit and presenting it back to you. Awesome!
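You can also check it from the terminal; each request should bump the counter (assuming this is your second hit, you’d see 2):

$ curl http://$(docker-machine ip containerhost):5000
Hello World! I have been seen 2 times.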
All right, now that we’ve explained the basics of Docker Compose, it’s time to dig into the next topic.
Docker Swarm
Finally, let’s look at the most interesting tool in the current Docker Toolbox, Docker Swarm. What you’ve done so far is work with one container host and run a container or two, which is great for testing or local development. With Docker Swarm we’re now going to turn that small test environment into a larger setup of clustered container hosts that can be used to scale your operations into something even more useful. This is a bit more advanced and will involve things like service discovery, clustering, and remote management.
Let’s start by cleaning up the environment we have so we don’t run into any issues with having a ton of things running. Stop and remove the current local container host by running the following:
$ docker-machine stop containerhost
exit status 1
$ docker-machine rm containerhost
Successfully removed containerhost
Now let’s begin by creating a fresh container host that we’ll use for just a short while:
$ docker-machine create -d virtualbox local
This creates a new container host for us called “local.” Get the right connection information for your terminal by running:
eval "$(docker-machine env local)"
Now we need to generate what’s called a “discovery token,” which will be used to make sure your nodes join the correct cluster:
$ docker run swarm create
<snip>
Status: Downloaded newer image for swarm:latest
8d7dc66346a3e0d999ed38dd29ed0d38
That last line is your discovery token, and it will be different from mine. The token is created using Docker’s public discovery service; you can see the information it keeps on your cluster by visiting an address like https://discovery.hub.docker.com/v1/clusters/YOURTOKENHERE. You’ll use this token for all new Swarm members, including the Swarm Master, which we create like this:
$ docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://YOURTOKENHERE swarm-master
Now we’ll create our first two Swarm nodes:
$ docker-machine create -d virtualbox --swarm --swarm-discovery token://YOURTOKENHERE swarm-agent-00
$ docker-machine create -d virtualbox --swarm --swarm-discovery token://YOURTOKENHERE swarm-agent-01
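If you’re curious, you can ask the public discovery service which nodes have registered under your token. The response is a JSON list of node addresses (a sketch; the exact shape and addresses will match your own environment):

$ curl https://discovery.hub.docker.com/v1/clusters/YOURTOKENHERE
["192.168.99.103:2376","192.168.99.104:2376","192.168.99.105:2376"]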
You can now also shut down and remove the “local” container host; we won’t need it anymore.
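That’s done the same way as before:

$ docker-machine stop local
$ docker-machine rm local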
Now let’s make sure your shell is pointing to the Swarm Master:
$ eval $(docker-machine env --swarm swarm-master)
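Note the --swarm flag: instead of pointing your client at the master’s own Docker daemon (port 2376), it points at the Swarm manager endpoint (port 3376), so your commands are scheduled across the whole cluster. The output looks roughly like this (a sketch; your IP and paths will differ):

$ docker-machine env --swarm swarm-master
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.103:3376"
export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/swarm-master"
export DOCKER_MACHINE_NAME="swarm-master"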
You now have a Swarm Master and two Swarm Nodes running locally. Let’s see what that looks like:
$ docker info
Containers: 4
Images: 3
Role: primary
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 3
 swarm-agent-00: 192.168.99.104:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.022 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.0.9-boot2docker, operatingsystem=Boot2Docker 1.8.1 (TCL 6.3); master : 7f12e95 - Thu Aug 13 03:24:56 UTC 2015, provider=virtualbox, storagedriver=aufs
 swarm-agent-01: 192.168.99.105:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.022 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.0.9-boot2docker, operatingsystem=Boot2Docker 1.8.1 (TCL 6.3); master : 7f12e95 - Thu Aug 13 03:24:56 UTC 2015, provider=virtualbox, storagedriver=aufs
 swarm-master: 192.168.99.103:2376
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.022 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.0.9-boot2docker, operatingsystem=Boot2Docker 1.8.1 (TCL 6.3); master : 7f12e95 - Thu Aug 13 03:24:56 UTC 2015, provider=virtualbox, storagedriver=aufs
CPUs: 3
Total Memory: 3.065 GiB
Name: 054cb8519400
You now have full control over a cluster of container hosts after just a few minutes of work. I think that’s fantastic!
You can now try to run containers just like normal. Sometimes it takes a while for the client to respond with the unique ID of the container, so just wait until it comes back:
$ docker run -d redis
0d7af2492be35cc9c7593f6d677185c6c44f3a06898258585c7d2d2f9aa03c2e
$ docker run -d nginx
0babf055abf9b487b6bafd4651386075f8d6f46ce9f192849bc32345997438ea
And now list the containers to see that they’re being scheduled on different clustered hosts; the node each container landed on shows up as a prefix in the NAMES column:
$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS             NAMES
0babf055abf9   nginx   "nginx -g 'daemon off"   10 seconds ago   Up 10 seconds   80/tcp, 443/tcp   swarm-agent-01/grave_jones
0d7af2492be3   redis   "/entrypoint.sh redis"   37 seconds ago   Up 37 seconds   6379/tcp          swarm-agent-00/furious_fermat
Awesome job! You now have a cluster of container hosts that you control and can schedule containers across.
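If you want more say in where containers land, classic Swarm also accepts scheduling constraints passed as environment variables. For example, this should pin a Redis container to a specific node (a sketch using the node names we created above):

$ docker run -d -e constraint:node==swarm-agent-00 redis

By default, though, the “spread” strategy you saw in the docker info output balances containers across the least-loaded nodes for you.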
Unfortunately, the integration between Docker Compose and Docker Swarm is currently incomplete, but work is being done to make them fully compatible in the near future; you can follow that work here.
Until then, have fun coming up with interesting applications that can be run on your newly created cluster!
Reference: Docker Machine, Compose & Swarm: How They Work Together from our JCG partner Jonas Rosland at the Codeship Blog.