Docker for Java Developers: Docker over command line
This article is part of our Academy Course titled Docker Tutorial for Java Developers.
In this course, we provide a series of tutorials so that you can develop your own Docker based applications. We cover a wide range of topics, from Docker over command line, to development, testing, deployment and continuous integration. With our straightforward tutorials, you will be able to get your own projects up and running in minimum time. Check it out here!
1. Introduction
In this section of the tutorial we are going to master the Swiss army knife of Docker, its command line tool of the same name, docker, and its best friend docker-compose. To give these tools some credit, each of them supports a myriad of different command line arguments and options, so discussing all of them would make this section endless. Instead, we will focus on the most useful classes of commands, pointing to the relevant sections of the documentation in case you would like to learn more right away.
Docker evolves very fast and, as such, docker and docker-compose are constantly changing as well, adding new command line arguments while deprecating others. Breaking changes are not so rare, but this is a reality Docker users have been facing for quite a while now.
One of the largest changes which came into effect recently concerned exactly this command line tooling. In the past, docker used to accept a single command as the first argument followed by a list of options. However, the number of different commands grew so large that the tool became really confusing and cumbersome to use. It was decided to split the commands into classes (for example image, container, network, volume, plugin, system, …) so that every command is preceded by its class (for example, docker build becomes docker image build). It was a much-needed change and, although the old-style usage of the commands is still supported, we will adhere to the recommended practice in this tutorial.
The section is structured in such a way as to familiarize you with the most useful commands without really digging into the details and usage scenarios. The rationale behind this is quite simple: in the next sections of the tutorial we are going to battle-test most (if not all) of them while doing really practical things.
2. Images
It looks logical to start from the bottom and get acquainted with docker by learning how to build images. In the first section of the tutorial we briefly walked through the process, but it is time to do it ourselves using the build command.
docker image build [OPTIONS] PATH | URL | -
As we remember, docker builds images from Dockerfiles, executing each instruction in the order it is specified. The PATH or URL arguments hint where to look for the Dockerfile.
Building small and efficient images, and doing so fast, is one of the key goals to aim for in order to be successful with Docker. Image caching, labeling, tagging, reducing the final size of the image and many other topics are nicely summarized in the best practices for writing Dockerfiles; it is highly recommended reading.
You may rarely find the need to build your own base images, so let us instead talk about the choices available when picking a base image for Java applications.
As of now, Alpine Linux is the de facto base Linux distribution for containerized applications. If you happen to run your applications on OpenJDK, you are really lucky, as the project’s official Docker Hub repository provides plenty of images based on Alpine Linux. Here is the simplest possible Dockerfile example:
FROM openjdk:8u131-jdk-alpine
CMD ["java", "-version"]
Assuming your shell is pointing to the same folder where this Dockerfile resides, you can build (and also tag) the image with this command:
docker image build . --tag base:openjdk-131-jdk
In case you bet on Oracle JVM distributions, sadly Alpine Linux is not officially supported yet (you may see people trying to marry the two together; be aware that while it is somewhat feasible, the JVM process in the container could crash at any time). The options here are either to use the official Oracle Java 8 SE (Server JRE) image or to build your own based on Ubuntu or Debian distributions.
In terms of which JVM version to build upon, please make sure to use at least Java 8 update 131 or later, for reasons we will discuss in detail in the upcoming sections (but if you are curious, here is a sneak peek behind the curtain).
Along with build, there are a couple of very useful commands worth mentioning. The ls command shows all the images:
docker image ls [OPTIONS] [REPOSITORY[:TAG]]
The history command shows the history of an image:
docker image history [OPTIONS] IMAGE
While the rm command removes one or more images:
docker image rm [OPTIONS] IMAGE [IMAGE...]
To interface with registries, there are the pull and push commands. The former fetches an image from a registry while the latter uploads it to one.
docker image pull [OPTIONS] NAME[:TAG|@DIGEST]
docker image push [OPTIONS] NAME[:TAG]
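As a quick illustration of how these typically fit together, here is a minimal sketch that tags our local image for a private registry and pushes it there; the registry host name is purely illustrative and you are assumed to be authenticated via docker login.

# pull a specific tag of the official OpenJDK image
docker image pull openjdk:8u131-jdk-alpine

# tag the locally built image for a (hypothetical) private registry
docker image tag base:openjdk-131-jdk registry.example.com/base:openjdk-131-jdk

# upload the tagged image to that registry
docker image push registry.example.com/base:openjdk-131-jdk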
Lastly, the exceptionally useful prune command removes dangling images (with the -a option, which you are better off using all the time, it removes all unused images):
docker image prune [OPTIONS]
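To tie the housekeeping commands together, here is a small, purely illustrative session around the image we built earlier (the output will of course differ on your machine):

# list only the images in the 'base' repository
docker image ls base

# show how the image was assembled, layer by layer
docker image history base:openjdk-131-jdk

# remove all images not referenced by any container
docker image prune -a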
One of the tremendously useful features introduced by Docker in a recent release is support for multi-stage builds, which allow multiple FROM statements in a single Dockerfile. We will just mention it here; it will come back in the upcoming sections of the tutorial.
3. Containers
Container management constitutes a large portion of Docker’s functionality and there are a lot of different commands to back it up. Let us start by dissecting the super-powerful run command, which spawns a new container:
docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]
To get a feeling for how easy it is, we can run a container using the OpenJDK-based image we hand-built previously:
$ docker container run base:openjdk-131-jdk
openjdk version "1.8.0_131"
OpenJDK Runtime Environment (IcedTea 3.4.0) (Alpine 8.131.11-r2)
OpenJDK 64-Bit Server VM (build 25.131-b11, mixed mode)
Once you have containers running, you can attach to them or use the start / stop / restart commands to manage their lifecycle.
docker container attach [OPTIONS] CONTAINER
docker container start [OPTIONS] CONTAINER [CONTAINER...]
docker container stop [OPTIONS] CONTAINER [CONTAINER...]
docker container restart [OPTIONS] CONTAINER [CONTAINER...]
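For illustration, here is a minimal lifecycle session; the container name my-app and the long-running sleep command are purely illustrative, chosen only to keep the container alive.

# start a named container in the background
docker container run --detach --name my-app base:openjdk-131-jdk sleep 3600

# gracefully stop it, then bring it back up
docker container stop my-app
docker container start my-app

# or restart it in one go
docker container restart my-app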
Additionally, with the pause / unpause commands you have control over the state of the processes within the containers.
docker container pause CONTAINER [CONTAINER...]
docker container unpause CONTAINER [CONTAINER...]
The ls command will probably be the most used one, as it lists all running containers (and, with the -a option, all containers, running and stopped):
docker container ls [OPTIONS]
In turn, the inspect command displays detailed information about one or more containers:
docker container inspect [OPTIONS] CONTAINER [CONTAINER...]
The stats command exposes runtime statistics about containers. The top command shows the processes running inside a container, while the logs command fetches the container’s logs.
docker container stats [OPTIONS] [CONTAINER...]
docker container top CONTAINER
docker container logs [OPTIONS] CONTAINER
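Assuming the my-app container from the earlier sketch is still running, a typical monitoring session could look like this (output omitted):

# one-off snapshot of CPU / memory / IO usage instead of a live stream
docker container stats --no-stream my-app

# processes running inside the container
docker container top my-app

# follow the container logs, similar to 'tail -f'
docker container logs --follow my-app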
Over its lifetime a container may go through many modifications and diverge from its base image significantly. The diff command inspects all changes to files and directories on a container’s filesystem and reports them.
docker container diff CONTAINER
In case you need to capture these changes, there is a handy commit command which creates a new image from the container.
docker container commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
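As a sketch, capturing the modifications made to our illustrative my-app container into a new image could look like this; the new tag name is arbitrary.

# list files added, changed or deleted since the container was started
docker container diff my-app

# snapshot the current container filesystem as a new image
docker container commit my-app base:openjdk-131-jdk-patched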
Once a container is not needed anymore, it can be removed using the rm command. Please be cautious, as all data stored inside the container will be gone (unless it is backed by volumes).
docker container rm [OPTIONS] CONTAINER [CONTAINER...]
A more extreme version of rm, the prune command mass-removes all stopped containers.
docker container prune [OPTIONS]
4. Ports
Every container can expose ports to listen on at runtime, either through the image’s Dockerfile instructions or through options of the run command. The port command lists all port mappings (or a specific mapping) for a container.
docker container port CONTAINER [PRIVATE_PORT[/PROTO]]
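As a quick illustration (the host and container ports, as well as the container name, are arbitrary), publishing a port with the run command and then querying the mapping could look like this:

# publish container port 8080 on host port 18080
docker container run --detach --name my-web-app --publish 18080:8080 base:openjdk-131-jdk sleep 3600

# show all port mappings, or just the one for 8080/tcp
docker container port my-web-app
docker container port my-web-app 8080/tcp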
5. Volumes
As we remember, containers are ephemeral and once a container is terminated, all the data it holds is lost. That might not be a problem in many cases, but if you run a data store in a container, for example, it is very likely you would prefer to keep your data.
To fill this gap, Docker introduces volumes as the preferred mechanism for persisting data used by containers, and docker has a dedicated class of commands for that (create, inspect, ls, prune and rm).
docker volume create [OPTIONS] [VOLUME]
docker volume inspect [OPTIONS] VOLUME [VOLUME...]
docker volume ls [OPTIONS]
docker volume prune [OPTIONS]
docker volume rm [OPTIONS] VOLUME [VOLUME...]
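Here is a minimal sketch of a volume in action, using the mysql:8.0.2 image that appears later in this section; the volume name, container name and password are illustrative only.

# create a named volume and inspect it
docker volume create my-app-data
docker volume inspect my-app-data

# mount the volume over the data directory of a MySQL container,
# so the database files survive container removal
docker container run --detach --name my-db \
  --env MYSQL_ROOT_PASSWORD=p4ssw0rd \
  --volume my-app-data:/var/lib/mysql \
  mysql:8.0.2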
6. Networks
Docker has pretty good networking support for containers, with a number of network drivers available out of the box. Naturally, the standard set of create, inspect, ls, prune and rm commands is available to manage networks.
docker network create [OPTIONS] NETWORK
docker network inspect [OPTIONS] NETWORK [NETWORK...]
docker network ls [OPTIONS]
docker network prune [OPTIONS]
docker network rm NETWORK [NETWORK...]
What distinguishes Docker networking is the fact that the dockerd daemon contains an embedded DNS server which provides name resolution among containers connected to the same user-defined network (so that containers can be referenced by their names, not only IP addresses).
Any running container can be connected to or disconnected from a network using the connect and disconnect commands respectively.
docker network connect [OPTIONS] NETWORK CONTAINER
docker network disconnect [OPTIONS] NETWORK CONTAINER
Aside from that, a container can be connected to a particular network by passing options to the run command. All unused networks can be removed by invoking the prune command.
docker network prune [OPTIONS]
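To see the embedded DNS in action, here is a small sketch that creates a user-defined bridge network and lets one container reach another by name; all names and the password are illustrative.

# create a user-defined bridge network
docker network create --driver bridge my-app-network

# attach a database container to it
docker container run --detach --name my-db --network my-app-network \
  --env MYSQL_ROOT_PASSWORD=p4ssw0rd mysql:8.0.2

# another container on the same network resolves it simply as 'my-db'
docker container run --rm --network my-app-network base:openjdk-131-jdk ping -c 1 my-db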
7. Linking
More often than not your application stack is composed of many connected components rather than standalone ones (a typical example would be a Java server-side application which talks to a MySQL data store). Projecting that onto the world of containers, you need a group of containers which can somehow discover their upstream dependencies and communicate with each other. In Docker this used to be known as linking, but nowadays it can easily be achieved using user-defined networks (which is also the recommended practice).
8. Health Checks
Usually the Docker daemon provisions containers pretty fast; however, that does not mean the applications packaged inside them are ready and fully functional. For many Docker users this used to be one of the most annoying issues to deal with, forcing the community to come up with many ad-hoc solutions to the problem.
But kudos to the Docker team, we now have health checks (which can be specified in the Dockerfile or through options of the run command). A health check is an additional verification layer which instructs Docker how to test that the application inside the container is working. This resulted in a new health status property that complements the regular container status.
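A sketch of the command line variant is shown below; the probed URL is purely illustrative (our sample image does not actually serve HTTP), it is only meant to show the shape of the options.

# periodically probe the application inside the container
docker container run --detach --name my-checked-app \
  --health-cmd "wget -q -O /dev/null http://localhost:8080/health || exit 1" \
  --health-interval 30s --health-timeout 5s --health-retries 3 \
  base:openjdk-131-jdk sleep 3600

# the health status shows up next to the regular container status
docker container ls --filter "name=my-checked-app"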
9. Resource Limits
Interestingly, by default a container has no resource constraints and may consume all the resources of its host operating system. That could have been a show-stopper, but fortunately Docker provides a way to control how much memory, CPU, or block I/O a particular container can use by passing a number of options to the run command. Alternatively, for running containers the update command allows you to adjust the container configuration (primarily, resource limits) dynamically.
docker container update [OPTIONS] CONTAINER [CONTAINER...]
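A minimal sketch of constraining and later relaxing a container's resources follows; the exact limits are arbitrary, and support for the --cpus flag on update depends on your Docker version.

# start the container with a memory and CPU cap
docker container run --detach --name my-limited-app \
  --memory 256m --cpus 0.5 base:openjdk-131-jdk sleep 3600

# later, relax the limits of the running container
docker container update --memory 512m --memory-swap 512m --cpus 1 my-limited-app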
With respect to a JVM running inside a container, the subject of CPU and memory limits gets a bit trickier. As of Java SE 8u131 (and certainly in JDK 9) and later, the JVM is Docker-aware and, with just a little bit of tuning, is able to play nicely by the rules.
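For the curious, on Java 8u131+ the relevant switches (experimental at the time) look roughly like the sketch below; treat it as a preview rather than definitive tuning advice, which we will cover properly later.

# ask the JVM to derive its heap limit from the cgroup memory limit
docker container run --rm --memory 256m base:openjdk-131-jdk \
  sh -c "java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForJVM \
         -XX:+PrintFlagsFinal -version | grep MaxHeapSize"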
10. Clean up
As we have seen so far, there are a lot of abstractions you can manage with Docker. However, over time Docker generates a lot of garbage (unused layers, images, containers, volumes, …), eating up precious disk space. It had been a known issue for years, but not long ago we got a dedicated prune command which cleans up all unused data:
docker system prune [OPTIONS]
Please note that by default the command will not clean up the volumes unless the --volumes option is specified.
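Before (and after) a clean-up it can be handy to check how much space Docker is actually using; a short illustrative session:

# show disk usage broken down by images, containers and volumes
docker system df

# remove all unused data, including unused images and dangling volumes
docker system prune --all --volumes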
11. All-in-One: The Deployment Stack
As we have seen up to this point, you can accomplish any task using just the docker command line tool. But managing the lifecycle of multiple connected containers quickly becomes a burden and forces you to think about automating the process, either with a bunch of shell scripts or something similar.
The community realized the problem early on and came up with a brilliant solution, known these days as Docker Compose. In a nutshell, Docker Compose provides a declarative way of defining and then running multiple Docker containers, drastically simplifying the rather complex deployment of containerized application stacks.
So how does it work? There are just three simple steps involved, some of which we are already quite familiar with:
- Prepare the images for your applications, usually by means of Dockerfiles
- Use docker-compose.yml specification to outline your stack in terms of containers
- Use docker-compose command line tool to materialize the specification into a set of running (and usually connected) containers
Let us take a quick look at an imaginary deployment stack which involves the JDK image we created before, base:openjdk-131-jdk, and a MySQL database image, mysql:8.0.2, all put into the docker-compose.yml file.
version: '2.1'
services:
  mysql:
    image: mysql:8.0.2
    environment:
      - MYSQL_ROOT_PASSWORD=p$ssw0rd
      - MYSQL_DATABASE=my_app_db
    expose:
      - 3306
    networks:
      - my-app-network
  java-app:
    image: base:openjdk-131-jdk
    mem_limit: 256M
    environment:
      - DB=mysql:3306
    ports:
      - 8080
    depends_on:
      - mysql
    networks:
      - my-app-network
networks:
  my-app-network:
    driver: bridge
Pretty neat and straightforward, isn’t it? The versioning of the docker-compose.yml specification format needs a particular discussion. The latest and recommended specification format is 3.x, but 2.x, which we have used in the example above, is also supported and evolves independently. Why is that?
The 3.x format is designed to be cross-compatible between Docker Compose and Docker Swarm (a clustering solution we are going to touch upon briefly later in the tutorial) but, sadly, it also removes several very useful options (along with adding a few more). In general, we are going to stick to 3.x whenever we can, falling back to 2.x from time to time to showcase some really neat features.
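With the docker-compose.yml above saved in the current directory, a typical session with the docker-compose tool could look like the sketch below (output omitted):

# pull / build the images if needed and start the whole stack in the background
docker-compose up -d

# list the containers belonging to this stack and follow their logs
docker-compose ps
docker-compose logs -f

# tear the stack down, removing the containers and the default network
docker-compose down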
12. Conclusions
In this section we glanced over Docker’s awesome command line tooling, docker and docker-compose, highlighting the most useful and important commands. We have not seen most of them in action yet, but in the next sections of the tutorial each of them will find its time to appear on stage.
13. What’s next
Although it is quite possible that these tools will be your primary (if not the only) way of dealing with Docker, in the next section of the tutorial we are going to learn about another option: leveraging the Docker Engine REST(ful) APIs.