Starting with Docker Cheatsheet
1. Introduction to Docker
Docker is a popular containerization platform that allows developers to package, distribute, and run applications in a consistent and portable way.
Traditionally, applications are developed and deployed on specific operating systems or hardware, making it difficult to move them between different environments. Docker solves this problem by creating an abstraction layer between the application and the underlying infrastructure, allowing it to run on any system that supports Docker.
Containers are at the heart of the Docker platform. They provide a lightweight, isolated environment for running an application and its dependencies. Each container shares the host system’s kernel but has its own file system, network stack, and process space. This enables multiple containers to run on the same machine without interfering with each other.
Docker also provides a registry, called Docker Hub, where developers can share and discover pre-built images. This makes it easy to build and deploy applications from existing components, reducing the time and effort required to get an application up and running.
1.1 Why use Docker?
Here’s a table summarizing some of the key benefits of using Docker:
Benefit | Definition
--- | ---
Portability | Docker containers can run on any platform that supports Docker, making it easy to move applications between environments.
Consistency | Docker ensures that the application runs consistently, regardless of the underlying infrastructure.
Isolation | Each Docker container is isolated from the host system and other containers, providing an added layer of security and preventing conflicts between different applications.
Scalability | Docker containers can be easily scaled up or down to meet changing demand, making it easy to handle spikes in traffic or increased workload.
Efficiency | Docker’s lightweight architecture reduces the overhead of running multiple applications on the same host, resulting in improved resource utilization and lower infrastructure costs.
Agility | Docker’s modular architecture allows developers to break down applications into smaller, more manageable components, making it easier to develop, test, and deploy changes.
1.2 Docker terminology
Here are some key Docker terminology and concepts:
Term | Definition
--- | ---
Docker | A platform for building, packaging, and running applications in containers.
Container | An isolated, lightweight runtime environment for an application and its dependencies.
Dockerfile | A text file that contains the instructions for building a Docker image.
Image | A pre-built, read-only template for creating Docker containers.
Registry | A repository for storing and sharing Docker images.
Docker Hub | A public registry for Docker images, maintained by Docker.
Docker Compose | A tool for defining and running multi-container Docker applications.
Swarm | A clustering and orchestration tool for managing Docker containers at scale.
Volume | A Docker-managed directory that allows data to persist across container lifetimes.
Service | A Docker object that defines how to run a containerized application in a Swarm cluster.
Stack | A collection of Docker services that make up an application.
2. Installing Docker
Docker Desktop and Docker Engine are two related but distinct products:
- Docker Engine is the core component of the Docker platform, responsible for building, running, and managing Docker containers. It provides a lightweight, portable runtime environment for applications, allowing them to run consistently across different systems and platforms. Docker Engine can be installed on a wide range of operating systems, including Linux, Windows, and macOS.
- Docker Desktop is a desktop application that provides a complete development environment for building and testing Docker applications on macOS and Windows. It includes the Docker Engine, as well as a graphical user interface, pre-configured Docker CLI, and other tools and utilities. Docker Desktop also includes support for Kubernetes, allowing developers to deploy and manage containerized applications on a local Kubernetes cluster.
In summary, Docker Engine is the core runtime environment for Docker containers, while Docker Desktop provides a complete development environment that includes Docker Engine as well as additional tools and utilities. Developers working on macOS or Windows may find Docker Desktop to be a convenient way to set up and manage their Docker development environment, while those working on Linux systems may prefer to install Docker Engine directly on their system.
2.1 Windows
Here are the steps to install Docker on Windows:
- Visit the Docker website at https://www.docker.com/get-started and click the “Download for Windows” button.
- Follow the installation wizard prompts to complete the installation process.
- Once the installation is complete, launch Docker Desktop from the Start menu or desktop shortcut.
- If prompted, allow Docker to make changes to your system.
- Docker Desktop will start and display a notification when it’s ready.
- You can now use Docker from the command prompt or PowerShell.
Note that you may need to enable virtualization in your BIOS settings in order to run Docker on your Windows machine.
Also, keep in mind that Docker Desktop for Windows requires a 64-bit edition of Windows 10 or later. With the Hyper-V backend it needs the Pro, Enterprise, or Education edition, while the WSL 2 backend also works on Windows Home. The older Docker Toolbox, which used Oracle VirtualBox instead of Hyper-V, is deprecated and only needed on systems that cannot run Docker Desktop.
2.2 Mac
Here are the steps to install Docker on a Mac:
- Go to the Docker website and download the Docker Desktop for Mac installer: https://www.docker.com/products/docker-desktop
- Double-click the downloaded .dmg file to open the Docker installer.
- Drag the Docker icon to the Applications folder to install Docker.
- Double-click the Docker icon in the Applications folder to launch Docker.
- Docker will prompt you to grant permission to access the filesystem and network. Click “OK” to proceed.
- Docker will then start downloading and installing the necessary components.
- Once the installation is complete, Docker will launch and display a “Welcome to Docker!” message.
That’s it! You can now start using Docker on your Mac. To verify that Docker is installed and working correctly, open a terminal window and enter the command docker version. This will display information about the Docker installation and the versions of the Docker client and server.
2.3 Linux
Here’s a general guide to installing Docker on a Linux-based system:
- Update the system’s package manager:
sudo apt-get update
- Install Docker’s dependencies:
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
- Add Docker’s GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- Add the Docker repository to the system’s package sources:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- Update the package manager again:
sudo apt-get update
- Install Docker:
sudo apt-get install docker-ce docker-ce-cli containerd.io
- Verify that Docker is installed correctly by running the hello-world container:
sudo docker run hello-world
This will download and run a simple Docker container to verify that everything is working correctly. If you see a message saying “Hello from Docker!”, then Docker is installed and running correctly.
Note that the specific commands may vary depending on your Linux distribution, so be sure to consult the Docker documentation for detailed instructions.
4. Docker Basics
4.1 Running your first Docker container
Running your first Docker container is a simple process. Here’s a step-by-step example:
Open a command prompt or terminal window on your computer.
Pull a Docker image from a registry using the following command:
docker pull <image-name>
Replace <image-name> with the name of the Docker image you want to use. For example, to pull the official nginx image, you would use the command:
docker pull nginx
Once the image is downloaded, run a Docker container using the following command:
docker run <image-name>
Replace <image-name> with the name of the Docker image you want to run. For example, to run the nginx image with its web server reachable from your machine, publish a port as well:
docker run -p 8080:80 nginx
The container will start and output some information to the console. You can verify that the container is running by opening a web browser and navigating to http://localhost:8080; you should see the default nginx welcome page. (Without the -p option the container still runs, but its port 80 is not reachable from the host.)
To stop the container, press Ctrl+C in the command prompt or terminal window.
That’s it! You’ve successfully run your first Docker container. Keep in mind that this is just a basic example and there are many more options and features you can use with Docker containers.
4.2 Managing Docker containers
Managing Docker containers involves tasks such as starting, stopping, inspecting, and removing containers. Here’s a step-by-step example of how to manage Docker containers:
Run a container: To run a container, use the docker run command followed by the image name. For example, to run a container based on the official nginx image, run:
docker run nginx
This will download the nginx image from Docker Hub and start a container based on that image.
List running containers: To list the containers that are currently running on your system, use the docker ps command. For example:
docker ps
This will display a list of running containers, along with information such as the container ID, image name, and status.
Stop a container: To stop a running container, use the docker stop command followed by the container ID or name. For example:
docker stop <container_id>
This will gracefully stop the container, allowing it to clean up any resources before shutting down.
Remove a container: To remove a stopped container, use the docker rm command followed by the container ID or name. For example:
docker rm <container_id>
This will permanently delete the container and any associated data or resources.
Inspect a container: To inspect a container and view its configuration and runtime details, use the docker inspect command followed by the container ID or name. For example:
docker inspect <container_id>
This will display detailed information about the container, including its network configuration, environment variables, and mount points.
Start a stopped container: To start a stopped container, use the docker start command followed by the container ID or name. For example:
docker start <container_id>
This will start the container using its existing configuration and runtime settings.
Overall, managing Docker containers involves a range of commands and options that can help you start, stop, inspect, and remove containers as needed.
4.3 Docker images and the Docker Hub
Docker images are read-only templates that contain the instructions for creating a Docker container. Images can be created using a Dockerfile, which is a script that specifies the environment and configuration of the container.
Docker Hub is a cloud-based registry service provided by Docker, where users can store and share Docker images. It’s a great place to find pre-built images for popular applications and services, or to share your own images with the community.
Here’s a step-by-step example of how to use Docker images and the Docker Hub:
Search for an image on Docker Hub. For example, let’s say we want to run a WordPress site. We can search for the “WordPress” image on Docker Hub.
Pull the image. Once you’ve found the image you want to use, you can pull it to your local machine using the “docker pull” command. For example, to pull the latest version of the WordPress image, you would run:
docker pull wordpress
Verify that the image was downloaded. You can use the “docker images” command to see a list of all the images you have downloaded. The WordPress image should be listed.
Run a container based on the image. To run a container based on the WordPress image, you can use the “docker run” command. For example:
docker run -p 8080:80 --name my-wordpress -d wordpress
This command runs a new container named “my-wordpress” based on the “wordpress” image, and maps port 8080 on the host machine to port 80 in the container. The “-d” option tells Docker to run the container in the background.
Verify that the container is running. You can use the “docker ps” command to see a list of all the containers that are currently running. The “my-wordpress” container should be listed.
Access the WordPress site. Open a web browser and navigate to http://localhost:8080. You should see the WordPress setup screen, provided WordPress can reach a MySQL or MariaDB database; if you have not started one, the page will show a database connection error instead (the Docker Compose example in section 7.1 shows how to run a database alongside another container).
5. Docker Commands
Docker commands are used to manage Docker images, containers, networks, and volumes. These commands are executed in the terminal or command prompt and provide a way to interact with Docker using the command line interface (CLI).
Using Docker commands requires some familiarity with the command line interface, but it provides a powerful way to manage Docker resources with precision and efficiency.
5.1 docker run
The docker run command is used to create and run a new Docker container based on a specified Docker image. It is one of the most commonly used Docker commands and is the standard way to start new containers.
The basic syntax of the docker run command is as follows:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Here are some of the most common options used with docker run:
- -d: Runs the container in detached mode, allowing you to run other commands while the container is running in the background.
- -it: Starts an interactive session in the container with a shell.
- --name: Assigns a name to the container.
- -p: Maps a port on the host machine to a port in the container.
- --env: Sets environment variables inside the container.
- -v: Mounts a volume on the host machine to a directory in the container.
Here’s an example of using the docker run command to start a new container based on the “nginx” image in detached mode, mapping port 8080 on the host machine to port 80 in the container:
docker run -d -p 8080:80 nginx
This command will start a new container based on the “nginx” image and run it in the background. You can then access the container’s web server by navigating to http://localhost:8080 in your web browser.
Note that when you run a container using the docker run command, a new container instance is created each time. If you need to restart an existing container, you can use the docker start command instead.
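In practice, several of these options are often combined in a single command. Here is a minimal sketch (the container name, environment variable, and volume name are illustrative and not taken from the text above):
# Run nginx detached, with a name, a published port, an environment variable, and a named volume
docker run -d --name my-nginx -p 8080:80 --env NGINX_HOST=localhost -v my_volume:/usr/share/nginx/html nginx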
5.2 docker ps
The docker ps command is used to list all the currently running Docker containers on your system. It provides information about the containers, such as the container ID, name, status, and the ports that are being used.
Here’s the basic syntax of the docker ps command:
docker ps [OPTIONS]
By default, docker ps only shows information about the running containers. However, you can use various options to filter the output or show additional information. Here are some of the most commonly used options:
- -a: Shows information about all containers, including those that are stopped or exited.
- -q: Only shows the container IDs, which can be useful for scripting or other automation tasks.
- --format: Allows you to specify a custom format for the output.
Here’s an example of using the docker ps command to list all the currently running containers:
docker ps
This will output a list of all the running containers on your system, along with information about each container such as its container ID, name, image, status, and the ports that are being used.
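The options listed above can be combined with the basic command; a couple of common variations (purely illustrative):
# List all containers, including stopped and exited ones
docker ps -a
# Print only the IDs of running containers, which is handy for scripting
docker ps -q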
If you want to see more information about a specific container, you can use the docker inspect command, followed by the container ID or name. For example:
docker inspect my-container
This will show detailed information about the “my-container” container, such as its configuration, network settings, and more.
5.3 docker stop
The docker stop command is used to stop a running Docker container. When you stop a container, its state is saved and you can restart it later using the docker start command.
Here’s the basic syntax of the docker stop command:
docker stop [OPTIONS] CONTAINER [CONTAINER...]
Here are some of the most commonly used options for docker stop:
- -t, --time: Specifies how many seconds to wait for the container to stop gracefully before Docker sends a SIGKILL signal. If the container does not stop within this timeout, it is forcefully killed. The default value is 10 seconds.
Here’s an example of using the docker stop command to stop a running container with the ID “abcd1234”:
docker stop abcd1234
This will send a SIGTERM signal to the container, giving it a chance to shut down gracefully. If the container does not stop within the timeout (10 seconds by default), Docker will send a SIGKILL signal to forcefully stop the container.
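If a container needs more time to shut down cleanly, you can raise the timeout with the -t option; for example (reusing the illustrative container ID from above):
# Give the container up to 30 seconds before it is forcefully killed
docker stop -t 30 abcd1234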
You can also stop multiple containers at once by specifying their IDs or names separated by a space. For example:
docker stop container1 container2 container3
This will stop the containers with the names “container1”, “container2”, and “container3”.
5.5 docker rm
The docker rm command is used to remove one or more Docker containers from your system. When you remove a container, its contents are deleted and any changes made to the container are lost. You cannot remove a running container with the docker rm command; you must first stop the container using the docker stop command.
Here’s the basic syntax of the docker rm command:
docker rm [OPTIONS] CONTAINER [CONTAINER...]
Here are some of the most commonly used options for docker rm:
- -f: Forces the removal of a running container.
- -v: Removes the associated volumes as well.
- -l: Removes the specified link.
Here’s an example of using the docker rm command to remove a stopped container with the ID “abcd1234”:
docker rm abcd1234
This will delete the container with the specified ID from your system.
You can also remove multiple containers at once by specifying their IDs or names separated by a space. For example:
docker rm container1 container2 container3
This will remove the containers with the names “container1”, “container2”, and “container3” from your system.
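A common cleanup pattern, shown here as a sketch rather than something from the text above, combines docker rm with docker ps to remove every stopped container in one command:
# Remove all containers whose status is "exited"
docker rm $(docker ps -aq -f status=exited)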
5.6 docker images
The docker images command is used to list all the Docker images that are currently stored on your system. Each Docker image consists of multiple layers that represent the instructions in the Dockerfile used to create the image.
Here’s the basic syntax of the docker images command:
docker images [OPTIONS] [REPOSITORY[:TAG]]
By default, docker images shows a list of all the images stored on your system. However, you can use various options to filter the output or show additional information. Here are some of the most commonly used options:
- -a: Shows all images, including intermediate images.
- --format: Allows you to specify a custom format for the output.
- -q: Shows only the image IDs.
Here’s an example of using the docker images command to list all the images stored on your system:
docker images
This will output a list of all the images on your system, including their repository, tag, image ID, and size.
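The options described above can narrow or reshape this output; for instance (illustrative only):
# Show only the image IDs
docker images -q
# Print repository, tag, and size using a custom Go-template format
docker images --format "{{.Repository}}:{{.Tag}} {{.Size}}"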
If you want to remove an image, you can use the docker rmi command, followed by the image ID or name. For example:
docker rmi my-image:latest
This will remove the “my-image” image with the “latest” tag from your system. If the image has dependent child images or containers, you may need to remove them first or use the -f option to force the removal of the image.
5.7 docker pull
The docker pull command is used to download a Docker image from a registry, such as Docker Hub or a private registry. If you don’t already have the image on your system, you need to use the docker pull command to download it before you can create a container from it.
Here’s the basic syntax of the docker pull command:
docker pull [OPTIONS] NAME[:TAG|@DIGEST]
The NAME parameter specifies the name of the image to download, and the optional TAG or @DIGEST parameter specifies a specific version of the image. If you don’t specify a tag or digest, Docker will assume you want the “latest” version of the image.
Here are some of the most commonly used options for the docker pull command:
- --all-tags: Downloads all available versions of the image.
- --disable-content-trust: Disables image verification.
- -q, --quiet: Only displays the image ID.
Here’s an example of using the docker pull command to download the latest version of the “ubuntu” image from Docker Hub:
docker pull ubuntu
This will download the latest version of the “ubuntu” image from Docker Hub to your system.
You can also download a specific version of the image by specifying the image name and tag. For example, to download the “ubuntu” image with the “18.04” tag, you would use the following command:
docker pull ubuntu:18.04
This will download the “ubuntu” image with the “18.04” tag to your system.
5.8 docker build
The docker build command is used to build a Docker image from a Dockerfile. A Dockerfile is a text file that contains a set of instructions that Docker uses to build the image.
Here’s the basic syntax of the docker build command:
docker build [OPTIONS] PATH
The PATH parameter specifies the location of the Dockerfile and the build context. The build context is the set of files and directories that are used as the build input.
Here are some of the most commonly used options for the docker build command:
- -t, --tag: Specifies the name and optionally a tag to apply to the image.
- -f, --file: Specifies the path to the Dockerfile.
- --no-cache: Prevents the use of cached intermediate images.
- --pull: Forces a pull of the latest version of the base image.
Here’s an example of using the docker build command to build an image from a Dockerfile located in the current directory:
docker build -t myimage:mytag .
This will build a Docker image with the name “myimage” and the tag “mytag” using the Dockerfile located in the current directory.
Note that the . at the end of the command specifies the build context as the current directory. If your Dockerfile is located in a different directory, you would need to specify the path to that directory instead.
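The other options can be combined with this form; for example, assuming the Dockerfile lives in a docker/ subdirectory (a hypothetical layout), you might build without the layer cache like this:
# Build from a Dockerfile outside the default location, ignoring cached layers
docker build -t myimage:mytag -f docker/Dockerfile --no-cache .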
5.9 docker push
The docker push command is used to upload a Docker image to a registry, such as Docker Hub or a private registry. This command is used after you have built a Docker image and you want to make it available for others to use.
Here’s the basic syntax of the docker push command:
docker push [OPTIONS] NAME[:TAG]
The NAME parameter specifies the name of the image to upload, and the TAG parameter specifies the version of the image to upload. If you don’t specify a tag, Docker will assume you want to upload the “latest” version of the image.
Before you can push an image to a registry, you need to log in to the registry using the docker login command. Once you’re logged in, you can use the docker push command to upload the image.
Here’s an example of using the docker push command to upload an image to Docker Hub:
docker push myusername/myimage:mytag
This will upload the “myimage” image with the “mytag” tag to the “myusername” repository on Docker Hub.
Note that you need to have write access to the repository in order to push an image. If you’re trying to push to a private registry, you may also need to specify additional authentication options, such as a username and password or an authentication token.
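Putting the pieces together, a typical push workflow looks roughly like the following sketch (the image and account names are placeholders):
# Tag a local image with the repository it should be pushed to
docker tag myimage:mytag myusername/myimage:mytag
# Authenticate against Docker Hub (prompts for credentials)
docker login
# Upload the tagged image
docker push myusername/myimage:mytag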
5.10 docker exec
The docker exec command is used to execute a command inside a running Docker container. This is useful when you need to run a command in a container that is already running, rather than starting a new container.
Here’s the basic syntax of the docker exec command:
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
The CONTAINER parameter specifies the name or ID of the container in which you want to execute the command. The COMMAND parameter specifies the command you want to execute, and the ARG... parameter specifies any arguments to that command.
Here are some of the most commonly used options for the docker exec command:
- -i, --interactive: Keeps STDIN open even if not attached.
- -t, --tty: Allocates a pseudo-TTY.
- -d, --detach: Runs the command in the background, detached from the container’s console.
- --user: Specifies the username or UID to use when running the command.
Here’s an example of using the docker exec command to execute a command inside a running container:
docker exec -it mycontainer bash
This will execute the bash command inside the container with the name or ID mycontainer. The -it option allocates a pseudo-TTY and keeps STDIN open, so that you can interact with the container’s shell.
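You can also run one-off, non-interactive commands; for example (the container name is the same illustrative one as above):
# List a directory inside the running container without opening a shell
docker exec mycontainer ls /etc
# Run a command as a different user inside the container
docker exec --user root mycontainer whoami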
5.11 docker logs
The docker logs command is used to view the logs generated by a Docker container. When a container is started, it begins writing log messages to its STDOUT and STDERR streams, and these messages can be retrieved using the docker logs command.
Here’s the basic syntax of the docker logs command:
docker logs [OPTIONS] CONTAINER
The CONTAINER parameter specifies the name or ID of the container for which you want to view the logs.
Here are some of the most commonly used options for the docker logs command:
- -f, --follow: Follows the log output in real time.
- --since: Shows logs since a particular timestamp or duration.
- --tail: Shows a specific number of lines from the end of the logs.
- --timestamps: Shows timestamps with each log message.
Here’s an example of using the docker logs command to view the logs for a container:
docker logs mycontainer
This will show the logs generated by the container with the name or ID mycontainer. By default, the docker logs command prints the container’s entire log output; use the --tail option to limit it to the last few lines, or the -f option to follow the log output in real time.
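The options above are frequently combined; a couple of illustrative invocations:
# Follow the logs, starting from the last 100 lines, with timestamps
docker logs -f --tail 100 --timestamps mycontainer
# Show only log messages from the last 30 minutes
docker logs --since 30m mycontainer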
6. Dockerfile
A Dockerfile is a text file that contains a set of instructions that Docker uses to build a Docker image. The instructions in a Dockerfile specify how to build the image, what base image to use, what packages to install, what files to include, and what commands to run.
6.1 Creating a Dockerfile
Creating a Dockerfile involves defining the instructions necessary for building a Docker image. Here’s a step-by-step guide for creating a Dockerfile:
- Choose a base image: The first step in creating a Dockerfile is to choose a base image. This is the starting point for your Docker image. You can use any existing image as a base, or you can create your own.
- Set the working directory: Once you have chosen a base image, you need to set the working directory for your Docker image. This is the directory where your application code and other files will be located.
- Install dependencies: If your application has any dependencies, you will need to install them in your Docker image. This is typically done using package managers like apt-get, yum, or pip.
- Copy files: After installing dependencies, you can copy your application code and other files into the Docker image using the COPY instruction.
- Expose ports: If your application runs on a specific port, you need to expose that port in your Docker image using the EXPOSE instruction.
- Set environment variables: If your application requires any environment variables to be set, you can do so using the ENV instruction.
- Specify the command: Finally, you need to specify the command that should be run when the Docker image is launched. This is done using the CMD instruction.
Here’s an example Dockerfile for a simple Python Flask application:
# Use an official Python runtime as a parent image
FROM python:3.7-alpine

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Expose port 5000 for the Flask app
EXPOSE 5000

# Set environment variables
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0

# Specify the command to run the Flask app
CMD ["flask", "run"]
In this example, the Dockerfile sets the base image to the official Python 3.7 image for Alpine Linux, installs any required packages, exposes port 5000, sets two environment variables for the Flask application, and specifies the command to run the Flask app.
6.2 Building a Docker image from a Dockerfile
To build a Docker image from a Dockerfile, you need to run the docker build command in the directory where the Dockerfile is located. Here’s an example:
- Create a Dockerfile: First, create a Dockerfile for your application. You can follow the steps outlined in the previous section to create a Dockerfile.
- Navigate to the directory: Open a terminal or command prompt and navigate to the directory where your Dockerfile is located. For example, if your Dockerfile is located in /path/to/myapp, you would navigate to that directory using the command cd /path/to/myapp.
- Build the Docker image: Run the docker build command to build the Docker image. The basic syntax for this command is docker build -t <image-name> ., where <image-name> is the name you want to give to your Docker image and . specifies the current directory (where the Dockerfile is located). For example, if you want to name your image “myapp”, you would run the command docker build -t myapp .
- Wait for the build to complete: The docker build command will execute each instruction in your Dockerfile and build the Docker image. Depending on the complexity of your Dockerfile and the size of your application, this could take some time.
- Verify the Docker image: Once the build is complete, you can verify that your Docker image has been created by running the docker images command. This will list all the Docker images on your system, including the one you just created.
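As a compact sketch of the whole flow (using the placeholder path and image name from the steps above):
# Move into the build context, build the image, and confirm it exists
cd /path/to/myapp
docker build -t myapp .
docker images myapp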
7. Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define your application’s services, networks, and volumes in a single YAML file, which makes it easy to manage and deploy complex applications with multiple components. With Docker Compose, you can define how different Docker containers interact with each other, and how they share resources like volumes and networks.
Docker Compose works by reading a YAML file called docker-compose.yml, which defines the configuration for your Docker application. In this file, you can specify the services that your application needs, such as web servers, databases, and caching servers. You can also define the networks and volumes that your application requires, as well as any environment variables or configuration options that your services need.
Once you have defined your application in the docker-compose.yml file, you can use Docker Compose to start and stop your application with a single command. Docker Compose will automatically create and configure all the necessary Docker containers and networks, and will start and stop them as needed. This makes it easy to manage complex applications with multiple components, and allows you to easily test and deploy your application on different environments.
7.1 Creating a Docker Compose file
To create a Docker Compose file, you’ll need to follow these general steps:
- Decide on the services: First, you’ll need to decide which services your application needs. These can be any Docker images that you want to run as part of your application, such as a web server, database, or caching service.
- Define the services: Once you’ve decided on the services, you’ll need to define them in the docker-compose.yml file. For each service, you’ll need to specify the Docker image to use, any environment variables or configuration options, and any volumes or networks that the service requires.
- Define the networks and volumes: Next, you’ll need to define any networks or volumes that your application requires. You can define these in the same docker-compose.yml file, and then reference them from your services.
- Start the services: Once you’ve defined all the services, networks, and volumes, you can use the docker-compose up command to start all the services defined in your docker-compose.yml file.
Here’s an example docker-compose.yml file that defines two services, a web server and a database:
version: "3.9"
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./web:/usr/share/nginx/html
    depends_on:
      - db
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: myapp
      MYSQL_USER: myuser
      MYSQL_PASSWORD: mypass
    volumes:
      - ./db:/var/lib/mysql
In this example, the web service uses the nginx:latest image, maps port 80 to the host machine, mounts the ./web directory to the container’s /usr/share/nginx/html directory, and depends on the db service. The db service uses the mysql:latest image, sets some environment variables, and mounts the ./db directory to the container’s /var/lib/mysql directory.
You can save this file as docker-compose.yml in the root directory of your project, and then run docker-compose up to start the services. Docker Compose will automatically download the required Docker images and start the containers, allowing you to easily run and manage your application.
7.2 Running a Docker Compose file
To run a Docker Compose file, you’ll need to follow these steps:
- Navigate to the directory where your docker-compose.yml file is located using the terminal.
- Run the docker-compose up command to start the services defined in the docker-compose.yml file. This will download the required Docker images and start the containers.
- If you want to run the services in the background, use the docker-compose up -d command instead. This will start the containers in detached mode, allowing you to continue using the terminal.
- If you want to stop the services, use the docker-compose down command. This will stop and remove the containers and networks created by the docker-compose up command (add the -v flag to also remove the volumes it created).
- If you want to see the logs for the services, use the docker-compose logs command. You can also specify a specific service name to only see the logs for that service.
- If you want to scale a service to run multiple instances, use the docker-compose up --scale SERVICE_NAME=NUM_INSTANCES command. This will start multiple instances of the specified service.
- If you want to run a single command in a container, use the docker-compose run SERVICE_NAME COMMAND command. This will start a new container for the specified service and run the specified command inside it.
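As a compact sketch of this lifecycle (the web service name refers to the example Compose file in section 7.1):
# Start the stack in the background
docker-compose up -d
# Follow the logs of a single service
docker-compose logs -f web
# Stop the stack and remove the containers and networks it created
docker-compose down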
8. Docker Networking
Docker Networking refers to the process of connecting Docker containers with each other and with the outside world. Docker provides several networking options that allow you to customize how containers communicate with each other and the host machine.
Here are some common Docker networking concepts:
Network Type | Description
--- | ---
Bridge | The default network type in Docker, allowing containers to communicate with each other and the host machine on a private network
Host | Allows containers to use the host machine’s network stack, providing better performance but reducing isolation between containers
Overlay | Connects multiple Docker hosts into a single virtual network, useful for multi-host applications that need to communicate with each other
Macvlan | Gives a container a MAC address visible on the physical network, useful for certain types of network communication
None | Disables all networking for the container, useful for running containers in isolation
8.1 Creating custom Docker networks
Creating custom Docker networks can be useful for separating different groups of containers and providing network isolation. Here are the steps to create a custom Docker network:
First, list the existing Docker networks on your host machine using the command docker network ls.
Decide on a name for your custom network, and create it using the docker network create command. For example, to create a custom network named “my_network”, run the command:
docker network create my_network
Verify that the new network has been created by running the docker network ls command again. You should see your new network listed along with the default “bridge” network.
To connect a container to your custom network, add the --network option to the docker run command when starting the container. For example, to start a new container named “my_container” and connect it to the “my_network” network, run the command:
docker run --network=my_network --name=my_container IMAGE_NAME
You can verify that the container has been connected to the custom network by running the docker network inspect command, followed by the name of your custom network. This will display detailed information about the network, including a list of all containers connected to it.
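As a quick sketch of this in action (the my_network name is the one created above; the nginx and alpine images are just illustrative choices), two containers on the same user-defined network can reach each other by container name:
# Start a named container on the custom network
docker run -d --network=my_network --name=web nginx
# From a second container, resolve and ping the first one by name
docker run --rm --network=my_network alpine ping -c 1 web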
Custom Docker networks can also be useful when working with Docker Compose, as they allow you to group related containers together on a separate network. To specify a custom network in a Docker Compose file, add a networks section to the file and define your custom network.
By default, Docker Compose will create a custom network for each Compose file, with a name based on the directory name. However, you can also specify a custom network name using the name option in the networks section of the Compose file.
9. Docker Volumes
In Docker, a volume is a way to store data persistently outside of a container’s filesystem. Using volumes, you can separate your application code from its data, which can be useful for managing data over the lifetime of a container.
9.1 Creating a Docker volume
To create a Docker volume, you can use the docker volume create command, followed by a name for the volume. Here’s an example:
docker volume create my_volume
This will create a new volume named “my_volume”.
You can also create a volume when running a container using the --mount or -v option, followed by the name and location of the volume. For example:
docker run --mount source=my_volume,target=/app/data IMAGE_NAME
This will start a new container and mount the “my_volume” volume at the /app/data directory inside the container.
Note that when you create a volume using docker volume create, it is empty by default. If you want to initialize the volume with data from a container’s filesystem, mount the empty volume at a path that already contains files in the image; Docker copies those files into the volume the first time the container starts. For example:
docker run --mount source=my_volume,target=/app/data IMAGE_NAME
If the image already contains data at /app/data, that data is copied into the “my_volume” volume and persists even after the container is removed.
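To see the persistence in action, here is a small illustrative sketch using the alpine image and the volume name from above:
# Write a file into the volume from one container
docker run --rm -v my_volume:/data alpine sh -c 'echo hello > /data/greeting.txt'
# Read it back from a completely separate container
docker run --rm -v my_volume:/data alpine cat /data/greeting.txt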
9.2 Docker volume command options
Here is a table of common options for creating and managing Docker volumes:
Command | Description
--- | ---
docker volume create <name> | Creates a new Docker volume with the specified name.
docker volume ls | Lists all Docker volumes on the host machine.
docker volume inspect <name> | Displays detailed information about a Docker volume, including its mountpoint and other metadata.
docker volume rm <name> | Deletes a Docker volume. You cannot delete a volume that is currently in use by a container.
docker volume prune | Deletes all unused Docker volumes on the host machine.
--mount or -v option | Mounts a Docker volume when running a container. You can specify the source, target, and optional options. For example: --mount source=my_volume,target=/app/data,volume-driver=local
-v <host_path>:<container_path> option | Mounts a host directory or file as a volume inside the container. For example: -v /path/to/host/dir:/app/data
-v <name>:/container/path option | Mounts a Docker volume by name inside the container. For example: -v my_volume:/app/data
--mount type=bind,source=<host_path>,target=<container_path> option | Mounts a host directory or file as a bind mount inside the container. This is similar to the -v option, but provides more flexibility and control over the mount options. For example: --mount type=bind,source=/path/to/host/dir,target=/app/data,bind-propagation=shared
These are just a few examples of the many options available for managing Docker volumes. Be sure to refer to the Docker documentation for more information on volume management and usage.
10. Docker Swarm
Docker Swarm is a clustering and orchestration tool for Docker containers. It allows you to create and manage a swarm of Docker nodes, which can be used to deploy and scale containerized applications across a cluster of machines. Docker Swarm provides features such as load balancing, service discovery, and rolling updates, making it easier to manage large, distributed container deployments.
Docker Swarm uses a declarative approach to defining the desired state of a cluster, using a YAML file called a Compose file to describe the services and containers that should be running on the swarm. Once the Compose file is defined, you can use the Docker CLI to deploy and manage the services across the swarm.
A Docker engine can operate in one of two modes, and Swarm adds its own routing layer on top:
- Single-engine mode: The default mode, in which a node runs standalone containers; this is useful for development and testing purposes.
- Swarm mode: The mode used for managing a swarm of Docker nodes and running replicated services across them.
- Ingress routing mesh: Within swarm mode, the ingress network provides a simple and scalable way to route traffic on published ports into the swarm, regardless of which node receives the request.
Docker Swarm can be used in conjunction with other tools, such as Docker Compose, to manage complex container deployments across multiple nodes. It is a powerful tool for building scalable and resilient containerized applications.
10.1 Creating a Docker Swarm cluster
To create a Docker Swarm cluster, you need at least two nodes: one manager node and one or more worker nodes. Here are the steps to create a Docker Swarm cluster:
Create a Docker Swarm manager node:
docker swarm init --advertise-addr <MANAGER-IP-ADDRESS>
This command initializes Swarm mode on the node and makes it a manager node. The --advertise-addr option specifies the IP address that other nodes will use to connect to this node.
If you want to add additional manager nodes to the Swarm, run the following command on the existing manager node:
docker swarm join-token manager
This prints a docker swarm join command, including a manager token, that you can run on the other nodes to join the Swarm as managers.
To add worker nodes, run docker swarm join-token worker on the manager node to get the worker join command, then run it on each worker node:
docker swarm join --token <TOKEN> <MANAGER-IP-ADDRESS>:<PORT>
This command adds the node to the Swarm as a worker node.
To check the status of the nodes in the Swarm, run the following command on the manager node:
docker node ls
This command shows the list of nodes in the Swarm and their status.
Once the Swarm is set up, you can deploy services to it using a Docker Compose file or by using the Docker CLI. The manager node will handle the scheduling and orchestration of the services across the worker nodes.
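For example, assuming a Compose file similar to the one in section 7.1 saved as docker-compose.yml on the manager node, you could deploy it to the Swarm as a stack (the stack name is a placeholder):
# Deploy the stack and list the services it created
docker stack deploy -c docker-compose.yml mystack
docker stack services mystack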
10.2 Deploying services on a Docker Swarm cluster
To deploy services on a Docker Swarm cluster, you can use either a Docker Compose file or the Docker CLI. Here are the basic steps to deploy a service using the Docker CLI:
Create a Docker image for your service and push it to a Docker registry that is accessible to the Swarm nodes.
On the manager node, create a Docker service by running the following command:
docker service create --name <SERVICE-NAME> --replicas <NUMBER-OF-REPLICAS> <IMAGE-NAME>
This command creates a service with the specified name, number of replicas, and image. The replicas define how many instances of the service should be created and distributed across the worker nodes in the Swarm.
To check the status of the service, run the following command on the manager node:
docker service ls
This command shows the list of services in the Swarm and their status.
To scale the service, you can run the following command:
docker service scale <SERVICE-NAME>=<NUMBER-OF-REPLICAS>
This command changes the number of replicas for the specified service.
To update the service with a new Docker image, run the following command:
docker service update --image <NEW-IMAGE-NAME> <SERVICE-NAME>
This command updates the service with the new image.
To remove the service, run the following command:
docker service rm <SERVICE-NAME>
This command removes the service from the Swarm.
These are the basic steps to deploy services on a Docker Swarm cluster. Docker Swarm also provides more advanced features such as rolling updates, health checks, and service discovery.
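As a rough sketch of a rolling update (the service name, image, and timing values are illustrative), you can control how many replicas are updated at a time and how long Docker waits between batches:
# Create a service with explicit update settings
docker service create --name web --replicas 4 --publish 8080:80 --update-parallelism 2 --update-delay 10s nginx
# Roll out a new image two replicas at a time
docker service update --image nginx:alpine web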
11. Common Docker Issues
Here are some common Docker issues and how to solve them:
Docker container not starting
This issue can occur due to a variety of reasons, including incorrect Docker run commands, issues with the Docker image, or issues with the Docker daemon. To solve this issue, you can try the following solutions:
- Check the Docker logs for error messages and troubleshoot accordingly.
- Verify that the Docker image is correctly configured and try running the container with different options.
- Restart the Docker daemon and try starting the container again.
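In practice, the checks above usually come down to a few commands (the container ID is a placeholder, and the daemon restart assumes a systemd-based Linux host):
# Inspect the logs of the container that failed to start
docker logs <container_id>
# Review the container's configuration, entrypoint, and mounts
docker inspect <container_id>
# Restart the Docker daemon
sudo systemctl restart docker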
Docker container unable to connect to the internet
This issue can occur if the container is not configured with the correct DNS settings. To solve this issue, you can try the following solutions:
- Verify that the container is connected to the correct network.
- Verify that the container has the correct DNS settings configured.
- Restart the Docker daemon and try starting the container again.
Docker container running out of disk space
This issue can occur if the container is generating a large amount of data and is not configured with enough disk space. To solve this issue, you can try the following solutions:
- Increase the container’s disk space by using the “--storage-opt” option in the docker run command.
- Limit the container’s data generation by configuring the application running inside the container.
Docker container unable to communicate with other containers
This issue can occur if the containers are not on the same network or if the container ports are not correctly exposed. To solve this issue, you can try the following solutions:
- Verify that the containers are on the same network and can communicate with each other.
- Verify that the container ports are correctly exposed and can be accessed from other containers or the host machine.
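A quick way to check both points (names are placeholders, and this assumes the container image includes the ping utility) is to inspect the shared network and test connectivity from one container to the other:
# Confirm both containers appear in the network's container list
docker network inspect my_network
# Try to reach one container from the other by name
docker exec container1 ping -c 1 container2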
Docker image build failing
This issue can occur if the Dockerfile is not correctly configured or if there are issues with the Docker build context. To solve this issue, you can try the following solutions:
- Verify that the Dockerfile is correctly configured and follows best practices.
- Optimize the Docker build context by excluding unnecessary files and directories.
- Use the “--no-cache” option in the docker build command to ensure a clean build environment.
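The last two suggestions can be illustrated with a short sketch (the image name is a placeholder, and the .dockerignore entry is just an example of a directory worth excluding from the build context):
# A .dockerignore file in the build context keeps unnecessary files out of the build
echo "node_modules" >> .dockerignore
# Rebuild without the cache to rule out stale intermediate layers
docker build --no-cache -t myimage .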