
Dual Deployment: A Low-Risk Way to Run Containers in Production

“How many of you are running containers in production?”

I’ve heard this question asked many times since DockerCon 2014. Docker adoption has been meteoric over the past three years, but containers in production are still proving to be a challenge. In December 2015, Robin Systems surveyed 200 representatives from various industries about their container adoption status. Of the respondents, 36 percent are running containers in production, and 59 percent are planning to adopt, are still investigating, or are simply playing with containers.

At Demonware, we have been using Docker for over two years. However, we’re still very much within the “planning to adopt, still investigating, or simply playing with containers” category. Why is the jump to production so difficult? Is there a way to massage containers into production in a low-risk, low-cost manner?

As a member of the Build Engineering (BE) team, I have worked with development teams to Dockerize their continuous integration (CI) pipelines. When the BE team started using Docker in September 2013, we almost unanimously agreed that we could leverage containers in build, test, and development environments without a hard requirement to run containers in production. Our attitude has since changed.

The benefits of containerizing our CI pipelines, tooling, build infrastructure, and test infrastructure have been immense. We would like to see similar benefits in production. So what is stopping us?

Most of the barriers aren’t technical. There is still concern in some quarters that containers are not the right technology to use. We wanted to move forward while reassuring concerned parties that we are doing so in a controlled and low-risk manner.

Dual Deployment

The plan was to tweak the existing continuous delivery (CD) pipeline to move containers closer to production without any risk to existing deployments. We already had a number of disparate build pipelines responsible for building service packages (RPM), service containers (Docker), and Amazon Machine Images (AMI).

We combined these pipelines to produce a “dual deployment” AMI artifact. This AMI artifact is created using Packer. Packer is a wonderful tool for building and provisioning images of any type and supports a wide range of builders and provisioners. Packer is completely free and produces reliable, repeatable images based on a JSON-format template. A sample template is shown below.

{
  "variables": {
    "base_image": "change me with the '-var base_image=whatever' arg",
    "new_image": "change me with the '-var new_image=whatever' arg",
    "access_key": "change me with the '-var access_key=whatever' arg",
    "secret_key": "change me with the '-var secret_key=whatever' arg",
    "script": "change me with the '-var script=whatever' arg",
    "script_args": "change me with the '-var script_args=whatever' arg"
  },
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-west-2",
    "vpc_id": "",
    "subnet_id": "",
    "instance_type": "t2.micro",
    "access_key": "{{user `access_key`}}",
    "secret_key": "{{user `secret_key`}}",
    "ssh_username": "centos",
    "ami_name": "{{user `new_image`}} {{timestamp}}",
    "source_ami": "{{user `base_image`}}",
    "ami_block_device_mappings": [
      {
        "device_name": "/dev/sda1",
        "volume_size": 30
      }
    ],
    "ssh_pty": true,
    "tags": {
      "project": "demo"
    },
    "run_tags": {
      "project": "demo"
    }
  }],
  "provisioners": [
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E '{{ .Path }}' {{user `script_args`}}",
      "script": "{{user `script`}}"
    },
    {
      "type": "shell",
      "expect_disconnect": true,
      "inline": [
        "echo 'Rebooting for bootstrap changes to take effect.'; sudo systemctl reboot"
      ]
    },
    {
      "type": "shell",
      "pause_before": "30s",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E '{{ .Path }}'",
      "script": "install_service_rpm.sh"
    },
    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E '{{ .Path }}'",
      "script": "build_service_container.sh"
    }
  ]
}

The “dual deployment” AMI build has two parts.

  1. The service package (RPM) is installed, configured, and ready to run when the EC2 instance is started.
  2. The service container, based on the same source as our service package, is built during the AMI build stage. (A sketch of the two provisioner scripts that do this follows.)
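
The install_service_rpm.sh and build_service_container.sh scripts referenced in the template are specific to our service, so they aren’t shown here. A minimal, hypothetical sketch of each might look like the following; the package name, repository URL, and image tag are placeholders:

#!/bin/bash
# install_service_rpm.sh - minimal sketch: install and enable the service RPM
# ("my-service" is a placeholder package name)
set -euo pipefail
yum install -y my-service
systemctl enable my-service

#!/bin/bash
# build_service_container.sh - minimal sketch: build the service container during the AMI build
# (the repository URL and image tag are placeholders)
set -euo pipefail
systemctl start docker   # make sure the Docker daemon is running (it was enabled in bootstrap.sh)
git clone https://github.com/example/my-service.git /tmp/my-service
docker build -t my-service:latest /tmp/my-service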

Benefits

The dual deployment AMI allows us to deploy our service in the usual manner while also giving us the option to route traffic through the service container for testing purposes. This required only a small tweak to the AMI Packer template and extended the build time by a few minutes.

This is a low-risk method to get containers into production by piggybacking on the existing CD pipeline. Starting the service container is still a manual task, and we currently use it to test the container. When we start the container, we use the --net=host docker run option so that traffic can be routed through the service container.
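
As a rough illustration, starting the container on the host network might look like this; the image name and port are hypothetical:

# Start the service container on the host network (image name is a placeholder)
sudo docker run -d --net=host --name my-service my-service:latest
# Traffic arriving on the VM interface now reaches the containerized service, e.g.:
curl http://localhost:8080/healthcheck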

Let’s walk through an example of the flow using the sample Packer template for AWS shown above.

An Example of Dual Deployment

In this example, our service is installed on an EC2 instance (CentOS 7) and can handle traffic using the VM interface. The same code is containerized and available to handle traffic using the host (VM) interface. Currently, we manually redirect traffic for testing purposes.

Building a dual deployment environment using Packer

The Packer template aws.centos.json is used to build an Amazon Machine Image. This image will be bootstrapped, our service will be installed as an RPM, and our service container will be built.

The bootstrap.sh script can be as simple or as complex as you need it to be. We use it to install some helper tools and Docker.
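
A minimal sketch of what bootstrap.sh might contain on CentOS 7 is shown below; the helper packages listed are placeholders for whatever your environment needs:

#!/bin/bash
# bootstrap.sh - minimal sketch: install helper tools and Docker on CentOS 7
set -euo pipefail

# Helper tools (placeholders - install whatever your environment needs)
yum install -y epel-release
yum install -y git vim wget

# Install Docker and enable it so it starts after the reboot provisioner
yum install -y docker
systemctl enable docker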

The build_service_container.sh script pulls the service container code from GitHub and builds the container image as part of the AMI build. The aws.centos.json template supports six arguments:

    base_image : The AMI ID of the base image you want to build from.
    new_image  : The name of the new AMI you are building.
    access_key : Your AWS access key.
    secret_key : Your AWS secret key.
    script     : The name of the script used to bootstrap the base image.
    script_args: Additional arguments to pass to the script.

Create a primed CentOS 7 AMI that includes both the service and the service container:

packer build -var 'base_image=ami-c7d092f7' -var 'new_image=CentOS-7-x86_64 (Demo)' -var 'script=bootstrap.sh' -var 'access_key=12345674345456454' -var 'secret_key=erewrwrw2424esfsfdfwrwerwe2423wer' -var 'script_args=' aws.centos.json

When the build command completes successfully, you will have an AMI with the service installed and with the service container built.
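
To sanity-check the result, you might launch an instance from the new AMI and confirm that both halves of the dual deployment are present; the service name below is a placeholder:

# On an instance launched from the new AMI ("my-service" is a placeholder)
systemctl status my-service   # the RPM-installed service should be running
sudo docker images            # the service container image should already be built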

Our CD pipeline has supported AMI builds and RPM-based service installs for a while. We wanted to run a single containerized service with as little disruption to the existing pipeline as possible. By tweaking the Packer templates, we had a simple, low-risk way to get our containerized services into production.

Conclusion

We are currently testing our containerized services in these “dual deployment” environments. Some of the fear and stigma of putting containers into production has been removed. We understand the deployment of containers better, and we are piggybacking on a well-defined CD pipeline.

This post doesn’t aim to solve any technical barriers to deploying containers in production. It merely proposes a way to use existing processes to introduce containers into a CD pipeline in a low-risk manner.

So often the first step is the hardest; this approach may provide a less intimidating way of getting your services closer to a containerized future in production.
