Running a Private Docker Registry on EC2
Docker provides an open source registry implementation called “Distribution,” making it simple for anyone to run a private Docker registry. In this article, I’ll provide a brief introduction to the AWS services integrated with Docker and describe how to use AWS for hosting your own registry.
Running server software in the AWS cloud has several advantages:
- Sophisticated network infrastructure management
- Sophisticated access control tooling
- Elastic load balancers
- Integrated DNS management
AWS also provides Docker container management services via the Elastic Container Service (ECS) and Elastic Beanstalk. These two services differ in one important way: ECS is focused on managing tasks and services (run as containers) on preprovisioned and configured hardware, while Elastic Beanstalk is a tool for provisioning whole stacks that are purposed to run a specified application.
Elastic Beanstalk has traditionally defined the platform and context for the applications it runs. With Docker integration you can now run a platform-agnostic application in containers, and ECS highlights the strengths of Docker-based deployment better than Elastic Beanstalk does. For that reason, and in consideration of the wealth of existing content on Elastic Beanstalk, this article will use ECS.
The Elastic Container Service (ECS) is not a host-provisioning service. While the management console does provide a screen for provisioning a basic cluster infrastructure, it only runs a CloudFormation template on the backend. After the cluster is provisioned with that template, changes to the new infrastructure must be made directly with the resource-specific tooling. This also means that integrating with existing infrastructure, or specializing the setup, is difficult to achieve with that automated cluster provisioner. If you’re comfortable with AWS in general, you should provision your own ECS-integrated EC2 hosts (more on this in a moment).
ECS is in the business of managing “clusters,” “tasks,” and “services.” A cluster is a name that resolves to a set of member hosts. When an ECS host comes online, it starts an agent that connects to the ECS service and registers with a named cluster. A task is a “run once” container, and a service is a container that should be restarted if it stops. When you create a new task or service, you specify the cluster where it should be run. The cluster handles scheduling tasks and service instances to member hosts.
To integrate any Docker host with an ECS cluster, that host must run the ECS agent container and set the name of the cluster it should join in /etc/ecs/ecs.config on the host. If you were provisioning an ECS host to run in a cluster named “default,” you might set the user data for the launch configuration to the following script:
#!/bin/bash
echo ECS_CLUSTER=default >> /etc/ecs/ecs.config
For now, use the ECS section of the AWS Management Console to create a default cluster. It will provision everything down to a new VPC. You could build all of that yourself, but doing so is tedious and nuanced; be ready for a longer project if you head down that route. ECS instances are created from an Amazon Linux AMI on which Docker and the ECS agent have been preinstalled.
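For completeness, the CLI equivalent of registering a cluster name is sketched below. Note that, unlike the console wizard, aws ecs create-cluster only records the cluster name; it does not provision any EC2 hosts, VPC, or load balancer.

# Register an (empty) cluster named "default". Hosts join it later via the
# ECS agent configuration shown in the user data script above.
aws ecs create-cluster --cluster-name default

# Confirm the cluster is registered.
aws ecs list-clusters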
Before you begin, make sure that you’ve created an AWS account, installed the AWS command line interface (CLI), and configured your environment to work with your account. You can find information about installing and configuring the CLI here.
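If your environment is not configured yet, a minimal setup looks roughly like the following; the region shown is only an example.

# Store credentials and a default region for the CLI.
aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXX
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: us-east-1
# Default output format [None]: json

# Verify that the CLI can reach your account.
aws iam get-user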
Prepare AWS Resources for Your Private Docker Registry
The Docker Distribution project is an open source implementation of the Docker Registry API. While the ECS cluster should already be provisioned with all of the AWS resources required to run a container, the Distribution project has optional application-level integration with S3. To take advantage of it, you will need to provision a few application-level AWS dependencies.
The first AWS resource that you will create is an S3 bucket where the registry will store its data. You can create a new bucket through the management console or from the command line. Replace “my-docker-registry-data” with a bucket name of your choosing in the following command:
# Use the AWS CLI "make bucket" (mb) command.
aws s3 mb s3://my-docker-registry-data
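You can confirm that the bucket was created with a quick listing:

# List your buckets; the new bucket should appear in the output.
aws s3 ls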
Earlier I mentioned AWS’s sophisticated access control tooling. Well, a new S3 bucket is of no use if your private Docker registry cannot read from it or write to it. The next step is to create a new IAM policy that grants read/write privileges to the bucket.
Create a new file named “registry-policy.json” and include the following document. Make sure to swap out “my-docker-registry-data” with the name you selected for your new S3 bucket.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::my-docker-registry-data" ] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::my-docker-registry-data/*" ] } ] }
Once you’ve saved the policy file, register it in AWS with the following command. This action will require IAM management privileges:
aws iam create-policy --policy-name registryaccess --policy-document file://registry-policy.json
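If you want to avoid looking up your account ID separately, note that the response from create-policy includes the full policy ARN, which embeds it. The output looks something like the following (abridged; field order may differ):

# {
#     "Policy": {
#         "PolicyName": "registryaccess",
#         ...
#         "Arn": "arn:aws:iam::<YOUR AWS ACCOUNT ID>:policy/registryaccess"
#     }
# }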
Next, create a new IAM user and credentials for your registry and attach the policy you just created. Choose a username that is specific to the use case; I used “Registry.” You can create the user and its credentials from the command line with these two commands:
# Create a user
aws iam create-user --user-name Registry

# Create credentials for the new user
aws iam create-access-key --user-name Registry

# Output similar to:
# {
#     "AccessKey": {
#         "UserName": "Registry",
#         "Status": "Active",
#         "CreateDate": "2015-03-09T18:39:23.411Z",
#         "SecretAccessKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
#         "AccessKeyId": "AKIAXXXXXXXXXXXXX"
#     }
# }
Note the AccessKeyId and SecretAccessKey provided in the response output. You will need to provide these values to the registry so that it can authenticate with AWS as the Registry user. Before you move on, attach the policy you created to the new user with the following command. Make sure to substitute <YOUR AWS ACCOUNT ID> with your actual account ID.
aws iam attach-user-policy --policy-arn arn:aws:iam::<YOUR AWS ACCOUNT ID>:policy/registryaccess --user-name Registry
Once the policy has been attached, the Registry user will be able (and only able) to read, write, and list files in the bucket you created. That is the last piece of preparation required before you launch your registry.
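If you want to double-check the wiring before moving on, you can list the policies attached to the user:

# Confirm that the registryaccess policy is attached to the Registry user.
aws iam list-attached-user-policies --user-name Registry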
Launching a Private Docker Registry with ECS
Launching any application in an ECS cluster means identifying the image or images that you want to run, defining containers that will use those images, and setting scaling options. You are going to launch a specialization of the "registry:2" image in this example.
You can build your own specialization with a config.yml and Dockerfile similar to the two that follow. You can find documentation for the Docker Distribution configuration file here. If you’d rather reuse resources than worry about creating a Docker Hub repository, then you should use an image I’ve released with this exact configuration. The repository name is "allingeek/registry:2-s3."
If you want to further specialize your configuration, you’ll need to build your own or fork an existing image.
# config.yml
version: 0.1
log:
  fields:
    service: registry
http:
  addr: :5000
storage:
  cache:
    layerinfo: inmemory
  s3:
    accesskey: <your awsaccesskey>
    secretkey: <your awssecretkey>
    region: <your bucket region>
    bucket: <your bucketname>
    encrypt: true
    secure: true
    v4auth: true
    chunksize: 5242880
    rootdirectory: /

# Dockerfile
FROM registry:2
COPY config.yml /etc/docker/registry/config.yml
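If you do build your own image, the build-and-push steps look roughly like this. "yourname/registry" is a hypothetical Docker Hub repository name; substitute your own.

# Build the specialized registry image from the Dockerfile above.
docker build -t yourname/registry:2-s3 .

# Log in to Docker Hub and push the image so ECS hosts can pull it.
docker login
docker push yourname/registry:2-s3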
No matter what happens, do not include your AWS Access Key or AWS Secret Key in these files. These need to be committed to version control to be of any use, and the last thing you want to do is commit secrets to version control. You will see in the next section that there are better ways to inject configuration at runtime. Once you have identified the image you are going to run in ECS, you need to define a task family and service.
A task family defines a versioned set of container definitions. In this example we will only need a single container. Create a new file named “registry-task.json” and paste in the following document. Replace <YOUR NEW IAM USER ACCESS KEY> and <YOUR NEW IAM USER SECRET ACCESS KEY> with the AccessKeyId and SecretAccessKey returned by the create-access-key command you ran earlier. You also need to replace <YOUR S3 BUCKET REGION> and <YOUR S3 BUCKET NAME> with the appropriate values.
[ { "name": "registry", "image": "allingeek/registry:2-s3", "cpu": 1024, "memory": 1000, "entryPoint": [], "environment": [ { "name": "REGISTRY_STORAGE_S3_ACCESSKEY", "value": "<YOUR NEW IAM USER ACCESS KEY>" }, { "name": "REGISTRY_STORAGE_S3_SECRETKEY", "value": "<YOUR NEW IAM USER SECRET ACCESS KEY>" }, { "name": "REGISTRY_STORAGE_S3_REGION", "value": "<YOUR S3 BUCKET REGION>" }, { "name": "REGISTRY_STORAGE_S3_BUCKET", "value": "<YOUR S3 BUCKET NAME>" } ], "command": ["/etc/docker/registry/config.yml"], "portMappings": [ { "hostPort": 5000, "containerPort": 5000, "protocol": "tcp" } ], "volumesFrom": [], "links": [], "mountPoints": [], "essential": true } ]
The environment variables declared in this container definition will override the values for the corresponding entries in the config.yml file packaged with the image. This document will be versioned within AWS and accessible only to users with permissions to the ECS service. Since this document contains secret material, do not otherwise commit this file to version control or make it available in any public location.
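If you want to sanity-check this configuration before involving ECS at all, you can run the same image locally with the same environment variables. This is just a local smoke test under the same placeholder values, not part of the deployment:

# Run the registry locally with the same overrides the task definition uses.
docker run -d --name registry-test -p 5000:5000 \
  -e REGISTRY_STORAGE_S3_ACCESSKEY=<YOUR NEW IAM USER ACCESS KEY> \
  -e REGISTRY_STORAGE_S3_SECRETKEY=<YOUR NEW IAM USER SECRET ACCESS KEY> \
  -e REGISTRY_STORAGE_S3_REGION=<YOUR S3 BUCKET REGION> \
  -e REGISTRY_STORAGE_S3_BUCKET=<YOUR S3 BUCKET NAME> \
  allingeek/registry:2-s3

# The registry should answer on port 5000.
curl http://localhost:5000/v2/

# Clean up the test container when you are done.
docker rm -f registry-test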
Once you’ve created the container definition list file, register the new task family with the following command:
aws ecs register-task-definition --family registry --container-definitions file://registry-task.json
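You can verify that the task family was registered, and see the revision number ECS assigned, with the following command:

# Show the most recent revision of the "registry" task family.
aws ecs describe-task-definition --task-definition registry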
Next you need to define a service that will run on your cluster. You can find the CLI documentation for the create-service command here. The load balancer membership features provided by ECS are a wonderful time saver, so you should try them out here.
Create a new document describing your service. Name the new file “registry-service.json.” Copy and paste the following into that file and replace <YOUR ELB NAME> with the name of the load balancer that was created for your cluster. If you used the cluster provisioner provided by the management console, then this should be the only load balancer in that VPC. If you have built your own ECS nodes in your own VPC and registered your own cluster, then you should create a new ELB in that VPC.
{ "serviceName": "s3-registry-elb", "taskDefinition": "registry", "loadBalancers": [ { "loadBalancerName": "<YOUR ELB NAME, something like: EC2Contai-EcsElast-S06278JGSJCM>", "containerName": "registry", "containerPort": 5000 } ], "desiredCount": 1, "role": "ecsServiceRole" }
A quick note about the IAM role named “ecsServiceRole.” This is an IAM role that should have been created during cluster provisioning. If you encounter problems where this role has not been created, you can create it yourself with the following policy:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "elasticloadbalancing:Describe*", "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "ec2:Describe*", "ec2:AuthorizeSecurityGroupIngress" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::ecs-registry-demo" ] }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::ecs-registry-demo/*" ] } ] }
Once you’ve got the service description document created (and the requisite IAM role in place), go ahead and launch the service in your cluster. By default, ECS creates a cluster named “default,” and you can omit the “cluster” argument in the following command. But if you created your own, you will need to replace <YOUR CLUSTER NAME> with the name you chose.
aws ecs create-service --service-name s3-registry-elb --cluster <YOUR CLUSTER NAME> --cli-input-json file://registry-service.json
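The service can take a minute or two to place its task. You can watch its progress from the CLI:

# Check the deployment status and running count for the new service.
aws ecs describe-services --cluster <YOUR CLUSTER NAME> --services s3-registry-elb

# List the tasks the service has scheduled on the cluster.
aws ecs list-tasks --cluster <YOUR CLUSTER NAME>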
When the service has come up, check the instance membership of your ELB and the listener configuration. You should find that port 80 is being forwarded to port 5000 on the ECS node where the service has been deployed. When it is, fire up a web browser and hit http://<YOUR ELB CNAME>/v2/. You should get a response like:
{ }
Now that is what I’d call anticlimactic! You have deployed an insecure and unauthenticated Docker registry. A better test would be to push an image and try pulling it again. If you want to do that, you’ll need to make a configuration change on your local Docker daemon.
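Because the ELB listener in this example speaks plain HTTP, your local Docker daemon will refuse to push to the registry unless you mark it as insecure. A minimal sketch of that change and a push/pull round trip follows; the exact mechanism for passing daemon options varies by platform and Docker version.

# Allow the local Docker daemon to talk to the registry over plain HTTP.
# On many Linux installations this goes in /etc/docker/daemon.json:
# {
#   "insecure-registries": ["<YOUR ELB CNAME>"]
# }
# Restart the Docker daemon after making the change, then:

# Tag any local image with the registry's address and push it.
docker pull busybox
docker tag busybox <YOUR ELB CNAME>/busybox
docker push <YOUR ELB CNAME>/busybox

# Remove the local copy and pull it back from your new registry.
docker rmi <YOUR ELB CNAME>/busybox
docker pull <YOUR ELB CNAME>/busybox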
In Conclusion
The Elastic Container Service and ECS nodes provide easy integration with AWS’ existing scaling and load balancing technology. Its integration with Docker Hub makes getting started simple. The bulk of the complexity in working with this service is in provisioning the cluster and related infrastructure. With that in place, ECS makes defining and deploying services as simple as a few commands.
You can independently expand on this example in a few different ways. First, secure your private Docker registry by uploading an SSL certificate to AWS and configuring your load balancer with an HTTPS listener. Second, consider implementing a registry authentication mechanism; instructions can be found in the Distribution configuration documentation. Last, try scaling the service out to multiple instances and see how the service maintains ELB instance membership.
Reference: Running a Private Docker Registry on EC2 from our JCG partner Jeff Nickoloff at the Codeship Blog.