Integrating Carina with Your Continuous Integration Pipeline
Rackspace recently announced the public beta for their hosted Docker offering, Carina. This is part of a strategic effort to provide a competitive in-house container solution that integrates with the Rackspace suite of tools and services, especially the famed “Fanatical Support.”
Up until now, you’ve been able to run Docker clusters on Rackspace infrastructure by provisioning VMs or bare metal servers and building your own Docker cluster. With Carina, both the complexity of doing this and the infrastructure overhead are greatly reduced.
In this blog post, I’ll talk about Carina, how to use it, and how to integrate it into your continuous deployment pipeline using Codeship.
How Does Rackspace’s Carina Work?
Supporting the Carina rollout is OpenStack Magnum, which provides container orchestration as a first-class resource. This allows OpenStack deployments to create Magnum nodes alongside traditional Nova nodes, using either Docker Swarm, Kubernetes, or Apache Mesos as an orchestration engine.
The end result is a homogeneous, automated, and scalable Docker cluster, running your favorite container orchestration engine, on top of LXC running on bare metal servers.
This is all accessed via a separate interface: the Carina API handles host scaling and orchestration engine control, while cluster access and container creation are handled via the standard Docker API. At this early stage, only Docker Swarm is supported by Carina.
Carina refers to hosts within the Docker cluster as “segments.” The separate term distinguishes them from traditional hosts (operating systems with the Docker Engine installed, running in virtual machines or on bare metal servers): a segment runs the Docker Engine directly in a container on top of LXC. The OpenStack containers team boasts a significant speed boost using this method.
Given the state of this beta, and the tendency of Rackspace to lean toward simple, user-friendly interfaces that sacrifice customization for usability, it’s hard to tell how much of Carina is incomplete and how much is intentionally unsupported.
Currently you can autoscale your Carina cluster, but cannot set limits or triggers on this. Many such features exist with sensible defaults but without the ability for advanced customization. Hopefully these aspects of the application can be built out in the future without sacrificing usability.
Using Carina
Carina supports both a web UI and a CLI backed by an API. You can find out how to install the CLI in the getting started docs. To get started using the web UI, visit the Carina site. Both interfaces provide a similar level of control over your cluster; however, the CLI has the more traditional feel of the core Docker toolset.
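If you’re on Linux, one quick way to get the CLI is to download the release binary directly. This is only a sketch; the install path is my own choice, and the release URL is the same one used in the Dockerfile later in this post, so check the getting started docs for the current version and other platforms.

# download the carina binary to somewhere on your PATH (may require sudo)
wget -O /usr/local/bin/carina https://github.com/getcarina/carina/releases/download/v0.9.1/carina-linux-amd64
chmod +x /usr/local/bin/carina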
It’s free to create a Rackspace account, and the Carina Beta is currently free to experiment with up to a certain size. To use the CLI, you’ll need to set your Rackspace username and API key as environment variables or pass them on the command line:
export CARINA_USERNAME=myuser
export CARINA_APIKEY=abcdef1234
You can now interact with your Carina clusters:
bfosberry~$ carina list
ClusterName    Flavor           Nodes    AutoScale    Status
testcluster    container1-4G    1        false        active
bfosberry~$ carina grow --nodes=2 testcluster
testcluster    container1-4G    1        false        growing
bfosberry~$ carina list
ClusterName    Flavor           Nodes    AutoScale    Status
testcluster    container1-4G    3        false        active
bfosberry~$ carina get testcluster
testcluster    container1-4G    3        false        active
bfosberry~$ carina delete testcluster
testcluster    container1-4G    3        false        deleting
Most commands support a synchronous mode: by specifying the --wait flag, the command will block until the remote action is complete.
bfosberry~$ carina create --nodes=2 --autoscale --wait testcluster2
testcluster2    container1-4G    2    true    active
Independent of your cluster control, you can interact with the underlying Docker orchestration engine the same way you would when using Docker Machine.
bfosberry~$ eval `carina env testcluster2`
bfosberry~$ docker ps
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES
08:50:38-bfosberry~$ docker info
Containers: 3
Images: 2
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 1
 5a00874f-6b1d-4c9e-bd92-411b9cd3a167-n1: 104.130.22.20:42376
  └ Containers: 3
  └ Reserved CPUs: 0 / 12
  └ Reserved Memory: 0 B / 4.2 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.18.21-1-rackos, operatingsystem=Debian GNU/Linux 7 (wheezy) (containerized), storagedriver=aufs
CPUs: 12
Total Memory: 4.2 GiB
Name: f78aba1baa5c
bfosberry~$ docker run -it ubuntu
root@1038df1314a7:/# exit
bfosberry~$ docker ps -a
CONTAINER ID    IMAGE          COMMAND                  CREATED           STATUS                       PORTS                                     NAMES
1038df1314a7    ubuntu         "/bin/bash"              28 seconds ago    Exited (0) 18 seconds ago                                              5a00874f-6b1d-4c9e-bd92-411b9cd3a167-n1/naughty_mestorf
f78aba1baa5c    swarm:1.0.0    "/swarm manage -H=tcp"   5 minutes ago     Up 5 minutes                 2375/tcp, 104.130.22.20:2376->2376/tcp    5a00874f-6b1d-4c9e-bd92-411b9cd3a167-n1/swarm-manager
ff87c8a61e49    swarm:1.0.0    "/swarm join --addr=1"   5 minutes ago     Up 5 minutes                 2375/tcp                                  5a00874f-6b1d-4c9e-bd92-411b9cd3a167-n1/swarm-agent
cbe07f4d6f56    cirros         "/sbin/init"             5 minutes ago                                                                            5a00874f-6b1d-4c9e-bd92-411b9cd3a167-n1/swarm-data
At one point, I did create a number of clusters which ended up in an error state. There was no immediate mechanism to retry, no error log, and no auto-generated support ticket on my account. There was also no mechanism in the control panel to reach out to support for help, but this is a beta.
As the product moves out of beta, there should be more visibility and access. In general the Rackspace support team is very responsive and the Carina team is active in their IRC channel, so be sure to reach out there for support.
Integrating with Your CD Pipeline
Since Carina exposes a standard Docker endpoint, getting access and deploying changes is fairly simple. I’ll give a quick example of how to get and share access to your Docker cluster within a CD pipeline using Carina; however, keep in mind that this workflow does not apply if a container PaaS, such as Rancher or Deis, is running on top of your Docker cluster.
In that scenario, you can just deploy to the relevant PaaS without needing to interact directly with Carina. In this example, I’ll be using the Codeship Docker platform for my CD pipeline.
In order to deploy your application directly to the Docker cluster, you’ll need to first get access to your Carina account and then pull down Docker credentials to connect directly. You can then make changes directly to your cluster, such as pulling images and creating and destroying containers.
Setting up Carina access
Let’s get started by saving our Carina credentials and encrypting them using our project AES key.
# carina.env
CARINA_USERNAME=myuser
CARINA_APIKEY=abcdef12345
$ jet encrypt carina.env carina_env.encrypted
$ git add carina_env.encrypted
Unless we plan on deleting carina.env, it’s probably wise to add it to our .gitignore so we don’t accidentally commit it to our repo.
# .gitignore
carina.env
*.aes
We can now reference this encrypted environment data from within our deploy step by adding it to the relevant service. With this in place, any steps run on the deploy service will have the credentials we encrypted as part of the environment.
# codeship-services.yml
carina_deploy:
  build: ./
  encrypted_env_file: carina_env.encrypted
We’ll also need to build a Docker image with the carina binary, which will allow us to pull down our Docker cluster credentials. Using this image, we can make changes to our Docker cluster with standard Docker commands.
FROM debian:jessie

# install deps
RUN apt-get update && apt-get install -y apt-transport-https wget

# install Docker
RUN echo 'deb https://apt.dockerproject.org/repo debian-jessie main' > /etc/apt/sources.list.d/docker.list
RUN apt-get update && apt-get install -y --force-yes docker-engine

# install carina
RUN wget -O /usr/bin/carina https://github.com/getcarina/carina/releases/download/v0.9.1/carina-linux-amd64
RUN chmod +x /usr/bin/carina
root@952340f468c5:/# carina credentials --path="/tmp/carina" mycluster
root@952340f468c5:/# source '/tmp/carina/docker.env'
root@952340f468c5:/# docker info
Containers: 5
Images: 3
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 1
 5a00874f-6b1d-4c9e-bd92-411b9cd3a167-n1: 104.130.22.20:42376
  └ Containers: 5
  └ Reserved CPUs: 0 / 12
  └ Reserved Memory: 0 B / 4.2 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.18.21-1-rackos, operatingsystem=Debian GNU/Linux 7 (wheezy) (containerized), storagedriver=aufs
CPUs: 12
Total Memory: 4.2 GiB
Name: f78aba1baa5c
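For context, the docker.env file pulled down here sets the standard Docker client environment variables, pointing the client at the cluster’s Swarm endpoint and TLS certificates. Sourcing it should be roughly equivalent to exports along these lines (the host, port, and path are illustrative, borrowed from the docker info output and credentials path above):

# illustrative only; your cluster's endpoint and credential path will differ
export DOCKER_HOST=tcp://104.130.22.20:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/tmp/carina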
By wrapping these Carina commands and any Docker commands into a script, you can run this from within your deployment pipeline.
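For illustration, here’s a minimal sketch of what such a script might look like. The script name deploy_myapp matches the step below, while the image name myorg/myapp, the container name myapp, and the cluster name mycluster are placeholders I’ve chosen for this example.

#!/bin/bash
# deploy_myapp
set -e

# CARINA_USERNAME and CARINA_APIKEY come from the encrypted env file
# attached to the carina_deploy service
carina credentials --path="/tmp/carina" mycluster
source /tmp/carina/docker.env

# pull the updated image, then replace the running container
docker pull myorg/myapp:latest
docker rm -f myapp 2>/dev/null || true
docker run -d --name myapp myorg/myapp:latest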
# codeship-steps.yml
- service: carina_deploy
  command: deploy_myapp
Finally, this deployment can be executed using jet steps. Using a simple script and the Carina binary, your continuous deployment pipeline can update a Docker cluster by stopping and starting containers and pulling updated images.
The Future of Carina
In its current form, Carina is a highly effective, simple Docker cluster provisioning tool. For advanced users, however, it’s not particularly useful yet: it lacks the advanced controls that would let a cluster be used in highly efficient ways, as well as the command line access needed to manually install plugins or make other alterations.
Much of the power of Carina will be realized through users running platforms such as Rancher and Deis on top of it, and through Rackspace gradually integrating its other services. This will provide a highly homogeneous platform for running applications, not just containers.
Carina lags behind other Docker platforms in features, so as the product develops, it’ll be interesting to see how Rackspace plans to support the plugin ecosystem and newer features like CRIU. There are plans to provide registry support, so if Rackspace can build a strong base of Docker support talent and provide a well-supported Docker platform, it could be a real competitor to the upcoming Tutum GA product.
As a platform, Carina holds a lot of promise. With the right direction, the product could become competitive amongst both performance-focused and user-friendly container hosting providers.
Reference: Integrating Carina with Your Continuous Integration Pipeline from our JCG partner Brendan Fosberry at the Codeship Blog.