Couchbase on Kubernetes
This blog is possible because of this tweet!
Had a great #Couchbase #Kubernetes hacking session with @saturnism, learned a lot, look forward to some nice blogs.
— Arun Gupta (@arungupta) February 27, 2016
Kubernetes is an open source orchestration system by Google for Docker containers. It manages containerized applications across multiple hosts and provides basic mechanisms for deployment, maintenance, and scaling of applications.
It allows the user to provide declarative primitives for the desired state, for example "need 5 Couchbase servers". Kubernetes self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, then ensure that this state is met. The user just defines the desired state and Kubernetes ensures that the state is maintained at all times on the cluster.
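To make that concrete, here is a minimal sketch (my own, not from the original post) of a Replication Controller manifest that declares "need 5 Couchbase servers", fed to kubectl through a shell here-document once a cluster is up. The image name and port match the arungupta/couchbase image used later in this post:

# a hypothetical desired-state declaration: "need 5 Couchbase servers"
cat <<EOF | ./cluster/kubectl.sh create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase
spec:
  replicas: 5
  selector:
    run: couchbase
  template:
    metadata:
      labels:
        run: couchbase
    spec:
      containers:
      - name: couchbase
        image: arungupta/couchbase
        ports:
        - containerPort: 8091
EOF

If a pod dies, the Replication Controller notices that the actual count has dropped below 5 and schedules a replacement. That is the self-healing in action.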
Key Concepts of Kubernetes explains these concepts in more detail.
This multi-part blog series will show how to run Couchbase on Kubernetes in multiple ways. The first part starts with a simple setup using Vagrant.
Getting Started with Kubernetes
There are multiple ways to run Kubernetes, but I found the simplest (though not necessarily the most predictable) way is to run it using Vagrant.
- Download the latest Kubernetes release, 1.1.8 as of this writing, and expand the archive.
- Start the Kubernetes cluster as:
cd kubernetes
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh
This shows the output as:
kubernetes-1.1.8 > ./kubernetes/cluster/kube-up.sh
... Starting cluster using provider: vagrant
... calling verify-prereqs
... calling kube-up
Bringing machine 'master' up with 'virtualbox' provider...
Bringing machine 'minion-1' up with 'virtualbox' provider...
==> master: Importing base box 'kube-fedora21'...

. . .

Validate output:
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   nil
scheduler            Healthy   ok                   nil
etcd-0               Healthy   {"health": "true"}   nil
etcd-1               Healthy   {"health": "true"}   nil
Cluster validation succeeded
Done, listing cluster services:

Kubernetes master is running at https://10.245.1.2
Heapster is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://10.245.1.2/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
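Once the cluster is up, a quick sanity check (my addition, not part of the original run) is to ask the master for its registered nodes:

# list the nodes registered with the master; names and status will vary
./cluster/kubectl.sh get nodes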
Run Couchbase on Kubernetes Cluster
The easiest way to start running a Docker container in Kubernetes is using the kubectl run command.
The command usage is:
kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [flags]
The command runs a particular image, possibly replicated. The image replication is handled by creating a Replication Controller to manage the created container(s).
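For instance, an illustrative invocation (not the one used in this post, which sticks to the simpler form below) could ask for two replicas and record the Couchbase admin port on each pod:

# illustrative only: two replicas, admin port 8091 recorded on the pods
./cluster/kubectl.sh run couchbase --image=arungupta/couchbase --replicas=2 --port=8091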
The complete list of options for this command can be seen using:
./cluster/kubectl.sh run --help
Couchbase Docker Container explains the different Docker containers for Couchbase. For this blog, we’ll use the arungupta/couchbase image as it is pre-configured.
./cluster/kubectl.sh run couchbase --image=arungupta/couchbase
This shows the output:
replicationcontroller "couchbase" created
The output confirms that a Replication Controller is created. Let’s verify it:
./kubernetes/cluster/kubectl.sh get rc
CONTROLLER   CONTAINER(S)   IMAGE(S)              SELECTOR        REPLICAS   AGE
couchbase    couchbase      arungupta/couchbase   run=couchbase   1          17s
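Because the Replication Controller owns the replica count, scaling is a one-liner. This aside is mine, not a step in the original walkthrough:

# ask for two replicas; Kubernetes schedules a second pod to meet the new desired state
./cluster/kubectl.sh scale rc couchbase --replicas=2
# scale back down to one for the rest of this blog
./cluster/kubectl.sh scale rc couchbase --replicas=1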
Now, check the pods:
./kubernetes/cluster/kubectl.sh get po
NAME              READY     STATUS    RESTARTS   AGE
couchbase-tzdhl   0/1       Pending   0          36s
Let’s check the status of the pod:
./kubernetes/cluster/kubectl.sh describe pod couchbase-tzdhl
Name:                       couchbase-tzdhl
Namespace:                  default
Image(s):                   arungupta/couchbase
Node:                       10.245.1.4/10.245.1.4
Start Time:                 Fri, 26 Feb 2016 18:05:10 -0800
Labels:                     run=couchbase
Status:                     Running
Reason:
Message:
IP:                         10.246.67.2
Replication Controllers:    couchbase (1/1 replicas created)
Containers:
  couchbase:
    Container ID:     docker://56dddb66bf60a590e588b972d5cae997ec96149066a9fb8075548c982eb14961
    Image:            arungupta/couchbase
    Image ID:         docker://080e2e96b3fc22964f3dec079713cdf314e15942d6eb135395134d629e965062
    QoS Tier:
      cpu:            Burstable
    Requests:
      cpu:            100m
    State:            Running
      Started:        Fri, 26 Feb 2016 18:05:56 -0800
    Ready:            True
    Restart Count:    0
    Environment Variables:
Conditions:
  Type      Status
  Ready     True
Volumes:
  default-token-clfeb:
    Type:         Secret (a secret that should populate this volume)
    SecretName:   default-token-clfeb
Events:
  FirstSeen  LastSeen  Count  From                  SubobjectPath                      Reason     Message
  ─────────  ────────  ─────  ────                  ─────────────                      ──────     ───────
  1m         1m        1      {scheduler }                                             Scheduled  Successfully assigned couchbase-tzdhl to 10.245.1.4
  1m         1m        1      {kubelet 10.245.1.4}  implicitly required container POD  Pulling    Pulling image "gcr.io/google_containers/pause:0.8.0"
  59s        59s       1      {kubelet 10.245.1.4}  implicitly required container POD  Created    Created with docker id 2dac5f81f4c2
  59s        59s       1      {kubelet 10.245.1.4}  spec.containers{couchbase}         Pulling    Pulling image "arungupta/couchbase"
  59s        59s       1      {kubelet 10.245.1.4}  implicitly required container POD  Started    Started with docker id 2dac5f81f4c2
  59s        59s       1      {kubelet 10.245.1.4}  implicitly required container POD  Pulled     Successfully pulled image "gcr.io/google_containers/pause:0.8.0"
  19s        19s       1      {kubelet 10.245.1.4}  spec.containers{couchbase}         Pulled     Successfully pulled image "arungupta/couchbase"
  18s        18s       1      {kubelet 10.245.1.4}  spec.containers{couchbase}         Created    Created with docker id 56dddb66bf60
  18s        18s       1      {kubelet 10.245.1.4}  spec.containers{couchbase}         Started    Started with docker id 56dddb66bf60
The Node: line of the output shows that the pod is scheduled on the node with IP address 10.245.1.4. This address will be used to access the Web Console later.
The last event in this output shows that the container has started. Checking the status of the pod again shows:
./kubernetes/cluster/kubectl.sh get po
NAME              READY     STATUS    RESTARTS   AGE
couchbase-tzdhl   1/1       Running   0          2m
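If a pod lingers in Pending or restarts repeatedly, the container logs are the first place to look. This check is my addition; the pod name below comes from the get po output above and will differ on your cluster:

# stream the Couchbase container logs for the pod
./kubernetes/cluster/kubectl.sh logs couchbase-tzdhl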
Couchbase Web Console on Kubernetes Cluster
Now that your Couchbase container is running in the Kubernetes cluster, you may want to view the Web Console.
Each pod is assigned a unique IP address, but this address is only accessible within the cluster. It can be exposed using the kubectl expose command.
This command takes a Replication Controller, Service, or Pod and exposes it as a new Kubernetes Service. This can be done by giving the command:
./cluster/kubectl.sh expose rc couchbase --target-port=8091 --port=8091 --external-ip=10.245.1.4
service "couchbase" exposed
In this command:
- --target-port is the name or number of the port on the container that the service should direct traffic to
- --port is the port that the service should serve on
- --external-ip is the external IP address to set for the service. Note that this IP address was obtained with the kubectl describe pod command earlier.
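To confirm the service is wired up correctly (my check, not in the original post), look at the service and the pod endpoint behind it:

# show the new service and its endpoints; IPs will vary on your cluster
./cluster/kubectl.sh get svc couchbase
./cluster/kubectl.sh describe svc couchbase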
Now you can access the Couchbase Web Console at http://10.245.1.4:8091.
Enter the credentials as Administrator/password. These credentials are specified during Docker image creation at github.com/arun-gupta/docker-images/blob/master/couchbase/configure-cluster.sh#L9.
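The same credentials can also be verified from the command line against Couchbase’s REST API; this curl check is my addition:

# query the Couchbase REST API with the Web Console credentials; returns cluster metadata as JSON
curl -u Administrator:password http://10.245.1.4:8091/pools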
Voila!
Discuss with us at StackOverflow or Couchbase Forums. You can also follow us at @couchbasedev and @couchbase.
Reference: Couchbase on Kubernetes from our JCG partner Arun Gupta at the Miles to go 2.0 … blog.