Couchbase XDCR using Docker Swarm, Machine and Compose
Cross Datacenter Replication (XDCR) in Couchbase provides an easy way to replicate data from one cluster to another. The clusters are typically located in geographically diverse data centers. This enables disaster recovery, or brings data closer to users for faster access.
This blog will show how to:
- Set up two data centers using Docker Swarm
- Run Couchbase containers on each node of the Docker Swarm clusters
- Set up a Couchbase cluster on each Docker Swarm cluster
- Configure one-way XDCR between the two Couchbase clusters
For the purpose of this blog, the two data centers will be set up on a local machine using Docker Machine.
The complete code used in this blog is available at github.com/arun-gupta/couchbase-xdcr-docker.
Create Consul Discovery Service
Each node in Docker Swarm needs to be registered with a discovery service. This blog will use Consul for that purpose, and Consul itself will also run on a Docker Machine.
Typically, you'd run a cluster of Consul servers, but for simplicity a single instance is started here.
Create a Docker Machine and start Consul using this script:
# Docker Machine for Consul
docker-machine \
  create \
  -d virtualbox \
  consul-machine

# Start Consul
docker $(docker-machine config consul-machine) run -d --restart=always \
  -p "8500:8500" \
  -h "consul" \
  progrium/consul -server -bootstrap
Create Docker Swarm cluster
Docker Swarm allows multiple Docker hosts to be treated as a single unit, so your multi-container applications can easily run across multiple hosts. Docker Swarm serves the same Remote API as a single Docker host, which means your existing tools can target either a single host or a cluster of hosts.
Both Docker Swarm clusters will be registered with a single discovery service. This is achieved by using the following value for --swarm-discovery:
consul://$(docker-machine ip consul-machine):8500/v1/kv/<key>
Create a Docker Swarm cluster with Docker Machine using this script:
# Docker Swarm master
docker-machine \
  create \
  -d virtualbox \
  --swarm \
  --swarm-master \
  --swarm-discovery="consul://$(docker-machine ip consul-machine):8500/v1/kv/cluster$1" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500/v1/kv/cluster$1" \
  --engine-opt="cluster-advertise=eth1:2376" \
  swarm-master-$1

# Docker Swarm node-01
docker-machine \
  create \
  -d virtualbox \
  --swarm \
  --swarm-discovery="consul://$(docker-machine ip consul-machine):8500/v1/kv/cluster$1" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500/v1/kv/cluster$1" \
  --engine-opt="cluster-advertise=eth1:2376" \
  swarm-node-$1-01

# Docker Swarm node-02
docker-machine \
  create \
  -d virtualbox \
  --swarm \
  --swarm-discovery="consul://$(docker-machine ip consul-machine):8500/v1/kv/cluster$1" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500/v1/kv/cluster$1" \
  --engine-opt="cluster-advertise=eth1:2376" \
  swarm-node-$1-02

# Configure to use Docker Swarm cluster
eval "$(docker-machine env --swarm swarm-master-$1)"
The script needs to be invoked as:
./create-docker-swarm-cluster.sh A
./create-docker-swarm-cluster.sh B
This will create two Docker Swarm clusters, each with one “master” and two “worker” nodes, as shown below:
NAME              ACTIVE   DRIVER       STATE     URL                         SWARM                     DOCKER    ERRORS
consul-machine    -        virtualbox   Running   tcp://192.168.99.101:2376                             v1.11.1
default           *        virtualbox   Running   tcp://192.168.99.100:2376                             v1.11.1
swarm-master-A    -        virtualbox   Running   tcp://192.168.99.102:2376   swarm-master-A (master)   v1.11.1
swarm-master-B    -        virtualbox   Running   tcp://192.168.99.105:2376   swarm-master-B (master)   v1.11.1
swarm-node-A-01   -        virtualbox   Running   tcp://192.168.99.103:2376   swarm-master-A            v1.11.1
swarm-node-A-02   -        virtualbox   Running   tcp://192.168.99.104:2376   swarm-master-A            v1.11.1
swarm-node-B-01   -        virtualbox   Running   tcp://192.168.99.106:2376   swarm-master-B            v1.11.1
swarm-node-B-02   -        virtualbox   Running   tcp://192.168.99.107:2376   swarm-master-B            v1.11.1
Consul is running on the Docker Machine with IP address 192.168.99.101, so the Consul UI is accessible at http://192.168.99.101:8500:
It shows two Docker Swarm clusters that have been registered.
The exact list of nodes for each cluster can also be seen. Nodes in clusterA are shown:
Nodes in clusterB are shown:
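If you prefer the command line, the keys that Swarm registers under each cluster's prefix can also be queried directly from Consul's HTTP KV API. This is a minimal sketch; the exact key layout under the prefix is managed by Swarm's discovery backend and may vary across versions:

# List the keys registered under the clusterA prefix (key layout is managed by Swarm)
curl "http://$(docker-machine ip consul-machine):8500/v1/kv/clusterA?keys"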
Run Couchbase containers
Run a Couchbase container on each node of the Docker Swarm cluster using this Compose file:
version: "2" services: db: image: arungupta/couchbase network_mode: "host" ports: - 8091:8091 - 8092:8092 - 8093:8093 - 11210:11210
Configure Docker CLI for the first cluster and run 3 containers:
eval "$(docker-machine env --swarm swarm-master-A)" docker-compose scale db=3
Check the running containers:
> docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES
3ec0f15aaee0        arungupta/couchbase   "/entrypoint.sh /opt/"   3 hours ago         Up 3 hours                              swarm-master-A/couchbasexdcrdocker_db_3
07af2ac53539        arungupta/couchbase   "/entrypoint.sh /opt/"   3 hours ago         Up 3 hours                              swarm-node-A-02/couchbasexdcrdocker_db_2
c94878f543fd        arungupta/couchbase   "/entrypoint.sh /opt/"   3 hours ago         Up 3 hours                              swarm-node-A-01/couchbasexdcrdocker_db_1
Configure Docker CLI for the second cluster and run 3 containers:
eval "$(docker-machine env --swarm swarm-master-B)" docker-compose scale db=3
Check the running containers:
> eval "$(docker-machine env --swarm swarm-master-B)" > docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3e3a45480939 arungupta/couchbase "/entrypoint.sh /opt/" 3 hours ago Up 3 hours swarm-master-B/couchbasexdcrdocker_db_3 1f31f23e337d arungupta/couchbase "/entrypoint.sh /opt/" 3 hours ago Up 3 hours swarm-node-B-01/couchbasexdcrdocker_db_1 1feab04c494c arungupta/couchbase "/entrypoint.sh /opt/" 3 hours ago Up 3 hours swarm-node-B-02/couchbasexdcrdocker_db_2
Create/Rebalance Couchbase cluster
Scaling and Rebalancing Couchbase Cluster using CLI explains how to create a cluster of Couchbase nodes and rebalance an existing cluster using Couchbase CLI.
Create a Couchbase cluster on each Swarm cluster using this script:
export COUCHBASE_CLI=/Users/arungupta/tools/Couchbase-Server-4.0.app/Contents/Resources/couchbase-core/bin/couchbase-cli

for node in 01 02
do
  $COUCHBASE_CLI \
    server-add \
    --cluster=$(docker-machine ip swarm-master-$1):8091 \
    --user Administrator \
    --password password \
    --server-add=$(docker-machine ip swarm-node-$1-$node) \
    --server-add-username=Administrator \
    --server-add-password=password
done

$COUCHBASE_CLI \
  setting-cluster \
  --cluster=$(docker-machine ip swarm-master-$1):8091 \
  --user Administrator \
  --password password \
  --cluster-name=cluster$1
The script needs to be invoked as:
./create-couchbase-cluster.sh A
And now rebalance this cluster using this script:
export COUCHBASE_CLI=/Users/arungupta/tools/Couchbase-Server-4.0.app/Contents/Resources/couchbase-core/bin/couchbase-cli

$COUCHBASE_CLI \
  rebalance \
  --cluster=$(docker-machine ip swarm-master-$1):8091 \
  --user Administrator \
  --password password \
  --server-add-username=Administrator \
  --server-add-password=password
This script is invoked as:
./rebalance-couchbase-cluster.sh A
Couchbase Web Console for any node in the cluster will show the output:
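The nodes of the cluster can also be verified from the command line. The following is a sketch using couchbase-cli's server-list command, assuming the same COUCHBASE_CLI path and the credentials used in the scripts above:

# List the nodes that are now part of clusterA (assumes COUCHBASE_CLI and the credentials from above)
$COUCHBASE_CLI \
  server-list \
  --cluster=$(docker-machine ip swarm-master-A):8091 \
  --user Administrator \
  --password password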
Create the second Couchbase cluster by invoking the script as:
./create-couchbase-cluster.sh B
Rebalance this cluster as:
./rebalance-couchbase-cluster.sh B
Couchbase Web Console for any node in the second cluster will show the output:
Setup XDCR
Cross datacenter replication can be set up as uni-directional, bi-directional, or multi-directional. Uni-directional replication copies data from a source cluster to a destination cluster, bi-directional replication replicates data both ways, and multi-directional replication can be configured between any number of clusters in any direction.
We’ll create a simple uni-directional replication using this script:
export COUCHBASE_CLI=/Users/arungupta/tools/Couchbase-Server-4.0.app/Contents/Resources/couchbase-core/bin/couchbase-cli

$COUCHBASE_CLI \
  xdcr-setup \
  --cluster=$(docker-machine ip swarm-master-$1):8091 \
  --user Administrator \
  --password password \
  --create \
  --xdcr-cluster-name=cluster$1 \
  --xdcr-hostname=$(docker-machine ip swarm-master-$2):8091 \
  --xdcr-username=Administrator \
  --xdcr-password=password \
  --xdcr-demand-encryption=0

$COUCHBASE_CLI \
  xdcr-replicate \
  --cluster $(docker-machine ip swarm-master-$1):8091 \
  --xdcr-cluster-name=cluster$1 \
  --user Administrator \
  --password password \
  --create \
  --xdcr-from-bucket=travel-sample \
  --xdcr-to-bucket=travel-sample
This script is invoked as:
./setup-xdcr.sh A B
Bi-directional replication can easily be created by executing the same commands again with the source and destination clusters reversed, as shown below.
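For example, after the uni-directional setup above, swapping the script arguments would configure replication in the opposite direction as well:

# Reverse direction: cluster B now also replicates to cluster A
./setup-xdcr.sh B A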
Couchbase Web Console for the source cluster will show:
Couchbase Web Console for the destination cluster will show:
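The configured replications can also be listed from the command line. As a sketch, assuming the same COUCHBASE_CLI path and credentials as before, couchbase-cli's xdcr-replicate command with the --list flag shows the streams defined on the source cluster:

# List the XDCR replications configured on the source cluster (clusterA)
$COUCHBASE_CLI \
  xdcr-replicate \
  --cluster=$(docker-machine ip swarm-master-A):8091 \
  --user Administrator \
  --password password \
  --list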
Enjoy!
This blog shows how you can simplify your complex deployments using Docker Machine, Docker Swarm, and Docker Compose.
Reference: Couchbase XDCR using Docker Swarm, Machine and Compose from our JCG partner Arun Gupta at the Miles to go 3.0 … blog.