Kubernetes: Copy a dataset to a StatefulSet’s PersistentVolume
Neo4j Clusters on Kubernetes
This post assumes that we're familiar with deploying Neo4j on Kubernetes. I wrote an article on the Neo4j blog explaining this in more detail.
The StatefulSet we create for our core servers requires persistent storage, achieved via the PersistentVolumeClaim (PVC) primitive. A Neo4j cluster containing 3 core servers would have the following PVCs:
$ kubectl get pvc
NAME                            STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
datadir-neo-helm-neo4j-core-0   Bound     pvc-043efa91-cc54-11e7-bfa5-080027ab9eac   10Gi       RWO            standard       45s
datadir-neo-helm-neo4j-core-1   Bound     pvc-1737755a-cc54-11e7-bfa5-080027ab9eac   10Gi       RWO            standard       13s
datadir-neo-helm-neo4j-core-2   Bound     pvc-18696bfd-cc54-11e7-bfa5-080027ab9eac   10Gi       RWO            standard       11s
Each of the PVCs has a corresponding PersistentVolume (PV) that satisfies it:
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                   STORAGECLASS   REASON    AGE
pvc-043efa91-cc54-11e7-bfa5-080027ab9eac   10Gi       RWO            Delete           Bound     default/datadir-neo-helm-neo4j-core-0   standard                 41m
pvc-1737755a-cc54-11e7-bfa5-080027ab9eac   10Gi       RWO            Delete           Bound     default/datadir-neo-helm-neo4j-core-1   standard                 40m
pvc-18696bfd-cc54-11e7-bfa5-080027ab9eac   10Gi       RWO            Delete           Bound     default/datadir-neo-helm-neo4j-core-2   standard                 40m
The PVCs and PVs are usually created at the same time that we deploy our StatefulSet. We need to intervene in that lifecycle so that our dataset is already in place before the StatefulSet is deployed.
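Those PVC names aren't arbitrary: a StatefulSet derives each claim's name as <volumeClaimTemplate name>-<pod name>. A simplified sketch of the claim template behind the chart (not the chart's exact manifest) looks something like this:

# Simplified sketch, not the chart's exact source. The template is named
# "datadir" and the StatefulSet "neo-helm-neo4j-core", so pod 0's claim
# becomes datadir-neo-helm-neo4j-core-0.
volumeClaimTemplates:
- metadata:
    name: datadir
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 10Gi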
Deploying an existing dataset
We can do this by following these steps:
- Create PVCs with the above names manually
- Attach pods to those PVCs
- Copy our dataset onto those pods
- Delete the pods
- Deploy our Neo4j cluster
We can use the following script to create the PVCs and pods:
pvs.sh
#!/usr/bin/env bash

set -exuo pipefail

for i in $(seq 0 2); do
  cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: datadir-neo-helm-neo4j-core-${i}
  labels:
    app: neo4j
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

  cat <<EOF | kubectl apply -f -
kind: Pod
apiVersion: v1
metadata:
  name: neo4j-load-data-${i}
  labels:
    app: neo4j-loader
spec:
  volumes:
    - name: datadir-neo4j-core-${i}
      persistentVolumeClaim:
        claimName: datadir-neo-helm-neo4j-core-${i}
  containers:
    - name: neo4j-load-data-${i}
      image: ubuntu
      volumeMounts:
        - name: datadir-neo4j-core-${i}
          mountPath: /data
      command: ["/bin/bash", "-ecx", "while :; do printf '.'; sleep 5 ; done"]
EOF
done
Let’s run that script to create our PVCs and pods:
$ ./pvs.sh
++ seq 0 2
+ for i in $(seq 0 2)
+ cat
+ kubectl apply -f -
persistentvolumeclaim "datadir-neo-helm-neo4j-core-0" configured
+ cat
+ kubectl apply -f -
pod "neo4j-load-data-0" configured
+ for i in $(seq 0 2)
+ cat
+ kubectl apply -f -
persistentvolumeclaim "datadir-neo-helm-neo4j-core-1" configured
+ cat
+ kubectl apply -f -
pod "neo4j-load-data-1" configured
+ for i in $(seq 0 2)
+ cat
+ kubectl apply -f -
persistentvolumeclaim "datadir-neo-helm-neo4j-core-2" configured
+ cat
+ kubectl apply -f -
pod "neo4j-load-data-2" configured
Now we can copy our database onto the pods:
for i in $(seq 0 2); do
  kubectl cp graph.db.tar.gz neo4j-load-data-${i}:/data/
  kubectl exec neo4j-load-data-${i} -- bash -c "mkdir -p /data/databases && tar -xf /data/graph.db.tar.gz -C /data/databases"
done
graph.db.tar.gz contains a backup from a local database I created:
$ tar -tvf graph.db.tar.gz
drwxr-xr-x  0 markneedham staff       0 24 Jul 15:23 graph.db/
drwxr-xr-x  0 markneedham staff       0 24 Jul 15:23 graph.db/certificates/
drwxr-xr-x  0 markneedham staff       0 17 Feb  2017 graph.db/index/
drwxr-xr-x  0 markneedham staff       0 24 Jul 15:22 graph.db/logs/
-rw-r--r--  0 markneedham staff    8192 24 Jul 15:23 graph.db/neostore
-rw-r--r--  0 markneedham staff     896 24 Jul 15:23 graph.db/neostore.counts.db.a
-rw-r--r--  0 markneedham staff    1344 24 Jul 15:23 graph.db/neostore.counts.db.b
-rw-r--r--  0 markneedham staff       9 24 Jul 15:23 graph.db/neostore.id
-rw-r--r--  0 markneedham staff   65536 24 Jul 15:23 graph.db/neostore.labelscanstore.db
...
-rw-------  0 markneedham staff    1700 24 Jul 15:23 graph.db/certificates/neo4j.key
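If you want to build an archive with the same layout, one way is to tar up the store directory of a stopped local Neo4j server (the path below is where my store lived and is an assumption for your setup):

$ cd /path/to/neo4j/data/databases   # assumed location of the local store
$ tar -czf graph.db.tar.gz graph.db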
We’ll run the following commands to check that the databases are in place:
$ kubectl exec neo4j-load-data-0 -- ls -lh /data/databases/
total 4.0K
drwxr-xr-x 6 501 staff 4.0K Jul 24 14:23 graph.db

$ kubectl exec neo4j-load-data-1 -- ls -lh /data/databases/
total 4.0K
drwxr-xr-x 6 501 staff 4.0K Jul 24 14:23 graph.db

$ kubectl exec neo4j-load-data-2 -- ls -lh /data/databases/
total 4.0K
drwxr-xr-x 6 501 staff 4.0K Jul 24 14:23 graph.db
All good so far. The pods have done their job so we’ll tear those down:
$ kubectl delete pods -l app=neo4j-loader
pod "neo4j-load-data-0" deleted
pod "neo4j-load-data-1" deleted
pod "neo4j-load-data-2" deleted
We’re now ready to deploy our Neo4j cluster.
helm install incubator/neo4j --name neo-helm --wait --set authEnabled=false
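The --wait flag already blocks until the pods are ready, but if you want to watch the rollout in another terminal you can do something like this (the StatefulSet name matches the pod names we saw earlier):

$ kubectl rollout status statefulset neo-helm-neo4j-core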
Finally, we’ll run a Cypher query against each server to check that it’s using the database we uploaded:
$ kubectl exec neo-helm-neo4j-core-0 -- bin/cypher-shell "match (n) return count(*)"
count(*)
32314

$ kubectl exec neo-helm-neo4j-core-1 -- bin/cypher-shell "match (n) return count(*)"
count(*)
32314

$ kubectl exec neo-helm-neo4j-core-2 -- bin/cypher-shell "match (n) return count(*)"
count(*)
32314
Success!
We could achieve similar results by using an init container, but I haven’t had a chance to try out that approach yet. If you give it a try, let me know in the comments and I’ll add it to the post.
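For what it’s worth, here’s an untested sketch of what that could look like: an initContainer on the core servers’ pod template that seeds the volume before Neo4j starts. The dataset URL is a placeholder, and the chart would need modifying to include something like this:

# Untested sketch: seed the PV from an archive before Neo4j starts.
# The URL is a placeholder; "datadir" matches the chart's volume name.
initContainers:
- name: seed-dataset
  image: ubuntu
  command:
  - bash
  - -c
  - |
    # only seed an empty volume so restarts don't clobber existing data
    if [ ! -d /data/databases/graph.db ]; then
      apt-get update && apt-get install -y curl
      mkdir -p /data/databases
      curl -sL https://example.com/graph.db.tar.gz | tar -xzf - -C /data/databases
    fi
  volumeMounts:
  - name: datadir
    mountPath: /data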