Deploying and scaling an Oracle database on a multi-node Kubernetes cluster
In this post I am going to explain how to deploy and scale an Oracle Express database on a multi-node Kubernetes cluster. I am going to use this Docker container by Maxym Bylenko. I am referring to the Oracle XE 11g container because of an open issue with the Oracle XE 12c one at the time I went through the process described below. I am assuming the readers have at least a basic knowledge of Kubernetes concepts.
The first thing to do is to create a Pod. We can do this (and the other operations described in this post) declaratively through a YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: "oradb"
  labels:
    name: "oradb"
spec:
  containers:
    - image: "sath89/oracle-xe-11g:latest"
      name: "oradb"
      ports:
        - containerPort: 1521
  restartPolicy: Always
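Assuming the spec above is saved to a file named oradb-pod.yaml (the filename is just an example), the Pod can be created and then checked with kubectl:

kubectl create -f oradb-pod.yaml
kubectl get pod oradb

In OpenShift Origin the same can be done with oc create -f and oc get pod.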
Once the Pod has been successfully created, we need to create a Service for it:
apiVersion: v1
kind: Service
metadata:
  name: "oradb"
  labels:
    app: "oradb"
spec:
  ports:
    - port: 1521
  selector:
    app: "oradb"
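As before, assuming the Service spec is saved as oradb-service.yaml (again, a hypothetical filename), we can create it and verify that it has been assigned a cluster IP:

kubectl create -f oradb-service.yaml
kubectl get svc oradb

Note that the selector app: "oradb" matches the label that the ReplicationController template we will create next applies to its pods, so the Service will route traffic to the replicated database pods on port 1521.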
Now we need to create a ReplicationController. It makes it easy to create multiple pods and then ensures that that number of pods always exists: if a pod crashes, the ReplicationController replaces it. Here's how we can declaratively create a ReplicationController, specifying that we want 2 replicas:
apiVersion: v1
kind: ReplicationController
metadata:
  name: "oradb"
  labels:
    app: "oradb"
spec:
  replicas: 2
  selector:
    app: "oradb"
  template:
    metadata:
      labels:
        app: "oradb"
    spec:
      containers:
        - image: "sath89/oracle-xe-11g:latest"
          name: "oradb"
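As with the Pod and the Service, the ReplicationController can be created from its YAML file, here assumed to be saved as oradb-rc.yaml:

kubectl create -f oradb-rc.yaml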
We can check whether the ReplicationController has been created successfully from a shell through kubectl:
kubectl get rc
or, in OpenShift Origin:
oc get rc

NAME      DESIRED   CURRENT   AGE
oradb     2         2         1d
Let's check the pods:
kubectl get pods
or, in OpenShift Origin:
oc get pods

NAME          READY     STATUS    RESTARTS   AGE
oradb         1/1       Running   0          1d
oradb-6rs8h   1/1       Running   0          1d
oradb-cq2x9   1/1       Running   0          1d
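Before using a database pod, it can be useful to make sure the Oracle instance inside it has finished initializing. A quick way is to inspect the container's startup output (the pod name below is taken from the listing above):

kubectl logs oradb-6rs8h

or, in OpenShift Origin:

oc logs oradb-6rs8h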
Imagine now that we need to scale the cluster from 2 to 3 pods. This can be done simply with the kubectl scale command:
kubectl scale rc oradb --replicas=3
or the oc scale command:
oc scale rc oradb --replicas=3
As soon as the command above completes, we find a new pod in the list:
NAME          READY     STATUS    RESTARTS   AGE
oradb         1/1       Running   0          1d
oradb-6rs8h   1/1       Running   0          1d
oradb-cq2x9   1/1       Running   0          1d
oradb-rplzj   1/1       Running   0          1d
And that’s the new situation for the ReplicationController:
NAME      DESIRED   CURRENT   AGE
oradb     3         3         1d
The database SID is xe and the credentials to connect are:

username: system
password: oracle
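From another pod in the same cluster and namespace, clients can reach the database through the Service name. A minimal sketch, assuming a pod that has the Oracle client tools installed:

# SQL*Plus, using EZConnect through the Service name and port defined above
sqlplus system/oracle@//oradb:1521/xe

# equivalent JDBC thin URL (SID-style) for Java clients:
# jdbc:oracle:thin:@oradb:1521:xe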
Published on Java Code Geeks with permission by Guglielmo Iozzia, partner at our JCG program. See the original article here: Deploying and scaling an Oracle database on a multi-node Kubernetes cluster. Opinions expressed by Java Code Geeks contributors are their own.
No volume? No pvc? What if a pod crashes? How is the data persisted?
Hi Michael. This work wasn't meant to be used in a prod environment. The goal there was to allow several PL/SQL development teams working on legacy applications to have a database space where they could unit test the code they produce before committing changes to a source control system (or that could be used by Jenkins pipelines when running the PL/SQL unit tests). So there was no need to persist the data: the lifecycle of these pods was short. The data was automatically provisioned on a daily basis, starting from a de-identified snapshot from prod or from mock data.
And now you have three Oracle pods without any persistent volume or synchronization. Sorry, this is useless.
Hi Christian, please read my comment to Michael above. Thanks.
There is no Oracle XE 12c. Oracle XE 18c for Linux was released recently, with a Docker version coming soon.
Well, the XE database is nice to play with, but if you are going to use it commercially, check the Oracle license carefully. Really carefully!
Hi Leukhe, please read my reply to Michael above. Of course our pre-prod and prod environments run on Oracle commercial licenses and aren't deployed on Kubernetes. This was for PL/SQL unit testing purposes only, where the Express edition was enough and the costs of the internal Kubernetes infrastructure were minimal.
Any particular reason you used a pod + replication controller instead of a deployment?
Hi Lorant, this work was done 1.5 years ago for an old Red Hat OpenShift Origin environment based on Kubernetes 1.5, and that was the easiest option.