Docker Java App Deployment With A Containerized Oracle XE And With An Existing Oracle XE Instance
Background
Many enterprises today are extremely excited about the prospects of containerized application stacks to achieve application portability and to speed up application deployment. However, we have met with many enterprise customers who are still concerned about how this containerization effort fits into their existing workflows and processes. In almost every large organization we talked to, the Oracle database is still used for mission-critical applications.
In the spirit of Oracle OpenWorld and JavaOne, we decided to cover a multi-tier Java application deployed using two database setups:
- Movie Store application with Nginx (for load balancing), Tomcat (as the application server), and Oracle XE 11g (deployed as a container)
- Movie Store application with Nginx (for load balancing) and Tomcat (as the application server) – but this time connecting to an existing running instance of Oracle XE 11g
The reason we’re covering the second deployment option (i.e. connecting to an existing Oracle service) is because most enterprises are looking to containerize their web and application servers first – but databases (like Oracle) may still need time to be containerized. Many enterprises may not be ready to move away from the advanced database management capabilities they’re getting on typical virtualized infrastructure and are waiting to see the containerized database management capabilities mature.
With that said, however, containerizing enterprise Java applications continues to be a challenge, mostly because existing application composition frameworks do not address complex dependencies, external integrations, or auto-scaling workflows post-provision. Moreover, the ephemeral design of containers means that developers have to spin up new containers and re-create the complex dependencies and external integrations with every version update.
DCHQ, available in hosted and on-premise versions, addresses all of these challenges and simplifies the containerization of enterprise Java applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto-scaling.
Once an application is provisioned, a user can monitor the CPU, Memory, & I/O of the running containers, get notifications & alerts, and perform day-2 operations like Scheduled Backups, Container Updates using BASH script plug-ins, and Scale In/Out. Moreover, out-of-box workflows that facilitate Continuous Delivery with Jenkins allow developers to refresh the Java WAR file of a running application without disrupting the existing dependencies & integrations.
In this blog, we will go over the end-to-end automation of a Java application called Movie Store that is deployed in two different ways:
- Movie Store application with Nginx (for load balancing), Tomcat (as the application server), and Oracle XE 11g (deployed as a container)
- Movie Store application with Nginx (for load balancing) and Tomcat (as the application server) – but this time connecting to an existing running instance of Oracle XE 11g
The same Java WAR file will be deployed in both setups. DCHQ not only automates the application deployments but also integrates with 13 different clouds to automate the provisioning and auto-scaling of clusters with software-defined networking. We will cover:
- Building the Application Templates for the Java-based Movie Store Application with a Containerized Oracle XE and with an Existing Oracle XE Instance
- Provisioning the Underlying Infrastructure on SoftLayer and Adding the Existing Oracle XE Host to the Cluster
- Deploying the multi-tier Java-based Movie Store applications on the SoftLayer cluster
- Monitoring the CPU, Memory & I/O of the Running Containers
- Enabling the Continuous Delivery Workflow with Jenkins to update the WAR file of the running applications when a build is triggered
- Scaling out the Tomcat Application Server Cluster
Building the Application Templates for the Java-based Movie Store Application with a Containerized Oracle XE and with an Existing Oracle XE Instance
Since there isn’t an official Docker image for Oracle XE, we’re going to use a popular GitHub project to build our image (https://github.com/wnameless/docker-oracle-xe-11g).
Once logged in to DCHQ (either the hosted DCHQ.io or on-premise version), a user can navigate to Automate > Image and then click on the + button to select Dockerfile (Git/GitHub/GitBucket). We can then automate the creation of our Oracle XE Docker image by pointing to the public GitHub project and pushing the new image to our own “private” repository (dchq/oracle-xe-11g:latest).
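For context, the manual equivalent of what DCHQ automates in this step would look roughly like the following. This is a sketch rather than DCHQ's actual build process, and it assumes you are logged in to a Docker Hub account that can push to the dchq/oracle-xe-11g repository:

```bash
# Clone the public GitHub project that contains the Oracle XE 11g Dockerfile
git clone https://github.com/wnameless/docker-oracle-xe-11g.git
cd docker-oracle-xe-11g

# Build the image and tag it for our own private repository
docker build -t dchq/oracle-xe-11g:latest .

# Push the image to Docker Hub
docker login
docker push dchq/oracle-xe-11g:latest
```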
A user can then navigate to Manage > Templates and then click on the + button to create a new Docker Compose template.
We’ll use the same application template for both setups. To connect to the existing Oracle service, all we have to do is delete the database entry in the application template and provide the right database credentials in the application server before requesting the application.
You will notice that Nginx is invoking a BASH script plug-in to add the container IPs of the application servers to the default.conf file dynamically (i.e. at request time).
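The exact plug-in DCHQ ships isn't reproduced in this post, but a minimal sketch of what such a BASH plug-in could look like is shown below. It assumes the Tomcat container IPs are passed in as plug-in arguments, that the application servers listen on port 8080, and that Nginx loads /etc/nginx/conf.d/default.conf; all of these are assumptions for illustration only:

```bash
#!/bin/bash
# Illustrative sketch only: regenerate the Nginx default.conf with an
# upstream block listing the Tomcat container IPs passed in as arguments.
APP_SERVER_IPS="$@"   # e.g. "10.32.0.2 10.32.0.3"

{
  echo "upstream app_servers {"
  for ip in $APP_SERVER_IPS; do
    echo "    server ${ip}:8080;"
  done
  echo "}"
  echo "server {"
  echo "    listen 80;"
  echo "    location / {"
  echo "        proxy_pass http://app_servers;"
  echo "    }"
  echo "}"
} > /etc/nginx/conf.d/default.conf

# Reload Nginx so the new upstream servers take effect
nginx -s reload
```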
The application server (Tomcat) is also invoking a BASH script plug-in to deploy the Movie Store Java WAR file from an external URL into /usr/local/tomcat/webapps/ROOT.war.
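Again as a rough sketch only (the WAR URL below is a placeholder, not the actual external URL used by the template), such a plug-in might look like this:

```bash
#!/bin/bash
# Illustrative sketch only: download the Movie Store WAR from an external
# URL and deploy it as the ROOT application of this Tomcat container.
WAR_URL="${1:-https://example.com/movie-store.war}"   # placeholder URL

rm -rf /usr/local/tomcat/webapps/ROOT /usr/local/tomcat/webapps/ROOT.war
curl -fSL "$WAR_URL" -o /usr/local/tomcat/webapps/ROOT.war
# Tomcat auto-deploys the new ROOT.war; restart explicitly if auto-deploy is off:
# /usr/local/tomcat/bin/catalina.sh stop; /usr/local/tomcat/bin/catalina.sh start
```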
You will notice that the cluster_size parameter allows you to specify the number of containers to launch (with the same application dependencies).
You will notice that we’re using the parameter registry_id to be able to pull the Oracle XE image from a private Docker Hub repository. The actual ID can be retrieved by navigating to Manage > Cloud Providers and Repos and then clicking Edit on your Docker Hub account. The ID will be there for you to copy and paste.
The host parameter allows you to specify the host you would like to use for container deployments. That way you can ensure high-availability for your application server clusters across different hosts (or regions) and you can comply with affinity rules to ensure that the database runs on a separate host for example. Here are the values supported for the host parameter:
- host1, host2, host3, etc. – selects a host randomly within a data-center (or cluster) for container deployments
- <IP Address 1, IP Address 2, etc.> — allows a user to specify the actual IP addresses to use for container deployments
- <Hostname 1, Hostname 2, etc.> — allows a user to specify the actual hostnames to use for container deployments
- Wildcards (e.g. “db-*”, or “app-srv-*”) – to specify the wildcards to use within a hostname
Additionally, a user can create cross-image environment variable bindings by making a reference to another image’s environment variable. In this case, we have made several bindings – including database.url=jdbc:oracle:thin:@{{Oracle|container_ip}}:1521:{{Oracle|sid}} – in which the database container IP and SID are resolved dynamically at request time and are used to ensure that the application servers can establish a connection with the database.
Here is a list of supported environment variable values:
- {{alphanumeric | 8}} – creates a random 8-character alphanumeric string. This is most useful for creating random passwords.
- {{<Image Name> | ip}} – allows you to enter the host IP address of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a connection with the database.
- {{<Image Name> | container_ip}} – allows you to enter the internal IP of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a secure connection with the database (without exposing the database port).
- {{<Image Name> | port_<Port Number>}} – allows you to enter the Port number of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a connection with the database. In this case, the port number specified needs to be the internal port number – i.e. not the external port that is allocated to the container. For example, {{PostgreSQL | port_5432}} will be translated to the actual external port that will allow the middleware tier to establish a connection with the database.
- {{<Image Name> | <Environment Variable Name>}} – allows you to enter the value an image’s environment variable into another image’s environment variable. The use cases here are endless – as most multi-tier applications will have cross-image dependencies.
Provisioning the Underlying Infrastructure on SoftLayer and Adding the Existing Oracle XE Host to the Cluster
Once an application is saved, a user can register a Cloud Provider to automate the provisioning and auto-scaling of clusters on 13 different cloud endpoints including vSphere (in pilot mode), OpenStack, CloudStack, Amazon Web Services, Microsoft Azure, DigitalOcean, HP Public Cloud, IBM SoftLayer, Google Compute Engine, and many others.
First, a user can register a Cloud Provider for SoftLayer (for example) by navigating to Manage > Repo & Cloud Provider and then clicking on the + button to select SoftLayer (IBM). The SoftLayer API Key needs to be provided – which can be retrieved from the Account Settings section of the SoftLayer Panel.
A user can then create a cluster with an auto-scale policy to automatically spin up new Cloud Servers. This can be done by navigating to the Manage > Clusters page and then clicking on the + button. You can select a capacity-based placement policy, a lease of two days (in this example), and Weave as the networking layer in order to facilitate secure, password-protected cross-container communication across multiple hosts within a cluster.
A user can now provision a number of Cloud Servers on the newly created cluster by navigating to Manage > Hosts and then clicking on the + button to select SoftLayer. Once the Cloud Provider is selected, a user can select the region, size and image needed. Ports can be opened on the new Cloud Servers (e.g. 32000-59000 for Docker, 6783 for Weave, and 5672 for RabbitMQ). A Cluster is then selected and the number of Cloud Servers can be specified.
Once the cluster is created, we can add the host of the already running Oracle XE instance to this cluster. This can be done by navigating to Manage > Hosts and then clicking on Any Host/VM. A user can then provide the name and IP, and select the cluster that was created earlier. When a user clicks Save, an auto-generated script is displayed on the screen that the user needs to execute on the Oracle XE host to install the DCHQ agent, along with Docker and the software-defined networking layer. You will notice that the dchq_agent_install.sh script takes six arguments (an example invocation follows the list below):
- A unique server key
- DCHQ Server IP/Hostname
- RabbitMQ port
- Name of the Weave network (e.g. weave.local)
- A randomized password for the Weave network
- The IP of the 1st server registered in this Weave network
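As an illustration only (the argument values below are placeholders, and the real command is generated by DCHQ when the host is registered), the invocation has the following shape:

```bash
# Hypothetical example: the real script and argument values are
# auto-generated by DCHQ when the host is registered.
# Arguments, in order:
#   1. unique server key
#   2. DCHQ server IP/hostname
#   3. RabbitMQ port
#   4. name of the Weave network (e.g. weave.local)
#   5. randomized password for the Weave network
#   6. IP of the 1st server registered in this Weave network
sudo bash dchq_agent_install.sh \
    "<unique-server-key>" \
    "<dchq-server-ip-or-hostname>" \
    5672 \
    weave.local \
    "<weave-network-password>" \
    "<ip-of-first-weave-server>"
```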
Deploying the Multi-Tier Java-based Movie Store Application on the SoftLayer Cluster
Once the Cloud Servers are provisioned, a user can deploy a multi-tier, Docker-based Java application on the new Cloud Servers.
We will first deploy the Java application with the Oracle XE container. This can be done by navigating to the Self-Service Library and then clicking on Customize to request a multi-tier application.
A user can select an Environment Tag (like DEV or QE) and the SoftLayer Cluster created earlier before clicking on Run.
To deploy the exact same application, but this time connecting to an existing Oracle XE instance, we simply delete the database entry in the application template and configure the following environment variables on the Tomcat image so that it connects directly to the existing Oracle XE instance – replacing the dynamic bindings with the actual host IP, SID, username, and password of that instance (e.g. database.url=jdbc:oracle:thin:@<existing-host-IP>:1521:<SID>):
- database.url=jdbc:oracle:thin:@{{Oracle|container_ip}}:1521:{{Oracle|sid}}
- database.username={{Oracle|username}}
- database.password={{Oracle|password}}
A user can select an Environment Tag (like DEV or QE) and the SoftLayer Cluster created earlier before clicking on Run.
Monitoring the CPU, Memory & I/O Utilization of the Running Containers
Once the application is up and running, developers can monitor the CPU, Memory, and I/O of the running containers and get alerts when these metrics exceed a pre-defined threshold. This is especially useful when performing functional and load testing.
A user can perform historical monitoring analysis and correlate issues to container updates or build deployments. This can be done by clicking on the Actions menu of the running application and then on Monitoring. A custom date range can be selected to view CPU, Memory and I/O historically.
Enabling the Continuous Delivery Workflow with Jenkins to Update the WAR File of the Running Application when a Build is Triggered
For developers wishing to follow the “immutable” containers model by rebuilding Docker images containing the application code and spinning up new containers with every application update, DCHQ provides an automated build feature (under Automate > Image Build) that allows developers to automatically create Docker images from Dockerfiles or private GitHub projects containing Dockerfiles.
However, many developers may wish to update the running application server containers with the latest Java WAR file instead. For that, DCHQ allows developers to enable a continuous delivery workflow with Jenkins. This can be done by clicking on the Actions menu of the running application and then selecting Continuous Delivery. A user can select a Jenkins instance that has already been registered with DCHQ, the actual Job on Jenkins that will produce the latest WAR file, and then a BASH script plug-in to grab this build and deploy it on a running application server. Once this policy is saved, DCHQ will grab the latest WAR file from Jenkins any time a build is triggered and deploy it on the running application server.
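The plug-in itself isn't shown in this post, but a hedged sketch of what it could do is below; the Jenkins host, job name, and artifact path are all placeholders, and the standard Jenkins lastSuccessfulBuild artifact URL layout is assumed:

```bash
#!/bin/bash
# Illustrative sketch only: pull the latest WAR produced by a Jenkins job
# and redeploy it as the ROOT application on this Tomcat container.
JENKINS_URL="http://<jenkins-host>:8080"          # placeholder
JOB_NAME="movie-store-build"                      # placeholder job name
ARTIFACT_PATH="target/movie-store.war"            # placeholder artifact path

curl -fSL \
  "${JENKINS_URL}/job/${JOB_NAME}/lastSuccessfulBuild/artifact/${ARTIFACT_PATH}" \
  -o /usr/local/tomcat/webapps/ROOT.war

# Let Tomcat auto-deploy the refreshed WAR, or restart it explicitly:
# /usr/local/tomcat/bin/catalina.sh stop; /usr/local/tomcat/bin/catalina.sh start
```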
As a result, developers will always have the latest Java WAR file deployed on their running containers in DEV/TEST environments.
Scaling out the Tomcat Application Server Cluster
If the running application becomes resource-constrained, a user can scale out the application to meet the increasing load. Moreover, a user can schedule scale-out during business hours and scale-in during weekends, for example.
To scale out the cluster of Tomcat servers from 2 to 4, a user can click on the Actions menu of the running application and then select Scale Out. A user can then specify the new size for the cluster and then click on Run Now.
We then used the BASH script plug-in to update Nginx’s default.conf file so that it is aware of the new application servers added. The BASH script plug-ins can also be scheduled to accommodate use cases like cleaning up logs or updating configurations at defined frequencies.
To execute a plug-in on a running container, a user can click on the Actions menu of the running application and then select Plug-ins. A user can then select the load balancer (Nginx) container, search for the plug-in that needs to be executed, and enable container restart using the toggle button. The default argument for this plug-in will dynamically resolve all the container IPs of the running Tomcat servers and add them to the default.conf file.
An application time-line is available to track every change made to the application for auditing and diagnostics. This can be accessed from the expandable menu at the bottom of the page of a running application.
Alerts and notifications are available for when containers or hosts are down or when the CPU & Memory Utilization of either hosts or containers exceed a defined threshold.
Conclusion
Containerizing enterprise Java applications is still a challenge, mostly because existing application composition frameworks do not address complex dependencies, external integrations, or auto-scaling workflows post-provision. Moreover, the ephemeral design of containers means that developers have to spin up new containers and re-create the complex dependencies and external integrations with every version update.
DCHQ, available in hosted and on-premise versions, addresses all of these challenges and simplifies the containerization of enterprise Java applications through an advanced application composition framework that facilitates cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto-scaling.
Enterprises can now containerize the web and application server components of their applications while connecting to the existing database services they’re running. Moreover, continuous delivery processes for Java applications do not need to be changed – but can easily be plugged into the DCHQ application deployment & management workflows.
Sign Up for FREE on http://DCHQ.io or download DCHQ On-Premise to get access to out-of-box multi-tier Java application templates along with application lifecycle management functionality like monitoring, container updates, scale in/out, and continuous delivery.
Reference: Docker Java App Deployment With A Containerized Oracle XE And With An Existing Oracle XE Instance from our JCG partner Amjad Afanah at the DCHQ.io blog.