Microservices for Java Developers: Deployment and Orchestration
In this post, we feature a comprehensive article on Microservices for Java Developers: Deployment and Orchestration.
1. Introduction
These days more and more organizations rely on cloud computing and managed service offerings to host their services. This strategy has a lot of benefits, but you still have to choose the best deployment game plan for your microservices fleet.
Using some sort of PaaS is probably the easiest option, but for many it is not sustainable in the long run due to the inherent constraints and limitations of such a model. On the other hand, using IaaS does relieve you of the costs of infrastructure management and maintenance, but it still requires a significant amount of work to deploy applications and services and keep them afloat. Last but not least, a lot of organizations still prefer to manage their software stacks internally, offloading only the virtual (or bare-metal) machine management to cloud providers.
The challenge of deciding which model is right remained largely unsolved for the majority of organizations for quite a long time, waiting for some kind of breakthrough to happen. And luckily, the “big bang” came in due time.
2. Containers
Although the seeds had been planted long before, the revolution was initiated by Docker and has drastically changed the way we approach the distribution, deployment and development of applications and services. The game changer came in the form of operating-system-level virtualization and containers. Compared to traditional virtual machines, this is an exceptionally lightweight architecture: containers impose little to no overhead, share the same operating system kernel, and do not require special hardware support to perform efficiently.
Nowadays container images have become the de-facto packaging and distribution blueprint, whereas containers serve as the mainstream execution and isolation model. There is a lot to say about Docker and container-based virtualization, specifically with respect to applications and services on the JVM platform, but in this part of the tutorial we are going to focus on the deployment and operational aspects.
The tooling around containers is available for nearly any programming language and platform, and in most cases it can easily be integrated into the build and deployment pipelines. With respect to the JCG Car Rentals platform, for example, the Customer Service builds a container image using jib (more precisely, the jib-maven-plugin), which assembles and publishes the image without requiring a Docker daemon.
<plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>jib-maven-plugin</artifactId>
    <version>1.3.0</version>
    <configuration>
        <to>
            <image>jcg-car-rentals/customer-service</image>
            <tags>
                <tag>${project.version}</tag>
            </tags>
        </to>
        <from>
            <image>openjdk:8</image>
        </from>
        <container>
            <user>1000</user>
            <mainClass>ws.ament.hammock.Bootstrap</mainClass>
        </container>
    </configuration>
</plugin>
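With this configuration in place, the image could be built and published as part of the regular Maven build. As a sketch (assuming the target registry is reachable and the credentials are configured):

mvn clean package jib:build

Alternatively, the jib:dockerBuild goal builds the image into a local Docker daemon instead of pushing it to a registry.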
So what can we do with the container now? The move towards container-based runtimes has spawned a new category of infrastructure components: container orchestration and management.
3. Apache Mesos
We are going to start with Apache Mesos, one of the oldest and most well-established open-source platforms for fine-grained resource sharing.
Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively. – http://mesos.apache.org/
Strictly speaking, Apache Mesos is not a container orchestrator but rather a cluster-management platform; however, not so long ago it also gained native support for launching containers. There is a certain overlap with traditional cluster-management frameworks (such as, for example, Apache Helix), so Apache Mesos is often called the operating system for the datacenter, to emphasize its larger footprint and scope.
4. Titus
Titus, yet another open-source project from Netflix, is an example of a dedicated container management solution.
Titus is a container management platform that provides scalable and reliable container execution and cloud-native integration with Amazon AWS. Titus was built internally at Netflix and is used in production to power Netflix streaming, recommendation, and content systems. – https://netflix.github.io/titus/
In essence, Titus is a framework on top of Apache Mesos. The seamless integration with AWS, as well as with Spinnaker, Eureka and Archaius, makes it quite a good fit after all.
5. Nomad
Nomad, one more open-source gem from HashiCorp, is a workload orchestrator suitable for deploying a mix of microservices, batch jobs, and containerized and non-containerized applications.
Nomad is a highly available, distributed, data-center aware cluster and application scheduler designed to support the modern datacenter with support for long-running services, batch jobs, and much more. – https://www.nomadproject.io/
Besides being really easy to use, it has outstanding native integration with Consul and Vault to complement the service discovery and secret management (which we have introduced in the previous parts of the tutorial).
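To give a taste of it, here is a minimal sketch of what a Nomad job specification for the Customer Service might look like; the datacenter name and the resource numbers are purely illustrative assumptions, not part of the JCG Car Rentals configuration.

job "customer-service" {
  # illustrative datacenter name
  datacenters = ["dc1"]
  type        = "service"

  group "customers" {
    count = 1

    task "customer-service" {
      # run the previously built container image using the Docker driver
      driver = "docker"

      config {
        image = "jcg-car-rentals/customer-service:0.0.1-SNAPSHOT"
      }

      resources {
        cpu    = 200  # MHz, illustrative
        memory = 512  # MB, illustrative
      }
    }
  }
}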
6. Docker Swarm
If you are an experienced Docker user, you may know about swarm mode, a special Docker operating mode for natively managing a cluster of Docker Engines. It is probably the easiest way to orchestrate containerized deployments, but at the same time it is not widely adopted.
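For illustration only, deploying the Customer Service image on a swarm boils down to a couple of commands (the service name and the published port are assumptions here, and the image is assumed to be available to every node, e.g. via a registry):

# turn the current Docker Engine into a swarm manager
docker swarm init
# run the previously built image as a replicated service
docker service create \
  --name customer-service \
  --replicas 2 \
  --publish 18800:18800 \
  jcg-car-rentals/customer-service:0.0.1-SNAPSHOT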
7. Kubernetes
We have left the true gem to the very end. Kubernetes, built upon 15 years of Google's experience running production workloads, is an open-source, hyper-popular, production-grade container orchestrator.
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. – https://kubernetes.io/
Undoubtedly, Kubernetes is the dominant container management platform these days. It can be run on literally any infrastructure and, as we are going to see shortly, it is offered by all major cloud providers.
The JCG Car Rentals platform is going to leverage Kubernetes capabilities to run all its microservices and supporting components (like API gateways and BFFs). Although we could start crafting the YAML manifests right away, there is one more thing to talk about.
Kubernetes is a platform for building platforms. It’s a better place to start; not the endgame. – https://twitter.com/kelseyhightower/status/935252923721793536?lang=en
It is quite an interesting statement, and it is already being put into practice these days by platforms like OpenShift and a number of commercial offerings.
8. Service Meshes
The container orchestration platforms have significantly simplified and improved deployment and operational practices, specifically related to microservices. However, there are a number of crosscutting concerns which service and application developers still have to take care of: secure communication, resiliency, authentication and authorization, tracing and monitoring, to name a few. The API gateways took some of that burden off, but service-to-service communication still struggled. The search for a viable solution led to the rise of service meshes.
A service mesh is an infrastructure layer that handles service-to-service communication, freeing applications from being aware of the complex communication network. The mesh provides advanced capabilities, including encryption, authentication and authorization, routing, monitoring and tracing. – https://medium.com/solo-io/https-medium-com-solo-io-supergloo-ff2aae1fb96f
To be fair, service meshes came in like life savers. Deployed alongside the container orchestrator of choice, they allow organizations to keep their focus on implementing business concerns and features, while the mesh takes care of the rest.
Service mesh is the future of cloud-native apps – https://medium.com/solo-io/https-medium-com-solo-io-supergloo-ff2aae1fb96f
There are a number of service meshes in the wild, quite mature and battle-tested in production. Let us briefly go over some of them.
8.1. Linkerd
We are going to start with Linkerd, one of the pioneers among open source service meshes. It took off as a dedicated layer for managing, controlling, and monitoring service-to-service communication, and has since been rewritten and refocused on Kubernetes integration.
Linkerd is an ultralight service mesh for Kubernetes. It gives you observability, reliability, and security without requiring any code changes. – https://linkerd.io/
The “ultralight” label may not sound significant, but it actually is. You might be surprised by how many cluster resources a service mesh may consume and, depending on your deployment model, what substantial additional costs it may incur.
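As a quick illustration (a sketch, assuming the linkerd CLI is installed and kubectl points at the right cluster), bringing Linkerd into a Kubernetes cluster and meshing an existing deployment looks roughly like this:

# install the Linkerd control plane into the cluster
linkerd install | kubectl apply -f -
# inject the sidecar proxy into an existing deployment manifest
kubectl get deploy customer-service -o yaml | linkerd inject - | kubectl apply -f -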
8.2. Istio
If there is a service mesh everyone has heard of, it is very likely Istio.
It is a completely open source service mesh that layers transparently onto existing distributed applications. It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices. – https://istio.io/docs/concepts/what-is-istio/
Although Istio is mostly used with Kubernetes, it is in fact platform-independent. For example, as of now it can be run alongside Consul-based deployments (with or without Nomad).
The ecosystem around Istio is really flourishing. One notable community contribution is Kiali, which visualizes the service mesh topology and provides visibility into features like request routing, circuit breakers, request rates, latency and more.
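To illustrate the kind of request routing Kiali makes visible, here is a hedged sketch of splitting Customer Service traffic between two hypothetical versions; the v1/v2 subsets and the version labels are assumptions for illustration only, not part of the JCG Car Rentals manifests.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customer-service
spec:
  host: customer-service
  subsets:
  - name: v1
    labels:
      version: v1   # hypothetical pod label
  - name: v2
    labels:
      version: v2   # hypothetical pod label
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customer-service-routes
spec:
  hosts:
  - customer-service
  http:
  - route:
    - destination:
        host: customer-service
        subset: v1
      weight: 90    # 90% of the traffic stays on v1
    - destination:
        host: customer-service
        subset: v2
      weight: 10    # 10% is shifted to the v2 canary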
The need for a service mesh in the JCG Car Rentals platform is obvious, and we are going to deploy Istio to fill this gap. Here is a simplistic example of the Kubernetes deployment manifest for the Customer Service, using Istio and the previously built container image.
apiVersion: v1
kind: Service
metadata:
  name: customer-service
  labels:
    app: customer-service
spec:
  ports:
  - port: 18800
    name: http
  selector:
    app: customer-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customer-service
  template:
    metadata:
      labels:
        app: customer-service
    spec:
      containers:
      - name: customer-service
        image: jcg-car-rentals/customer-service:0.0.1-SNAPSHOT
        resources:
          requests:
            cpu: "200m"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 18800
        volumeMounts:
        - name: config-volume
          mountPath: /app/resources/META-INF/microprofile-config.properties
          subPath: microprofile-config.properties
      volumes:
      - name: config-volume
        configMap:
          name: customer-service-config
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: customer-service-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customer-service
spec:
  hosts:
  - "*"
  gateways:
  - customer-service-gateway
  http:
  - match:
    - uri:
        prefix: /api/customers
    route:
    - destination:
        host: customer-service
        port:
          number: 18800
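Assuming kubectl is configured against a cluster with Istio sidecar injection enabled for the target namespace, the manifest above is applied in the usual way (the file name here is just an assumption):

kubectl apply -f customer-service.yaml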
8.3. Consul Connect
As we know from the previous part of the tutorial, Consul started off as service discovery and configuration storage. One of the recent additions to Consul is the Connect feature, which allowed it to enter the service mesh space.
Consul Connect provides service-to-service connection authorization and encryption using mutual Transport Layer Security (TLS). Applications can use sidecar proxies in a service mesh configuration to automatically establish TLS connections for inbound and outbound connections without being aware of Connect at all. – https://www.consul.io/docs/connect/index.html
Consul already had the foundation every service mesh needs; adding the missing features was a logical step towards adapting to this fast-changing landscape.
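For a flavor of it, here is a minimal sketch of a Consul service definition that opts a hypothetical Customer Service registration into Connect with a managed sidecar proxy (the service name and port follow the examples above and are assumptions):

service {
  name = "customer-service"
  port = 18800

  connect {
    # ask Consul to manage a sidecar proxy for this service
    sidecar_service {}
  }
}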
8.4. SuperGloo
With quite a few service meshes available, it becomes really unclear which one is the best choice for your microservices, and how to deploy and operate it. If that is the problem you are facing right now, you may want to take a look at SuperGloo, the service mesh orchestration platform.
SuperGloo, an open-source project to manage and orchestrate service meshes at scale. SuperGloo is an opinionated abstraction layer that will simplify the installation, management, and operation of your service mesh, whether you use (or plan to use) a single mesh or multiple mesh technologies, on-site, in the cloud, or on any topology that best fits you. – https://supergloo.solo.io/
From the service mesh perspective, SuperGloo currently supports (to some extent) Istio, Consul Connect, Linkerd and AWS App Mesh.
On the same subject, the wider Service Mesh Interface (SMI) specification was announced recently, an ongoing initiative to align different service mesh implementations so that they can be used interchangeably.
9. Cloud
The industry-wide shift towards container-based deployments has forced cloud providers to come up with relevant offerings. As of today, every major player in the cloud business has a managed Kubernetes offering along with a service mesh.
9.1. Google Kubernetes Engine (GKE)
Since Kubernetes emerged from Google's experience of managing the world's largest computing clusters, it is only natural that Google Cloud has outstanding support for it. And that is really the case: Google Kubernetes Engine (GKE) is a fully managed Kubernetes platform hosted in the Google Cloud.
Kubernetes Engine is a managed, production-ready environment for deploying containerized applications. It brings our latest innovations in developer productivity, resource efficiency, automated operations, and open source flexibility to accelerate your time to market. – https://cloud.google.com/kubernetes-engine/
As for the service mesh, Google Cloud provides Istio support through the Istio on GKE add-on for Kubernetes Engine (currently in beta).
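As a quick illustration, provisioning a GKE cluster takes a single gcloud command (the cluster name, node count and zone are purely illustrative):

gcloud container clusters create jcg-car-rentals --num-nodes=3 --zone=us-central1-a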
9.2. Amazon Elastic Kubernetes Service (EKS)
For quite a while AWS has offered support for running containerized applications in the form of Amazon Elastic Container Service (ECS). But last year AWS announced the general availability of Amazon Elastic Kubernetes Service (EKS).
Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure. – https://aws.amazon.com/eks/
From the service mesh side, you are covered by AWS App Mesh, which can be used with Amazon Elastic Kubernetes Service. Under the hood it is powered by the Envoy service proxy.
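As a sketch, an EKS cluster can be provisioned in a single command with the community eksctl tool (the cluster name and region are assumptions):

eksctl create cluster --name jcg-car-rentals --region us-east-1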
9.3. Azure Kubernetes Service (AKS)
The Microsoft Azure Cloud followed an approach similar to AWS by offering Azure Container Service first (which, by the way, could be deployed with Kubernetes or Docker Swarm) and then deprecating it in favor of Azure Kubernetes Service (AKS).
The fully managed Azure Kubernetes Service (AKS) makes deploying and managing containerized applications easy. It offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance. – https://azure.microsoft.com/en-us/services/kubernetes-service/
Interestingly, as of this writing the Microsoft Azure Cloud does not bundle support for any service mesh with its Azure Kubernetes Service offering, but it is possible to install Istio components on AKS following a manual procedure.
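For completeness, a minimal AKS cluster can be provisioned with the Azure CLI along these lines (the resource group, cluster name and node count are illustrative):

az aks create --resource-group jcg-car-rentals --name jcg-aks --node-count 3 --generate-ssh-keys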
9.4. Rancher
It is very unlikely that your microservices fleet will be deployed in one single Kubernetes cluster. At the very least, you may have production and staging clusters, and these are better kept separate. If you care about your customers, you will probably think hard about high availability and disaster recovery, which essentially means multi-region or multi-cloud deployments. Managing many Kubernetes clusters across a wide range of environments can be cumbersome and difficult, unless you know about Rancher.
Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads. – https://rancher.com/what-is-rancher/overview/
By and large, Rancher becomes a single platform to operate all of your Kubernetes clusters, whether they are managed cloud offerings or even bare-metal servers.
10. Deployment and Orchestration – Conclusions
In this section of the tutorial we have talked about container-based deployments and orchestration. Even though there are a few options on the table, it is fair to say that Kubernetes is the de-facto choice nowadays, and for good reasons. While not strictly required, the presence of a service mesh is going to greatly relieve certain pains of operating microservices and let you focus on what is important for the business instead.
11. What’s next
In the next section of the tutorial we are going to talk about log management, consolidation and aggregation.