The Short History of CI/CD Tools
Jenkins (forked from Hudson after a dispute with Oracle) has been around for a long time and established itself as the leading platform for creating continuous integration (CI) and continuous delivery/deployment (CD) pipelines. The idea behind it is that we create jobs that perform certain operations like building, testing, and deploying, and chain those jobs together into a CI/CD pipeline. Its success was so great that other products followed its lead, giving us Bamboo, TeamCity, and others. They all used a similar logic of defining jobs and chaining them together. Operations, maintenance, monitoring, and the creation of jobs are mostly done through their UIs. However, none of the other products managed to displace Jenkins, thanks to its strong community support. There are over one thousand plugins, and one would have a hard time imagining a task that is not supported by at least one of them. The support, flexibility, and extensibility offered by Jenkins allowed it to maintain its reign as the most popular and widely used CI/CD tool throughout all this time. This approach, based on heavy usage of UIs, can be considered the first generation of CI/CD tools (even though there were others before).
With time, new products came into being and, with them, new approaches were born. Travis, CircleCI, and the like moved the process to the cloud and based themselves on auto-discovery and configurations, mostly YAML, that reside in the same repository as the code that should move through the pipeline. The idea was good and quite refreshing. Instead of defining your jobs in a centralized location, those tools would inspect your code and act depending on the type of the project. If, for example, they find a build.gradle file, they assume that your project should be tested and built with Gradle. As a result, they would run gradle check to test your code and, if the tests passed, follow it with gradle assemble to build the artifacts. We can consider those products the second generation of CI/CD tools.
The first and the second generation of tools suffer from different problems. Jenkins and the like offer power and flexibility that allow us to create custom-tailored pipelines that can handle almost any level of complexity. That power comes at a price. When you have tens of jobs, maintaining them is quite easy. However, when that number grows to hundreds, managing them becomes tedious and time-consuming.
Let’s say that an average pipeline has five jobs (building, pre-deployment testing, deployment to a staging environment, post-deployment testing, and deployment to production). In reality, there are often more than five jobs, but let’s keep the estimate optimistic. If we multiply those jobs by, let’s say, twenty pipelines belonging to twenty different projects, the total number reaches one hundred. Now, imagine that we need to change all those jobs from, let’s say, Maven to Gradle. We can choose to modify them through the Jenkins UI or be brave and apply the changes directly to the Jenkins XML files that represent those jobs. Either way, this seemingly simple change would require quite some dedication. Moreover, everything is centralized in one location, making it hard for teams to manage the jobs belonging to their own projects. Besides, project-specific configuration and code belong in the same repository as the rest of the application code, not in some central location. And Jenkins is not alone with this problem. Most of the other self-hosted tools have it as well. It comes from an era when heavy centralization and horizontal division of tasks were thought to be a good idea. At approximately the same time, we thought that UIs should solve most of our problems. Today, we know that many types of tasks are easier to define and maintain as code than through a UI.
I remember the days when Dreamweaver was big. That was around the end of the nineties and the beginning of the two thousands (bear in mind that at that time Dreamweaver was quite different from what it is today). It looked like a dream come true (hence the name?). I could create a whole web page with my mouse. Drag and drop a widget, select a few options, write a label, repeat. We could create things very fast. What was not so obvious at the time was that the result was a loan that would need to be repaid with interest. The code Dreamweaver created for us was anything but maintainable. As a matter of fact, it was sometimes easier to start over than to modify pages created with it. This was especially true when we had to do something not included in one of its widgets. It was a nightmare. Today, almost no one writes HTML and JavaScript using drag & drop tools. We write the code ourselves instead of relying on other tools to write it for us. That does not mean that GUIs are not used any more. They are, but for very specific purposes. A web designer might rely on drag & drop before passing the result to a coder. There are plenty of other examples. Oracle ESB, at least in its infancy, was similarly flawed. Drag & drop was not a thing to rely on (though it was good for sales).
What I’m trying to say is that different approaches belong to different contexts and types of tasks. Jenkins and similar tools benefit greatly from their UIs for monitoring and for visual representations of statuses. Where they fail is the creation and maintenance of jobs. That type of task is much better done through some kind of code. With Jenkins, we had the power, but we paid for it in the form of maintenance effort.
The “second generation” of CI/CD tools (Travis, CircleCI, and the like) reduced that maintenance problem to an almost negligible effort. In many cases, there is nothing to be done, since they will discover the type of the project and “do the right thing”. In other cases, we have to write a travis.yml, a circle.yml, or a similar file to give the tool additional instructions. Even then, such a file tends to contain only a few lines of specification and resides together with the code, making it easy for the project team to manage. However, these tools do not replace “the first generation”, since they tend to work well only on small projects with a very simple pipeline. A “real” continuous delivery/deployment pipeline is much more complex than what those tools are capable of handling. In other words, we gained low maintenance but lost the power and, in many cases, the flexibility.
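As an illustration, here is a minimal sketch of what such a file might look like for the Gradle example mentioned earlier. The explicit script section is an assumption made for clarity; Travis would infer similar defaults from the presence of a build.gradle file alone:

language: java
script:
  - ./gradlew check
  - ./gradlew assemble

A file this small is easy to review and version together with the rest of the project, which is precisely the appeal of this generation of tools.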
Today, old-timers like Jenkins, Bamboo, and TeamCity continue to dominate the market and are the recommended tools for anything but small projects, while cloud tools like Travis and CircleCI dominate smaller settings. At the same time, the team maintaining the Jenkins codebase recognized the need for a few important improvements that bring it to the next level (I’ll call it the “third generation” of CI/CD tools). They introduced Jenkins Workflow and the Jenkinsfile. Together, they bring some very useful and powerful features. With Jenkins Workflow, we can write a whole pipeline using a Groovy-based DSL. The process can be written as a single script that utilizes most of the existing Jenkins features. The result is a huge reduction in code (Workflow scripts are much smaller than traditional Jenkins job definitions in XML) and in the number of jobs (one Workflow job can substitute for many traditional Jenkins jobs), which makes management and maintenance much easier. The newly introduced Jenkinsfile, in turn, allows us to define the Workflow script inside the repository, together with the code. This means that the developers in charge of a project can be in control of its CI/CD pipeline as well. That way, responsibilities are much better divided: overall Jenkins management is centralized, while individual CI/CD pipelines are placed where they belong (together with the code that should move through them). Moreover, if we combine all that with the Multibranch Workflow job type, we can even fine-tune the pipeline depending on the branch. For example, we might have the full process defined in the Jenkinsfile residing in the master branch and shorter flows in each feature branch. What goes into each Jenkinsfile is up to those maintaining each repository/branch. With a Multibranch Workflow job, Jenkins will create a job whenever a new branch is created and run whatever is defined in the file. Similarly, it will remove jobs when branches are removed. Finally, Docker Workflow has been introduced as well, making Docker a first-class citizen in Jenkins.
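To make the idea more concrete, here is a minimal sketch of what a Jenkinsfile written in the Workflow Groovy DSL might look like, using the early Workflow syntax this article describes. The container image, the use of the Gradle wrapper, and the deploy.sh script are illustrative assumptions rather than anything prescribed by Jenkins:

node {
    stage 'Checkout'
    checkout scm                        // fetch the commit that contains this Jenkinsfile

    stage 'Test and build'
    docker.image('java:8').inside {     // Docker Workflow: run the steps inside a container
        sh './gradlew check'            // run the tests
        sh './gradlew assemble'         // build the artifacts
    }

    stage 'Deploy'
    sh './deploy.sh staging'            // hypothetical deployment script, not part of Jenkins
}

Because a script like this lives in the repository, each team can evolve its own pipeline without touching the central Jenkins configuration, and a Multibranch Workflow job will pick it up automatically for every branch.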
All those improvements brought Jenkins to a completely new level, confirming its supremacy among CI/CD platforms.
If even more is needed, there is the CloudBees Jenkins Platform – Enterprise Edition, which adds features that are especially useful when we need to run Jenkins at scale.
The DevOps 2.0 Toolkit
If you liked this article, you might be interested in the book The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices. Among many other subjects, it explores Jenkins Workflow, Multibranch Workflow, and the Jenkinsfile in much more detail.
This book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It’s about fast, reliable, and continuous deployments with zero downtime and the ability to roll back. It’s about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and centralized logging and monitoring of the cluster.
In other words, this book encompasses the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We’ll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We’ll go through many practices and even more tools.
Reference: The Short History of CI/CD Tools from our JCG partner Viktor Farcic at the Technology Conversations blog.