Continuous Deployment: Introduction
This article is part of the Continuous Integration, Delivery and Deployment series.
Continuous deployment is the ultimate culmination of software craftsmanship. Our skills need to be at such a high level that we have the confidence to continuously and automatically deploy our software to production. It is the natural evolution of continuous integration and delivery. We usually start with continuous integration: the software is built and tests are executed on every commit to the version control system. As we get better at the process, we move towards continuous delivery, with the process, and especially the tests, done so well that we have the confidence that any version of the software that has passed all validations can be deployed to production. We can release the software any time we want with the click of a button. Continuous deployment is accomplished when we get rid of that button and deploy every “green” build to production.
This article explores the goals of the final stage of continuous deployment: the deployment itself. It assumes that static analysis is performed; that unit, functional, and stress tests are run; that test code coverage is high enough; and that we have the confidence that our software performs as expected after every commit.
The goals of the deployment process are to:
- Run often
- Be automatic
- Be fast
- Provide zero downtime
- Provide the ability to roll back
Deploy Often
The software industry is constantly under pressure to provide better quality and to deliver faster, resulting in a shorter time to market. The more often we deliver without sacrificing quality, the sooner we reap the benefits of the features we delivered. The time between having a feature developed and having it deployed to production is time wasted. Ideally, we should have new features delivered as soon as they are pushed to the repository. In the old days, we were used to the waterfall model of delivering software. We would plan everything, develop everything, test everything and, finally, deploy the project to production. It was not uncommon for that process to take months or even years. Putting months of work into production often resulted in deployment hell, with many people working through the weekend to push an endless amount of new code onto production servers. More often than not, things did not work as initially planned. Since then, we have learned that there are great benefits to working on features instead of projects and to delivering them as soon as they are done. Delivering often means delivering as soon as each feature is developed.
Automate everything
Having tasks automated allows us to remove part of the “human error” as well as to do things faster. With BDD as a great way to define requirements and validate them as soon as features are pushed to the repository, and TDD as a way to drive development and continuously validate the code at the unit level, we can gain the confidence that the code performs as expected. The same automation should apply to deployment: from building artifacts, through provisioning environments, to the actual deployment. Building artifacts is a process that has worked well for quite some time. Whether it's Gradle or Maven for Java (more info can be found in the Java Build Tools article), SBT for Scala, Gulp or Grunt for JavaScript, or whichever other build tool or programming language you're using, hardly anyone these days has a problem building their software. Provisioning environments and deploying build artifacts, on the other hand, is a process that still leaves a lot to be desired. Provisioning tools like Chef and Puppet have not fully delivered on their promise; they are either unreliable or too complex to use. Docker provides a fresh approach (even though containers are not anything new). With it, we can easily deploy containers running our software. Even though some provisioning of the servers themselves might still be needed, the usage of tools like Chef and Puppet is (or will be) greatly reduced, if they are needed at all. With Docker, deploying fully functional software with everything it needs is as easy as executing a single command, as the sketch below illustrates.
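The image name, tag, and port in this sketch are hypothetical placeholders; the point is only that, once the image is built, the deployment itself comes down to one command:

```bash
# Minimal sketch of a Docker-based deployment; image name, tag, and port are
# illustrative placeholders.
docker build -t myorg/myapp:1.0.42 .      # package the software with everything it needs
docker push myorg/myapp:1.0.42            # publish the image to a registry
docker run -d --name myapp -p 8080:8080 myorg/myapp:1.0.42   # deploy: a single command on the target server
```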
Be fast
The key to continuous deployment is speed. If the process from checking out the code, through tests, to deployment takes a long time, the feedback we are hoping to get arrives slowly. The idea is to have feedback on commits as fast as possible so that, if there are problems, they can be fixed before we move on to the development of another feature. Context switching is expensive, and going back and forth is a big waste of time. The need for speed does not apply only to test run time but also to deployment: the sooner we deploy, the sooner we can start integration, regression, and stress tests. If those are not run on production servers, there will be another deployment once they are successful. At first glance, the difference between 5 and 10 minutes seems small, but things add up: checking out the code, running static analysis, executing unit, functional, integration, and stress tests, and finally deploying. Together, those can sum to hours while, in my opinion, a reasonable time for the whole process should be no more than 30 minutes. Run fast, fail fast, deploy quickly, and reap the benefits of having new features online. A timed pipeline, like the sketch below, makes it obvious where the minutes go.
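A minimal sketch of a pipeline script that fails fast and reports the duration of each stage; the Gradle tasks and the deploy script are illustrative assumptions, not something prescribed here:

```bash
#!/usr/bin/env bash
# Sketch of a timed commit-to-deploy pipeline. Substitute your own stage commands.
set -e  # fail fast: abort on the first failing stage

run_stage() {
  local name=$1; shift
  local start=$SECONDS
  "$@"                                    # execute the stage command
  echo "${name}: $((SECONDS - start))s"   # report how long the stage took
}

run_stage "static analysis" ./gradlew check
run_stage "unit tests"      ./gradlew test
run_stage "build artifact"  ./gradlew assemble
run_stage "deploy"          ./deploy.sh   # hypothetical deployment script
```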
Zero-downtime
Deploying continuously, or at least often, means that a zero-downtime policy is a must. If we deployed once a month or only several times a year, having the application unavailable for a short period of time might not be such a bad thing. However, if we are deploying once a day, or even many times a day, downtime, however short it might be, is unacceptable. There are different ways to reach zero downtime, with blue-green deployment being my favorite; a minimal version of that flow is sketched below.
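To make the idea concrete, here is a bare-bones blue-green sketch in shell. The container names, ports, health-check URL, and the nginx setup (an upstream block that includes /etc/nginx/upstream.conf) are all assumptions made for illustration:

```bash
#!/usr/bin/env bash
# Blue-green sketch: assumes "blue" is currently live and that nginx routes
# traffic through an upstream block that includes /etc/nginx/upstream.conf.
set -e

# start the new release ("green") next to the live one, on its own port
docker run -d --name myapp-green -p 8082:8080 myorg/myapp:1.0.43

# wait until the new instance answers its (hypothetical) health check
until curl -sf http://localhost:8082/health > /dev/null; do sleep 1; done

# point the proxy at the new color and reload without dropping connections
echo "server localhost:8082;" > /etc/nginx/upstream.conf
nginx -s reload

# retire the old color only after traffic has been switched
docker rm -f myapp-blue
```

Users never see the switch: requests keep flowing to “blue” until “green” is proven healthy, so there is no window in which the application is unavailable.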
Ability to roll back
No matter how many automated tests we put in place, there is always the possibility that something will go wrong. The option to roll back to the previous version can save us a lot of trouble when something unexpected happens. In fact, we can build our software in such a way that we can roll back to any previous version that passed all our tests. A rollback needs to be as easy as the push of a button, and it needs to be very fast, so that we can restore the expected functioning of the application with minimal negative effect. Downtime or incorrect behavior costs money and trust; the less time it lasts, the less money we lose. A sketch of such a rollback follows below.
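If every build that passed validation is kept as a tagged image, rolling back amounts to redeploying the previous tag. A minimal sketch, with placeholder names and tags:

```bash
# Rollback sketch: every validated version remains available as a tagged image,
# so rolling back means redeploying the previous known-good tag.
docker rm -f myapp                                           # remove the misbehaving release
docker run -d --name myapp -p 8080:8080 myorg/myapp:1.0.41   # redeploy the previous "green" build
```

In practice this would be combined with the blue-green switch shown earlier, so that the rollback itself causes no downtime.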
A major obstacle to accomplishing rollbacks is often the database. NoSQL databases tend to handle rollbacks with more grace. However, the reality is that relational databases are not going to disappear any time soon, and we need to get used to writing delta scripts in such a way that all changes are backward compatible; the sketch below shows one such change.
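As an example of what backward compatible means in practice, here is a sketch of an expand-only delta script; psql and the table and column names are illustrative assumptions:

```bash
# Apply a backward-compatible schema change: expand instead of renaming or
# dropping, so the previous application version still runs against the new schema.
psql "$DATABASE_URL" <<'SQL'
-- add the new column as nullable instead of renaming the old one
ALTER TABLE users ADD COLUMN email_address VARCHAR(255);
-- backfill it; the old column stays until no deployed version reads it
UPDATE users SET email_address = email WHERE email_address IS NULL;
SQL
```

Because the old column and the new one coexist, both the new release and the previous one can run against the same schema, which is exactly what a fast rollback requires.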
Summary
Continuous deployment sounds too risky, or even impossible, to many. Whether it is risky depends on the architecture of the software we are building. As a general rule, splitting the application into smaller, independent elements helps a lot; microservices are the way to go where possible. Risks aside, in many cases there is no business reason or willingness to adopt continuous deployment. Still, the software can be continuously deployed to test servers, thus becoming continuous delivery. Whether we are doing continuous deployment, delivery, integration, or none of those, having automatic and fast deployment with zero downtime and the ability to roll back provides great benefits. If for no other reason, then because it frees us to do more productive and beneficial tasks. We should design and code our software and let the machines do the rest for us.
The next article will continue where we left off and explore different strategies for deploying software.
Reference: Continuous Deployment: Introduction from our JCG partner Viktor Farcic at the Technology Conversations blog.