Deployment pipeline for dummies


Have you ever heard of a deployment or build pipeline? If not, you should definitely dig into the topic a little. It is one of the core concepts of the DevOps movement and will probably soon be common practice across the IT industry (if it is not already). If you would like to learn or refresh the concept of pipelines, this post is exactly for you.

A deployment pipeline is the way software is transformed, in an automated manner, from source code into an application running in production. You can picture it as a series of jobs (in other words, a list of steps) on your Continuous Integration server. Each job increases confidence in the software and brings it closer to release. The first stages are usually short and provide feedback as fast as possible, whereas the last ones can take quite a lot of time and may require human interaction. If any build in the pipeline fails, subsequent ones are not triggered. The key point here is automation: the whole process is fully automated and requires almost no human work (except confirmations and one-click deployment preparation).
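To make the idea concrete, here is a minimal sketch (in Python, independent of any particular CI server) of a pipeline as a sequence of jobs that stops at the first failure. The stage names and commands are purely illustrative assumptions, not tied to any real project:

```python
# Minimal sketch of the "series of jobs" idea; stage names and commands
# are illustrative placeholders, not a real build configuration.
import subprocess
import sys

STAGES = [
    ("commit", ["./gradlew", "test", "assemble"]),      # hypothetical build command
    ("acceptance", ["./run_acceptance_tests.sh"]),      # hypothetical script
    ("capacity", ["./run_capacity_tests.sh"]),          # hypothetical script
]

def run_pipeline():
    for name, command in STAGES:
        print(f"Running stage: {name}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # A failed stage stops the pipeline; later stages are not triggered.
            print(f"Stage '{name}' failed - aborting pipeline")
            sys.exit(result.returncode)
    print("All automated stages passed - ready for manual approval / release")

if __name__ == "__main__":
    run_pipeline()
```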

An average deployment pipeline consists of the following stages:

  1. commit stage – its main purpose is to provide fast feedback to developers and to prepare the application binaries. It usually checks out the new sources, compiles them, executes unit tests, runs some simple smoke tests, builds the application package ready for deployment, runs static code analysis tools and possibly verifies that there are no architecture breaches. This stage is usually very fast; it shouldn’t take more than 10 minutes. A successful run indicates that the application works correctly at the technical level (a sketch of such a stage follows this list). Here you can read more about handling unit tests.
  2. automated acceptance test stage – its main purpose is to assert that the system works at the functional and non-functional level. This stage gives us confidence that the application delivers value to the customer, in other words that it meets their needs and specifications. To achieve that, it usually deploys the full application (possibly several times) and walks through the most important and common user paths. This stage is significantly longer than the commit stage; it might even take hours. If you are wondering how to create a successful acceptance test suite, check out this post.
  3. manual test stage – also called user acceptance testing (UAT). This stage involves human verification of the assembled application. This is the time to perform exploratory testing, usability testing and showcases. It should involve only activities that cannot be automated and embedded in the automated acceptance test stage. It usually allows testers and other interested parties to deploy the application to a chosen environment with one click. Here the application is typically deployed to staging environments.
  4. capacity stage – its main purpose is to verify application performance. This stage can run in parallel with the manual test stage. While it can be almost fully automated, the outcome usually depends on a human decision as to whether the measured performance is acceptable or not.
  5. release stage – its aim is to deliver the application to the end users. Depending on the project, this might mean upgrading a production environment, shipping packaged software or auto-updating customer devices. Fully implemented Continuous Delivery might perform this stage automatically when all the previous ones have succeeded. However, as each release carries the risk of an upgrade, it is often a human decision whether to release or not. What matters is that the release is performed by an automated process that involves no manual activity other than pushing one button.
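As promised above, here is a sketch of what a commit stage might look like as a script. The build commands and the 10-minute budget are assumptions made for illustration; in practice the steps come from your own build tool and CI server:

```python
# Illustrative commit-stage script; the Gradle task names below are
# hypothetical placeholders for compile / test / smoke-test / package /
# static-analysis steps in a real project.
import subprocess
import time

COMMIT_STAGE_STEPS = [
    ("compile",         ["./gradlew", "compileJava"]),     # hypothetical
    ("unit tests",      ["./gradlew", "test"]),            # hypothetical
    ("smoke tests",     ["./gradlew", "smokeTest"]),       # hypothetical
    ("package",         ["./gradlew", "assemble"]),        # hypothetical
    ("static analysis", ["./gradlew", "checkstyleMain"]),  # hypothetical
]

TIME_BUDGET_SECONDS = 10 * 60  # fast feedback: aim for under 10 minutes

def run_commit_stage():
    start = time.monotonic()
    for name, command in COMMIT_STAGE_STEPS:
        if subprocess.run(command).returncode != 0:
            # Any failed step fails the whole commit stage.
            raise SystemExit(f"Commit stage failed at step: {name}")
    elapsed = time.monotonic() - start
    if elapsed > TIME_BUDGET_SECONDS:
        print(f"Warning: commit stage took {elapsed:.0f}s, above the 10-minute target")

if __name__ == "__main__":
    run_commit_stage()
```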

(Figure: the stages of a deployment pipeline)

This is the build pipeline concept in a nutshell. To get the bigger picture, it is worth mentioning several rules that such a sequence of builds should follow:

  • application binaries should be created once, during the commit stage. Each subsequent stage should reuse them, to ensure that the same software is deployed and tested. Compiling the sources on different machines might result in small differences which can have a huge impact on the application,
  • the application should always be deployed the same way. Developer, testing, staging and production environments should be created with the same mechanism, and the differences between them should be handled by configuration. Every environment should be as close to the production environment as is possible and in line with common sense,
  • always smoke test every created deployment. You could possibly embed a smoke test into the tool that prepares the environments. Its failure should fail the build immediately. Such checks increase your confidence in the build process and can save you a lot of wasted time (see the sketch after this list),
  • propagate the changes through the pipeline as fast as possible. The faster the feedback, the faster the response. It is much easier to fix a problem introduced a few minutes or a few hours ago than one introduced days or even weeks ago. Build results should be available as fast as possible,
  • any broken build within the pipeline is treated with the highest priority (except perhaps high-impact problems on production). Whenever such a situation occurs, the whole team is responsible for restoring the process to a correct state.
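The smoke test mentioned above can be as simple as hitting a health-check endpoint of the freshly deployed instance and failing the build if it does not answer. A minimal sketch, assuming a hypothetical `/health` endpoint and a default local URL:

```python
# Minimal smoke-test sketch: checks that a freshly deployed instance
# answers its health check. The /health endpoint and default URL are
# assumptions for the sake of the example.
import sys
import urllib.request

def smoke_test(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the deployment answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection errors, timeouts and HTTP errors all count as a failure.
        return False

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8080"
    if not smoke_test(url):
        # Fail the build immediately if the deployment does not respond.
        sys.exit(f"Smoke test failed for {url}")
    print(f"Smoke test passed for {url}")
```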

The gains from such an approach are enormous. Fully implemented, it provides you with a standard way to get your software from source code to a production environment with as little manual work as possible. You should be able to achieve a process that is rapid, repeatable and reliable. I encourage you to dig into the topic further!
