Tuesday, 31 May 2016

Set up a continuous delivery framework with Jenkins

This article introduces Jenkins and shows you how to set up a continuous delivery framework with it. The framework can automatically build and scan source code, install successful builds, run tests, and send out the results.

Introduction

The framework you set up for a continuous delivery process matters: it determines how efficient your DevOps practice is and what the process can accomplish.
This article contains information on Jenkins and demonstrates how to:

  • Set up the continuous delivery framework with Jenkins.
  • Apply that knowledge to implement the continuous delivery framework in your own projects.

Target audience

The intended audience for this article is software engineers who work on continuous delivery or continuous automation testing. To follow the steps in this article, you should understand:
  • Scripting development.
  • The software development process.

Jenkins overview

Jenkins is a continuous integration tool most often used for software development. It is an automation framework that runs repeated jobs. Jenkins can start and monitor commands on remote systems, and it can execute anything that can run from a command line. Through its plugins, Jenkins integrates with email, TestNG, and many other tools.

After installation (see sidebar if you haven't installed Jenkins yet), access Jenkins via your browser at http://yourjenkinsmasterhost:8080.

Jenkins supports a master/slave mode, in which the workload of building projects is delegated to multiple slave nodes. This allows a single Jenkins installation to host a large number of projects, or to provide different environments needed for builds and tests.

Get Jenkins
  • You need to have Java Runtime Environment (JRE) 1.6 or later
  • Download Jenkins.war
Launch Jenkins
  • Execute java -jar jenkins.war
       or
  • Deploy jenkins.war into a Tomcat container
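For example, to launch the standalone WAR on a different port, you can pass the --httpPort option (a minimal sketch; the port number is just an example):
# start Jenkins from the downloaded WAR on port 9090 instead of the default 8080
java -jar jenkins.war --httpPort=9090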

Set up and enable Jenkins

Before you can use Jenkins, you need to configure it. In this article you will learn how to set up master/slave, install plugins, configure projects and configure variables/properties.

Set up master and slave machines

First, install Jenkins on the master machine (Linux or Windows), then set up the slaves (Windows or Linux) with the help of the Jenkins master.

Jenkins has a built-in SSH client implementation that it uses to communicate with a remote sshd and start a slave agent. There are several ways for the master and slaves to communicate:
  • For UNIX slaves, via SSH. You only need sshd and a JRE on the slaves.
  • For Windows slaves, via the Distributed Component Object Model (DCOM).
  • Via a separate socket connection using Java Web Start, when the master cannot reach the slaves directly. (A launch example follows this list.)
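For the Java Web Start option, the slave agent can also be launched headlessly from the slave's own command line. A minimal sketch, assuming a node named build-slave-01 has already been defined on the master (the node name is hypothetical; slave.jar is served by the master itself):
# download the slave agent JAR from the Jenkins master
wget http://yourjenkinsmasterhost:8080/jnlpJars/slave.jar
# connect this machine to the master as the node "build-slave-01"
java -jar slave.jar -jnlpUrl http://yourjenkinsmasterhost:8080/computer/build-slave-01/slave-agent.jnlp
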
Jenkins master

The entry point for managing and configuring Jenkins and for running Jenkins jobs.

Jenkins slaves

The machines that run the jobs, managed by the master.
Slaves are needed when the master's workload is too heavy or when a job requires a different type of machine.

Install master on Linux

To install the master on a Linux machine, type the commands in Listing 1.
Listing 1. Install master on Linux
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins
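After installation, you can start the Jenkins service with the usual service commands. A minimal sketch for a Red Hat-style system (commands may differ on other distributions):
# start the Jenkins service installed by the RPM package
sudo service jenkins start
# optional: start Jenkins automatically at boot
sudo chkconfig jenkins on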

Install master on Windows

Type the command in Listing 2 to install the master on a Windows machine.
Listing 2. Install master on Windows
java -jar jenkins.war
After you run the command, access http://<hostname>:8080/, and then click Manage Jenkins > Install as Windows Service > Install.

Set up slaves

Access http://<JenkinsMasterHost>:8080/, then select Manage Jenkins > Manage Nodes > New Node and configure the slave information according to the slave host. The Jenkins master helps install Jenkins onto the slave machines.
For the Windows slave, there is an additional command:
Listing 3. Additional command, Windows slave
sc.exe create "<serviceKey>" start= auto binPath= "<path to jenkins-slave.exe>" 
DisplayName= "<service display name>"
<serviceKey> is the name of the registry key that defines the service, and <service display name> is the label that identifies the service in the service manager interface. A filled-in example follows.
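As a hedged illustration with the placeholders filled in (the service key, path, and display name below are hypothetical; substitute your own values):
sc.exe create "jenkinsslave" start= auto binPath= "C:\jenkins\jenkins-slave.exe" DisplayName= "Jenkins Slave Agent"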

Manage plugins

Plugins are another important feature in Jenkins. Currently, Jenkins supports more than 1000 plugins. You can divide these plugins into different categories (plugins for source management, for build reports, for build tools, etc.). With plugins, you can monitor, deploy, or configure different jobs in Jenkins.

To manage the plugins, go to http://<JenkinsMasterHost>:8080, then select Manage Jenkins > Manage Plugins. There are four tabs:
  • Updates: Installed plugins that have updates available
  • Available: Plugins available to install
  • Installed: Plugins already installed
  • Advanced: Upload a plugin manually or configure an HTTP proxy
Install the plugins via the Internet
When the Jenkins master can access the Internet, installing plugins is easy: on the Available tab, choose the plugins to install. To remove a plugin, go to the Installed tab and click Uninstall.
Install the plugins manually
You can install plugins manually if the Jenkins master can't access the Internet. Download the plugin that you want to install, save the *.hpi/*.jpi file into the $JENKINS_HOME/plugins directory, and restart Jenkins to enable the plugin.
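For example, on a Linux master (a minimal sketch; the plugin file name and service command are examples for a typical installation):
# copy the downloaded plugin archive into the Jenkins plugins directory
cp thinBackup.hpi $JENKINS_HOME/plugins/
# restart Jenkins so the new plugin is loaded
sudo service jenkins restart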

Jenkins projects

Jenkins supports four types of projects: free-style, Maven, multi-configuration, and external job. The free-style project is the central feature of Jenkins: it can combine any SCM with any build system. You can also use it for something other than a software build.

Configure projects

Go to http://<JenkinsMasterHost>:8080, select New Item, and specify the item name and type to create the project.
On the project configuration page, the item name is also called the project name. There are four item types, and you can also choose the Copy existing item option, as shown in Figure 1. Click OK to open the project configuration page.
Figure 1. Create new item
The information you'll need for the configuration page is:
  • Project name: If the project name is updated, the item name is also updated.
  • Description: The job description.
  • Strategy: The log strategy; how many logs to keep.
  • Parameterized: Defines the variables for the project. There are different types of variables (file parameter, text parameter, string parameter, etc.).
  • Where: Restricts where the project can run.
  • Advanced configuration: Further specifications that control how to build the project.
The plugins you choose to install affect which categories and functions you'll have in your project. Some categories and functions are:
  • Source code management: The tool that manages the source code.
  • Build triggers: The method to trigger a build.
  • Build: The most important part of a project; here you specify the exact steps to run. The common steps are DOS commands for Windows or shell commands for Linux. (A minimal example follows this list.)
  • Post-build actions: The actions after the build. The common actions after the build are: send e-mails, trigger other builds, or publish the results report.
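For instance, a minimal shell build step for a Linux project might look like the following (the build script name and flag are hypothetical; $WORKSPACE is set by Jenkins for every build):
# example "Execute shell" build step: build from the job's workspace
cd $WORKSPACE
./build.sh --target package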
After you complete the configuration, click Save. You'll find the saved project listed at http://<JenkinsMasterHost>:8080 under All.

Trigger projects

With Jenkins you can trigger project builds manually or automatically. There are different mechanisms to trigger a build. If you choose to trigger builds automatically, you define the option under Build Triggers when you configure the project. The options available, with examples following the list, are:
  • Build project after other projects are built: Choose this option if your project depends on other projects; this project builds after the projects it depends on are built.
  • Trigger builds remotely (e.g., from scripts): The project build is triggered from another system or host. For example, you can trigger the build via email or submit a build request from a script.
  • Build periodically: Create a schedule to build the project periodically, as defined in your configuration.
  • Poll SCM: This option builds the project when the source changes. With this option, you specify how often Jenkins polls the revision control system; if there are source code changes, the project builds.
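Two of these options take a little extra configuration: Build periodically and Poll SCM both use a cron-style schedule, and Trigger builds remotely exposes a URL that you can call with an authentication token. The snippets below are illustrative sketches (the job name and token are hypothetical):
# cron-style schedule for Build periodically or Poll SCM:
# check every 15 minutes (H spreads the load across the hour)
H/15 * * * *

# trigger a build remotely from a script, using the token configured on the job
curl "http://yourjenkinsmasterhost:8080/job/nightly-build/build?token=MYTOKEN"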
Figure 2. Options in Build Triggers

Project distribution

How work is delegated depends on the configuration of each project. If a project is configured with Restrict where this project can run, it runs only on the specified machine. Other projects can roam freely between slaves; it all depends on the configuration.
Currently, Jenkins employs these strategies to distribute the projects:
  • If a project is configured as Restrict where this project can run, it is only run there.
  • Jenkins tries to build a project on the same computer where it was previously built.
  • Jenkins tries to move long builds to slaves.
Note: A slave is a computer that is set up to offload build projects from the master. The distribution of tasks is fairly automatic.

Set up variables/properties

Global properties

You set up environment variables (defining the property name and value) and tool locations in the global properties: go to http://<JenkinsMasterHost>:8080, then select Manage Jenkins > Configure System. You can use these properties in all Jenkins projects.
Environment variables
You can refer to environment variables in projects. Select the Environment variables box, and define the name and the value for the variables, as shown in Figure 3.
Figure 3. Define environment variables
Tool Locations
Select the Tool Locations check box, select the tool name from the drop-down, and define the home directory for the tool. The tool can then be referenced in projects.
Figure 4. Define tool locations
Project local properties
Project local properties are only available within a project. When you configure a project, select the option This build is parameterized, as shown in Figure 5. Selecting this option lets you add parameters as name/value pairs, which you can use as the project's local properties. (An example of referencing these values follows Figure 5.)
Figure 5. Set local properties
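Once defined, global environment variables and project parameters can both be referenced from build steps like ordinary environment variables. A minimal sketch (TARGET_ENV and DEPLOY_DIR are hypothetical names defined as a parameter and a global property; BUILD_NUMBER is a Jenkins built-in):
# reference a Jenkins built-in variable, a project parameter, and a global property
echo "Deploying build ${BUILD_NUMBER} to ${TARGET_ENV}"
cp target/app.war ${DEPLOY_DIR}/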

Practice: Continuous delivery framework structure and process

Continuous delivery aims to ensure that software can be developed, tested, deployed, and finally delivered to production efficiently and with high quality. With continuous delivery, every change to any part of the software system (whether at the infrastructure, application, or customization-data level) is continuously applied to the production environment through a defined delivery pipeline.

The process

Continuous delivery requires quick and automatic deployment of change sets. Several steps complete a deployment, or delivery. The standard process is:
  • The developer delivers changes.
  • The build is made from source control.
  • Automated tests run.
  • The build is installed.
The continuous delivery process is shown in Figure 6. At the scheduled time, the first project in Jenkins starts.
Project 1
The first task in this project is to download the source code from a source management tool such as IBM® Rational Team Concert™. If the project fails, failure email notifications are sent and all other projects stop. If the project succeeds, the next project is triggered.
Project 2
IBM® Security AppScan® reviews the source code downloaded from Project 1 for security issues.
Project 3
The build starts after the AppScan project completes.
Project 4
After a successful build, the next project is to install that build in the Build Verification Test (BVT) environment.
Project 5
Run BVT test cases on the BVT environment. If the BVT passes, Jenkins starts to install the build on both development and test environments (Projects 6 and 7).
Project 6
Install the build on the development environments and send email notifications. Installing the build on the development environments prepares them so that developers can do their integration and development work.
Project 7
Install the build on the test environments. After the build is installed on the test environments, Jenkins triggers Project 8 to run the Functional Verification Test (FVT).
Project 8
FVT is an automated test suite that includes many tests. After the FVT passes, the build is installed on the production environments (Project 9).
Project 9
The production environments can run on your customer's local servers, in the cloud, or on IBM® SoftLayer®. Access is available to both internal and external users.
Figure 6 shows the continuous delivery framework process. After a project succeeds, the next project is triggered. If a project fails, the process ends and emails are sent to subscribers.
Figure 6. Process for the continuous delivery framework
flow diagram of the continuous delivery framework

Deployment topology for the continuous delivery framework

The left side of Figure 7 shows a traditional development deployment. A developer commits change sets to the source control server, for example Rational Team Concert (RTC), and then the build server makes the builds.
The right side of Figure 7 illustrates the continuous delivery process. After Jenkins is added, there is a Jenkins master with the Rational Team Concert build toolkit installed on it. A Rational Team Concert plugin is installed on the Jenkins master; it uses the build toolkit to download the source code from Rational Team Concert and also triggers the build toolkit to make the build. The AppScan and BVT projects also run on the Jenkins master. The development, test, and production environments all serve as Jenkins slaves: they are controlled by the Jenkins master, and they run the installation projects. The test environments run the functional verification test project.
Note:
It's easier to keep track of tasks if you tie the projects to the computers because the different machines have different roles.
Figure 7. Topology for the continuous delivery framework
flow of development and Jenkins

Summary

You now know how to set up a continuous delivery framework with Jenkins. This framework deploys work automatically, which saves developers and testers valuable time. The framework also helps you discover issues and defects early in the process.


Monday, 30 May 2016

Building pipelines by linking Jenkins jobs

Continuous integration servers have become a cornerstone of any professional development environment. By letting a machine integrate and build software, developers can focus on their tasks: fixing bugs and developing new features. With the emergence of trends such as continuous deployment and delivery, the continuous integration server is no longer limited to integrating your products; it has become a central piece of infrastructure.
However, organizing jobs on the CI server is not always easy.
This blog post describes a couple of strategies for creating dependent tasks with Jenkins (or Hudson).

On keeping things small

To make your continuous integration server really efficient for your team, you need to give feedback to your development team as fast as possible. After a commit (or a push), we all wait for a notification to make sure we didn’t introduce any obvious bugs (at some point, we’ve all forgotten to add a file to the SCM, or introduced a wrong import statement). However, builds tend to be long for enterprise applications; tests (both unit and smoke) can take hours to execute. So it’s imperative to reduce feedback time and, therefore, to divide massive jobs into small and fast units.
For example, a regular build process would generally take the following actions (for each module/component):
  • compile the code
  • run unit tests
  • package the code
  • run integration tests
  • extract quality metrics
  • generate reports and documentation
  • deploy the package on a server
It’s clear that carrying out all those tasks on a multi-module project can take a considerable length of time, leaving the development team waiting before they can switch to the next task. It’s not rare to see a Maven build taking (just for the compilation / test / packaging) up to 30 minutes.
Moreover, smaller jobs offer much more flexibility. For example, one can restart from a failing step without restarting the full build from scratch.

A true story : Restarting a test server

Recently, in one of our projects, we had to clean up and re-populate a test server. In the first version, a script was executing the following actions in one massive job:
  • Stop the server
  • Clean up the file system
  • Drop tables
  • Create tables and populate with test data
  • Start the server
The process was taking more or less 10 minutes, and in case of failure didn’t allow restarts from the failing step.
In a second version, each action was executed in its own job. However, jobs were not dependent on each other, which required job/wait cycles of 10 minutes.
Jobs to achieve our second scenario
Finally, we linked jobs together to trigger the whole process by simply starting the first job. To create those dependencies, we tried three approaches of which two were successful (I’ll let you guess about the third one).

One-to-One Relationship

The first approach was pretty simple: one job triggers another one after successful completion. So in our case:
Stop server 
    |-> Clean up filesystem 
           |-> Drop database 
                  |-> Create table and insert data 
                        |-> Start server
INFO: Pipeline Blog Post - Stop Server #3 main build action completed: SUCCESS
07.11.2011 15:12:20 hudson.model.Run run
INFO: Pipeline Blog Post - Cleanup #3 main build action completed: SUCCESS
07.11.2011 15:15:27 hudson.model.Run run
INFO: Pipeline Blog Post - Dropping tables #3 main build action completed: SUCCESS
07.11.2011 15:17:29 hudson.model.Run run
INFO: Pipeline Blog Post - Creating tables and Inserting data #3 main build action completed: SUCCESS
07.11.2011 15:22:31 hudson.model.Run run
INFO: Pipeline Blog Post - Start server #3 main build action completed: SUCCESS
To achieve these sorts of relationships with Jenkins, you can either configure a post-build action starting the next job (downstream) or configure the trigger of the current build (upstream). Both ways are totally equivalent.

Triggering a job after successful completion of the current job

This first way is probably the most intuitive. It consists of configuring the job triggered after the current job. In our scenario, the ‘stop server’ job triggers, on successful completion, the ‘cleanup’ job. To configure this dependency, we configure a post-build action in the Configure page of the ‘stop server’ job:
Trigger a build once the current job finishes

Starting the current job after completion of another build

With this second way, you configure which job triggers the current job. In our scenario, the ‘cleanup’ job is triggered after the ‘stop server’ job. So, in the Configure page of ‘cleanup’, in the Build trigger section, we select Build after other projects are built and specify the ‘stop server’ job:

Using this one-to-one dependency approach is probably the simplest way to orchestrate jobs on Jenkins. With such easy configuration, decomposing complex activities / build processes is very simple. You can restart the stream from any point, and get feedback after every success or failure.
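If you prefer to kick off the first job of such a chain from a script instead of the UI, the Jenkins CLI can do it. A minimal sketch (the host and job name are examples from this scenario):
# fetch the CLI JAR from the master and start the first job in the chain
wget http://yourjenkinsmasterhost:8080/jnlpJars/jenkins-cli.jar
java -jar jenkins-cli.jar -s http://yourjenkinsmasterhost:8080 build "Pipeline Blog Post - Stop Server"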

Über Job: the wrong good idea

One attempt to optimize the previous method was to create a kind of über job, triggering all other jobs. Unfortunately, even if it was a brilliant idea on paper, it doesn’t work. Indeed, there are two issues:
  • Even if a job fails, others are triggered
  • The job execution order is not deterministic
The first point is simple to explain: the jobs are not interconnected, so even if one fails, others are still executed. This can be really annoying if the initial requirement of a job is not set up correctly.
07.11.2011 15:32:51 hudson.model.Run run
INFO: Pipeline Blog Post - Uber Job #6 main build action completed: SUCCESS
07.11.2011 15:32:59 hudson.model.Run run
INFO: Pipeline Blog Post - Stop Server #5 main build action completed: SUCCESS
07.11.2011 15:33:01 hudson.model.Run run
INFO: Pipeline Blog Post - Cleanup #3 main build action completed: SUCCESS
07.11.2011 15:33:03 hudson.model.Run run
INFO: Pipeline Blog Post - Dropping tables #3 main build action completed: FAILURE
07.11.2011 15:33:05 hudson.model.Run run
INFO: Pipeline Blog Post - Creating tables and Inserting data #3 main build action completed: SUCCESS
07.11.2011 15:33:07 hudson.model.Run run
INFO: Pipeline Blog Post - Start server #3 main build action completed: SUCCESS
In this log, the ‘drop tables’ job failed. We would expect the whole scenario to come to a halt at this point, but unfortunately that’s not the case. When this happens, we can’t be sure of the resulting state on our restarted server.
The second point is trickier. Jenkins is built on an asynchronous model: jobs are scheduled and executed later. The order of execution can depend on many different parameters, so it is hard to predict. In our case, we have seen:
07.11.2011 15:32:51 hudson.model.Run run
INFO: Pipeline Blog Post - Pipeline #18 main build action completed: SUCCESS
07.11.2011 15:32:59 hudson.model.Run run
INFO: Pipeline Blog Post - Stop Server #15 main build action completed: SUCCESS
07.11.2011 15:33:01 hudson.model.Run run
INFO: Pipeline Blog Post - Cleanup #13 main build action completed: SUCCESS
07.11.2011 15:33:03 hudson.model.Run run
INFO: Pipeline Blog Post - Creating tables and Inserting data #13 main build action completed: SUCCESS
07.11.2011 15:33:05 hudson.model.Run run
INFO: Pipeline Blog Post - Dropping tables #13 main build action completed: SUCCESS
07.11.2011 15:33:07 hudson.model.Run run
INFO: Pipeline Blog Post - Start server #13 main build action completed: SUCCESS
You can see that the tables were dropped after the data was inserted. Well… I’ll let you guess the state of the server after this execution.
So, even if this method seemed to be a good idea, it’s actually a pretty bad idea if you want your process executed reliably.

A bit of optimization: Fork and Join

This last method uses a Jenkins plugin named ‘Join plugin’ (see the Join plugin page). In brief, this plugin allows you to configure fork/join patterns: once downstream projects are completed, other projects are triggered.
If you have jobs that can be run in any order, this plugin will reduce the amount of configuration you need. In our scenario, in the ‘stop server’ job, it can be used as follows:

So, the ‘stop server’ job triggers the ‘clean up’ and ‘drop table’ jobs. Once those (independent) jobs are completed, we trigger the data insertion and restart the server. In our experience, these two ‘join’ jobs were always executed in the right order (and on the same executor). But we recommend triggering only one join job with a one-to-one dependency:
         
            /-> Cleanup    -\
    Stop server              * ->  Insert data -> Start server
            \-> Drop table -/
This approach reduces the number of builds to configure, but should be used only if your jobs are independent.

A last tip

Especially in the one-to-one approach, the pipeline can become pretty long. Jenkins has a nice plugin to visualize the downstream builds: Downstream buildview plugin.
We recommend using this plugin to track the progress and visualize the result of complex/long pipelines.

Conclusion

Even if the advantages of splitting a long build process into small jobs are obvious, doing so may be more difficult than expected. This post has presented several ways to create dependencies between your jobs for Jenkins/Hudson.

Continuous delivery with Jenkins and SSH

Let’s imagine the following situation. You’re working on an application for a customer. Despite a firm deadline and a roadmap given to your customer, they’d like to check the progress regularly, say weekly or even daily, and actually give you feedback.
So to make everybody happy, you start releasing the application weekly. However, releasing is generally a hard-core process, even for the most automated processes, and requires human actions. As the release date approaches, stress increases. The tension reaches its climax one hour before the deadline when the release process fails, or worse, when the deployment fails. We’ve all been in situations like that, right?
This all-too-common nightmare can be largely avoided. This blog post presents a way to deal with the above situation using Jenkins, Nexus and SSH. The application is deployed continuously and without human intervention on a test environment, which can be checked and tested by the customer. Jenkins, a continuous integration server, is used as the orchestrator of the whole continuous delivery process.

The principles of Continuous Delivery

Continuous Delivery is basically a set of principles to automate the process of software delivery. The overall goal is to continuously deliver new versions of a software system.
It relies on automated testing, continuous integration and automated deployments. The application is packaged and deployed to test and production environments, resulting in the ability to rapidly, reliably and repeatedly push out enhancements and bug fixes to customers at low risk and with minimal manual overhead.
Continuous Delivery is based on the pipeline concept. A delivery pipeline is the path the code takes from the developer’s machine to the production environments, with stages defined in particular for testing and deployment.

A Simple Pipeline



Continuous delivery can be quite hard to set up for complex systems. We recommend starting with a simple configuration. Let’s take a simple web application deployed on an application server such as Tomcat or JBoss.

The journey of the source code from the developer machine to the test environment
The code of our application is hosted on a source code management (SCM) server. It can be Git, Subversion or anything else. It’s the entry point of our pipeline.
Our continuous integration server (Jenkins) pulls the code from the SCM. It builds and tests the application. If all tests are green, the application is packaged and deployed to an artifact repository (Nexus in our context). Then, Jenkins triggers the deployment process.
To achieve this, Jenkins connects to our host machine using SSH, and launches the deployment script. In this example, the deployment script is a simple shell script redeploying and restarting the application.
Finally, when the deployment is done, we check the availability of our web application.

Implementing the pipeline

The presented pipeline is quite simple, but works pretty well for most web / JavaEE applications. To implement it, we need a Jenkins with two specific plugins (the SSH plugin and the Groovy Plugin) and a host machine available through SSH.

Preparing Jenkins

The first thing you need to do is install the plugins in Jenkins. The Jenkins SSH plugin allows you to connect to the host machine by SSH and execute commands; the Hudson Groovy Builder plugin executes Groovy code. We use Groovy to check the deployment result, but you can use other options (unit tests, Nagios…).
Once those two plugins are installed, you need to configure the connection to the host machine. In the Global Configuration page of Jenkins, scroll down to the SSH Remote Host section and add a host. Enter the machine name or the IP address, and the credentials. You can also use a key file.
Add an SSH connection to Jenkins
Once done and saved, it’s time to create the Jenkins jobs supporting our continuous delivery process. To keep this example simple, we divide our process into two jobs:
  • The first job compiles, tests, builds and deploys the application. If successful, the new application archive is deployed on our Nexus repository.
  • The second job is triggered upon the success of the first job. It connects to the host machine and executes a shell script. Once done, it executes a simple Groovy script to check the deployment success. We use Groovy for its simplicity in making HTTP connections and retrieving the result; however, there are plenty of other ways.
The first job configuration is not specific to the continuous delivery pipeline. It’s generally a regular job deploying the artifacts to a Maven repository. So, a simple Maven Job executing mvn clean deploy upon a source code update is enough. If you want to divide this job into several steps, have a look at this post. In our example, we deploy the application to a Nexus repository.
The second job is more interesting. Create a new freestyle project job. This job is triggered upon successful execution of the first job. So select None as Source Code Management and indicate the previous job name in the Build after other projects are built option.
The build is started when the previous build succeeds
In the Build section, add a first build step, and select Execute shell script on remote host using ssh. Select your host, and add the script. You can also use a command directly, and execute a script already present on the host.
Launch the deployment script

Preparing the application host

At this point, we have a Jenkins job connecting to the host and executing some commands. To simplify, we focus only on the application deployment and not on the environment setup. So we consider that the host is ready to be used. The deployment script follows this basic pattern:

1) Stop the application
2) Retrieve the new application
3) Deploy the new application
4) Restart the application

First, if the application runs, it should be stopped (except if your application server supports hot-redeployment). Then, we retrieve the application package. This step consists of downloading the latest version of our application from a repository. Once downloaded, the application is deployed, so either copied to a deploy folder, or unpackaged to a specific location. Finally, we restart the application.
Steps 1) and 4) depend on your application server, but if you’re using Linux upstart scripts, it should be something like:
stop my_application
...
start my_application
Or if you’re using a service:
/etc/init.d/my_application stop
...
/etc/init.d/my_application start
To retrieve the latest version of our application, we rely on the Nexus REST API and a script to download the latest version. This script is available here; it should be present and made executable on your host (note: this script requires curl). With this script, getting the latest version of our application is quite simple:
...
download-artifact-from-nexus.sh \
 -a mycompany:myapplication:LATEST \
 -e war \
 -o /tmp/my_application.war \
 -r public-snapshots \
 -u username -p password
...
We first specify the Maven artifact to download, using the GroupId:ArtifactId:Version coordinates. We use LATEST as the version to download the latest version (a snapshot in our case). The -e parameter indicates the packaging type. Then we indicate the output file. The -r option specifies the Maven repository on which the artifact is hosted (check your Nexus configuration to find this value). The other options set the Nexus URL and the credentials.
Deploying the application (step 3) depends on your execution environment. It generally consists of copying the downloaded archive to a specific directory.
So, to sum up, the following script can be a valid deployment script for a web application packaged as a war file executed on a Tomcat server:
export WEBAPP=<path to the Tomcat webapps folder>
stop my_application
download-artifact-from-nexus.sh \
 -a mycompany:myapplication:LATEST \
 -e war \
 -o /tmp/my_application.war \
 -r public-snapshots \
 -u username -p password
cp /tmp/my_application.war $WEBAPP
start my_application
So, if everything is configured correctly, once you commit a change to your application, this change should be deployed immediately to your test environment.
First, your application is tested, built and deployed on a Nexus repository. Then, a second Jenkins job connects to the host machine and runs a deployment script. This script retrieves and deploys the latest version of the application.

Checking the deployment

An improvement you could make is to check whether the deployment was performed correctly. For that, you can use Groovy. In the second Jenkins job, add a new build step: Execute Groovy Script. In the text area, just do a simple check like:
Thread.sleep(Startup_time)            // wait until the application has actually started (placeholder)
def address = "Your_URL"              // URL of the deployed application (placeholder)
def url = address.toURL()
println "URL : ${url}"
def connection = url.openConnection()
println "Response Code/Message: ${connection.responseCode} / ${connection.responseMessage}"
assert connection.responseCode == 200 // fail the build if the application is not up
This simple Groovy script waits a couple of seconds (until your application is actually started), and connects to your application. If the application response is correct, then everything is fine. If not, the build is marked as failed, and you should have a look. Obviously, this simple script can be improved and adapted to your situation.
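If you prefer to keep everything in shell, an equivalent availability check can be done with curl in a shell build step. A sketch under the same assumptions (the URL and the wait time are placeholders):
# wait for the application to start, then fail the build if the HTTP status is not 200
sleep 30
status=$(curl -s -o /dev/null -w "%{http_code}" "http://yourhost:8080/myapp/")
echo "Response code: $status"
test "$status" -eq 200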

That’s it!

This blog post has presented a way to implement continuous delivery for applications using Jenkins, Nexus and a machine accessible using SSH. Obviously other combinations are possible.
Continuous delivery may be hard to achieve in one step, but as illustrated in this post, it can be set up pretty easily to continuously deploy an application to a testing environment.
Thanks to these principles, development becomes more reactive: changes are visible immediately. Moreover, errors and bugs are detected earlier.
It’s up to you to tune your pipeline to fit your needs. For instance, you might push the application to the test environment nightly instead of after every change.

akquinet tech@spree is now using continuous delivery principles in several projects. The results are really beneficial. The test campaigns have improved, and thanks to the pipeline, developers can focus on the development of new features and bug fixes while still seeing their changes immediately.