The new Jenkins pipeline integration in 2.0 is pretty awesome and a great way to add more automation to Jenkins. In this post we will set up Jenkins so that we can write our own custom libraries for use with the pipeline.
One huge benefit of the new pipeline integration into the core components of Jenkins is the idea of a Jenkinsfile, which enables Jenkins jobs to be executed automatically from the repo, similar to the way Travis CI works. Simply place a Jenkinsfile in the root of the repo you wish to automate, set up your webhook so that events in GitHub automatically trigger the Jenkins job, and let Jenkins take care of the build.
There are a few very good guides for getting this functionality set up.
Unfortunately, while these guides and docs are very informative and useful, they can be confusing to new users and gloss over a few important steps and details that took me longer than it should have to figure out and understand. The focus here will therefore be on some of the details the mentioned guides lack, especially setting up the built-in Jenkins Git repo for accessing and working with the custom pipeline libraries.
There are many great resources out there already for getting a Jenkins server up and running. For the purposes of this post it will be assumed that you have already created and set up a Jenkins instance, using one of the other DigitalOcean [tutorials](https://www.digitalocean.com/community/tutorials/?q=jenkins).
The first step to getting the pipeline set up, again assuming you have already installed Jenkins 2.0 from one of the guides linked above, is to enable the built-in Jenkins Git repo that will house the custom pipeline libraries we will write a little later on. There is a section in the workflow plugin tutorial that explains this, but it is one of the confusing areas and is buried in the documentation, so it will be covered in more detail below.
In order to clone, read, and write to the built-in Jenkins Git repo, you will need to add your SSH public key to the server. The first step is to configure the server per the SSH plugin, choosing a port to connect to so that the repo can be cloned. This setting can be found in Jenkins -> Configuration. In this example I used port 2222.
As always security should be a concern, so make sure you have authentication turned on. In this example, I am using GitHub authentication but the process will be similar for other authentication methods. Also make sure you use best practices in locking down any external access by hardening SSH or using other ways of blocking access.
After the Git server has been configured, you will need to add the public key for your user in Jenkins. This was another of the initially confusing parts: the section describing how to add your personal SSH public key is buried in the docs and easy to gloss over. In Jenkins, the key is added on your user's configuration page.
If you don’t know where your public SSH key is located, you can print it with the following command (assuming the default key path):

cat ~/.ssh/id_rsa.pub
Just copy that key and paste it into the SSH Public Keys field in the Jenkins user configuration.
Now that you have the built in Jenkins Git repo configured you can clone it via the instructions from the above tutorial.
git clone ssh://<jenkins_username>@jenkins.example.com:2222/workflowLibs.git
Notice that we are using port 2222. This is the port we configured via the SSH plugin from above. The port number can be any port you like, just make sure to keep it consistent in the configuration and here.
Working with the pipeline
With the pipeline library repo cloned, we can write a simple function and add it back into Jenkins. By default the repo is called workflowLibs. The basic structure of the repo is to place your custom functions in the vars directory. Within the vars directory you will create a .groovy file for the function and a matching .txt file for any documentation of the command you want to add. In our example, let's create a hello function.
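To make the layout concrete, here is a sketch of the directory structure inside the cloned repo (the clone location and file names are just this example's):

```shell
# Create the expected layout inside the workflowLibs clone
mkdir -p workflowLibs/vars && cd workflowLibs
touch vars/hello.groovy   # the function implementation
touch vars/hello.txt      # its in-UI documentation
find vars -type f | sort  # prints vars/hello.groovy then vars/hello.txt
```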
Create a hello.groovy and a hello.txt file. Inside the hello.groovy file, define a call method, which is what Jenkins invokes when the function is used by name:

def call() {
    echo 'Hello world!'
}
Go ahead and commit the hello.groovy file, don’t worry about the hello.txt file.
git add hello.groovy
git commit -m "Hello world example"
git push  # you may need to set your upstream first
Obviously this function won’t do much. Once you have pushed the new function you should be able to view it in the dropdown of pipeline functions in the Jenkins build configuration screen.
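Once it appears there, a pipeline script can invoke the function by its file name. A minimal sketch of what that call looks like in a scripted pipeline (the surrounding node block is an assumption about your job setup):

```groovy
node {
    // Invokes vars/hello.groovy from the workflowLibs repo
    hello()
}
```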
NOTE: There is a nice little script testing plugin built in as part of the pipeline now called snippet-generator. If you want to test out small scripts on your Jenkins server first before committing and pushing any local changes you can try out your script first in Jenkins. To do this open up any of your Jenkins job configurations and then click the link that says “Pipeline syntax” towards the bottom of the page.
Here’s what the snippet generator looks like.
From the snippet-generator you can build out quite a bit of the functionality that you might want to use as part of your pipeline.
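As an illustration, asking the generator for the sh step with a simple command produces a snippet along these lines (the command itself is just a placeholder):

```groovy
// Generated 'sh' step: runs a shell command on the build node
sh 'echo Hello from the pipeline'
```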
Configuring the job
There are a few different options for setting up Jenkins pipelines. The most common types are the Pipeline and the Multibranch Pipeline. The Pipeline implements all of the scripting capabilities that have been covered, so the power of the Groovy scripting language can be leveraged in the Jenkins job, along with things like application life cycles (via stages), which makes CI/CD much easier.
The Multibranch Pipeline augments the functionality of the Pipeline by adding the ability to index branches from SCM so that different branches can be easily built and tested. The Multibranch Pipeline is especially useful for larger projects with many developers and feature branches, because each branch can be tracked separately and developers don't need to worry as much about overlapping with others' work.
Taking the pipeline functionality one step further, it is possible to create a Jenkinsfile with all of the needed pipeline code inside a repo so that it can be built automatically. The Jenkinsfile is essentially a blueprint describing the how and what of a project's build process, and it can leverage any custom Groovy functions you have written that live on the Jenkins server.
Using the combination of a GitHub webhook and a Jenkinsfile in a Git repo, it is easy to tell your Jenkins server to kick off a build every time a commit or PR happens in GitHub.
Let’s take a look at what an example Jenkinsfile might look like.
node {
    stage('Checkout') {
        // Checkout logic goes here
    }
    stage('Build') {
        // Build logic goes here
    }
    stage('Test') {
        // Test logic goes here
    }
    stage('Deploy') {
        // Deploy logic goes here
    }
}
This Jenkinsfile defines various "stages", which run through the set of functions described in each stage every time a commit has been pushed or a PR has been opened for a given project. One workflow, as shown above, is to segment the job into checkout, build, test, and deploy stages. Different projects might require different stages, so it is nice to have granular control over what the job is doing on a per-repo basis.
Bonus: GitHub webhooks
Configuring webhooks in GitHub is pretty easy as well. Git is fairly standard these days for storing source code, and there are a number of different Git management tools, so the steps should be very similar if you are using a tool other than GitHub. A webhook in GitHub can be configured to trigger a Jenkins pipeline build when either a commit is pushed to a branch, like master, or a PR is created for a branch. The advantages of webhooks should be pretty apparent: builds are created automatically, and their results can be reported to communication channels like email, Slack, or a number of other chat tools. The webhooks are the last step in automating the new pipeline features.
If you haven’t already, you will need to enable the [GitHub plugin](https://wiki.jenkins-ci.org/display/JENKINS/GitHub+Plugin) in order to use GitHub webhooks. No extra configuration should be needed out of the box after installing the plugin.
To configure the webhook, first make sure there is a Jenkinsfile in the root directory of the project. After the Jenkinsfile is in place, navigate to the settings of the project you would like to create a webhook for and select 'Settings' -> 'Webhooks & services'. From there, there is a button to add a new webhook.
Change the Payload URL to point at the Jenkins server, update the Content type to application/x-www-form-urlencoded, and leave the secret section blank. All the other defaults should be fine.
After adding the webhook, create or update the associated job in Jenkins. Make sure the new job is configured as either a pipeline or multibranch pipeline type.
In the job configuration point Jenkins at the GitHub URL of the project.
Also make sure to select the build trigger to ‘Build when a change is pushed to GitHub’.
You may need to configure credentials if you are using private GitHub repos. This can be done in Jenkins by navigating to 'Manage Jenkins' -> 'Credentials' -> 'Global'. Then choose 'Add Credentials' and select the SSH key used in conjunction with GitHub. After the credentials have been set up, there should be an option when configuring jobs to use that SSH key to authenticate with GitHub.
Writing Jenkinsfiles and custom libraries can take a little time to get the hang of initially, but they are very powerful. If you already have experience writing Groovy, then writing these functions and files should be fairly straightforward.
The move towards pipelines brings a number of nice features. You can keep track of your Jenkins job definition simply by adding a Jenkinsfile to a repo, so you get all of the benefits of history and version tracking plus one central place to keep your build configurations. And because Groovy is such a flexible language, pipelines give developers and engineers more options and creativity in terms of what their build jobs can do.
One gotcha of this process is that there isn't a great workflow yet for working with the library functions, so there is a lot of trial and error in getting custom functionality working correctly. One good way to debug is to set up a test job and watch the console output for errors when you trigger a build. With the snippet generator script tester, though, this process has become much easier.
Another thing that can be tricky is the Groovy sandbox. It is mostly an annoyance, and I would not suggest turning it off; just be aware that it exists and often needs to be worked around.
There are many more features and things you can do with the Pipeline, so I encourage readers to go out and explore some of the possibilities; the docs linked above are a good place to start. As the pipeline matures, more and more plugins are adding the ability to be configured via the pipeline workflow, so if something isn't possible right now, it probably will be very soon.
Happy Jenkins pipelining!