Chris White
Posted on June 26, 2023
Shared libraries in Jenkins are a great way to organize the build process into modular components. The following repository contains the base code that will be used in this tutorial. We'll look at the basics of how shared libraries can be utilized.
Shared Library Preparation
Jenkins shared libraries follow a specific structure, shown below (taken from the wonderful shared libraries documentation):
(root)
+- src                     # Groovy source files
|   +- org
|       +- foo
|           +- Bar.groovy  # for org.foo.Bar class
+- vars
|   +- foo.groovy          # for global 'foo' variable
|   +- foo.txt             # help for 'foo' variable
+- resources               # resource files (external libraries only)
|   +- org
|       +- foo
|           +- bar.json    # static helper data for org.foo.Bar
src and resources are tied together, while vars can act more independently. For simple calls, vars is good enough; src lets you handle things in a more Java-class-like layout. To get things started I'll be creating a gitolite repo (setup details can be found here) to hold the shared libraries. Under conf/gitolite.conf in the gitolite-admin repository:
repo jenkins-shared-library
    RW+ = private_git_server
After checking out I'll create a sample directory structure like this:
.
└── vars
    └── buildTest.groovy

1 directory, 1 file
Where buildTest.groovy looks like this:
def call() {
    sh 'python3 --version'
}
As indicated by the extension, Jenkins shared libraries are primarily written in the Groovy language. It's a JVM-based language with a friendlier syntax for dealing with the script-like nature of Jenkins pipelines. Finally, I'll commit and push the changes to the repository.
Declarative vs Scripted Pipeline
In terms of shared libraries, scripted pipelines are more beneficial if you want to fully utilize them. Declarative pipelines can work with them; they just tend to be more verbose when written out than scripted pipelines, especially if your end goal is to use libraries to standardize the build process across a large number of Jenkinsfiles by keeping things compact. With this in mind I'll mostly be utilizing scripted pipelines, but will point out any gotchas as they come up.
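As a rough illustration of the verbosity difference, here is the same single-step build written both ways (a minimal sketch; the jenkins-agent label matches the agents used later in this article):

// Scripted: the whole pipeline is ordinary Groovy
node('jenkins-agent')
{
    stage('Build') {
        sh 'python3 --version'
    }
}

// Declarative: the same step wrapped in the pipeline/stages/steps structure
pipeline {
    agent {
        label 'jenkins-agent'
    }
    stages {
        stage('Build') {
            steps {
                sh 'python3 --version'
            }
        }
    }
}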
Defining and Using Shared Libraries
Now Jenkins by default doesn't know this shared library exists, so we have to set up the instance for that. First go to the Jenkins top page, then:
- Click on "Manage Jenkins"
- Click on "Configure System" under "System Configuration" towards the top
- Scroll down until you see "Global Pipeline Libraries"
- Click "Add"
- Enter an appropriate Name which will be used to identify the library for import purposes (JenkinsShared as an example)
- "Default Version" in this case is what git identifier will be used, in this case I'll put "main" which is my default HEAD identifier for the repo
- I'll ignore the other options for now and instead fill out "Project Repository" which in my case will be
git@gitserver:jenkins-shared-library
- For "Credentials" I'll select my appropriate git connection credentials
- Then I'll click "Save" at the bottom since there's nothing else to do
After coming back to the settings again, Default Version should show a banner under it that looks like:
"Currently maps to revision: 5e7125183cc7525ae1669ca721349277824dbdde"
showing that your repository mapping looks okay. Now it's time to actually use this code. I'll use this Jenkinsfile for that purpose:
@Library('JenkinsShared') _
node('jenkins-agent')
{
    buildTest()
}
Note: The _ is being used as a replacement for a long import statement, and if left out will cause a syntax error.
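For reference, the _ can be swapped for an actual import statement if you only need specific classes from the library. A sketch, assuming the library contained a class such as org.foo.Bar from the structure shown earlier:

// Annotate an import directly instead of using the underscore placeholder
@Library('JenkinsShared') import org.foo.Bar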
The @Library('JenkinsShared') _ line pulls in the library for use, and buildTest() calls the actual code in the library. I'll now commit and push this code, which will trigger the build via gitolite hooks:
> git fetch --no-tags --force --progress -- git@gitserver:jenkins-shared-library +refs/heads/*:refs/remotes/origin/* # timeout=10
Checking out Revision 5e7125183cc7525ae1669ca721349277824dbdde (main)
> git config core.sparsecheckout # timeout=10
> git checkout -f 5e7125183cc7525ae1669ca721349277824dbdde # timeout=10
Commit message: "Initial Commit"
> git rev-list --no-walk 5e7125183cc7525ae1669ca721349277824dbdde # timeout=10
<snip>
Checking out Revision a2b59c5987014dbf40fbda6e1be19dc3ca68b7e9 (master)
> /usr/bin/git config core.sparsecheckout # timeout=10
> /usr/bin/git checkout -f a2b59c5987014dbf40fbda6e1be19dc3ca68b7e9 # timeout=10
Commit message: "Fix steps missing"
> /usr/bin/git rev-list --no-walk 9d4dd034d22e307970102045f6de7c71c6b507c5 # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] node
Running on Jenkins Agent-0007fndg4vl6o on docker in /home/jenkins/agent/workspace/GitoliteTest_master
[Pipeline] {
[Pipeline] sh
+ python3 --version
Python 3.9.2
The two important things here are that Jenkins is pulling the shared library from its repo using the latest version, and that the appropriate code is being executed from the shared library. Now this kind of library import method is good if you want more control over the version of the library you use, or want to be more specific with imports. If you're just using the default, though, the imports are less valuable. It turns out that Jenkins has an option in the Global Pipeline Libraries config section called "Load implicitly", which as the help describes: "If checked, scripts will automatically have access to this library without needing to request it via @Library". So after selecting this option, the @Library header can be removed and everything still works:
node('jenkins-agent')
{
    buildTest()
}
Working With vars
The vars directory holds what are often known as global variables. These provide a more imperative experience when working with pipelines and are the most straightforward to work with. There are two ways you can work with vars: as an encapsulation of methods, or as calls. Looking at the first method:
vars/python.groovy (shared libs)
def checkoutCode() {
    checkout scm
}

def poetryInstall() {
    sh '''
    pip install poetry
    ~/.local/bin/poetry install
    '''
}

def pytestRun() {
    sh '~/.local/bin/poetry run python -m pytest'
}
Jenkinsfile (project)
node('jenkins-agent')
{
    python.checkoutCode()
    python.poetryInstall()
    python.pytestRun()
}
In this case, python represents an encapsulation of poetryInstall and pytestRun. The one issue for declarative pipelines is that these calls must be wrapped in a script{} block to work properly:
pipeline {
    agent {
        label 'jenkins-agent'
    }
    stages {
        stage('Build') {
            steps {
                script {
                    python.poetryInstall()
                }
            }
        }
        stage('Test') {
            steps {
                script {
                    python.pytestRun()
                }
            }
        }
    }
}
That said, declarative pipelines don't require the checkoutCode() portion to work properly, as that's done by default. The advantage here is that this method allows for a more namespace-like format if you want that level of explicit declaration. On the other hand, it also requires slightly more work on the Jenkinsfile side. Now let's take a look at an alternative:
vars/pythonBuild.groovy (shared libs)
def call() {
    checkout scm
    sh '''
    pip install poetry
    ~/.local/bin/poetry install
    ~/.local/bin/poetry run python -m pytest
    '''
}
Note: Realistically poetry should be part of the docker agent image, but explaining that would have complicated the article a bit, so I decided to simply use this workaround.
Jenkinsfile
node('jenkins-agent')
{
    pythonBuild()
}
In this case the step separation is missing (something we'll get into fixing later), and the install and test steps are no longer broken out in the Jenkinsfile. The script{} block is also no longer needed when using declarative pipelines:
pipeline {
    agent {
        label 'jenkins-agent'
    }
    stages {
        stage('Build') {
            steps {
                pythonBuild()
            }
        }
    }
}
This makes things a lot more compact. We can also separate out the install and build process on the shared libs side:
def call() {
    checkoutCode()
    installPoetry()
    runTests()
}

def checkoutCode() {
    checkout scm
}

def installPoetry() {
    sh '''
    pip install poetry
    ~/.local/bin/poetry install
    '''
}

def runTests() {
    sh '~/.local/bin/poetry run python -m pytest'
}
This is a lot cleaner on the organization side.
src Libraries
Another method is a more class-based style, similar to how traditional Java imports work. It also allows multiple instances of a class along with inheritance, versus the singleton nature of the vars solution. Classes not tied to pipeline steps are rather simple:
src/org/foo/Human.groovy
package org.foo

class Human {
    String name
}
Jenkinsfile
node('jenkins-agent')
{
    human = new org.foo.Human()
    human.setName('John Smith')
    sh "echo ${human.name}"
}
Sample output of the result:
[Pipeline] {
[Pipeline] sh
+ echo John Smith
John Smith
Now if I try to abstract out the sh line to the library:
src/org/foo/Human.groovy
package org.foo

class Human {
    String name

    void sayName() {
        sh "echo ${name}"
    }
}
Jenkinsfile
node('jenkins-agent')
{
    human = new org.foo.Human()
    human.setName('John Smith')
    human.sayName()
}
I'll get this nasty error:
Also: org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: de622af5-c724-46ba-af61-e3a70a709322
hudson.remoting.ProxyException: groovy.lang.MissingMethodException: No signature of method: org.foo.Human.sh() is applicable for argument types: (java.lang.String) values: [$name]
This is because the class as-is isn't aware of steps, and thinks you're trying to call an sh method of the Human class. To fix this, we'll change the class as follows:
package org.foo

class Human implements Serializable {
    def steps
    String name

    Human(steps) { this.steps = steps }

    def sayName() {
        steps.sh "echo ${name}"
    }
}
Serialization is a method of turning an object into something that can be written externally to a text file or passed over the network, and still reproduce the object in question. Jenkins needs this because it needs the ability to suspend and resume tasks, which a serializable form of an object enables. From there, the pipeline script's this is passed into the constructor, giving the class access to the script environment and therefore the steps. This roundabout way of doing things is again why I recommend using the global variable method as much as possible (it's also easier to reason about if you're used to CircleCI/GitHub Actions workflows).
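For completeness, here's a minimal sketch of the matching Jenkinsfile change; the pipeline script itself (this) is what gets handed to the constructor:

node('jenkins-agent')
{
    // 'this' is the pipeline script, which owns steps like sh
    human = new org.foo.Human(this)
    human.setName('John Smith')
    human.sayName()
}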
Environment Variables
Environment variables contain a wealth of information about several useful properties of the system. For example:
- A user's home directory on the local system
- A custom-set environment value
- The name of a git branch that triggered a build
These variables also can come from several sources:
- OS level
- Project level environment variables
- Cloud/node agent level environment variables
- Global environment variables
- Environment variables defined via withEnv()
- Environment variables exposed via Jenkins plugins
This means that which environment variables are available to you depends on the Jenkins setup and other factors. You may need additional plugins installed as well. Accessing an environment variable is fairly simple:
sh "echo ${env.HOME}"
You can also look up an environment variable by a key that's only known at runtime:
def myEnvVariable = env."${someKeyHere}"
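If a variable may be unset, Groovy's Elvis operator makes a fallback easy (a small sketch; MY_SETTING is a hypothetical variable name):

// Fall back to a default when the variable isn't defined in this context
def mySetting = env.MY_SETTING ?: 'default-value'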
Using withEnv is also very useful, for example in the situation with poetry's PATH:
newPath = env.PATH + ':' + env.HOME + '/.local/bin'
withEnv(["PATH=${newPath}"]) {
    sh '''
    pip install poetry
    poetry install
    '''
}
Here PATH has the home directory (/home/jenkins in this case), which comes from an OS-level variable, followed by /.local/bin, where poetry lies. Now poetry can be used as-is without needing to prepend the path. As withEnv takes a list of strings, you can set multiple variables if you want (though that might be better handled at the project/global/agent level declaration).
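As a quick sketch of the multiple-variable case (PIP_DISABLE_PIP_VERSION_CHECK is a standard pip setting; whether you want it is project-specific):

newPath = env.PATH + ':' + env.HOME + '/.local/bin'
// withEnv accepts any number of KEY=value strings
withEnv(["PATH=${newPath}", 'PIP_DISABLE_PIP_VERSION_CHECK=1']) {
    sh 'poetry --version'
}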
Steps and Stages
It's possible with scripted pipelines to declare stages within shared libraries. This is useful for checking the overall build to see how long things are taking, and it makes logs easier to work with. Without stages, the stage view has nothing meaningful to show, which isn't very useful. Stage declaration is pretty straightforward though, much like how a standard Jenkins pipeline would define it. For example:
vars/pythonBuild.groovy
def call() {
    stage('Checkout') {
        checkoutCode()
    }
    newPath = env.PATH + ':' + env.HOME + '/.local/bin'
    withEnv(["PATH=${newPath}"]) {
        stage('Install Poetry') {
            installPoetry()
        }
        stage('Run Tests') {
            runTests()
        }
    }
}

def installPoetry() {
    sh '''
    pip install poetry
    poetry install
    '''
}

def checkoutCode() {
    checkout scm
}

def runTests() {
    sh 'poetry run python -m pytest'
}
Here different stages are declared for checking out the code, installing poetry, and running tests. This gives a nice visual overview of the different processes in the stage view. Logs also get broken up, so there's less to search through.
While it's possible to return an entire declarative pipeline for handling steps, the result can be a bit noisy, and you still have to do it at the level of what's declaring the pipeline. You can also only declare a pipeline once. With that in mind, it's recommended to stick with scripted pipelines if you plan to declare stages from shared libraries.
An interesting feature is the ability to have a custom step-like setup. As an example:
vars/poetry.groovy
def call(Closure body) {
    newPath = env.PATH + ':' + env.HOME + '/.local/bin'
    withEnv(["PATH=${newPath}"]) {
        sh '''
        pip install poetry
        poetry install
        '''
        body()
    }
}
So a few things are going on here. First is the Closure body being passed to the call; the closure passed in will be executed when body() is called. From there I add poetry's binary location to PATH using withEnv. This means poetry no longer needs to be run via ~/.local/bin/poetry, including within the Closure passed in. Before the closure is called, though, a poetry installation and a poetry install run are executed, so the closure statements won't have to worry about that. As an example use:
vars/pythonBuild.groovy
def call() {
    checkoutCode()
    poetry {
        runTests()
    }
}

def checkoutCode() {
    checkout scm
}

def runTests() {
    sh 'poetry run python -m pytest'
}
In this case the Closure that will be passed in to the poetry.groovy call is:
{
    runTests()
}
You'll notice runTests runs without needing the ~/.local/bin/poetry call. Poetry's installation is also abstracted away by the poetry block's backend code.
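This pattern can also be extended so the wrapper step takes arguments alongside the closure. A sketch, where the pythonVersion key is purely hypothetical and only echoed to show the plumbing:

// vars/poetry.groovy variant accepting a config map in front of the closure
def call(Map config = [:], Closure body) {
    newPath = env.PATH + ':' + env.HOME + '/.local/bin'
    withEnv(["PATH=${newPath}"]) {
        sh '''
        pip install poetry
        poetry install
        '''
        // Hypothetical knob to show how config values could steer the setup
        if (config.pythonVersion) {
            echo "Requested Python version: ${config.pythonVersion}"
        }
        body()
    }
}

Groovy's named arguments then allow a call such as poetry(pythonVersion: '3.9') { runTests() }.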
Parameters
Global variables declared via the call syntax are also able to utilize parameters. For example:
vars/pythonBuild.groovy
def call(boolean shouldRunTests) {
    stage('Checkout') {
        checkoutCode()
    }
    newPath = env.PATH + ':' + env.HOME + '/.local/bin'
    withEnv(["PATH=${newPath}"]) {
        stage('Install Poetry') {
            installPoetry()
        }
        if ( shouldRunTests ) {
            stage('Run Tests') {
                runTests()
            }
        }
    }
}

def installPoetry() {
    sh '''
    pip install poetry
    poetry install
    '''
}

def checkoutCode() {
    checkout scm
}

def runTests() {
    sh 'poetry run python -m pytest'
}
Jenkinsfile
node('jenkins-agent')
{
    pythonBuild(false)
}
In this case, a boolean parameter is available to indicate whether or not tests should be run. This might be useful for breaking out unit tests from trunk branches. The stage view shows that the "Run Tests" stage is clearly skipped.
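As one hedged example of wiring that flag up: in a multibranch setup the branch name is exposed via env.BRANCH_NAME, so a Jenkinsfile could enable tests everywhere but trunk:

node('jenkins-agent')
{
    // Run tests on every branch except trunk, as one possible policy
    pythonBuild(env.BRANCH_NAME != 'main')
}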
It's also not uncommon for parameters to be passed in as a map. This way validation logic can check it for required parameters and set default ones. For example:
vars/pythonBuild.groovy
def call(Map config = [:]) {
    validateConfig(config)
    stage('Checkout') {
        checkoutCode()
    }
    newPath = env.PATH + ':' + env.HOME + '/.local/bin'
    withEnv(["PATH=${newPath}"]) {
        stage('Install Poetry') {
            installPoetry()
        }
        if ( config.runTests ) {
            stage('Run Tests') {
                runTests()
            }
        }
    }
}

def validateConfig(Map config) {
    requiredKeys = ['runTests']
    for ( requiredKey in requiredKeys ) {
        if ( ! config.containsKey(requiredKey) ) {
            throw new Exception("Missing required key: ${requiredKey}")
        }
    }
}

def installPoetry() {
    sh '''
    pip install poetry
    poetry install
    '''
}

def checkoutCode() {
    checkout scm
}

def runTests() {
    sh 'poetry run python -m pytest'
}
Jenkinsfile
node('jenkins-agent')
{
    pythonBuild()
}
I now have a check to ensure that a map is passed in with runTests defined. If I run this without any parameters:
[Pipeline] End of Pipeline
Also: org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 533cc6df-bcde-47ff-91d5-da8b1537eca4
java.lang.Exception: Missing required key: runTests
Providing the parameter will make this work:
node('jenkins-agent')
{
    pythonBuild(['runTests': true])
}
I can also do it without providing the map by having validation support a default for the parameter:
vars/pythonBuild.groovy
def call(Map config = [:]) {
    config = validateConfig(config)
    stage('Checkout') {
        checkoutCode()
    }
    newPath = env.PATH + ':' + env.HOME + '/.local/bin'
    withEnv(["PATH=${newPath}"]) {
        stage('Install Poetry') {
            installPoetry()
        }
        if ( config.runTests ) {
            stage('Run Tests') {
                runTests()
            }
        }
    }
}

def validateConfig(Map config) {
    requiredKeys = ['runTests']
    defaultKeys = ['runTests': true]
    for ( requiredKey in requiredKeys ) {
        if ( ! config.containsKey(requiredKey) ) {
            if ( defaultKeys.containsKey(requiredKey) ) {
                config[requiredKey] = defaultKeys[requiredKey]
            }
            else {
                throw new Exception("Missing required key: ${requiredKey}")
            }
        }
    }
    return config
}

def installPoetry() {
    sh '''
    pip install poetry
    poetry install
    '''
}

def checkoutCode() {
    checkout scm
}

def runTests() {
    sh 'poetry run python -m pytest'
}
This now allows pythonBuild() to be called without an error, as the defaults will set runTests to true. The Map config = [:] signature sets an empty default for the map so we don't have to check whether a map was passed in at all.
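As an aside, when every required key has a default, Groovy's map addition can replace the validation loop entirely; entries on the right-hand side win, so caller-supplied values override the defaults. A minimal sketch:

def call(Map config = [:]) {
    // Defaults first, caller values second: caller entries override defaults
    config = [runTests: true] + config
    echo "runTests resolved to: ${config.runTests}"
}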
Resources
Resources are fairly simple, so this section will be brief. Such resources are stored in the resources folder of the shared library root. They can then be accessed via the libraryResource call, which returns the resource's contents as text. As an example:
resources/util/shell/test.sh
echo "Hello World"
echo "Hello Again"
vars/buildTest.groovy
def call() {
    sh libraryResource('util/shell/test.sh')
}
Jenkinsfile
node('jenkins-agent')
{
    buildTest()
}
When called, this will print out both "Hello World" and "Hello Again". This is also very useful when combined with the Pipeline Utility Steps plugin, which works with several file formats. As an example with JSON:
resources/util/json/test.json
{
    "foo": "bar"
}
vars/buildTest.groovy
def call() {
    def result = readJSON text: libraryResource('util/json/test.json')
    sh "echo ${result.foo}"
}
Conclusion
This concludes a look into the power of Jenkins shared libraries in managing code. I recommend working with scripted pipelines for ease of use and flexibility with stage declarations, and global variables to handle the backend code. You can also find a summary of some of these points for quick reference, as well as the Jenkins documentation page for shared libraries.