Tai Kedzierski
Posted on December 22, 2020
I've been banging my head against Jenkins pipelines over the last few days - the more I try to set up a simple example of re-usable code, the more I feel that, in this respect, Jenkins Pipelines and Groovy are conspiring to prevent you from practicing clean coding and proper IaC.
Base Note
Note that if you create a pipeline job in Jenkins and point it at a main.groovy file, it will run it in the Groovy interpreter. So the following:
// File ./main.groovy
println "Hello world!"
... is a valid file for the Jenkins pipeline definition. It will just print its message and exit as a successful build.
For all intents and purposes, the only requirement on the file is that it be a Groovy script.
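As a quick illustration (a sketch of my own, relying on the env and currentBuild bindings that Jenkins injects into pipeline scripts), even something like this runs as a "pipeline":
// File ./main.groovy - still just a Groovy script, with Jenkins-provided bindings
println "Running ${env.JOB_NAME} build #${env.BUILD_NUMBER}"
currentBuild.description = "Just saying hello"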
Then comes the fun part.
Imports
In any sane programming language, importing a file in the same folder as your script should be easy and obvious, especially if the language purports to be a scripting language.
In plain Groovy, this is actually not too hard - you just need to remember that it is kin with Java, and some prerequisites need to be taken care of. The following works fine:
// File ./main.groovy
import zoo.Cat
kitty = new Cat()
kitty.sound()
// File ./zoo/Animal.groovy
package zoo
def sound() { println this.my_sound ; }
// File ./zoo/Cat.groovy
package zoo
import zoo.Animal
class Cat extends Animal { // Animal is implicitly a class!
    def my_sound = "meow"
}
If you run groovy main.groovy, the import is successful.
Try running that in a pipeline job, and it will fail, complaining that it cannot resolve import zoo.Cat. The reason for this, of course, is that the CLASSPATH is that of the parent environment - which has no foreknowledge of our dynamically loaded (from the running process's point of view) Groovy script.
So that avenue is fscked.
Parse. Evaluate. Fail.
For the simple question "how do I import another Groovy file?" there are myriad insane suggestions on StackOverflow, most of which I chose to ignore because they're patently bat ship crazy (or the platform is pathologically senseless). Some such workaround is necessary, however, to get around the classpath issue.
There are two that seem reasonable enough. The first I came across was this:
GroovyShell shell = new GroovyShell()
def script = shell.parse(new File('zoo/Cat.groovy')) // compile the file into a Script object
script.method() // then call whatever methods it defines
I am sad to report that this does not work in Jenkins, because it nerfs the use of GroovyShell. On the one hand, this is probably a security feature to prevent running arbitrary code on your Jenkins instances; on the other hand, what IS our code if not arbitrary, from Jenkins's point of view? And to be perfectly honest, I could very easily just run curl $url | bash as a step and bypass Jenkins's security restrictions altogether. So this definitely feels like a nerfing.
On my setup, I do get a message saying the restriction can be "approved" by administrators (rights which I do have), but to no avail.
A similar situation arises with the other tentative solution, which I would have hoped would be the "standard" way of doing things:
evaluate(new File('zoo/Cat.groovy'))
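(For reference, in plain Groovy this would be used something like the sketch below - assuming a script-style Cat.groovy that ends in return this, as in the load() example further down, rather than the class-style one above.)
def kitty = evaluate(new File('zoo/Cat.groovy')) // runs the file and returns its return value
kitty.meow()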
Needless to say, these are failed routes as well, and I'm really starting to get cheesed off, and that's rude to the cheese.
Load? Load where?
In the Jenkins Groovy environment, there is a load() function that lets you do something similar to an import. This can work:
// File ./main.groovy
node('') {
    stage("Load a file") {
        kitty = load("zoo/Cat.groovy")
        kitty.meow()
    }
}
// File ./zoo/Cat.groovy
def meow() {
    println "Miaow."
}
return this
Note the return this at the end - this hands back the script object itself, effectively an instance of an implicit Cat class (the name is taken from the file's name).
This, however, does not work:
// File ./main.groovy
kitty = load("zoo/Cat.groovy")
kitty.meow()
Instead, it fails with an error:
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
which is Jenkins whining at us that, unless the call is wrapped inside a Jenkins node, a file cannot be loaded. Which is pants, because Jenkins itself already peeping well did a repo checkout to get this pipeline file in the first place!
We therefore need to use a "pipeline controller" node (for lack of a better name) which, incidentally, cannot be a node that you intend to use for actually processing builds (unless you want to risk your pipeline waiting on a queued build, which is waiting for... your pipeline control job to stop running, which is waiting on...)
Which means we actually need to do this:
// File ./main.groovy
node('controllers') {
    stage("Load a file") {
        checkout([
            $class: 'GitSCM',
            branches: [[name: "master"]],
            userRemoteConfigs: [[
                credentialsId: 'github-pat',
                url: "https://github.com/org/reponame"
            ]]
        ])
        kitty = load("zoo/Cat.groovy")
        kitty.meow()
    }
}
(The earlier example I said "can work" only does so when it happens to re-use a workspace that already contains the file.)
Incidentally, that unwieldy checkout block cannot be wrapped in an external function in its own file - because we need to check out the repo on the node before we can access the file! Yes, there is a shorter notation for simple use-cases (sketched below), but once you have to take custom settings into account you're back to the long form - and if you find yourself re-using this frequently, it is repeated code in every pipeline definition. BAD.
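(For the simple case, the short notation looks roughly like this - the git step covers the basics but none of the fancier GitSCM settings:)
// Shorter checkout notation - would replace the checkout([...]) block above
git url: "https://github.com/org/reponame", branch: "master", credentialsId: 'github-pat'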
But it is what it is. Using load() and resigning myself to needing a pipeline controller node, I can finally get my file separation. I can even do this:
// File ./zoo/Cat.groovy
def meow() {
    node("farm") { // Run somewhere other than the controller node
        stage("Sound the farm") {
            println "Miaow."
        }
    }
}
return this
... which effectively allows me to dynamically add stages as I go along.
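(To sketch where that leads - hypothetical file and method names, same load() mechanism - each method the loaded file exposes contributes its own stage when called:)
// File ./zoo/Farm.groovy (hypothetical)
def feed() {
    node("farm") {
        stage("Feed the animals") { println "Nom nom." }
    }
}
def muckOut() {
    node("farm") {
        stage("Muck out") { println "Shovelling..." }
    }
}
return this

// Back in main.groovy, after the checkout:
// farm = load("zoo/Farm.groovy")
// farm.feed()     // each call adds its stage to the running pipeline
// farm.muckOut()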
But you can Share your Libraries!
Listen, the Shared Library concept they're peddling sounds like a good idea on the surface, but do you really expect me to farm out a subset of files to another repo, go through the Jenkins GUI to add it under a custom name, with the link between the two living in the platform instead of in the code - when the files are meant to be right there next to each other, like in any sane development project??
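(For the record, once the separate repo and the GUI registration are in place, using it looks something like this - assuming a library registered as 'zoo-lib' whose repo has a vars/cat.groovy defining a meow() method:)
// main.groovy, pulling in the globally-registered library
@Library('zoo-lib') _
cat.meow() // vars/cat.groovy in the library repo becomes a global "cat" variable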
Declarative Pipelines? Don't declare victory
All this is well and good, but what if we want to be a bit cleaner and not use imperative programming, but instead use the actual Declarative implementation that Jenkins really wants us to use?
Well you're stuffed.
You can only run script code inside script{} blocks, which can only exist inside stage{} blocks, in stages{} blocks, in a pipeline{} block. So farming out the pipeline stages is not possible at all; you can only isolate the actual build script stuff, by which point you're writing in shell or Makefile or whatever anyway, so why bother ducking around with Groovy.
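(Roughly what that nesting forces on you - a sketch, reusing the simple node-less Cat.groovy from earlier inside the only place imperative code is allowed to live:)
pipeline {
    agent { label 'farm' }
    stages {
        stage("Sound the farm") {
            steps {
                script {
                    // Imperative code is confined to this block; the stage
                    // structure around it cannot be pulled out into another file.
                    def kitty = load("zoo/Cat.groovy")
                    kitty.meow()
                }
            }
        }
    }
}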
Note that the parameters {} declaration is a no-op outside a declarative pipeline as well, so that's all back to the GUI unless you like repeating yourself.
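(For comparison - the declarative parameters {} block versus the properties() step you have to fall back on in a scripted pipeline; both are stock Pipeline syntax, the parameter names are made up:)
// Declarative: lives inside the pipeline {} block, next to the stages
parameters {
    string(name: 'TARGET_ENV', defaultValue: 'staging', description: 'Where to deploy')
}

// Scripted equivalent: the properties() step, called at the top of the pipeline
properties([
    parameters([
        string(name: 'TARGET_ENV', defaultValue: 'staging', description: 'Where to deploy')
    ])
])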
And at this point, I give up.
- There should be a native DSL expression for pipelines that allows stashing a subsection of a pipeline in a separate file
- WITHOUT having to create ANOTHER repo and add it MANUALLY (what the flap is IaC for anyway?) in the administrative interface
- And TBH it should NOT need to tie up another instance, thereby requiring extra hardware because a language is half-baked.
So I'm stuck with Jenkins, a homebrew import solution, and no declarative goodness, just so I can have clean code - because Jenkins doesn't seem to realise that I don't actually WANT to maintain several copies of my code, NOR fork my pipelines out to an extra repo, NOR use its administrative GUI to load libraries. Oh, and you still set up jobs via obscene amounts of GUI configuration. In pursuit of IaC.
Thank Chuck it's the holidays and I can step away from this for a couple of weeks.