Deploying Jenkins on Kubernetes with Helm and Terraform

I needed to bring up a Jenkins instance on a Kubernetes cluster for a project I’m currently working on at Enova. Before we get started, let me provide a little background.

I only needed a few jobs configured, and my uptime requirements were soft; I didn’t want to over-build it. I also wanted it to be fully managed through code (no clicking around on a UI if the instance needs to be rebuilt, and it’d also be great if this was reusable for similar future use-cases). I did consider using native Kubernetes jobs, but I wanted the user-friendly features that Jenkins offers — like console streaming, retries, easily-visible history and a UI if things ever need to be investigated manually. I decided to try and bring Jenkins up from a Helm chart, applied through Terraform and configured only through Jenkins Configuration-as-Code (JCasC). Let’s get started!

Installing the Helm chart

If you’re unfamiliar with Helm, it’s a package manager for Kubernetes that allows you to quickly install full, pre-defined applications into a Kubernetes cluster. Jenkins happens to have an official Helm chart, so I began by just trying to install it onto a local Kubernetes cluster. I was using Docker Desktop’s Kubernetes, but you could also use Minikube.

$ helm repo add jenkins https://charts.jenkins.io
$ helm repo update
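
Before writing any overrides, it can help to see everything the chart exposes. This dumps the chart's full set of default values (the exact contents vary with the chart version):

$ helm show values jenkins/jenkins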

Next, we need a values.yaml file to pass to Helm as configuration for the chart. The chart defines defaults for every value in its own values.yaml, and those apply unless we override them in our file. I looked through the chart's values and decided on a small set of my own overrides to start with:

controller:
  adminUsername: admin
  adminPassword: admin

  JCasC:
    defaultConfig: true
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to Kube-Jenkins!

  # LOCAL ONLY:
  serviceType: NodePort

The last section (LOCAL ONLY) is there because my local Kubernetes cluster didn't have an ingress controller installed. If you aren't familiar with ingresses and ingress controllers, a quick summary: an Ingress is a Kubernetes resource that declares how a service should be exposed at a hostname, and an ingress controller is the piece that actually makes that happen. The NodePort service type bypasses the ingress entirely and exposes the service directly on a port of each node. When I deploy this to a real cluster, I'll remove this section, fall back to the default ClusterIP type (which does not expose the service directly), and configure the ingress properly.

By the way, if you aren't familiar with the syntax, the pipe ("|") after welcome-message is YAML's literal block scalar marker: everything nested under it is stored as a plain multi-line string rather than parsed as YAML within this document. It will, of course, be parsed as YAML later, when JCasC reads it, just not while Helm reads the values file. Config-nested-inside-config is a common pattern in the Kubernetes world, and it can be confusing; small typos lead to hard-to-debug errors (I recently spent a few hours debugging a case where em dashes (—) had been used instead of hyphens (-)).
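
To make the distinction concrete, here's a tiny standalone YAML sketch (the key names are invented for illustration):

parsed-as-yaml:
  jenkins:
    systemMessage: these nested keys are parsed as part of this document
stored-as-text: |
  jenkins:
    systemMessage: this whole block is stored as one multi-line string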

Let’s try it out:

$ helm install jenkins/jenkins --generate-name -f values.yaml
NAME: jenkins-1632774650
LAST DEPLOYED: Mon Sep 27 15:30:51 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get your 'admin' user password by running:
kubectl exec --namespace default -it svc/jenkins-1632774650 -c jenkins -- /bin/cat /run/secrets/chart-admin-password && echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services jenkins-1632774650)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/login

3. Login with the password from step 1 and the username: admin
4. Configure security realm and authorization strategy
5. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: http:///configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine

For more information about Jenkins Configuration as Code, visit:
https://jenkins.io/projects/jcasc/


NOTE: Consider using a custom image with pre-installed plugins

Give it a few minutes for the underlying resources to actually come up; behind the scenes, Kubernetes still needs to pull images and start containers. Then we run those commands to get the URL Jenkins can be reached at, which for me evaluates to http://192.168.65.4:30155/login. That address doesn't actually load for me because the IP is wrong: I'm on OS X, where Docker runs containers inside a VM, so the command returns the node IP as seen from inside that VM. In your environment it might work as-is, but I needed to hit http://localhost:30155/login to load:

"Welcome to Jenkins" login page

Log in with my extra-secret password and:

Jenkins UI home page that shows our custom welcome message

Not a bad start! Note that, for most use cases, you’ll want to configure Jenkins with an actual authentication system (and not a single administrator username/password), but I won’t explore that in this post.
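
One more aside: if neither URL works in your environment, port-forwarding straight to the service is a reliable fallback; it serves Jenkins at http://localhost:8080/login while the command runs (a sketch using my release name; substitute your own):

$ kubectl port-forward svc/jenkins-1632774650 8080:8080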

Defining a job

We need Jenkins to actually run some jobs, and we don’t want to define them by hand. Let’s use configuration-as-code to define a job.

I found a JCasC reference that contains example jobs written in the Groovy job DSL, which is the modern way to define Jenkins jobs: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos/jobs. However, as the docs mention, this needs the job-dsl plugin installed to work correctly. We can add that through the chart's additionalPlugins key, which supplements (rather than replaces) the default plugin list.

I also added another configScripts entry for the new job, which I copied from the shortest example in that repo above:

controller:
  adminUsername: admin
  adminPassword: admin

  additionalPlugins:
    - job-dsl:1.77

  JCasC:
    defaultConfig: true
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to Kube-Jenkins!
      job-config: |
        jobs:
          - script: >
              folder('testjobs')
          - script: >
              pipelineJob('testjobs/default-agent') {
                definition {
                  cps {
                    script("""\
                      pipeline {
                        agent any
                        stages {
                          stage ('test') {
                            steps {
                              echo "hello"
                            }
                          }
                        }
                      }""".stripIndent())
                  }
                }
              }

  # LOCAL ONLY:
  serviceType: NodePort

Note that the job-config key name there is arbitrary. It just defines another fragment of YAML text, again passed as a plain string with the pipe syntax, that JCasC will load as configuration.

To apply the new configuration, upgrade the Helm release:

$ helm upgrade jenkins-1632774650 jenkins/jenkins -f values.yaml

This defines the job successfully, but I get a permission error when it actually executes:

Jenkins UI showing the testjobs/default-agent job listed successfully. Jenkins UI showing a UnapprovedUsageException error.

This is caused by script security settings that limit what the Groovy DSL is allowed to execute. You can click through the admin panel to approve the offending scripts, but we're trying to do all of this through code and don't want any manual steps. The easy option is to turn that check off with the permissive-script-security plugin. That's not a good idea if your pipelines are open for editing by others, but here, everything is controlled through code. The plugin also needs to be enabled through a JVM option flag, which, luckily, the Jenkins chart exposes as well.

I also found that the job needs to run in the Groovy sandbox for the in-line script security (which the plugin can then bypass) to apply; adding a sandbox() call inside the cps { } block of the job definition accomplishes that. Here's what my values file looked like after those updates:

controller:
  adminUsername: admin
  adminPassword: admin

  additionalPlugins:
    - job-dsl:1.77
    - permissive-script-security:0.6

  javaOpts: '-Dpermissive-script-security.enabled=true'

  JCasC:
    defaultConfig: true
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to Kube-Jenkins!
      job-config: |
        jobs:
          - script: >
              folder('testjobs')
          - script: >
              pipelineJob('testjobs/default-agent') {
                definition {
                  cps {
                    script("""\
                      pipeline {
                        agent any
                        stages {
                          stage ('test') {
                            steps {
                              echo "hello"
                            }
                          }
                        }
                      }""".stripIndent())
                    sandbox()
                  }
                }
              }

  # LOCAL ONLY:
  serviceType: NodePort

Debugging note: if the config is invalid, the helm upgrade will still succeed. You'll need to check the controller pod's logs to see whether the config actually applied:

$ kubectl logs jenkins-1632774650-0 -c jenkins
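
If the log is noisy, one way to narrow it down is to grep around the configuration-as-code messages (a rough sketch; adjust the pod name to match your release):

$ kubectl logs jenkins-1632774650-0 -c jenkins | grep -i -A 5 casc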

With that changed, I upgraded the Helm release again and my test job ran successfully! The Kubernetes plugin spun up a new pod to execute that little script. The pods it brings up, called agents, run a different Docker image than the controller does, which brings me to the next section of this post…

Customizing the agent pod

You may want to do more than the base Jenkins agent container can. There are multiple ways to customize your job, including changing the agent image itself, but it's often easiest to bring up a second (or third, or fourth) container using an image of your choice. That's what I'm going to do to get my job up and running. Within the pipeline definition, the Kubernetes plugin lets us supply a pod template for the agent as normal k8s YAML, which is where the extra containers are declared:

jobs:
  - script: >
      folder('testjobs')
  - script: >
      pipelineJob('testjobs/default-agent') {
        definition {
          cps {
            script("""\
            pipeline {
              agent {
                kubernetes {
                  yaml '''
                    apiVersion: v1
                    kind: Pod
                    metadata:
                      labels:
                        purpose: jenkins-agent
                    spec:
                      containers:
                      - name: postgres
                        image: postgres:latest
                        command:
                        - cat
                        tty: true
                    '''
                }
              }
              stages {
                stage('Test it out') {
                  steps {
                    container('postgres') {
                      sh 'pwd'
                    }
                  }
                }
              }
            }
            """.stripIndent())
            sandbox()
          }
        }
      }

This runs successfully, and the console output tells us that a pod is coming up with both the Jenkins container and our custom container:

Agent testjobs-default-agent-12-n95l2-3dff8-1fdv7 is provisioned from template testjobs_default-agent_12-n95l2-3dff8
---
apiVersion: "v1"
kind: "Pod"
metadata:
  annotations:
    buildUrl: "http://jenkins-1632774650.default.svc.cluster.local:8080/job/testjobs/job/default-agent/12/"
    runUrl: "job/testjobs/job/default-agent/12/"
  labels:
    purpose: "jenkins-agent"
    jenkins/jenkins-1632774650-jenkins-agent: "true"
    jenkins/label-digest: "f1e72b5350a98da18dd1a3d9055a3c1f34abbec8"
    jenkins/label: "testjobs_default-agent_12-n95l2"
  name: "testjobs-default-agent-12-n95l2-3dff8-1fdv7"
spec:
  containers:
  - command:
    - "cat"
    image: "postgres:latest"
    name: "postgres"
    tty: true
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  - env:
    - name: "JENKINS_SECRET"
      value: "********"
    - name: "JENKINS_TUNNEL"
      value: "jenkins-1632774650-agent.default.svc.cluster.local:50000"
    - name: "JENKINS_AGENT_NAME"
      value: "testjobs-default-agent-12-n95l2-3dff8-1fdv7"
    - name: "JENKINS_NAME"
      value: "testjobs-default-agent-12-n95l2-3dff8-1fdv7"
    - name: "JENKINS_AGENT_WORKDIR"
      value: "/home/jenkins/agent"
    - name: "JENKINS_URL"
      value: "http://jenkins-1632774650.default.svc.cluster.local:8080/"
    image: "jenkins/inbound-agent:4.3-4"
    name: "jnlp"
    resources:
      limits: {}
      requests:
        memory: "256Mi"
        cpu: "100m"
    volumeMounts:
    - mountPath: "/home/jenkins/agent"
      name: "workspace-volume"
      readOnly: false
  nodeSelector:
    kubernetes.io/os: "linux"
  restartPolicy: "Never"
  volumes:
  - emptyDir:
      medium: ""
    name: "workspace-volume"

The job does exactly what I told it to (prints out the pwd), which is fine for today’s example. I’m free to configure my job exactly how I need it, using any Docker images I need to get it done. Let’s move on to deploying this to a real k8s cluster!

Deploying to a live Kubernetes cluster

At Enova, we use Terraform to manage as much of our infrastructure as possible, including Kubernetes. There’s a Helm provider for Terraform that I’ll use to install the Helm chart I just crafted. Before I get to that, however, I can test the Helm install in a separate namespace to make sure it’ll work correctly (specifically, I’m probably going to need more ingress configuration).

So, I switch my kubectl context to the cluster, remove the last “LOCAL ONLY” section from values.yaml, create a new namespace `jenkins-test`, and install the Jenkins chart:

$ kubectl create namespace jenkins-test
$ helm install jenkins/jenkins -n jenkins-test --generate-name -f values.yaml

As I suspected, the ingress doesn't come up, so we just need to add an ingress section to values.yaml (nested under the controller key) to configure it:

controller:
  ingress:
    enabled: true
    hostName: "jenkins.jenkins-test.yourcluster.com"

With that, Jenkins comes up at the hostname specified and works like a charm. Note that the hostname only resolves because the external-dns controller is running in my cluster and creates the DNS record for the ingress automatically. If you don't want to set that up, you can use the NodePort or LoadBalancer service type and access Jenkins that way instead.
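
To double-check that the ingress resource exists and has been picked up by the ingress controller, a quick look at the namespace helps (the output will depend on your controller):

$ kubectl get ingress -n jenkins-test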

For the Terraform, I'll put my values.yaml in its own folder and pass it to the Helm release with Terraform's `file()` function. I'll be provisioning just two resources here: the Helm release and the Kubernetes namespace.

resource "helm_release" "jenkins" {
  provider = helm.primary

  name       = "jenkins"
  repository = "https://charts.jenkins.io"
  chart      = "jenkins"
  version    = "3.6.0"
  namespace  = "jenkins"
  timeout    = 600
  values = [
    file("values.yaml"),
  ]

  depends_on = [
    kubernetes_namespace.jenkins,
  ]
}

resource "kubernetes_namespace" "jenkins" {
  provider = kubernetes.primary
  metadata {
    name = "jenkins"

    labels = {
      name        = "jenkins"
      description = "jenkins"
    }
  }
}
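
Those resources reference provider aliases (helm.primary and kubernetes.primary) that are configured elsewhere in our Terraform. As a rough sketch of what they could look like, assuming authentication via a local kubeconfig (your cluster's auth setup will likely differ):

provider "kubernetes" {
  alias       = "primary"
  config_path = "~/.kube/config"
}

provider "helm" {
  alias = "primary"
  kubernetes {
    config_path = "~/.kube/config"
  }
}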

This applies smoothly (thanks, Atlantis) and my fully-defined-in-code, miniature Jenkins instance is ready to go.


If you enjoy working with Kubernetes or solving challenging problems, Enova is hiring!


P.S. For reference, here’s my final values.yaml file:

controller:
  adminUsername: admin
  adminPassword: admin

  ingress:
    enabled: true
    hostName: "jenkins.yourcluster.com"

  additionalPlugins:
    - job-dsl:1.77
    - permissive-script-security:0.6

  javaOpts: '-Dpermissive-script-security.enabled=true'

  JCasC:
    defaultConfig: true
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to Kube-Jenkins!
      job-config: |
        jobs:
          - script: >
              folder('testjobs')
          - script: >
              pipelineJob('testjobs/default-agent') {
                definition {
                  cps {
                    script("""\
                    pipeline {
                      agent {
                        kubernetes {
                          yaml '''
                            apiVersion: v1
                            kind: Pod
                            metadata:
                              labels:
                                purpose: jenkins-agent
                            spec:
                              containers:
                              - name: postgres
                                image: postgres:latest
                                command:
                                - cat
                                tty: true
                            '''
                        }
                      }
                      stages {
                        stage('Test it out') {
                          steps {
                            container('postgres') {
                              sh 'pwd'
                            }
                          }
                        }
                      }
                    }
                    """.stripIndent())
                    sandbox()
                  }
                }
              }