Jenkins integration example

Runperf integrates with Jenkins via xUnit results and can also deliver HTML results with a great level of detail, which is especially useful for regression testing. Let me present one example which is not meant to be copy&pasted into your environment, but can serve as an inspiration for an integration. An overview might look like this:

[diagram: overview of the Jenkins jobs and how they interact]

This setup might generate HTML results like the following (a slightly outdated version that compares a different setting on host and guest): here

Let's imagine we have an example.org machine. We can create an rp-example job to run regression testing, then an rp-example-manual job to allow testing of changes or custom parameters. For each of these we might want to create a $name-identify job to allow cherry-picking and analyzing results in order to create models that make it easier to evaluate the expected results.

A useful addition is the rp-analysis job, which allows running custom compare queries without the need to download the results and run the comparison from your own machine, and the rp-prune-artifacts job, which automatically removes the big tarballs with full results, keeping only the small JSON results that suffice for compare-perf use cases.

All of these can be easily defined via Jenkins Job Builder:

##############################################################################
# Default configuration
##############################################################################
- defaults:
    name: "global"
    auth-token: "ADD-YOUR-TOKEN-HERE"
    mailto: ""
    wrappers:
        - ansicolor
        - timestamps
        - workspace-cleanup
    build-discarder:
        days-to-keep: 365
        artifact-num-to-keep: 60
    # Default runperf params
    param-distro: ''
    param-guest-distro: ''
    # By default we run these 3 tests (the quoting is required)
    param-tests: "'fio:{{\"targets\": \"/fio\"}}' 'uperf:{{\"protocols\": \"tcp\"}}' 'uperf:{{\"protocols\": \"udp\", \"test-types\": \"rr\"}}'"
    # By default we use these profiles
    param-profiles: "Localhost DefaultLibvirt TunedLibvirt"
    # Set the default tolerances
    param-cmp-tolerance: 5
    param-cmp-stddev-tolerance: 10
    # Don't use any model by default
    param-cmp-model-job: ''
    param-cmp-model-build: ''
    # Use 20 reference builds by default
    param-no-reference-builds: 20
    # By default run daily around 21:xx
    trigger-on: "H 21 * * *"


##############################################################################
# Definition for the run-perf execution job
##############################################################################
- job-template:
    name: "{name}"
    triggers:
        - timed: "{trigger-on}"
    project-type: pipeline
    parameters:
        - string:
            name: DISTRO
            description: 'Distribution to be installed/is installed (Fedora-31), when empty latest el8 nightly build is obtained from bkr'
            default: "{param-distro}"
        - string:
            name: GUEST_DISTRO
            description: 'Distribution to be installed on guest, when empty "distro" is used'
            default: "{param-guest-distro}"
        - string:
            name: MACHINE
            description: 'Machine to be provisioned and tested'
            default: "{param-machine}"
        - string:
            name: ARCH
            description: 'Target machine architecture'
            default: "{param-arch}"
        - string:
            name: TESTS
            description: 'Space separated list of tests to be executed'
            default: "{param-tests}"
        - string:
            name: PROFILES
            description: 'Space separated list of profiles to be applied'
            default: "{param-profiles}"
        - string:
            name: SRC_BUILD
            description: 'Base build to compare with'
            default: "{param-src-build}"
        - string:
            name: CMP_MODEL_JOB
            description: 'Job to copy linear "model.json" from'
            default: "{param-cmp-model-job}"
        - string:
            name: CMP_MODEL_BUILD
            description: 'Build to copy linear "model.json" from (-1 means lastSuccessful)'
            default: "{param-cmp-model-build}"
        - string:
            name: CMP_TOLERANCE
            description: Tolerance for mean values
            default: "{param-cmp-tolerance}"
        - string:
            name: CMP_STDDEV_TOLERANCE
            description: Tolerance for standard deviation values
            default: "{param-cmp-stddev-tolerance}"
        - string:
            name: HOST_KERNEL_ARGS
            description: Add custom kernel arguments on host
            default: ""
        - string:
            name: HOST_BKR_LINKS
            description: 'Space separated list of urls to be grepped for "http.*$arch\\.rpm" and "http.*noarch\\.rpm", filtered by HOST_BKR_LINKS_FILTER and then set to be installed on host with "--allowerase". Intended usage is to use koji/beaker link like: https://koji.fedoraproject.org/koji/buildinfo?buildID=1534744'
            default: ""
        - string:
            name: HOST_BKR_LINKS_FILTER
            description: 'Filter to be split and applied via "grep -v -e EXPR1 -e EXPR2 ..."'
            default: "debug"
        - string:
            name: GUEST_KERNEL_ARGS
            description: Add custom kernel arguments on workers/guests
            default: ""
        - string:
            name: GUEST_BKR_LINKS
            description: 'Space separated list of urls to be grepped for "http.*$arch\\.rpm" and "http.*noarch\\.rpm" filtered by GUEST_BKR_LINKS_FILTER and then set to be installed on guest/workers with "--allowerase". Intended usage is to use koji/beaker link like: https://koji.fedoraproject.org/koji/buildinfo?buildID=1534744'
            default: ""
        - string:
            name: GUEST_BKR_LINKS_FILTER
            description: 'Filter to be split and applied via "grep -v -e EXPR1 -e EXPR2 ..."'
            default: "debug"
        - bool:
            name: PBENCH_PUBLISH
            description: 'Push the pbench results to company pbench server'
            default: "{param-pbench-publish}"
        - string:
            name: DESCRIPTION_PREFIX
            description: Description prefix (describe the difference from default)
            default: ""
        - string:
            name: NO_REFERENCE_BUILDS
            description: "Number of reference builds for comparison"
            default: "{param-no-reference-builds}"
    sandbox: true
    pipeline-scm:
        scm:
            - git:
                url: git://PATH_TO_YOUR_REPO_WITH_PIPELINES.git
                branches:
                    - master
        script-path: "runperf.groovy"
        lightweight-checkout: true


##############################################################################
# Definition of the compare-perf only job
##############################################################################
- job-template:
    name: "rp-analysis-{user}"
    project-type: pipeline
    concurrent: false
    description: |
        This job allows cherry-picking results from a runperf job and redoing the analysis.
        It is not thread-safe; therefore it is advised to copy this job with a user suffix
        and run the analyses serially, storing the graphs manually before submitting the
        next comparison.
    parameters:
        - string:
            name: SRC_JOB
            default: "{param-src-job}"
            description: Source jenkins job
        - string:
            name: BUILDS
            default: ""
            description: "List of space separated build numbers to be analyzed, first build is used as source build (not included in graphs)"
        - string:
            name: DESCRIPTION
            default: ""
            description: Description of this analysis
        - string:
            name: CMP_MODEL_JOB
            description: 'Job to copy linear "model.json" from'
            default: "{param-cmp-model-job}"
        - string:
            name: CMP_MODEL_BUILD
            description: 'Build to copy linear "model.json" from (-1 means lastSuccessful)'
            default: "{param-cmp-model-build}"
        - string:
            name: CMP_TOLERANCE
            description: Tolerance for mean values
            default: "{param-cmp-tolerance}"
        - string:
            name: CMP_STDDEV_TOLERANCE
            description: Tolerance for standard deviation values
            default: "{param-cmp-stddev-tolerance}"
    sandbox: true
    pipeline-scm:
        scm:
            - git:
                url: git://PATH_TO_YOUR_REPO_WITH_PIPELINES.git
                branches:
                    - master
        script-path: "compareperf.groovy"
        lightweight-checkout: true


##############################################################################
# Definition of the analyze-perf job
##############################################################################
- job-template:
    name: "{name}-identify"
    project-type: pipeline
    description: |
        This job uses the analyze-perf script to create a model that can be used
        to better evaluate run-perf results.
    parameters:
        - string:
            name: SRC_JOB
            default: "{name}"
            desciption: Source jenkins job
            description: Source jenkins job
        - string:
            name: BUILDS
            default: ""
            description: "List of space separated build numbers to be used"
        - string:
            name: DESCRIPTION
            default: ""
            description: Free-form description
        - string:
            name: EXTRA_ARGS
            default: ""
            description: Additional analyze-perf arguments, for example -t to override default tolerance
    sandbox: true
    pipeline-scm:
        scm:
            - git:
                url: git://PATH_TO_YOUR_REPO_WITH_PIPELINES.git
                branches:
                    - master
        script-path: "identify.groovy"
        lightweight-checkout: true


##############################################################################
# Project to define jobs for automated regression jobs on example.org machine
##############################################################################
- project:
    name: rp-example
    param-machine: "example.org"
    param-arch: "x86_64"
    param-src-build: 1
    param-cmp-model-job: "{name}-identify"
    param-cmp-model-build: -1
    param-pbench-publish: true
    jobs:
        - "{name}"
        - "{name}-identify"


##############################################################################
# Project to define manual jobs for example.org machine
##############################################################################
- project:
    name: rp-example-manual
    param-machine: "example.org"
    param-arch: "x86_64"
    param-distro: "YOUR STABLE RELEASE"
    param-src-build: 1
    param-cmp-model-job: "rp-example-manual-identify"
    param-cmp-model-build: 1
    param-pbench-publish: false
    trigger-on: ""
    jobs:
        - "{name}"
        - "{name}-identify"


##############################################################################
# Project to allow users to run custom queries out of existing results
##############################################################################
- project:
    name: rp-analysis
    user:
        - virt
    param-src-job: "rp-example-manual"
    param-cmp-model-job: "rp-example-manual-identify"
    param-cmp-model-build: 1
    jobs:
        - "rp-analysis-{user}"

##############################################################################
# Prune artifacts after 14 days, hopefully we would notice and mark/move
# them when full details are needed.
##############################################################################
- project:
    name: rp-prune-artifacts
    param-age: 14
    jobs:
        - "rp-prune-artifacts"
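Note the quoting in param-tests above: the doubled braces are Jenkins Job Builder escaping for literal { and }, and the single quotes keep each test:JSON pair a single shell word. A hypothetical parser (my own sketch, not run-perf's actual code) shows how such words decompose:

```python
import json
import shlex

def parse_test_params(tests_value):
    """Split a space separated list of 'test' or 'test:{json}' words
    into (name, params) pairs -- illustrative sketch only."""
    parsed = []
    for word in shlex.split(tests_value):
        name, _, raw_params = word.partition(':')
        parsed.append((name, json.loads(raw_params) if raw_params else {}))
    return parsed

# The (unescaped) value of param-tests from the defaults section above
tests = '\'fio:{"targets": "/fio"}\' \'uperf:{"protocols": "tcp"}\''
print(parse_test_params(tests))
# [('fio', {'targets': '/fio'}), ('uperf', {'protocols': 'tcp'})]
```

Without the single quotes the spaces inside the JSON would split one test into several shell words, which is why the defaults section insists on them.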

Now let’s have a look at the runperf.groovy pipeline:

// Pipeline to run runperf and compare to given results
// The following `params` have to be defined in the job (e.g. via jenkins-job-builder)

// Machine to be provisioned and tested
def machine = params.MACHINE
// target machine's architecture
def arch = params.ARCH
// Distribution to be installed/is installed (Fedora-32)
// when empty it will pick the latest available nightly el8
def distro = params.DISTRO
// Distribution to be installed on guest, when empty "distro" is used
def guest_distro = params.GUEST_DISTRO
// Space separated list of tests to be executed
def tests = params.TESTS
// Space separated list of profiles to be applied
def profiles = params.PROFILES
// Base build to compare with
def src_build = params.SRC_BUILD
// Compareperf tolerances
def cmp_model_job = params.CMP_MODEL_JOB
def cmp_model_build = params.CMP_MODEL_BUILD
def cmp_tolerance = params.CMP_TOLERANCE
def cmp_stddev_tolerance = params.CMP_STDDEV_TOLERANCE
// Add custom kernel arguments on host
host_kernel_args = params.HOST_KERNEL_ARGS
// Install rpms from (beaker) urls
host_bkr_links = params.HOST_BKR_LINKS
// filters for host_bkr_links
host_bkr_links_filter = params.HOST_BKR_LINKS_FILTER
// Add custom kernel arguments on workers/guests
guest_kernel_args = params.GUEST_KERNEL_ARGS
// Install rpms from (beaker) urls
guest_bkr_links = params.GUEST_BKR_LINKS
// filters for guest_bkr_links
guest_bkr_links_filter = params.GUEST_BKR_LINKS_FILTER
// How many builds to include in the plots
def plot_builds = params.PLOT_BUILDS
// Description prefix (describe the difference from default)
def description_prefix = params.DESCRIPTION_PREFIX
// Number of reference builds
def no_reference_builds = params.NO_REFERENCE_BUILDS.toInteger()
// Pbench-publish related options
def pbench_publish = params.PBENCH_PUBLISH

// Extra variables
// Provisioner machine
def worker_node = 'runperf-slave1'
// runperf git branch
def git_branch = 'master'
// extra runperf arguments
def extra_args = ""

node(worker_node) {
    stage('Preprocess') {
        // User-defined distro or use bkr to get latest RHEL-8.0*
        if (distro) {
            echo "Using distro ${distro} from params"
        } else {
            distro = sh(returnStdout: true, script: 'echo -n $(bkr distro-trees-list --arch x86_64 --name="%8.0%.n.%" --family RedHatEnterpriseLinux8 --limit 1 --labcontroller $ENTER_LAB_CONTROLLER_URL | grep Name: | cut -d":" -f2 | xargs | cut -d" " -f1)')
            echo "Using latest distro ${distro} from bkr"
        }
        if (!guest_distro) {
            guest_distro = distro
        }
        if (guest_distro == distro) {
            echo "Using the same guest distro ${distro}"
        } else {
            echo "Using different guest distro: ${guest_distro} from host: ${distro}"
        }

    }

    stage('Measure') {
        git branch: git_branch, url: 'https://github.com/distributed-system-analysis/run-perf.git'
        // This way we add downstream plugins and other configuration
        dir("downstream_config") {
            git branch: 'master', url: 'git://PATH_TO_YOUR_REPO_WITH_PIPELINES/runperf_config.git'
            sh 'python3 setup.py develop --user'
        }
        // Remove files that might have been left behind
        sh '\\rm -Rf result* src_result* reference_builds html'
        sh "mkdir html"
        sh 'python3 setup.py develop --user'
        def host_script = ''
        def guest_script = ''
        def metadata = ''
        // Use grubby to update default args on host
        if (host_kernel_args) {
            host_script += "\ngrubby --args '${host_kernel_args}' --update-kernel=\$(grubby --default-kernel)"
        }
        // Ugly way of installing all arch's rpms from a site, allowing a filter
        // this is usually used on koji/brew to allow updating certain packages
        // warning: It does not work when the url rpm is older.
        if (host_bkr_links) {
            host_script += "\nfor url in ${host_bkr_links}; do dnf install -y --allowerasing \$(curl -k \$url | grep -o -e \"http.*${arch}\\.rpm\" -e \"http.*noarch\\.rpm\" | grep -v \$(for expr in ${host_bkr_links_filter}; do echo -n \" -e \$expr\"; done)); done"
        }
        // The same on guest
        if (guest_kernel_args) {
            guest_script += "\ngrubby --args '${guest_kernel_args}' --update-kernel=\$(grubby --default-kernel)"
        }
        // The same on guest
        if (guest_bkr_links) {
            guest_script += "\nfor url in ${guest_bkr_links}; do dnf install -y --allowerasing \$(curl -k \$url | grep -o -e \"http.*${arch}\\.rpm\" -e \"http.*noarch\\.rpm\" | grep -v \$(for expr in ${guest_bkr_links_filter}; do echo -n \" -e \$expr\"; done)); done"
        }
        if (host_script) {
            writeFile file: 'host_script', text: host_script
            extra_args += " --host-setup-script host_script --host-setup-script-reboot"
        }
        if (guest_script) {
            writeFile file: 'worker_script', text: guest_script
            extra_args += " --worker-setup-script worker_script"
        }
        if (pbench_publish) {
            metadata += " pbench_server_publish=yes"
        }
        // Using jenkins locking to prevent multiple access to a single machine
        lock(machine) {
            sh '$KINIT'
            sh "python3 scripts/run-perf ${extra_args} -vvv --hosts ${machine} --distro ${distro} --provisioner Beaker --default-password YOUR_DEFAULT_PASSWORD --profiles ${profiles} --paths ./downstream_config --metadata 'build=${currentBuild.number}${description_prefix}' 'url=${currentBuild.absoluteUrl}' 'project=YOUR_PROJECT_ID ${currentBuild.projectName}' 'pbench_server=YOUR_PBENCH_SERVER_URL' ${metadata} -- ${tests}"
            sh "echo >> \$(echo -n result*)/RUNPERF_METADATA"       // Add new-line after runperf output
        }
    }

    stage('Archive results') {
        // Archive only "result_*" as we don't want to archive "resultsNoArchive"
        sh returnStatus: true, script: 'tar cf - result_* | xz -T2 -7e - > "$(echo result_*)".tar.xz'
        archiveArtifacts allowEmptyArchive: true, artifacts: 'result_*.tar.xz'
        archiveArtifacts allowEmptyArchive: true, artifacts: 'result*/*/*/*/*.json'
        archiveArtifacts allowEmptyArchive: true, artifacts: 'result*/RUNPERF_METADATA'
    }

    stage('Compare') {
        // Get up to no_reference_builds json results to use as a reference
        reference_builds = []
        latestBuild = Jenkins.instance.getItem(env.JOB_NAME).lastSuccessfulBuild.number
        for (i=latestBuild; i > 0; i--) {
            copyArtifacts filter: 'result*/**/*.json,result*/RUNPERF_METADATA', optional: true, fingerprintArtifacts: true, projectName: env.JOB_NAME, selector: specific("$i"), target: "reference_builds/${i}/"
            if (fileExists("reference_builds/${i}")) {
                reference_builds.add("${i}")
                if (reference_builds.size() >= no_reference_builds) {
                    break
                }
            }
        }
        // Get src build's json results to compare against
        copyArtifacts filter: 'result*/**/*.json,result*/RUNPERF_METADATA', optional: true, fingerprintArtifacts: true, projectName: env.JOB_NAME, selector: specific(src_build), target: 'src_result/'
        // If model build is set, get the model from its job
        if (cmp_model_build) {
            if (cmp_model_build == '-1') {
                copyArtifacts filter: 'model.json', optional: false, fingerprintArtifacts: true, projectName: cmp_model_job, selector: lastSuccessful(), target: '.'
            } else {
                copyArtifacts filter: 'model.json', optional: false, fingerprintArtifacts: true, projectName: cmp_model_job, selector: specific(cmp_model_build), target: '.'
            }
            cmp_extra = "--model-linear-regression model.json"
        } else {
            cmp_extra = ''
        }
        if (reference_builds.size() > 0) {
            cmp_extra += " --references "
            for (i in reference_builds.reverse()) {
                cmp_extra += " ${i}:"
                cmp_extra += sh(returnStdout: true, script: "echo reference_builds/${i}/*").trim()
            }
        }
        // Compare the results and generate html as well as xunit results
        def status = sh returnStatus: true, script:  "python3 scripts/compare-perf -vvv --tolerance " + cmp_tolerance + " --stddev-tolerance " + cmp_stddev_tolerance + ' --xunit result.xml --html html/index.html ' + cmp_extra + ' -- src_result/* $(find . -maxdepth 1 -type d ! -name "*.tar.*" -name "result*")'
        if (fileExists('result.xml')) {
            if (status) {
                // This could mean there were no tests to compare or other failures, interrupt the build
                echo "Non-zero exit status: ${status}"
            }
        } else {
            currentBuild.result = 'FAILED'
            error "Missing result.xml, exit code: ${status}"
        }
    }

    stage('Postprocess') {
        // Build description
        currentBuild.description = "${description_prefix}${src_build} ${currentBuild.number} ${distro}"
        // Store and publish html results
        archiveArtifacts allowEmptyArchive: true, artifacts: 'html/index.html'
        if (fileExists('html')) {
            publishHTML([allowMissing: true, alwaysLinkToLastBuild: false, keepAll: true, reportDir: 'html', reportFiles: 'index.html', reportName: 'HTML Report', reportTitles: ''])
        }
        // Junit results
        junit allowEmptyResults: true, testResults: 'result.xml'
        // Remove the unnecessary big files
        sh '\\rm -Rf result* src_result* reference_builds'
        // Run cleanup on older artifacts
        build (job: "rp-prune-artifacts",
               parameters: [string(name: 'JOB', value: env.JOB_NAME)],
               quietPeriod: 0,
               wait: false)
    }
}
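The HOST_BKR_LINKS handling in the 'Measure' stage builds a shell one-liner around curl and grep. The same selection logic, sketched in Python with made-up URLs (illustrative only; the grep-based version above behaves slightly differently on edge cases):

```python
import re

def select_rpm_urls(page, arch, links_filter):
    """Keep "$arch" and "noarch" rpm links found in the page, then drop
    any link matching a filter expression -- the Python equivalent of
    grep -o -e "http.*$arch\\.rpm" -e "http.*noarch\\.rpm" | grep -v -e EXPR ..."""
    pattern = r'http[^"\s]*(?:%s|noarch)\.rpm' % re.escape(arch)
    urls = re.findall(pattern, page)
    return [url for url in urls
            if not any(re.search(expr, url) for expr in links_filter.split())]

page = ('<a href="http://koji.example/kernel-5.6-1.x86_64.rpm">\n'
        '<a href="http://koji.example/kernel-debug-5.6-1.x86_64.rpm">\n'
        '<a href="http://koji.example/kernel-doc-5.6-1.noarch.rpm">\n')
print(select_rpm_urls(page, 'x86_64', 'debug'))
# ['http://koji.example/kernel-5.6-1.x86_64.rpm', 'http://koji.example/kernel-doc-5.6-1.noarch.rpm']
```

With the default HOST_BKR_LINKS_FILTER of "debug", pointing the job at a koji build page installs the regular kernel packages while skipping the debug variants.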

The following compareperf.groovy pipeline is extremely useful for later analysis, or for extra comparisons of manual pipelines:

// Pipeline to create comparison of previously generated runperf results
// The following `params` have to be defined in the job (e.g. via jenkins-job-builder)

// Source jenkins job
def src_job = params.SRC_JOB
// List of space separated build numbers to be analyzed, first build is used
// as source build (not included in graphs)
def builds = params.BUILDS.split().toList()
// Description of this analysis
def description = params.DESCRIPTION
// Compareperf tolerances
def cmp_model_job = params.CMP_MODEL_JOB
def cmp_model_build = params.CMP_MODEL_BUILD
def cmp_tolerance = params.CMP_TOLERANCE
def cmp_stddev_tolerance = params.CMP_STDDEV_TOLERANCE

// Extra variables
// Provisioner machine
def worker_node = 'runperf-slave1'
// runperf git branch
def git_branch = 'master'

stage('Analyze') {
    node (worker_node) {
        assert builds.size() >= 2
        git branch: git_branch, url: 'https://github.com/distributed-system-analysis/run-perf.git'
        sh '\\rm -Rf result* src_result* reference_builds html'
        sh 'mkdir html'
        def reference_builds = []
        // Get all the reference builds (second to second-to-last ones)
        if (builds.size() > 2) {
            for (build in builds[1..-2]) {
                copyArtifacts filter: 'result*/**/*.json,result*/RUNPERF_METADATA', optional: true, fingerprintArtifacts: true, projectName: src_job, selector: specific(build), target: "reference_builds/${build}/"
                if (fileExists("reference_builds/${build}")) {
                    reference_builds.add("${build}")
                } else {
                    echo "Skipping reference build ${build}, failed to copy artifacts."
                }
            }
        }
        // Get the source build
        copyArtifacts filter: 'result*/**/*.json,result*/RUNPERF_METADATA', optional: false, fingerprintArtifacts: true, projectName: src_job, selector: specific(builds[0]), target: 'src_result/'
        // Get the destination build
        copyArtifacts filter: 'result*/**/*.json,result*/RUNPERF_METADATA', optional: false, fingerprintArtifacts: true, projectName: src_job, selector: specific(builds[-1]), target: '.'
        // Get the model
        if (cmp_model_build) {
            if (cmp_model_build == '-1') {
                copyArtifacts filter: 'model.json', optional: false, fingerprintArtifacts: true, projectName: cmp_model_job, selector: lastSuccessful(), target: '.'
            } else {
                copyArtifacts filter: 'model.json', optional: false, fingerprintArtifacts: true, projectName: cmp_model_job, selector: specific(cmp_model_build), target: '.'
            }
            cmp_extra = "--model-linear-regression model.json"
        } else {
            cmp_extra = ''
        }
        if (reference_builds.size() > 0) {
            cmp_extra += " --references "
            for (i in reference_builds) {
                cmp_extra += " ${i}:"
                cmp_extra += sh(returnStdout: true, script: "echo reference_builds/${i}/*").trim()
            }
        }
        def status = 0
        lock (worker_node) {
            // Avoid modifying worker_node's environment while executing compareperf
            // TODO: Use venv
            sh 'python3 setup.py develop --user'
            status = sh returnStatus: true, script:  "python3 scripts/compare-perf -vvv --tolerance " + cmp_tolerance + " --stddev-tolerance " + cmp_stddev_tolerance + ' --xunit result.xml --html html/index.html ' + cmp_extra + ' -- src_result/* $(find . -maxdepth 1 -type d ! -name "*.tar.*" -name "result*")'
        }
        if (fileExists('result.xml')) {
            if (status) {
                // This could mean there were no tests to compare or other failures, interrupt the build
                echo "Non-zero exit status: ${status}"
            }
        } else {
            currentBuild.result = 'FAILED'
            error "Missing result.xml, exit code: ${status}"
        }
        currentBuild.description = "${description}${builds} ${src_job}"
        archiveArtifacts allowEmptyArchive: true, artifacts: 'html/index.html'
        junit allowEmptyResults: true, testResults: 'result.xml'
        if (fileExists('html')) {
            publishHTML([allowMissing: true, alwaysLinkToLastBuild: false, keepAll: true, reportDir: 'html', reportFiles: 'index.html', reportName: 'HTML Report', reportTitles: ''])
        }
        // Remove the unnecessary big files
        sh '\\rm -Rf result* src_result* reference_builds'
    }
}
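The compareperf.groovy pipeline slices the BUILDS parameter consistently: the first build is the source (baseline), the last is the destination, and anything in between becomes a reference build. Restated in Python (the function and variable names are mine):

```python
def split_builds(builds_param):
    """First build = source, last = destination, the middle ones =
    references -- mirrors how compareperf.groovy slices params.BUILDS."""
    builds = builds_param.split()
    assert len(builds) >= 2, "need at least a source and a destination build"
    return builds[0], builds[1:-1], builds[-1]

src, references, dst = split_builds("10 11 12 15")
print(src, references, dst)
# 10 ['11', '12'] 15
```

This is also why the job description notes that the first build is not included in the graphs: it only serves as the comparison baseline.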

And identify.groovy, which allows creating the linear models:

// Pipeline to create a linear model from previously generated runperf results
// The following `params` have to be defined in the job (e.g. via jenkins-job-builder)

// Source jenkins job
def src_job = params.SRC_JOB
// List of space separated build numbers to be used for the model
def builds = params.BUILDS.split().toList()
// Description of this analysis
def description = params.DESCRIPTION
// Extra AnalyzePerf arguments
def extra_args = params.EXTRA_ARGS

// Extra variables
// Provisioner machine
def worker_node = 'runperf-slave1'
// runperf git branch
def git_branch = 'master'

stage('Analyze') {
    node (worker_node) {
        git branch: git_branch, url: 'https://github.com/distributed-system-analysis/run-perf.git'
        sh '\\rm -Rf results* model.json'
        // Get all the specified builds
        for (build in builds) {
            copyArtifacts filter: 'result*/**/*.json', optional: false, fingerprintArtifacts: true, projectName: src_job, selector: specific(build), target: 'results/'
        }
        def status = 0
        lock (worker_node) {
            // Avoid modifying worker_node's environment while executing analyze-perf
            // TODO: Use venv
            sh 'python3 setup.py develop --user'
            status = sh returnStatus: true, script:  "python3 scripts/analyze-perf -vvv -l model.json " + extra_args + " -- results/*"
        }
        if (fileExists('model.json')) {
            // This could mean there were no tests to compare or other failures, interrupt the build
            if (status) {
                echo "Non-zero exit status: ${status}"
            }
        } else {
            currentBuild.result = 'FAILED'
            error "Missing model.json, exit code: ${status}"
        }
        currentBuild.description = builds.join(' ')
        archiveArtifacts allowEmptyArchive: true, artifacts: 'model.json'
        sh '\\rm -Rf results*'
    }
}

Last but not least, let's have a look at prune_artifacts.py:

#!/bin/env python3
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See LICENSE for more details.
#
# Copyright: Red Hat Inc. 2020
# Author: Lukas Doktor <ldoktor@redhat.com>
"""
When executed on a jenkins master it allows to walk the results and remove
"*.tar.*" files on older builds that are not manually marked as
keep-for-infinity.
"""

import time
import glob
import os
import re

JENKINS_DIR = "/var/lib/jenkins/jobs/"


def prune_result(path, before):
    """
    Prune result if older than age and keep forever not set
    """
    build_path = os.path.join(path, "build.xml")
    if not os.path.exists(build_path):
        print("KEEP  %s - no build.xml" % path)
        return
    treated_path = os.path.join(path, "ld_artifact_pruned")
    if os.path.exists(treated_path):
        print("SKIP  %s - already treated" % path)
        return
    with open(build_path) as build_fd:
        build_xml = build_fd.read()
    if "<keepLog>false</keepLog>" not in build_xml:
        print("KEEP  %s - keep forever set" % path)
        return
    match = re.findall(r"<startTime>(\d+)</startTime>", build_xml)
    if not match:
        print("KEEP  %s - no startTime\n%s" % (path, build_xml))
        return
    start_time = int(match[-1])
    if start_time > before:
        print("KEEP  %s - younger than %s (%s)" % (path, before, start_time))
        return
    print("PRUNE %s (%s)" % (path, start_time))
    for pth in glob.glob(os.path.join(path, "archive", "*.tar.*")):
        os.unlink(pth)
    with open(treated_path, 'wb'):
        pass


def prune_results(job, age):
    """
    Walk job's builds and prune them
    """
    if not job:
        print("No job specified, returning")
        return
    # Jenkins stores startTime * 1000
    before = int((time.time() - age) * 1000)
    print("Pruning %s builds older than %s" % (job, before))
    builds = glob.glob(os.path.join(JENKINS_DIR, job, "builds", "*"))
    for build in builds:
        prune_result(build, before)
    print("Done")


if __name__ == '__main__':
    prune_results(os.environ.get('JOB'),
                  int(os.environ.get('AGE', 14)) * 86400)
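Jenkins stores the <startTime> element in milliseconds since the epoch, which is why prune_results multiplies by 1000. A self-contained sanity check of that cutoff arithmetic (it does not touch any Jenkins directory):

```python
import re
import time

AGE = 14 * 86400  # the default AGE, 14 days in seconds
# Jenkins stores <startTime> in milliseconds, hence the "* 1000"
before = int((time.time() - AGE) * 1000)

def is_prunable(build_xml, before):
    """Same check as prune_result(): the last <startTime> must be
    at or before the cutoff for a build to be pruned."""
    match = re.findall(r"<startTime>(\d+)</startTime>", build_xml)
    return bool(match and int(match[-1]) <= before)

fresh = "<startTime>%d</startTime>" % ((time.time() - 86400) * 1000)
stale = "<startTime>%d</startTime>" % ((time.time() - 30 * 86400) * 1000)
print(is_prunable(fresh, before), is_prunable(stale, before))
# False True
```

A build from yesterday survives, while one from 30 days ago loses its tarballs (unless keep-forever is set or it was already treated).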