runperf package

Submodules

runperf.exceptions module

exception runperf.exceptions.RebootRequest(hosts, interrupted_action)

Bases: Exception

Exception used when reboot is requested

exception runperf.exceptions.StepFailed

Bases: RuntimeError

Exception used to mark failed steps

exception runperf.exceptions.TestSkip

Bases: RuntimeWarning

Exception used to mark skipped tests

runperf.html_report module

class runperf.html_report.KnownItems

Bases: object

Class to keep track of known items

add(value)

Add item to the list of known items

get_short(value)

Get a short representation of this value (A, B, AA, …)
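
A minimal usage sketch; the exact labels are inferred from the "(A, B, AA, …)" hint above and not verified against the implementation:

    from runperf.html_report import KnownItems

    known = KnownItems()
    for value in ("src-build", "dst-build-1", "dst-build-2"):
        known.add(value)
    known.get_short("src-build")    # expected: "A"
    known.get_short("dst-build-2")  # expected: "C"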

runperf.html_report.anonymize_test_params(lines)

Tweaks to remove dynamic data from test-params

runperf.html_report.generate_report(path, results, with_charts=False, small_file=False)

Generate html report from results

Parameters:
  • path – Path to the output html file
  • results – Results container from runperf.result.ResultsContainer
  • with_charts – Whether to generate graphs
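
A hypothetical invocation ("results" is assumed to be a populated runperf.result.ResultsContainer, see runperf.result below):

    from runperf.html_report import generate_report

    generate_report("report.html", results, with_charts=True)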

runperf.machine module

class runperf.machine.BaseMachine(log, name, distro, default_passwords=None)

Bases: object

Basic machine interaction

copy_from(src, dst)

Copy file from the machine

Warning: This won’t check/setup keys

copy_to(src, dst)

Copy file(s) to the machine

Warning: This won’t check/setup keys

fetch_logs(path)

Fetch logs from this machine

get_addr()

Get addr/hostname

get_fullname()

Return full host name

get_host_addr()

Get addr/hostname of the host (or self)

get_info()

Report basic info about this machine

get_session(timeout=60, hop=None)

Get session to this machine

Parameters:
  • timeout – timeout for establishing the session
  • hop (BaseMachine) – ssh proxy machine
Returns:

aexpect shell session

get_session_cont(timeout=60, hop=None)

Get session to this machine suitable for “with” usage

Parameters:
  • timeout – timeout for establishing the session
  • hop (BaseMachine) – ssh proxy machine
Returns:

aexpect shell session
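
A minimal sketch, assuming "machine" is an initialized BaseMachine subclass instance; the returned object is an aexpect shell session per the docs above, so standard aexpect calls such as session.cmd() apply:

    with machine.get_session_cont(timeout=120) as session:
        session.cmd("uname -r")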

get_ssh_cmd(hop=None)

Get the ssh command used to connect to this machine

Parameters: hop – Use hop as ssh proxy

ssh_copy_id(hop=None)

Copy default id to remote host

class runperf.machine.Controller(args, log)

Bases: object

Object that allows interacting with multiple hosts

apply_profile(profile, extra)

Apply profile on each host, report list of lists of workers

cleanup()

Post-testing cleanup

fetch_logs(path)

Fetch logs from all hosts

static for_each_host(hosts, method, args=(), kwargs=None)

Perform action in parallel on each host, signal RebootRequest if necessary.

Parameters:
  • hosts – List of hosts to run the tasks on
  • method – host.$method to be performed on each host
  • args – positional arguments forwarded to the called methods
  • kwargs – keyword arguments forwarded to the called methods
Raises:

exceptions.RebootRequest – When any of the actions reports a non-zero return.
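
An illustrative sketch: run the "setup" method on all hosts in parallel and react to a requested reboot ("hosts" is a hypothetical list of runperf.machine.Host objects; attribute access on the exception is assumed from the RebootRequest(hosts, interrupted_action) signature above):

    from runperf import exceptions
    from runperf.machine import Controller

    try:
        Controller.for_each_host(hosts, "setup")
    except exceptions.RebootRequest as request:
        for host in request.hosts:  # assumed attribute
            host.reboot()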

for_each_host_retry(attempts, hosts, method, args=(), kwargs=None)

Perform action in parallel on each host, allowing retries on failure.

This is useful for tasks that might fail/require reboot.

Parameters:
  • attempts – How many attempts per host
  • hosts – List of hosts to run the tasks on
  • method – host.$method to be performed on each host
  • args – positional arguments forwarded to the called methods
  • kwargs – keyword arguments forwarded to the called methods
Raises:

exceptions.RebootRequest – When any of the actions reports a non-zero return.

revert_profile()

Revert profile

run_test(test_class, workers, extra)

Run a test

Parameters:
  • test_class – class to be instantiated and executed via this controller
  • workers – list of workers to be made available for execution
  • extra – extra test parameters

setup()

Basic setup like ssh keys, pbench installation and such

write_metadata(key, value)

Append the key:value to the RUNPERF_METADATA file

class runperf.machine.Host(parent_log, name, addr, distro, args, hop=None)

Bases: runperf.machine.BaseMachine

Base object to leverage a machine

apply_profile(profile, extra, setup_script, rp_paths)

Apply profile and set new workers

Parameters:
  • profile – name of the requested profile
  • extra – extra profile parameters
  • setup_script – setup script to be executed on each worker setup
  • rp_paths – paths to runperf assets

cleanup()

Cleanup after testing

fetch_logs(path)

Fetch important logs

generate_ssh_key()

Generate/reuse ssh key in ~/.ssh/id_rsa

get_addr()

Return the addr as host addresses are static

get_fullname()

Return full host name

get_host_addr()

Return our addr as we are the host

get_info()

Report basic info about this machine

get_ssh_cmd(hop=None)

Use self.hop as the default hop

provision(provisioner)

Provision the machine

reboot()

Gracefully reboot the machine

revert_profile()

Revert profile if any profile set

run_script(script, timeout=600)

Runs a script on the machine

setup()

Prepare host

class runperf.machine.LibvirtGuest(host, name, distro, base_image, smp, mem, default_passwords=None, extra_params=None)

Bases: runperf.machine.BaseMachine

Object representing libvirt guests

Parameters:
  • host – Host on which to define the VM
  • name – Name of the VM
  • distro – OS version installed on the image
  • base_image – Path to the base guest image
  • smp – Number of CPUs to be used by VM
  • mem – Amount of memory to be used by VM

XML_FILTERS = ((re.compile('<uuid>[^<]+</uuid>'), 'UUID'), (re.compile('<mac address=[^/]+/>'), 'MAC'), (re.compile('[\\"\']/var/lib/libvirt/[^\\"\']+[\\"\']'), 'PATH'), (re.compile('<seclabel.*?</seclabel>', re.DOTALL), 'SECLABEL'), (re.compile('portid=[\\"\'][^\\"\']+[\\"\']'), 'PORTID'), (re.compile('[\\"\']/dev/pts[^\\"\']*[\\"\']'), 'PTS'), (re.compile('\\sid=[\'\\"]\\d+[\'\\"]'), ' ID'))

cleanup()

Destroy the machine and close host connection

get_addr()

Get addr/hostname

get_fullname()

Return full host name

get_host_addr()

Get addr/hostname of the host (or self)

get_host_session()

Get and cache host session.

This session will be cleaned automatically on “.cleanup()”

get_info()

Report basic info about this machine

get_ssh_cmd(hop=None)

Use self.hop as the default hop

is_defined()

Whether VM is defined (not necessarily running)

is_running()

Whether VM is running

start()

Define and start the VM

runperf.machine.get_distro_info(machine)

Various basic sysinfo

runperf.profiles module

class runperf.profiles.BaseProfile(host, rp_paths, extra)

Bases: object

Base class to define profiles

Base profile that defines basic handling

Supported extra params:
  • __NAME__: Set the name of this profile
  • __KEEP_ASSETS__: Keep files that would otherwise be removed by the _path_to_be_removed feature (e.g. pristine images)
Parameters:
  • host – Host machine to apply the profile on
  • rp_paths – list of runperf paths

apply(setup_script)

Apply the profile and create the workers

Returns: True when a reboot is required; [worker1, worker2, …] on success
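
Sketch of consuming the documented return value ("profile" and "host" are hypothetical, already-initialized instances; not taken from the run-perf sources):

    ret = profile.apply(setup_script)
    if ret is True:
        host.reboot()  # the profile requires a reboot before it is active
    else:
        workers = ret  # list of workers to run the tests on
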
fetch_logs(path)

Fetch useful data from all workers as well as host.

get_info()

Useful information that should clearly identify the current profile setting.

Returns: dict of per-category information about how this profile affected the machine

name = ''

revert()

Revert the profile

Returns: True when the machine needs to be rebooted; False when everything is reverted properly

class runperf.profiles.DefaultLibvirt(host, rp_paths, extra)

Bases: runperf.profiles.PersistentProfile

Use libvirt defaults to create one VM leaving some free CPUs

Extra params:
  • force_guest_cpus – override guest_cpus
  • force_guest_mem – override guest_mem
  • force_no_vms – override the number of VMs
  • qemu_bin – custom qemu binary location

deps = 'tuned libvirt libguestfs-tools-c virt-install'

get_info()

Useful information that should clearly identify the current profile setting.

Returns: dict of per-category information about how this profile affected the machine

img_base = '/var/lib/libvirt/images'

name = 'DefaultLibvirt'

class runperf.profiles.DefaultLibvirtMulti(host, rp_paths, extra)

Bases: runperf.profiles.DefaultLibvirt

Runs multiple DefaultLibvirt VMs to fill guest_cpus.

By default it uses 2 CPUs per VM but can be tweaked using force_guest_cpus extra parameter.

name = 'DefaultLibvirtMulti'

class runperf.profiles.Localhost(host, rp_paths, extra)

Bases: runperf.profiles.BaseProfile

Run on localhost

Base profile that defines basic handling

Supported extra params:
  • __NAME__: Set the name of this profile
  • __KEEP_ASSETS__: Keep files that would otherwise be removed by the _path_to_be_removed feature (e.g. pristine images)
Parameters:
  • host – Host machine to apply the profile on
  • rp_paths – list of runperf paths

name = 'Localhost'

class runperf.profiles.Overcommit1p5(host, rp_paths, extra)

Bases: runperf.profiles.DefaultLibvirt

CPU host overcommit profile to use 1.5 host cpus using multiple guests

name = 'Overcommit1_5'

class runperf.profiles.PersistentProfile(host, rp_paths, extra)

Bases: runperf.profiles.BaseProfile

Base profile for handling persistent setup

The “_apply” is modified to check for the “persistent_setup_expected” setup, which can be used to signal and verify that all persistent setup tasks were performed.

There are also features such as grub_args, rc_local and tuned_adm_profile that can be handled automatically.

Extra params:
  • irqbalance – enable/disable irqbalance

Parameters:
  • host – Host machine to apply profile on
  • rp_paths – list of runperf paths
  • skip_init_call – Skip call to super class (in case of multiple inheritance)
get_info()

Useful information that should clearly identify the current profile setting.

Returns: dict of per-category information about how this profile affected the machine

class runperf.profiles.TunedLibvirt(host, rp_paths, extra)

Bases: runperf.profiles.DefaultLibvirt

Use a single guest defined by $host-$suffix.xml libvirt definition

  • hugepages on host
  • strictly pinned numa
  • host-passthrough cpu model and cache
  • pin ioports to unused cpus
  • grub: nosoftlockup nohz=on
  • use cgroups to move most processes to the unused cpus

Extra params:
  • xml – override full xml path
  • xml_suffix – suffix to xml path [“-tuned”]

name = 'TunedLibvirt'

runperf.profiles.get(profile, extra, host, paths)

Get an initialized profile object matching the definition

Parameters:
  • profile – name of the requested profile
  • extra – extra profile parameters
  • host – Host OS instance (machine.Host)
  • paths – paths to runperf assets
Returns:

Initialized profile instance
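
A hypothetical call matching the signature above ("host" would be a runperf.machine.Host instance and "paths" the runperf asset paths):

    from runperf import profiles

    profile = profiles.get("DefaultLibvirt", {}, host, paths)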

runperf.provisioners module

class runperf.provisioners.Beaker(controller, extra)

Bases: object

Beaker provisioner

Uses current machine to execute “bkr” client

The Beaker provisioner is stateless and does not support extra params

name = 'Beaker'

static provision(machine)

Perform the provisioning

runperf.result module

class runperf.result.AveragesModifier(weight)

Bases: runperf.result.Modifier

Modifier that calculates averages of all builds

COEFFICIENT = 2

add_result(result)

Add reference result

Parameters: result – result.Result to be processed
check_result(result)

Add this result and perform additional checks, reporting the findings

Parameters: result – result.Result to be processed
Returns: [(check_name, difference, weight, source_value), …] where source_value is an optional value correcting the source value

class runperf.result.Model

Bases: object

Model base-class

check_result(test_name, src, dst)

Apply model to a test_name

Parameters:
  • test_name – Name of the current check
  • src – Original source score
  • dst – Original destination score
  • primary – Whether the check is primary
Returns:

[(check_name, difference, weight, source_value), …] where source_value is an optional value correcting the source value

identify(data)

Set/train the model based on provided data

Parameters: data – dict of {result: [value, value, value]}
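
Illustrative shape of the “data” argument described above; the test names used as keys are made up for the example and "model" stands for any Model subclass instance:

    data = {
        "fio/read/iops": [1010.0, 995.0, 1002.0],
        "uperf/tcp_stream/Gb_sec": [9.6, 9.4, 9.7],
    }
    model.identify(data)
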
mean_tolerance = None

processing_dst_results = False

stddev_tolerance = None

class runperf.result.ModelLinearRegression(mean_tolerance, stddev_tolerance, model=None)

Bases: runperf.result.Model

Simple linear regression model

TOO_STRICT_COEFFICIENT = 1.1

check_result(test_name, src, dst)

Apply model to a test_name

Parameters:
  • test_name – Name of the current check
  • src – Original source score
  • dst – Original destination score
  • primary – Whether the check is primary
Returns:

[(check_name, difference, weight, source_value), …] where source_value is an optional value correcting the source value

identify(data)

Identify model based on data

Parameters: data – dict of {result: [value, value, value]}
Note: currently uses self.mean_tolerance for all tolerances

rebase(data)

Rebase the model to new average raw values while keeping the acceptable deviation.

Parameters: data – dict of {result: [value, value, value]}

class runperf.result.ModelStdev(mean_tolerance, stddev_tolerance, model=None)

Bases: runperf.result.ModelLinearRegression

Simple linear regression model using 3*stddev as error

ERROR_COEFICIENT = 3

TOO_STRICT_COEFFICIENT = 1.1

identify(data)

Identify model based on data

Parameters: data – dict of {result: [value, value, value]}
Note: currently uses self.mean_tolerance for all tolerances

class runperf.result.Modifier

Bases: object

Base class for post-analysis modification of results.

For every reference build the add_result is called and the final result is then checked using the check_result method and appended as a new metric to the test result.

add_result(result)

Add reference result

Parameters: result – result.Result to be processed
check_result(result)

Add this result and perform additional checks, reporting the findings

Parameters: result – result.Result to be processed
Returns: [(check_name, difference, weight, source_value), …] where source_value is an optional value correcting the source value

weight = 0

class runperf.result.NOutOfResultsModifier(weight, allowed_failures)

Bases: runperf.result.Modifier

A model that allows N out of reference builds to fail

It uses allowed_failures to scale the current result’s tolerance, indicating in how many reference builds the current result failed.

add_result(result)

Add reference result

Parameters: result – result.Result to be processed
check_result(result)

Add this result and perform additional checks, reporting the findings

Parameters: result – result.Result to be processed
Returns: [(check_name, difference, weight, source_value), …] where source_value is an optional value correcting the source value

class runperf.result.RelativeResults(log, mean_tolerance, stddev_tolerance, models, modifiers, metadata)

Bases: object

Object to calculate and evaluate entries between two results.

compute_statistics(all_means, all_stddevs)

Calculate statistics for given means/stddevs

evaluate()

Process a default set of statistics on the results

expand_grouped_results(last=False)

Calculate pre-defined grouped results

finish()

Evaluate processed results and report the status

Returns: 0 when everything is alright; 2 when there are any failures (or group failures); 3 when no comparisons were performed (e.g. all tests were skipped)
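
A sketch mapping the documented statuses ("results" is a hypothetical RelativeResults instance):

    status = results.finish()
    if status == 0:
        print("PASS")
    elif status == 2:
        print("FAIL - some (group) comparisons failed")
    elif status == 3:
        print("ERROR - no comparisons were performed")
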
get_xunit()

Log the header (execute last when dynamic number of tests)

Parameters: total_tests – Amount of executed tests (None = get from records)

per_type_stats(merge=None, primary_only=True)

Generate stats using merged results (e.g. merge all fio-read tests)

record(result, grouped=False)

Insert result into database

record_broken(test_name, details=None, primary=True, params=None)

Insert broken/corrupted result

record_result(test_name, src, dst, primary=False, grouped=False, difference=None, tolerance=None, params=None, last=False)

Process result and insert it into database

sum_stats(primary_only=True)

Generate summary stats (min/median/max/average…)

class runperf.result.Result(test, dst, tolerance, primary=False, params=None)

Bases: object

XUnitResult object

add(suffix, name, difference, weight, src=None)

Add individual result

add_bad(suffix, name, details, difference, weight, src=None)

Add a bad result

agg_diffs
agg_weights
big
classname
details

Description of the result status

dst
error
get_merged_name(merge)

Report full test name but replace parts specified in “merge” with ‘*’

good
is_error()

Whether this result is a runtime error

is_stddev()

Whether this result is a “stddev” result (or mean)

name

Full test name

params
primary
score

Result score

small
srcs
status

Result status

testname
tolerance

class runperf.result.ResultsContainer(log, tolerance, stddev_tolerance, models, src_name, src_path, modifiers)

Bases: object

Container to store multiple RelativeResults and provide various stats

add_result_by_path(name, path, last=False, skip_incorrect=True)

Insert test result according to path hierarchy

runperf.result.closest_result(src_path, dst_path_groups, flatten_coefficient=1)

Compare results and find the one that has the most results close to the src one

Parameters:
  • src_path – Path to the src result
  • dst_path_groups – List of paths to results we are comparing to
runperf.result.get_uncertainty(no_samples)

Return uncertainty coefficient based on the number of samples (no_samples)

runperf.result.iter_results(path, skip_incorrect=False)

Process runperf results and yield individual results

Parameters:
  • path – base path to runperf results
  • skip_incorrect – don’t yield incorrect results
Yield result:

tuple(test_name, score, is_primary)
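
Illustrative iteration; each yielded item follows the tuple description above:

    from runperf import result

    for test_name, score, primary in result.iter_results("results/", skip_incorrect=True):
        if primary:
            print(test_name, score)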

runperf.result.iter_results_errors(path)

Process runperf results and yield the dirs with runperf errors

runperf.result.iter_results_jsons(path, skip_incorrect=False)

Process runperf results and yield the result.json files

runperf.tests module

class runperf.tests.BaseTest(host, workers, base_output_path, metadata, extra)

Bases: object

Base implementation of a Test class

cleanup()

Cleanup the environment; is always executed even for SKIP tests

inject_metadata(session, path)

Add our “RUNPERF_METADATA.json” to the dirname($path) in order to preserve our extended data (especially profile, workers and such…)

Parameters:
  • session – Session to the worker
  • path – Path where the results should be located

min_groups = 1

name = ''

run()

Run the testing

setup()

Allow extra steps before test execution

class runperf.tests.DummyTest(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.BaseTest

Dummy test intended for selftesting

name = 'DummyTest'

class runperf.tests.Linpack(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.PBenchTest

linpack test

default_args = (('samples', 3),)

name = 'linpack'

test = 'linpack'

class runperf.tests.PBenchFio(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.PBenchTest

Default fio benchmark (read)

default_args = (('test-types', 'read,write,rw'), ('ramptime', 10), ('runtime', 180), ('samples', 3))

name = 'fio'

test = 'fio'

class runperf.tests.PBenchNBD(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.PBenchFio

Executes PBenchFio with a custom job to test nbd

By default it creates and distributes the job-file using “nbd-check.fio” from assets, but you can override the job-file path and distribute your own version. In such a case you have to make sure to use the right paths and format.

base_path = '/var/lib/runperf/runperf-nbd/'

cleanup()

Cleanup the environment; is always executed even for SKIP tests

default_args = (('numjobs', 4), ('job-file', '/var/lib/runperf/runperf-nbd/nbd.fio'))

name = 'fio-nbd'

setup()

Allow extra steps before test execution

class runperf.tests.PBenchTest(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.BaseTest

Pbench test

Metadata: pbench_server – set the pbench-server-url
Metadata: pbench_server_publish – publish results to the pbench server

args = ''

default_args = ()

setup()

Allow extra steps before test execution

test = ''

timeout = 172800

class runperf.tests.UPerf(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.PBenchTest

Uperf test

By default it executes the tcp stream test. If you need to test udp, we strongly suggest also setting type=rr; otherwise it is not guaranteed that the packets are not simply dropped.

default_args = (('test-types', 'stream'), ('runtime', 60), ('samples', 3), ('protocols', 'tcp'), ('message-sizes', '1,64,16384'))

name = 'uperf'

test = 'uperf'

runperf.tests.get(name, extra)

Get list of test classes based on test name

Parameters: name – Test name optionally followed by ‘:’ and extra params
Returns: instance that allows performing the test, and the extra params
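
Hypothetical usage based on the description above (the exact shape of the returned value is not verified here):

    from runperf import tests

    test_defs = tests.get("fio", {"runtime": "60"})  # list of test classes per the docstring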

runperf.version module

The purpose of this implementation is to return the right version for installed as well as ‘make develop’ deployments.

runperf.version.get_version()

Attempt to get the version from git or fallback to pkg_resources

Module contents

class runperf.AnalyzePerf

Bases: object

Class to allow result analysis/model creation

class runperf.ComparePerf

Bases: object

Compares run-perf results. With multiple ones it adjusts the limits according to their spread.

class runperf.DictAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)

Bases: argparse.Action

Split items by ‘=’ and store them as a single dictionary
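
Illustrative argparse wiring; the resulting dict layout is assumed from the one-line description above:

    import argparse
    from runperf import DictAction

    parser = argparse.ArgumentParser()
    parser.add_argument("--metadata", nargs="+", action=DictAction, default={})
    args = parser.parse_args(["--metadata", "build=42", "owner=me"])
    # args.metadata is expected to be {"build": "42", "owner": "me"}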

class runperf.DiffPerf

Bases: object

Compares multiple run-perf results and reports the index of the closest one.

class runperf.StripPerf

Bases: object

Class to cherry-pick only the data used by run-perf tools that is useful for later analysis.

static process_result_json(src_path, dst_base)

Gather result.json data

static process_result_metadata(src_path, dst_path)

Gather RUNPERF_METADATA.json

static process_sysinfo(src_path, dst_path)

Gather __sysinfo*__ files (global and profile)

runperf.create_metadata(output_dir, args)

Generate RUNPERF_METADATA in this directory

runperf.get_abs_path(path)

Return absolute path to a given location

runperf.item_with_params(item)

Deserialize item with optional params argument

runperf.logging_argparse(parser)

Define logging argparse arguments

runperf.logging_setup(args, fmt=None)

Setup logging according to args

runperf.main()

A tool to execute the same tasks on pre-defined scenarios/profiles and store the results together with metadata in a suitable structure for compare-perf to compare them.

runperf.parse_host(host)

Go through hosts and split them by ‘:’ to get name:addr

When name not supplied, uses first part of the provided addr
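
Expected behaviour per the description above (assumed, not verified against the implementation; results shown as (name, addr) pairs for illustration):

    from runperf import parse_host

    parse_host("foo:192.168.122.5")  # -> ("foo", "192.168.122.5")
    parse_host("example.org")        # -> ("example", "example.org")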

runperf.profile_test_defs(profile_args, default_set)

Process profile args and return suitable test set

Parameters:
  • profile_args – profile arguments
  • default_set – default set of test definitions
Returns:

list of test definitions