runperf package

Submodules

runperf.exceptions module

exception runperf.exceptions.RebootRequest(hosts, interrupted_action)

Bases: Exception

Exception used when reboot is requested

exception runperf.exceptions.TestSkip

Bases: RuntimeWarning

Exception used to mark skipped tests
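
A minimal usage sketch (the helper and the “fio” dependency are hypothetical; TestSkip derives from RuntimeWarning, so it accepts a plain message):

    import shutil

    from runperf import exceptions

    def ensure_fio_available():
        # hypothetical pre-condition check inside a test
        if not shutil.which("fio"):
            raise exceptions.TestSkip("fio is not installed")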

runperf.html_report module

runperf.html_report.anonymize_test_params(lines)

Tweaks to remove dynamic data from test-params

runperf.html_report.generate_report(path, results, with_charts=False)

Generate html report from results

Parameters:
  • path – Path to the output html file
  • results – Results container from runperf.result.ResultsContainer
  • with_charts – Whether to generate graphs
runperf.html_report.num2char(num)

Convert number to char (A,B,C, …,BA, BB, …)
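
For illustration, a minimal sketch of an equivalent conversion (it treats the number as base-26 with digits ‘A’..‘Z’, which matches the A, B, …, BA, BB sequence above; this is not the actual implementation):

    def num2char(num):
        # 0 -> 'A', 1 -> 'B', ..., 25 -> 'Z', 26 -> 'BA', 27 -> 'BB', ...
        digits = ""
        while True:
            digits = chr(ord("A") + num % 26) + digits
            num //= 26
            if num == 0:
                return digits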

runperf.machine module

class runperf.machine.BaseMachine(log, name, distro, default_passwords=None)

Bases: object

Basic machine interaction

copy_from(src, dst)

Copy file from the machine

Warning: This won’t check/set up keys
get_addr()

Get addr/hostname

get_host_addr()

Get addr/hostname of the host (or self)

get_info()

Report basic info about this machine

get_session(timeout=60, hop=None)

Get session to this machine

Parameters:
  • timeout – timeout
  • hop (BaseMachine) – ssh proxy machine
Returns:

aexpect shell session

get_session_cont(timeout=60, hop=None)

Get session to this machine suitable for “with” usage

Parameters:
  • timeout – timeout
  • hop (BaseMachine) – ssh proxy machine
Returns:

aexpect shell session
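
A hypothetical usage sketch of this context-manager variant (the machine object and the command are placeholders; cmd() comes from the aexpect shell session):

    with machine.get_session_cont(timeout=60) as session:
        print(session.cmd("uname -r"))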

get_ssh_cmd(hop=None)

Get the ssh command to connect to this machine

Parameters: hop – Use hop as ssh proxy
ssh_copy_id(hop=None)

Copy default id to remote host

class runperf.machine.Controller(args, log)

Bases: object

Object for interacting with multiple hosts

apply_profile(profile)

Apply profile on each host, report list of lists of workers

cleanup()

Post-testing cleanup

static for_each_host(hosts, method, args=(), kwargs=None)

Perform action in parallel on each host, signal RebootRequest if necessary.

Parameters:
  • hosts – List of hosts to run the tasks on
  • method – host.$method to be performed per each host
  • args – positional arguments forwarded to the called methods
  • kwargs – key word arguments forwarded to the called methods
Raises:

exceptions.RebootRequest – When any of the actions report non-zero return.
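
A hypothetical calling sketch, assuming “hosts” is a list of Host objects and that the caller handles the requested reboot (the “hosts” attribute of the exception is assumed from its constructor arguments):

    from runperf import exceptions

    try:
        Controller.for_each_host(hosts, "setup")
    except exceptions.RebootRequest as request:
        for host in request.hosts:   # assumed attribute, see the constructor
            host.reboot()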

for_each_host_retry(attempts, hosts, method, args=(), kwargs=None)

Perform an action in parallel on each host, retrying on failure up to the given number of attempts.

This is useful for tasks that might fail/require reboot.

Parameters:
  • attempts – How many attempts per-host
  • hosts – List of hosts to run the tasks on
  • method – host.$method to be performed per each host
  • args – positional arguments forwarded to the called methods
  • kwargs – key word arguments forwarded to the called methods
Raises:

exceptions.RebootRequest – When any of the actions report non-zero return.

revert_profile()

Revert profile

run_test(test_class, workers, extra)

Run a test

Parameters:
  • test_class – class to be instantiated and executed via this controller
  • workers – list of workers to be made available for execution
  • extra – extra params forwarded to the test
setup()

Basic setup like ssh keys, pbench installation and such

write_metadata(key, value)

Append the key:value to the RUNPERF_METADATA file

class runperf.machine.Host(parent_log, name, addr, distro, args, hop=None)

Bases: runperf.machine.BaseMachine

Base object to leverage a machine

apply_profile(profile, setup_script, rp_paths)

Apply profile and set new workers

Parameters:
  • profile – name of the requested profile
  • setup_script – setup script to be executed on each worker setup
• rp_paths – paths to runperf assets
cleanup()

Cleanup after testing

generate_ssh_key()

Generate/reuse ssh key in ~/.ssh/id_rsa

get_addr()

Return the addr, as it is static

get_host_addr()

Return our addr as we are the host

get_info()

Report basic info about this machine

provision(provisioner)

Provision the machine

reboot()

Gracefully reboot the machine

revert_profile()

Revert profile if any profile set

run_script(script, timeout=600)

Run a script on the machine

setup()

Prepare host

class runperf.machine.LibvirtGuest(host, name, distro, base_image, smp, mem, default_passwords=None, extra_params=None)

Bases: runperf.machine.BaseMachine

Object representing libvirt guests

Parameters:
  • host – Host on which to define the VM
  • name – Name of the VM
  • distro – OS version installed on the image
  • base_image – Path to the base guest image
  • smp – Number of CPUs to be used by VM
  • mem – Amount of memory to be used by VM
XML_FILTERS = ((re.compile('<uuid>[^<]+</uuid>'), 'UUID'), (re.compile('<mac address=[^/]+/>'), 'MAC'), (re.compile('[\\"\']/var/lib/libvirt/[^\\"\']+[\\"\']'), 'PATH'), (re.compile('<seclabel.*?</seclabel>', re.DOTALL), 'SECLABEL'), (re.compile('portid=[\\"\'][^\\"\']+[\\"\']'), 'PORTID'), (re.compile('[\\"\']/dev/pts[^\\"\']*[\\"\']'), 'PTS'), (re.compile('\\sid=[\'\\"]\\d+[\'\\"]'), ' ID'))
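
XML_FILTERS pairs compiled regular expressions with placeholder strings; a minimal sketch of how such filters can anonymize a domain XML for comparison (illustrative only, not the actual implementation):

    def anonymize_xml(xml):
        # replace dynamic parts (uuid, mac, paths, ...) with placeholders
        for regex, placeholder in LibvirtGuest.XML_FILTERS:
            xml = regex.sub(placeholder, xml)
        return xml
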
cleanup()

Destroy the machine and close the host connection

get_addr()

Get addr/hostname

get_host_addr()

Get addr/hostname of the host (or self)

get_host_session()

Get and cache host session.

This session will be cleaned automatically on “.cleanup()”

get_info()

Report basic info about this machine

get_session(timeout=60, hop=None)

Get session to this machine

Parameters:
  • timeout – timeout
  • hop (BaseMachine) – ssh proxy machine
Returns:

aexpect shell session

is_defined()

Whether the VM is defined (not necessarily running)

is_running()

Whether VM is running

start()

Define and start the VM

class runperf.machine.ShellSession(*args, **kwargs)

Bases: aexpect.client.ShellSession

Mute-able aexpect.ShellSession

runperf.machine.get_distro_info(machine)

Gather various basic sysinfo about the given machine

runperf.profiles module

class runperf.profiles.BaseProfile(host, rp_paths)

Bases: object

Base class to define profiles

Parameters:
  • host – Host machine to apply profile on
  • rp_paths – list of runperf paths
apply(setup_script)

Apply the profile and create the workers

get_info()

Useful information that should clearly identify the current profile setting.

Returns: dict of per-category information about how this profile affected the machine.
profile = ''
revert()

Revert the profile

class runperf.profiles.DefaultLibvirt(host, rp_paths, extra_params=None)

Bases: runperf.profiles.BaseProfile

Use libvirt defaults to create one VM, leaving some CPUs free

default_password = 'redhat'
deps = 'libvirt libguestfs-tools-c virt-install'
get_info()

Useful information that should clearly identify the current profile setting.

Returns: dict of per-category information about how this profile affected the machine.
img_base = '/var/lib/libvirt/images'
no_vms = 1
profile = 'DefaultLibvirt'
class runperf.profiles.Localhost(host, rp_paths)

Bases: runperf.profiles.BaseProfile

Run on localhost

Parameters:
  • host – Host machine to apply profile on
  • rp_paths – list of runperf paths
profile = 'Localhost'
class runperf.profiles.Overcommit1p5(host, rp_paths, extra_params=None)

Bases: runperf.profiles.DefaultLibvirt

Host CPU overcommit profile that consumes 1.5x the host cpus using multiple guests

profile = 'Overcommit1_5'
class runperf.profiles.PersistentProfile(host, rp_paths, skip_init_call=False)

Bases: runperf.profiles.BaseProfile

Base profile for handling persistent setup

The “_apply” is modified to check for the “persistent_setup_expected” setup, which can be used to signal and verify that all persistent setup tasks were performed.

Features such as grub_args, rc_local and tuned_adm_profile can also be handled automatically.

Parameters:
  • host – Host machine to apply profile on
  • rp_paths – list of runperf paths
  • skip_init_call – Skip call to super class (in case of multiple inheritance)
get_info()

Useful information that should clearly identify the current profile setting.

Returns: dict of per-category information about how this profile affected the machine.
class runperf.profiles.TunedLibvirt(host, rp_paths)

Bases: runperf.profiles.DefaultLibvirt, runperf.profiles.PersistentProfile

Use a single guest defined by $host-tuned.xml libvirt definition

  • hugepages on host
  • strictly pinned numa
  • host-passthrough cpu model and cache
  • pin ioports to unused cpus
  • grub: nosoftlockup nohz=on
  • use cgroups to move most processes to the unused cpus
get_info()

Useful information that should clearly identify the current profile setting.

Returns: dict of per-category information about how this profile affected the machine.
profile = 'TunedLibvirt'
runperf.profiles.get(profile, host, paths)

Get an initialized profile object matching the definition

Parameters:
  • profile – name of the requested profile (str)
  • host – Host OS instance (host.Host)
  • paths – list of runperf paths
Returns:

Initialized profile instance
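
A hypothetical end-to-end sketch of applying and reverting a profile (the host object, paths and setup script are placeholders):

    from runperf import profiles

    profile = profiles.get("Localhost", host, paths)
    workers = profile.apply(setup_script)   # apply the profile, get workers
    try:
        pass  # ... run tests against the workers ...
    finally:
        profile.revert()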

runperf.provisioners module

class runperf.provisioners.Beaker(controller, extra)

Bases: object

Beaker provisioner

Uses current machine to execute “bkr” client

The Beaker provisioner is stateless and does not support extra params

static provision(machine)

Perform the provisioning

runperf.result module

class runperf.result.Model

Bases: object

check_result(test_name, src, dst, primary=False)

Check whether src-dst distance is within limits

class runperf.result.ModelLinearRegression(mean_tolerance, stddev_tolerance, model=None)

Bases: runperf.result.Model

Simple linear regression model

TOO_STRICT_COEFFICIENT = 1.1
check_result(test_name, src, dst, primary=False)

Check whether src-dst distance is within limits

identify(data)

Identify model based on data

Parameters: data – dict of {result: [value, value, value]}
Note: currently uses self.mean_tolerance for all tolerances
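
A hypothetical example of the expected input (names and values are made up):

    model = ModelLinearRegression(mean_tolerance=5, stddev_tolerance=10)
    model.identify({
        "fio-read/throughput": [1200.0, 1185.0, 1210.0],
        "uperf-tcp/latency": [0.52, 0.55, 0.51],
    })
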
class runperf.result.RelativeResults(log, mean_tolerance, stddev_tolerance, models, metadata)

Bases: object

Object to calculate and evaluate entries between two results.

compute_statistics(all_means, all_stddevs)

Calculate statistics for given means/stddevs

evaluate()

Process a default set of statistics on the results

expand_grouped_results()

Calculate pre-defined grouped results

finish()

Evaluate processed results and report the status

Returns: 0 when everything is all right, 2 when there are any failures (or group failures), 3 when no comparisons were performed (e.g. all tests were skipped)
get_xunit()

Generate the xunit representation of the results (execute last when the number of tests is dynamic)

Parameters: total_tests – Number of executed tests (None = get from records)
per_type_stats(merge=None, primary_only=True)

Generate stats using merged results (e.g. merge all fio-read tests)

record(result, grouped=False)

Insert result into database

record_broken(test_name, details=None, primary=True, params=None)

Insert broken/corrupted result

record_result(test_name, src, dst, primary=False, grouped=False, raw_difference=None, raw_tolerance=None, params=None)

Process result and insert it into database

sum_stats(primary_only=True)

Generate summary stats (min/median/max/average…)

class runperf.result.Result(status, score, test, src, dst, details=None, primary=False, params=None)

Bases: object

XUnitResult object

classname
details
dst
get_merged_name(merge)

Report the full test name, but replace parts specified in “merge” with ‘*’

is_stddev()

Whether this result is a “stddev” result (or a mean)

name

Full test name

params
primary
score
src
status
testname
class runperf.result.ResultsContainer(log, tolerance, stddev_tolerance, models, src_name, src_path)

Bases: object

Container to store multiple RelativeResults and provide various stats

add_result_by_path(name, path)

Insert test result according to path hierarchy

runperf.result.iter_results(path, skip_incorrect=False)

Process runperf results and yield individual results

Parameters:
  • path – base path to runperf results
  • skip_incorrect – don’t yield incorrect results
Yield result:

tuple(test_name, score, is_primary)
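
A minimal consumption sketch based on the yielded tuples described above (the path is a placeholder):

    from runperf import result

    for test_name, score, is_primary in result.iter_results(
            "old-results/", skip_incorrect=True):
        if is_primary:
            print(test_name, score)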

runperf.tests module

class runperf.tests.BaseTest(host, workers, base_output_path, metadata, extra)

Bases: object

Base implementation of a Test class

cleanup()

Clean up the environment; always executed, even for SKIP tests

inject_metadata(session, path)

Inject our metadata into pbench-like json results, or into a “RUNPERF_METADATA.json” file within dirname(path).

It injects the self.metadata into each:

[:]["iteration_data"]["parameters"]["user"].append()

creating the “user” list if it does not exist and skipping iterations where the preceding items are missing. A simplified shape of this structure is sketched below.

In case the $path file does not exist but dirname($path) does, it creates a “RUNPERF_METADATA.json” file and dumps the content there.

Parameters:
  • session – Session to the worker
  • path – Path where the results should be located
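
For illustration, a simplified shape of the pbench-like json described above (the nesting follows the path from the description; the string inside “user” is only a placeholder):

    [
        {
            "iteration_data": {
                "parameters": {
                    "user": [
                        "<- self.metadata entries are appended here"
                    ]
                }
            }
        }
    ]
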
min_groups = 1
name = ''
run()

Run the testing

setup()

Allow extra steps before test execution

class runperf.tests.DummyTest(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.BaseTest

name = 'DummyTest'
class runperf.tests.Linpack(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.PBenchTest

linpack test

default_args = (('run-samples', 3),)
name = 'linpack'
test = 'linpack'
class runperf.tests.PBenchFio(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.PBenchTest

Default fio benchmark (read)

default_args = (('test-types', 'read,write,rw'), ('ramptime', 10), ('runtime', 180), ('samples', 3))
name = 'fio'
test = 'fio'
class runperf.tests.PBenchNBD(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.PBenchFio

Executes PBenchFio with a custom job to test nbd

By default it creates and distributes the job-file using “nbd-check.fio” from assets, but you can override the job-file path and distribute your own version. In that case you have to make sure to use the right paths and format.

base_path = '/var/lib/runperf/runperf-nbd/'
cleanup()

Clean up the environment; always executed, even for SKIP tests

default_args = (('numjobs', 4), ('job-file', '/var/lib/runperf/runperf-nbd/nbd.fio'))
name = 'fio-nbd'
setup()

Allow extra steps before test execution

class runperf.tests.PBenchTest(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.BaseTest

Pbench test

Metadata: pbench_server - set the pbench-server-url
Metadata: pbench_server_publish - publish results to the pbench server

static add_metadata(session, key, value)

Append key=value to the standard location in the current directory of the provided session.

args = ''
default_args = ()
setup()

Allow extra steps before test execution

test = ''
timeout = 172800
class runperf.tests.UPerf(host, workers, base_output_path, metadata, extra)

Bases: runperf.tests.PBenchTest

Uperf test

By default it executes a tcp stream test. If you need to test udp, we strongly suggest also setting type=rr; otherwise it is not guaranteed that the packets are not simply dropped.

default_args = (('test-types', 'stream'), ('runtime', 60), ('samples', 3), ('protocols', 'tcp'), ('message-sizes', '1,64,16384'))
name = 'uperf'
test = 'uperf'
runperf.tests.get(name)

Get the test class and extra params based on the test name

Parameters: name – test name, optionally followed by ‘:’ and extra params
Returns: class that allows performing the test, plus the extra params
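
A hypothetical usage sketch combining the lookup with the BaseTest life-cycle (all constructor arguments are placeholders):

    from runperf import tests

    test_class, extra = tests.get("fio")
    test = test_class(host, workers, base_output_path, metadata, extra)
    try:
        test.setup()
        test.run()
    finally:
        test.cleanup()   # always executed, even for SKIP tests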

runperf.version module

The purpose of this implementation is to return the right version for installed as well as ‘make develop’ deployments.

runperf.version.get_version()

Attempt to get the version from git or fallback to pkg_resources

Module contents

class runperf.AnalyzePerf

Bases: object

Class to allow result analysis/model creation

class runperf.ComparePerf

Bases: object

Compare run-perf results. When multiple results are supplied, it adjusts the limits according to their spread.

class runperf.DictAction(option_strings, dest, nargs=None, const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)

Bases: argparse.Action

Split items by ‘=’ and store them as a single dictionary

runperf.create_metadata(output_dir, args)

Generate RUNPERF_METADATA in this directory

runperf.get_abs_path(path)

Return absolute path to a given location

runperf.main()

A tool to execute the same tasks on pre-defined scenarios/profiles and store the results together with metadata in a suitable structure for compare-perf to compare them.

runperf.parse_host(host)

Split the host entry by ‘:’ to get name:addr

When the name is not supplied, the first part of the provided addr is used
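
Illustrative examples of the expected mapping (the second line assumes “first part” means the portion before the first dot):

    parse_host("host1:192.168.122.5")  # -> ("host1", "192.168.122.5")
    parse_host("example.com")          # -> ("example", "example.com")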

runperf.setup_logging(verbosity_arg, fmt=None)

Set up logging according to the -v arg