
lkp-tests - A Linux* Kernel Performance Test and Analysis Tool

BY Julie Du, Aaron Lu, Kemi Wang, Ying Huang ON Dec 19, 2017

1   Introduction

The Linux kernel is used in a wide variety of devices, from small IoT devices to cloud servers, and its performance is often critical for the products using it. lkp-tests (lkp stands for Linux Kernel Performance) is a tool developed by the Linux kernel performance team at Intel to test and analyze Linux kernel performance. It is an open source tool that allows developers to evaluate their patches thoroughly. lkp-tests integrates approximately 80 popular industry open source test suites, including 30+ functional test suites and 50+ performance benchmarks. In addition, it provides nearly 40 monitors for comparing and analyzing performance statistics, and around 30 setup scripts to ensure each benchmark is executed with the defined system resources. lkp-tests provides a standard interface for installation, execution, and result analysis. Besides the output from the benchmarks themselves, it has been customized to collect data from every aspect of the system, such as vmstat, iostat, and turbostat, which assists performance analysis and root-causing of issues.

lkp-tests is integrated in 0-Day CI (Continuous Integration) for Linux kernel regression testing. On the performance testing side, 0-Day CI runs more than 50 performance benchmarks with more than 1400 test parameter combinations on more than 40 test machines. It is used in our performance optimization work for performance analysis, regression checking, and more. The LKP test system is also used to test Linux kernel community patches. Since 2016, around 20 Linux kernel performance regression and improvement reports have been sent to the Linux kernel community each month. The figure below shows the number of major regressions reported each month in 2017.

Figure 1: Number of major regressions reported in 2017

The LKP test system was used to analyze the performance gaps, identify hot spots, and test the performance impact thoroughly.

lkp-tests is:

1)    A flexible framework to run benchmarks

2)    A framework for performance analysis

3)    Used to run benchmarks and reproduction tools in CI systems, such as 0-Day CI

1.1   System Requirements

The install command provided by the lkp-tests tool installs all the software dependencies when it is executed, so users don’t need to worry about them.

For some special test cases, such as a file system test needing a particular partition, or a test with particular requirements (e.g., CPU count, memory size), users can configure them in the host file (details are in a subsequent chapter).

CAUTION: Back up any useful data before running a file system test, because some tests may format the related partition.

Here is an example of a host file:

nr_cpu: 4
memory: 8G
nr_ssd_partitions: 1
ssd_partitions: /dev/sdc1

For network tests, only loopback mode is supported, and the related “lo” network interface needs to be configured properly.

1.2   Performance Benchmark

lkp-tests integrates more than 50 open source performance benchmarks for Linux kernel performance regression checks and performance analysis. Most of them are micro-benchmarks, with a few macro-benchmarks (refer to the Appendix for a list of supported benchmarks). Most of the workloads are for servers.

The benchmarks cover the following subsystems:

1)    Scheduler

2)    File System I/O

3)    Network

4)    Scalability

5)    Memory management

6)    Database

7)    OS Noise (related to interrupts, timer, kthread, and so on. It is not for workload directly)

8)    High-Performance Computing (HPC)

9)    Workload emulation

1.3   Functionality Test Suites

lkp-tests also integrates some functionality tests for the following subsystems:

1)    Memory management

2)    File system

3)    Virtualization

4)    Network

5)    Other kernel features like CPU hotplug, ftrace, and others

1.4   Monitors

In addition to the benchmark score, lkp-tests also captures performance statistics through so-called monitors, such as perf stat, perf profile, and vmstat, which can be used with any performance benchmark. Monitors gather additional information while the benchmark is running, which is important for performance analysis and regression root causing.

lkp-tests currently integrates approximately 40 monitors (see the Appendix for the list of supported monitors). Most of the monitors come from open source tools, and some were developed by the lkp-tests team.

1.5   Setups

To run a benchmark, the “setup” scripts in lkp-tests are configured to prepare the test environment. This includes formatting the test disk with the specified file system, changing the CPU frequency governor or I/O scheduler, and so on. The setup scripts run before the benchmark to prepare its test environment.

1.6   Job

A “job” is the basic unit of test execution in lkp-tests, and a job file is defined to describe it. A job file includes the following information:

●     Job file identification header: basic information about the job, such as which benchmark/test suite it executes and whether it is a performance benchmark or a functional test.

●     Benchmark and its options: the parameters the job executes, how many threads it will use, etc.

●     Monitor: the monitors the job uses.  By using monitors, the test can capture performance statistics while running the benchmark.

●     Host information: the system configuration requirement about the host machine on which the job will be running, such as the number of CPUs required, how much memory, which disk partition, etc.

●     Setup: specifies the setup to be run before the benchmark.
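Putting the parts above together, a minimal job file might look like the sketch below. This composition is illustrative only; the field names follow the examples given later in this article, and the values are placeholders, not a shipped job.

```yaml
# Illustrative job file sketch (values are placeholders, not a shipped job)
suite: netperf            # identification header
testcase: netperf
category: benchmark

test: TCP_STREAM          # benchmark options
runtime: 10s

nr_cpu: 4                 # host requirements
memory: 8G

proc-vmstat:              # monitor with a custom setting
  interval: 5

cgroup2:                  # setup run before the benchmark
  pids.max: 10000
```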

2   Typical Usage Models

As illustrated above, lkp-tests is a powerful tool that helps developers test their patches. Here are some typical models describing when and how lkp-tests can help developers.

2.1   Reproduce, Root-cause Regressions, and Verify Fix

lkp-tests integrates many benchmarks that are popular in the industry. It is also used by 0-Day CI to run the benchmark for performance regression checks. 0-Day CI sends the regression report to developers. Developers can use lkp-tests to reproduce, debug the regressions, and then verify the fix.

In the email that contains the performance regression report, the kernel configuration file and the job file (see section 1.6 for details about job files) are attached for reproduction. Follow the steps in “3 Use lkp-tests Tool” to set up the environment and run the selected benchmark.

After you run the benchmark, lkp-tests provides a mechanism to compare the reported commits to its parent. See “3.2.3 Performance Regression Profiling” for usage.

To root-cause the regression, kernel developers may need to analyze the performance test results as described in the next subsection “2.2 Performance Analysis”. To verify the fix, developers can run the job again in the fixed kernel and compare the test results to the previous results.

2.2   Performance Analysis

lkp-tests can be used as a tool to help kernel developers analyze performance. lkp-tests makes it much easier to run benchmarks. Kernel developers can then use the test results to evaluate the performance impact of their changes. In addition to running the benchmark itself, lkp-tests can run a set of monitors at the same time to collect various performance statistics to help analyze the performance, determine the hot spot, and so on. After running the benchmark, a symbolic link named “result” will be created in the current directory, which points to the result directory of the current run. An example of the contents of the result directory is shown below:


The example is the result of running the vm-scalability benchmark. Many files are generated, depending on the monitors used; the above vm-scalability run used the monitors diskstats, latency_stats, meminfo, perf-profile, perf-stat, proc-vmstat, vm-scalability, and vmstat. A file without the .json or .json.gz postfix contains the related monitor’s raw data, and the corresponding .json or .json.gz file contains the parsed result (in key -> value format). The matrix.json.gz and stats.json files aggregate all the parsed results and related statistics.

The following is part of the stats.json file for perf-profile results:

"perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.__add_to_swap_cache.add_to_swap_cache.add_to_swap.shrink_page_list": 20.62,
"perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg": 20.12,
"perf-profile.calltrace.cycles-pp._raw_spin_lock.swap_info_get.page_swapcount.try_to_free_swap.swap_writepage": 3.72,

Let’s take the first line "perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.__add_to_swap_cache.add_to_swap_cache.add_to_swap.shrink_page_list": 20.62 as an example to explain the above data format.





●     perf-profile: the monitor name

●     calltrace: indicates that the statistics are for the function call chain instead of the function itself

●     cycles-pp: the PMU event used

●     _raw_spin_lock_irq.__add_to_swap_cache.add_to_swap_cache.add_to_swap.shrink_page_list: the function call chain

●     20.62: the percentage of cycles spent in the above function call chain out of all cycles during the program run
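The dot-separated key can be decomposed mechanically. The sketch below is illustrative only (plain POSIX shell parameter expansion, not lkp-tests code):

```shell
# Decompose a stats.json key of the kind shown above into its parts.
key="perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.__add_to_swap_cache.add_to_swap_cache.add_to_swap.shrink_page_list"

monitor=${key%%.*};  rest=${key#*.}    # monitor name: perf-profile
kind=${rest%%.*};    rest=${rest#*.}   # calltrace (vs. a single function)
event=${rest%%.*};   chain=${rest#*.}  # PMU event: cycles-pp

echo "$monitor / $kind / $event"       # perf-profile / calltrace / cycles-pp
echo "call chain: $chain"
```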


From the above perf-profile result, kernel developers can see that there is heavy contention on a spin lock (lines 1 and 2). To determine which spin lock, the function call chain information and the kernel source code can be checked. It turns out to be the spin lock that protects the swap cache radix tree. Based on this data, the developers may want to consider how to reduce the lock contention.

3   Use lkp-tests Tool

3.1   Setup Environment 

The lkp-tests source code can be downloaded from its public Git repository. It supports these OS distributions: Debian*, Ubuntu*, CentOS*, Archlinux*, and Oracle* Linux*.

The steps to setup the environment are shown below.

Step 1: Pre-install the required tools for lkp-tests.

# apt-get install git

Step 2: Clone the lkp-tests repo to your local directory.

# git clone

Step 3: Go to your lkp-tests installation directory.

# cd lkp-tests (Or to your installation directory)

Step 4: Create a soft link for the lkp command to make it easy to use.

# make install

Step 5: Install basic software packages before running lkp-tests.

# lkp install

3.2   Use the Benchmark/Test Suites in lkp-tests

lkp-tests can be easily used for benchmark installation, benchmark execution, and performance regression profiling.

3.2.1   Benchmark Installation

Step 1: Go to your lkp-tests installation directory.

# cd lkp-tests (Or to your installation directory)

Step 2: Select your desired benchmark (e.g. ebizzy) in the “jobs/” directory to install.

# lkp install jobs/ebizzy-200%-100x-10s.yaml

3.2.2   Run the Benchmark

Step 1: Split a jobfile, e.g. ebizzy.yaml (ignore this step if you want to run all the sub-jobs under ebizzy).

# lkp split jobs/ebizzy.yaml

jobs/ebizzy.yaml => ./ebizzy-200%-100x-10s.yaml

Step 2: Run the benchmark (e.g. ebizzy).

# lkp run ebizzy-200%-100x-10s.yaml

Sample result:

Iteration: 1
2017-09-07 15:19:03 ./ebizzy -t 32 -S 10
257405 records/s 7762 8072 7682 7476 8713 8234 8401 8992 8467 8030 7482 8356 7721 9148 7721 7776 7870 7554 8389 7707 7524 7844 7692 7168 8479 8288 7632 7724 8534 8969 7722 8262
real 10.00 s
user 157.88 s
sys   0.03 s


●     The first step splits the multiple sub-jobs in a job file (see section 4.3 for details). If you want to run all the sub-jobs, skip this step and run the job file directly, e.g. lkp run jobs/ebizzy.yaml.

●     All the statistics data can be found under your result root directory, such as: /result/ebizzy/200%-100x-10s/kemi-desktop/ubuntu/x86_64-rhel-7.2/gcc-6/4.13.0-rc5

3.2.3   Performance Regression Profiling

The lkp ncompare command is very useful for comparing benchmark results between two different commit IDs and identifying potential performance regressions.

Usage: lkp ncompare -s commit=<parent> -o commit=<commit>.

For example:

# lkp ncompare -s commit=9e66317d3c92ddaab330c125dfe9d06eee268aff -o commit=a11c3148ba6b8b4cac4876b7269d570260d0bd41

Here is the sample output:



       v4.14-rc3 a11c3148ba6b8b4cac4876b726
---------------- --------------------------
       fail:runs  %reproduction  fail:runs
           |                |                   |   
          2:4          -50%            :4     kmsg.DHCP/BOOTP:Ignoring_fragmented_reply
           :4           25%           1:4     kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[#]
    %stddev     %change      %stddev
           \                  |                \ 
     15431            +3.3%      15936
    388.97            -3.2%     376.67        aim7.time.elapsed_time
    388.97            -3.2%     376.67        aim7.time.elapsed_time.max
     32157            -3.3%      31090        aim7.time.system_time
      0.07 ±  7%      +0.0        0.08 ±  5%  mpstat.cpu.iowait%
     36021           -28.5%      25771 ± 15%  softirqs.NET_RX
    230.15            -1.1%     227.55        turbostat.PkgWatt
      7125            +2.8%       7327        vmstat.system.cs
    560.00 ± 61%     -66.1%     189.75 ± 11%  numa-meminfo.node1.Mlocked
    560.00 ± 61%     -66.1%     189.75 ± 11%  numa-meminfo.node1.Unevictable
    139.50 ± 61%     -65.9%      47.50 ± 10%  numa-vmstat.node1.nr_mlock
    139.50 ± 61%     -65.9%      47.50 ± 10%  numa-vmstat.node1.nr_unevictable
    139.50 ± 61%     -65.9%      47.50 ± 10%  numa-vmstat.node1.nr_zone_unevictable
    370.58 ±  7%     +21.6%     450.44 ±  8%  sched_debug.cfs_rq:/.exec_clock.stddev
      2267 ± 75%    +107.6%       4706 ± 24%  sched_debug.cfs_rq:/.load.min

3.3   Add New Benchmark to lkp-tests

Developers can add other test cases or benchmarks if the existing ones cannot meet their test requirements, or to better leverage the lkp-tests framework. lkp-tests is an open source tool, and developers are welcome to contribute new benchmarks to it.

Let’s take netperf as an example to illustrate how to add a new benchmark to lkp-tests. Netperf is a benchmark tool used to measure various aspects of networking performance.

As explained in previous examples, lkp-tests provides the following: automatic benchmark installation, automatic benchmark execution with customized options, and multi-dimensional performance analysis. Let’s see how to add the new benchmark to lkp-tests to achieve this.

3.3.1   Benchmark Installation                                                                              

The makepkg mechanism (makepkg is a script that automates building a package) from Arch Linux is used to install benchmark packages automatically in lkp-tests. First, create a configuration file named PKGBUILD under the pkg/”benchmark_name” directory. Arch Linux provides PKGBUILD files for some benchmarks; they can be downloaded from [AUR].

The sample below shows how to write a PKGBUILD configuration file to automate building a package. This file includes a link for downloading the benchmark source code and the way to build and install the packages. 

arch=('i686' 'x86_64')

build() {
    cd "$srcdir/$pkgname-$pkgver"
    cp /usr/share/misc/config.{guess,sub} .
    ./configure $CONFIGURE_FLAGS
    make
}

package() {
    cd "$srcdir/$pkgname-$pkgver"
    make DESTDIR="$pkgdir/" install
}

3.3.2   Prepare for Benchmark Execution

To ensure that the benchmark executes smoothly, three key components need to be created:

  • A job file under the jobs/ directory is required for setting system running environment and specifying kinds of options for the benchmark
  • A host file under the hosts/ directory is required to describe machine profile parameters (e.g. cpu/memory)
  • A test script is required to run the benchmark according to the previous setup.

Job File

The job file in lkp-tests consists of key->value pairs in YAML format and includes four parts: a) job file identification header (mandatory), b) benchmark options (mandatory), c) system setup (optional), d) monitors (optional).

●     Jobfile identification header

At the beginning of each job file, some basic description information is listed. Below is an example from netperf.yaml. The suite and testcase fields should be set to the benchmark name, and the category field is generally set to benchmark. (Other categories include function and noise. Different categories bring different default monitors; the benchmark category has the most default monitors.)

suite: netperf
testcase: netperf
category: benchmark

●     Benchmark options

A variety of options are specified as command-line arguments when running a benchmark. We can select the important ones to form key->value pairs in the job file. Let’s take netperf as an example: netperf is a client-server benchmark used to measure unidirectional throughput and end-to-end latency.

The example below shows how to use netperf independently from lkp-tests:

# start netserver to listen incoming traffic.
$ netserver
# command running result:
Starting netserver with host 'IN(6)ADDR_ANY' port '12865' and family AF_UNSPEC

# Start netperf client to send packets
$ netperf -t TCP_STREAM -c -C -l 10 -- -m 1024
# command running result:
MIGRATED TCP STREAM TEST from ( port 0 AF_INET to localhost () port 0 AF_INET : demo
Recv    Send    Send                                 Utilization   Service    Demand
Socket  Socket  Message  Elapsed                     Send     Recv     Send    Recv
Size    Size    Size     Time         Throughput     local    remote   local   remote
bytes   bytes   bytes    secs.        10^6bits/s       %S     % S      us/KB   us/KB

 87380  16384   1024     10.00        14954.83        10.09   10.09      0.885   0.885

Here is an explanation of the options used in the example command above:

●     -t: specifies the test type (TCP_STREAM, TCP traffic)

●     -l: specifies the test duration (s)

●     -m: specifies the sending packet size

●     -c: report local CPU usage

●     -C: report remote CPU usage

For the full usage of netperf, please visit the netperf homepage or man netperf.

With this information, three key-value pairs can be created as major benchmark options in the job file:

  test: TCP_STREAM
  runtime: 10s
  send_size: 1024

Additionally, by using the lkp split command, lkp-tests can define a set of tests with different option values in one job file. For example, if you also want to test UDP_STREAM with packet size 2048, the job file can be defined like this:

test:
  - TCP_STREAM
  - UDP_STREAM
runtime: 10s
send_size:
  - 1024
  - 2048

After splitting the job file, you will get four sub-job files: TCP_STREAM+1024s, TCP_STREAM+2048s, UDP_STREAM+1024s, UDP_STREAM+2048s.
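Conceptually, lkp split expands the Cartesian product of list-valued options into one sub-job per combination. The sketch below is illustrative only, not the actual lkp implementation:

```shell
# Illustrative only: expand list-valued options into one sub-job
# per combination (Cartesian product), as "lkp split" conceptually does.
combos=""
for test_type in TCP_STREAM UDP_STREAM; do
    for send_size in 1024 2048; do
        combos="$combos${combos:+ }$test_type+$send_size"
    done
done
echo "$combos"  # TCP_STREAM+1024 TCP_STREAM+2048 UDP_STREAM+1024 UDP_STREAM+2048
```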


For other general options, such as thread number, they can be defined as an independent key-value pair at the same level of hierarchy as the key-value pair of benchmark options in the job file, e.g.

test:
  - TCP_STREAM
  - UDP_STREAM
runtime: 10s
send_size:
  - 1024
  - 2048
nr_threads: 10

●     Running environment setup

Sometimes, additional system setup is required before running a benchmark. For example, you may want to add the control group v2 setup to ensure smooth benchmark execution with enough system resources or to get more statistics data, which can be achieved by adding a key-value pair in the job file.

cgroup2:   #cgroup2 is parsed by setup/cgroup2 executable script
    memory.high: 90%
    memory.low: 50%
    memory.max: max
    pids.max: 10000

lkp-tests provides approximately 30 setup scripts. See the Appendix or check the setup/ directory.


lkp-tests supports importing external job files. Therefore, the job file for running netperf with cgroup2 setup can be simply written like this:

<< : jobs/cgroup2.yaml

<< : jobs/netperf.yaml

●     Monitors

Monitors are used to capture performance statistics while running benchmarks. They are important for performance analysis and regression root causing. At the time of this writing, there are 38 monitors (see Appendix) enabled in lkp-tests, and the number is still growing. Similar to benchmark options, you can also configure monitors (parameters, running configuration, etc.) by creating key-value pairs in the job file. lkp-tests adds default monitors and their corresponding settings when the “lkp split yourjob.yaml” command is run; check the include/category/”category_value” file for details on the monitors’ key-value pairs. If a default monitor setting does not meet the requirement, developers can define a new setting in the job file. Here is an example that specifies a statistics interval of five seconds for the proc-vmstat monitor:

proc-vmstat:
    interval: 5


Be sure to execute the “lkp split” command before running a benchmark.

Host File

The host file under the hosts/ directory is required to specify system attribute parameters, such as node number, CPU number, memory size, and so on. These are general key-value pairs describing system attributes that apply to every benchmark run on the system. Similar to monitors, these key-value pairs are merged into the sub-job file after running the “lkp split” command. The file should be named after the output of `hostname`. Here is an example of a host file for a system running on a four-socket Skylake platform, with 192 CPUs, 768G memory, an Intel® SSD Data Center S3710 Series 1.2TB, and an Intel® SSD DC S3710 Series 400G.

model: Skylake-4S
nr_cpu: 192
memory: 768G
nr_ssd_partitions: 1
ssd_partitions: /dev/disk/by-id/ata-INTEL_SSDSC2BA400G4_BTHV634503K8400NGN-part1
swap_partitions: /dev/disk/by-id/ata-INTEL_SSDSC2BA012T4_BTHV549404SV1P2PGN-part3
rootfs_partition: /dev/disk/by-id/ata-INTEL_SSDSC2BA012T4_BTHV549404SV1P2PGN-part4
kernel_cmdline: acpi_rsdp=0x69252014

Test Script

The executable test script, named after the original benchmark, is placed under the tests/ directory. It runs the benchmark according to the key-value pairs of benchmark options specified in the job file. Recall the options for test type, test duration, and packet size specified in the netperf.yaml job file; a simple netperf test script using them looks like the example shown below. (Note that the actual netperf test script in lkp-tests is somewhat more complex, since it implements multi-threading and other adaptations.)

[ -n "$send_size" ] && test_options="-- -m $send_size"
netperf  -t $test -c -C -l $runtime $test_options

3.4   Result Parser

The result parser is a script used to convert the output of a benchmark to a standard .json format result. Though it is not a mandatory step, we strongly suggest adding one for the benchmark, because only JSON format results can be used by the lkp ncompare command to calculate the average, stddev, and change percentage between different kernel versions. This is extremely useful for figuring out the root cause of a performance regression.

Let’s take stats/netperf as an example (refer to the netperf output shown in section 3.3.2). Usually, developers only care about the throughput result. With the result parser, the output is converted to netperf.json under the result root directory. An example of the result is shown below:

  "netperf.Throughput_tps": [
    ...
  ],

The netperf result parser is actually a script that converts the netperf output to the format shown above. Check the file lkp-tests/stats/netperf for netperf result parser.
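As a rough illustration of what such a parser does, the sketch below extracts the throughput column from the netperf output shown earlier and emits a key -> value line. This is a hypothetical simplification; the real stats/netperf script handles more cases and output formats, and the key name used here is only an example.

```shell
# Hypothetical, simplified result parser: pick the Throughput column
# (field 5) out of a netperf TCP_STREAM data line and emit key: value.
parse_netperf()
{
    awk 'NF == 9 { printf "netperf.Throughput_Mbps: %s\n", $5 }'
}

# Feed it the data line from the sample output shown earlier:
printf '87380 16384 1024 10.00 14954.83 10.09 10.09 0.885 0.885\n' | parse_netperf
```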

4   Summary

lkp-tests is a convenient open source tool that allows developers to evaluate their patches with both functional tests and performance analysis. With the standard interface provided by lkp-tests, developers can conveniently perform installation, execution, and result analysis. In benchmark testing, besides the output from the benchmark itself, the lkp-tests tool collects data from every aspect of the system to assist performance analysis and root-causing of issues. For any questions or issues with the lkp-tests tool, contact the lkp-tests team.

5   Appendix

5.1   Supported Benchmarks




AIM7 is a benchmark that exercises different aspects of the operating system, such as disk-file operations, process creation, user virtual memory operations, pipe I/O, and compute-bound arithmetic loops.


AIM9 is a benchmark that exercises and times each component of a UNIX computer system.


ApacheBench (ab) is a single-threaded command line computer program for measuring the performance of HTTP web servers.


Blogbench is a portable filesystem benchmark that tries to reproduce the load of a real-world busy file server.


A small program to stress the shared memory exit path.


A program that emulates web browser workload to see the performance of the swap subsystem.


Stress CPU online/offline to test if CPU hotplug works as expected.


DBENCH is a tool used to generate I/O workloads to either a filesystem or to a networked CIFS or NFS server.


Test writeback performance.


Ebizzy is designed to generate a workload that resembles common web application server workloads.


See how quickly a process’s memory is freed when it exits.


This test script was originally developed by the ext4 maintainer to reproduce an ext4 bug: a NULL pointer dereference in ext4_ext_remove_space on kernel 3.5.1.


A filesystem-level benchmark provided by sysbench.


fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.


The fsmark benchmark tests synchronous write workloads.


FTQ (fixed time quanta) measures hardware and software interference or ‘noise’ on a node from the application’s perspective.


Like FTQ, FWQ (fixed work quanta) is also used to measure noise.


GFXBench is a free, cross-platform, and cross-API 3D graphics benchmark that measures graphics performance, long-term performance stability, render quality, and power consumption with a single, easy-to-use application.


Hackbench is both a benchmark and a stress test for the Linux kernel scheduler.


Test the Linux hibernation functionality.


HPC Challenge is a benchmark suite that measures a range of memory access patterns.


Automated hostapd/wpa_supplicant testing with mac80211_hwsim.


Test idle injection functionality.


Test CPU idle.


IOzone is a filesystem benchmark tool. The benchmark generates and measures a variety of file operations.


Iperf is a tool for active measurements of the maximum achievable bandwidth on IP networks.


Kernel build workload.


The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.


Measures latency in the Linux network stack between the kernel and user space.


Unit tests for kvm.


Self tests for libhugetlbfs.


The Linpack benchmark is a measure of a computer’s floating-point rate of execution.


Locktorture is a kernel module that tests kernel locking primitives.


The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.


Makepkg is a script that automates the building of packages.


Self tests for mcelog.


The MCE test suite is a collection of tools and test scripts for testing the Linux RAS related features, including CPU/Memory error containment and recovery, ACPI/APEI support etc.


Self tests for mdadm.


It is a framework to run users’ own customized workloads that are not supported by lkp-tests.


Unit test for ndctl.


Nepim stands for network pipemeter, a tool for measuring available bandwidth between hosts.


Netperf is a benchmark that can be used to measure various aspect of networking performance.


IOzone is a filesystem benchmark tool.


Nuttcp is a network performance measurement tool intended for use by network and system managers.


Non-Volatile Memory Library unit test.


A simple MPI-based testing environment to stress and validate OCFS2.


On-Line Transaction Processing (OLTP) workload provided by TPC.


The packetdrill scripting tool enables quick, precise tests for entire TCP/UDP/IPv4/IPv6 network stacks, from the system call layer down to the NIC hardware.


Pbzip2, which stands for parallel implementation of bzip2, is a fully functional replacement for bzip2 that exploits multiple processors and multiple cores to the hilt when compressing data.


Benchmarks provided by perf: futex, sched-pipe, and others.


Page fault test microbenchmark.


The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available that provides an extensible framework for which new tests can be easily added.


Piglit is a collection of automated tests for OpenGL implementations.


See pbzip2.


See pbzip2.


See pbzip2.


File system benchmark from NetApp.


See pbzip2.


Qperf measures bandwidth and latency between two nodes.


Rcutorture is a kernel module provided by kernel selftest.


REAIM is an updated and improved version of the AIM7 benchmark.


Siege is an http load testing and benchmarking utility.


Sockperf is a tool for network performance measurement.


Stress-ng will stress test a computer system in various selectable ways.


Page fault and allocation latency benchmark.


Linux suspend functionality test.


Benchmark performance for Linux swapin.


Tbench produces the network load to simulate the network part of the load of the commercial Netbench benchmark.


Tcrypt is a kernel module that is used to test different crypto algorithms from kernel space.


A Linux system-call fuzz tester.


UnixBench is the original BYTE UNIX benchmark suite.


Vm-scalability is a test suite used to determine the scalability of various VM subsystems.


Will-it-scale takes a testcase and runs it from one through n parallel copies to see if the testcase will scale.


Regression test suite for xfs and other filesystems.

5.2   Supported Monitors




/proc/buddyinfo. Periodically records the proc file, or records it once at the beginning of the test and once at the end. The same applies to all the /proc/XXX monitors below.


CPU frequency stats.


CPU idle stats.




Energy consumption as reported by perf stat.


Statistics about ethernet card as provided by ethtool.


Capture ftrace log.




Disk IO information as provided by iostat utility.


Used to capture if there is any OS noise in interrupt handling.


Record /proc/kmsg.






Md stats from under /sys/dev/block/9:0/md.






Reports processor-related statistics via the mpstat utility.


List NFS statistics by utility nfsstat.


NUMA-related information under the /proc pseudo filesystem.




Records various CPU performance counter events by perf stat.


Records performance data for post analysis by perf record.


Record power meter readings during the test.


Record various PMU counters during the test.


/proc/stat and /proc/vmstat










Record syscall trace event using ftrace mechanism.


Record thermal zone and cooling_device stats during the test.


Output from the top utility.


Show CPU C state and package C state information during the test.


Show the boot and idle time information from the uptime utility.


Output from the vmstat utility. Run at a predefined interval.



5.3   Setup Scripts




Sets scaling_governor for CPU when scaling driver is acpi-cpufreq.


Sets various parameters under the sysfs queue directory for a block device.


Manipulates control group-related settings.


Same as above, but using control group2 interface.


Sets the real-time scheduler parameter for the test.


Manipulates various settings under a sysfs cooling_device directory.


Sets cpu_max_freq for CPUs.


Sets scaling governor for online CPUs.


Adds string to a test’s result root.


Fetches vmlinux and modules to be used by a monitor called perf-report-srcline.


Sets DAX mode for a PMEM (persistent memory) device.


Sets VM’s dirty related parameters like dirty_ratio, dirty_bytes, etc.


Specifies the disk to be used for the test.


Drop all VM caches.


Consumes some memory before the test starts.


Prepares filesystem for a test.


Settings for intel_pstate.


Set scheduler for a block device.


Setup software RAID for a test.


Various mysql-related setup and starts mysqld.


Start a test with numactl by adding numactl options.


Used to offline some CPUs.


Used to offline the sibling hyper-thread of every CPU.


Sets /proc/sys/fs/pipe-user-pages-soft.


Setup ramdisk to be used as disk and partitions by a test.


Make use of scsi_debug module to emulate a scsi disk.


Uses a scsi_debug-emulated disk as the swap device.


Stops the ssh daemon.


Use systemctl to stop the specified service.


Used to run mkswap and swapon on a partition.


Allows the user to set essentially all sysctl parameters.


Similar to numactl, this uses taskset to start the test with some of taskset’s switches.


Set transparent hugepage-related params for a test.


Specific setup script for swapin test case.


Specific setup script for fio test case.


Specific setup script for mutilate test case.


Specific setup script for pgbench test case.


Specific setup script for oltp test case.