Linux Kernel Performance

Linux development evolves rapidly, and the performance and scalability of the OS kernel have been a key part of its success. However, discussions have appeared on LKML (Linux Kernel Mailing List) about large performance regressions between kernel versions. These discussions underscore the need for a systematic and disciplined way to characterize, improve, and test Linux kernel performance. Our goal is to work with the Linux community to further enhance the Linux kernel with consistent performance increases (and without degradations) across releases. The information on this site gives community members a better view of what 0-Day and LKP (Linux Kernel Performance) are doing to preserve the performance integrity of the kernel.

0-Day CI Linux Kernel Performance Report (v5.5)

By Rong Chen, Feb 27, 2020
  1. Introduction

0-Day CI is an automated Linux kernel test service that provides comprehensive test coverage of the Linux kernel, covering kernel build, static analysis, boot, functional, performance, and power tests. This report presents recent observations of kernel performance on the IA platform, based on test results from the 0-Day CI service. It is structured as follows:

  • Section 2, test parameter description

  • Section 3, regressions and improvements merged during the v5.5 release cycle

  • Section 4, regressions and improvements captured by shift-left testing of developers’ and maintainers’ trees during the v5.5 release cycle

  • Section 5, performance comparison among different kernel releases

  • Section 6, test machine list

 

  2. Test Parameters

Here are the descriptions for each parameter/field used in the tests. 

 

General

runtime: Run the test case for a specified time period (seconds or minutes).

nr_task: If an integer, the number of processes/threads (running the workload) for this job; the default is 1. If a percentage, the number of processes/threads is that fraction of the CPU count, e.g. 200% means twice the number of CPUs.

nr_threads: Alias of nr_task.

iterations: Number of times to repeat this job.

test_size: Test disk size or memory size.

set_nic_irq_affinity: Set NIC interrupt affinity.

disable_latency_stats: latency_stats may introduce too much noise when there are many context switches; this option allows disabling it.

transparent_hugepage: Set the transparent hugepage policy (/sys/kernel/mm/transparent_hugepage).

boot_params:bp1_memmap: memmap boot parameter.

disk:nr_pmem: Number of pmem partitions used by the test.

swap:priority: Priority of the swap device, a value between -1 and 32767; the default is -1, and a higher value means higher priority.

Test Machine

model: Name of the Intel processor microarchitecture.

brand: Brand name of the CPU.

cpu_number: Number of CPUs.

memory: Size of memory.
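To make the percentage form of nr_task / nr_threads concrete, here is a minimal Python sketch of how such a value could be resolved to a task count (the helper name and code are ours for illustration, not part of the LKP tooling):

```python
import os

def resolve_nr_task(value, cpu_count=None):
    """Interpret an LKP-style nr_task / nr_threads value.

    An integer means an absolute number of workload processes/threads;
    a percentage string such as "200%" means that fraction of the CPU
    count, so "200%" on a 96-CPU machine resolves to 192 tasks.
    """
    cpu_count = cpu_count or os.cpu_count()
    if isinstance(value, str) and value.endswith("%"):
        return max(1, cpu_count * int(value.rstrip("%")) // 100)
    return int(value)

# Examples: resolve_nr_task(1) == 1; resolve_nr_task("200%", cpu_count=96) == 192
```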


 

  3. Linux Kernel v5.5 Release Test

The 5.5 kernel was released on January 27, 2020. In the end, Linus decided to release 5.5 rather than going for another prepatch: "So despite the slight worry that the holidays might have affected the schedule, 5.5 ended up with the regular rc cadence and is out now." Some of the significant features in this release are iopl() emulation, many new io_uring commands, live-patch state tracking, type checking for BPF tracepoint programs, a new CPU load-balancing algorithm, the KUnit unit-testing framework, airtime queue limits for WiFi, and much more. See the KernelNewbies 5.5 changelog for more information.

 

0-Day CI monitored the release closely to track kernel performance on the IA platform. 0-Day observed 5 regressions and 6 improvements during the feature development phase of v5.5. We share more detailed information below, together with the correlated patches that led to these results. Note that the assessment is limited by the test coverage 0-Day has today. The full list is summarized in the observation summary section.

    3.1 Observation Summary

0-Day CI observed 5 regressions and 6 improvements during the feature development phase of v5.5, covering the time frame from v5.5-rc1 to the v5.5 release.

Test Indicator: aim7.jobs-per-min
Report: [xfs] fdbb8c5b80: 3.9% improvement
Test Scenario: disk: 1BRD_48G, fs: xfs, test: disk_wrt, load: 3000, cpufreq_governor: performance
Test Machine: lkp-skl-2sp7
Development Base: v5.4-rc3
Status: merged at v5.5-rc1 and v5.5

Test Indicator: aim7.jobs-per-min
Report: [f2fs] fe1897eaa6: -60.9% regression
Test Scenario: disk: 4BRD_12G, md: RAID1, fs: f2fs, test: sync_disk_rw, load: 200, cpufreq_governor: performance
Test Machine: lkp-ivb-2ep1
Development Base: v5.4-rc1
Status: merged at v5.5-rc1 and v5.5, no response from author yet

Test Indicator: filebench.sum_bytes_mb/s
Report: [LKP] [ext4] b1b4705d54: -20.2% regression
Test Scenario: disk: 1HDD, fs: ext4, test: fivestreamreaddirect.f, cpufreq_governor: performance
Test Machine: lkp-hsw-d01
Development Base: v5.4-rc3
Status: merged at v5.5-rc1 and v5.5, some developers are working on it

Test Indicator: hackbench.throughput
Report: [sched/fair] b0fb1eb4f0: 1.4% improvement
Test Scenario: nr_threads: 50%, mode: process, ipc: socket, cpufreq_governor: performance
Test Machine: lkp-bdw-ep6
Development Base: v5.4-rc1
Status: merged at v5.5-rc1 and v5.5

Test Indicator: lmbench3.PIPE.bandwidth.MB/sec
Report: [pipe] 3c0edea9b2: -17.0% regression
Test Scenario: test_memory_size: 50%, nr_threads: 100%, mode: development, test: PIPE, cpufreq_governor: performance
Test Machine: lkp-bdw-de1
Development Base: v5.4-rc2
Status: merged at v5.5-rc1 and v5.5, the fix is merged at v5.5-rc2

Test Indicator: netperf.Throughput_total_tps
Report: [perf/core] 66d258c5b0: 16.7% improvement
Test Scenario: ip: ipv4, runtime: 300s, nr_threads: 1, cluster: cs-localhost, test: TCP_CRR, cpufreq_governor: performance
Test Machine: lkp-csl-2ap3
Development Base: v5.4-rc5
Status: merged at v5.5-rc1 and v5.5

Test Indicator: phoronix-test-suite.glmark2.0.score
Report: [x86/mm/pat] 8d04a5f97a: -23.7% regression
Test Scenario: need_x: true, test: glmark2-1.1.0, cpufreq_governor: performance
Test Machine: lkp-csl-2sp8
Development Base: v5.4-rc8
Status: merged at v5.5-rc1 and v5.5, the fix patch has been verified and the target kernel version is v5.6

Test Indicator: stress-ng.icache.ops_per_sec
Report: [rcu] ed93dfc6bc: -15.0% regression
Test Scenario: nr_threads: 100%, disk: 1HDD, testtime: 1s, class: cpu-cache, cpufreq_governor: performance
Test Machine: lkp-csl-2sp5
Development Base: v5.4-rc1
Status: merged at v5.5-rc1 and v5.5, the fix patch has been verified and the target kernel version is v5.7

Test Indicator: vm-scalability.median
Report: [sched/fair] 0b0695f2b3: 3.1% improvement
Test Scenario: runtime: 300s, size: 8T, test: anon-cow-seq, cpufreq_governor: performance
Test Machine: lkp-skl-fpga01
Development Base: v5.4-rc1
Status: merged at v5.5-rc1 and v5.5

Test Indicator: will-it-scale.per_process_ops
Report: [mm/hugetlb] c77c0a8ac4: 15.9% improvement
Test Scenario: nr_task: 100%, mode: process, test: mmap1, cpufreq_governor: performance
Test Machine: lkp-csl-2ap3
Development Base: v5.5-rc4
Status: merged at v5.5-rc5 and v5.5

Test Indicator: will-it-scale.per_thread_ops
Report: [sched/core] 5d7d605642: 2.0% improvement
Test Scenario: nr_task: 16, mode: thread, test: sched_yield, cpufreq_governor: performance
Test Machine: lkp-bdw-ep6
Development Base: v5.4-rc7
Status: merged at v5.5-rc1 and v5.5

    3.2 aim7.jobs-per-min

aim7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of a multiuser system.

 

      3.2.1 Scenario: sync_disk_rw test on f2fs

 

Commit fe1897eaa6 was reported to cause a -60.9% regression in aim7.jobs-per-min compared with v5.4-rc1. It was merged into mainline at v5.5-rc1.

 

Correlated commit: fe1897eaa6 ("f2fs: fix to update time in lazytime mode")
Branch: linus/master
Report: [LKP] [f2fs] fe1897eaa6: -60.9% regression
Test scenario: disk: 4BRD_12G, md: RAID1, fs: f2fs, test: sync_disk_rw, load: 200, cpufreq_governor: performance
Test machine: lkp-ivb-2ep1
Status: merged at v5.5-rc1 and v5.5, no response from author yet

 

    3.3 filebench.sum_bytes_mb/s

Filebench is a file system and storage benchmark that can generate a large variety of workloads. Unlike typical benchmarks, it is extremely flexible and lets users specify an application's I/O behavior using its extensive Workload Model Language (WML). Users can either describe desired workloads from scratch or use (with or without modifications) workload personalities shipped with Filebench (e.g., mail-, web-, file-, and database-server workloads). Filebench is equally good for micro- and macro-benchmarking, quick to set up, and relatively easy to use.
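The fivestreamreaddirect.f personality referenced below runs several concurrent streaming reads with direct I/O. As a rough, hypothetical illustration of what one such stream does at the syscall level (this is not Filebench code, and it is heavily simplified), a direct sequential read in Python could look like this:

```python
import mmap
import os

def direct_stream_read(path, block_size=1024 * 1024, max_bytes=1 << 30):
    """Sequentially read a file with O_DIRECT, as a single read stream would.

    O_DIRECT bypasses the page cache and requires an aligned buffer; an
    anonymous mmap is page-aligned, which satisfies that constraint.
    """
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, block_size)
    total = 0
    try:
        while total < max_bytes:
            n = os.readv(fd, [buf])   # read one block into the aligned buffer
            if n == 0:                # end of file
                break
            total += n
    finally:
        os.close(fd)
    return total
```

The fivestreamreaddirect personality runs five such streams in parallel; sum_bytes_mb/s is the aggregate read bandwidth.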

      3.3.1 Scenario: fivestreamreaddirect test

 

 

Commit b1b4705d54 was reported to cause a -20.2% regression in filebench.sum_bytes_mb/s compared with v5.4-rc3. It was merged into mainline at v5.5-rc1.

 

Correlated commit: b1b4705d54 ("ext4: introduce direct I/O read using iomap infrastructure")
Branch: linus/master
Report: [LKP] [ext4] b1b4705d54: -20.2% regression
Test scenario: disk: 1HDD, fs: ext4, test: fivestreamreaddirect.f, cpufreq_governor: performance
Test machine: lkp-hsw-d01
Status: merged at v5.5-rc1 and v5.5, some developers are working on it

 

    3.4 lmbench3.PIPE.bandwidth.MB/sec

lmbench is a suite of simple, portable ANSI C microbenchmarks for UNIX/POSIX systems. In general, it measures two key features: latency and bandwidth. lmbench is intended to give system developers insight into the basic costs of key operations.
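The PIPE bandwidth case referenced below measures how quickly data can be pushed through a pipe between two processes. A minimal Python sketch of the same idea (our own simplified analogue, not lmbench's implementation):

```python
import os
import time

def pipe_bandwidth(total_mb=256, chunk_kb=64):
    """Push data through a pipe from a writer child to a reader parent and
    return the observed bandwidth in MB/s (a rough analogue of lmbench PIPE)."""
    chunk = b"x" * (chunk_kb * 1024)
    iterations = (total_mb * 1024) // chunk_kb
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                          # child: writer
        os.close(r)
        for _ in range(iterations):
            view = memoryview(chunk)
            while view:                   # handle partial pipe writes
                view = view[os.write(w, view):]
        os.close(w)
        os._exit(0)
    os.close(w)                           # parent: reader
    received = 0
    start = time.perf_counter()
    while True:
        data = os.read(r, len(chunk))
        if not data:                      # writer closed its end
            break
        received += len(data)
    elapsed = time.perf_counter() - start
    os.close(r)
    os.waitpid(pid, 0)
    return received / (1024 * 1024) / elapsed

if __name__ == "__main__":
    print(f"pipe bandwidth: {pipe_bandwidth():.1f} MB/s")
```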

 

      3.4.1 Scenario: PIPE test

 

 

Commit 3c0edea9b2 was reported to cause a -17.0% regression in lmbench3.PIPE.bandwidth.MB/sec compared with v5.4-rc2. It was merged into mainline at v5.5-rc1.

 

Correlated commit: 3c0edea9b2 ("pipe: Remove sync on wake_ups")
Branch: linus/master
Report: [LKP] [pipe] 3c0edea9b2: -17.0% regression
Test scenario: test_memory_size: 50%, nr_threads: 100%, mode: development, test: PIPE, cpufreq_governor: performance
Test machine: lkp-bdw-de1
Status: merged at v5.5-rc1 and v5.5, the regression has been fixed

 

    3.5 will-it-scale.per_process_ops

Will-it-scale takes a test case and runs it from 1 through to n parallel copies to see whether the test case scales. It builds both process-based and thread-based variants of each test in order to see any differences between the two.
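A minimal sketch of the idea behind will-it-scale (our own illustration, not the benchmark's actual code): run the same small test function in 1..N parallel processes for a fixed interval and report operations per second at each level of parallelism.

```python
import multiprocessing as mp
import os
import time

def testcase(stop, counter):
    """Toy test case: repeatedly create and remove a temporary file
    (a stand-in for the real C test cases such as mmap1 or unlink1)."""
    ops = 0
    path = f"/tmp/wis-demo-{os.getpid()}"
    while not stop.is_set():
        with open(path, "w"):
            pass
        os.unlink(path)
        ops += 1
    counter.value = ops

def run_scaling(max_tasks=4, seconds=2):
    """Run the test case with 1..max_tasks parallel processes and report ops/sec."""
    for nr_task in range(1, max_tasks + 1):
        stop = mp.Event()
        counters = [mp.Value("q", 0) for _ in range(nr_task)]
        workers = [mp.Process(target=testcase, args=(stop, c)) for c in counters]
        for w in workers:
            w.start()
        time.sleep(seconds)
        stop.set()
        for w in workers:
            w.join()
        total = sum(c.value for c in counters)
        print(f"nr_task={nr_task}: {total / seconds:.0f} ops/sec")

if __name__ == "__main__":
    run_scaling()
```

In the real benchmark the test cases (mmap1, unlink1, sched_yield, dup1, and so on) are small C functions, and the process and thread variants are what the per_process_ops and per_thread_ops indicators refer to.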

      3.5.1 Scenario: process mmap1

 

 

Commit c77c0a8ac4 was reported to bring a 15.9% improvement in will-it-scale.per_process_ops compared with v5.5-rc4. It was merged into mainline at v5.5-rc5.

 

Correlated commit: c77c0a8ac4 ("mm/hugetlb: defer freeing of huge pages if in non-task context")
Branch: linus/master
Report: [LKP] [mm/hugetlb] c77c0a8ac4: 15.9% improvement
Test scenario: nr_task: 100%, mode: process, test: mmap1, cpufreq_governor: performance
Test machine: lkp-csl-2ap3
Status: merged at v5.5-rc5 and v5.5


 

  4. Shift-Left Testing

Beyond testing trees in the upstream kernel, 0-Day CI also tests developers’ and maintainers’ trees, which catches issues earlier and reduces their wider impact; we call this “shift-left” testing. During the v5.5 release cycle, 0-Day CI reported 7 major performance regressions and 4 major improvements through shift-left testing. For some of these we share more detailed information below, together with the code changes that likely led to the results, though the assessment is limited by the test coverage we have today. The whole list is summarized in the report summary section.

    4.1 Report Summary

0-Day CI reported 7 performance regressions and 4 improvements through shift-left testing on developer and maintainer repositories.

 

Test Indicator: aim7.jobs-per-min
Mail: [LKP] [mm/lru] 0bc395a878: -50.4% regression
Test Scenario: test: disk_wrt, load: 8000, cpufreq_governor: performance
Test Machine: lkp-bdw-ep6
Status: currently not merged, no response from author yet

Test Indicator: aim7.jobs-per-min
Mail: [LKP] [locking/qspinlock] 19c4227b3b: 44.1% improvement
Test Scenario: disk: 4BRD_12G, md: RAID0, fs: btrfs, test: disk_cp, load: 1500, cpufreq_governor: performance
Test Machine: lkp-skl-2sp7
Status: currently not merged

Test Indicator: apachebench.requests_per_second
Mail: [LKP] [tcp] abda73240d: 14.3% improvement
Test Scenario: runtime: 300s, concurrency: 8000, cluster: cs-localhost, cpufreq_governor: performance
Test Machine: lkp-bdw-de1
Status: currently not merged

Test Indicator: filebench.sum_bytes_mb/s
Mail: [LKP] [xfs] bbffdbb4a4: -47.7% regression
Test Scenario: disk: 1HDD, fs: xfs, test: fileserver.f, cpufreq_governor: performance
Test Machine: lkp-hsw-d01
Status: currently not merged, no response from author yet

Test Indicator: fsmark.files_per_sec
Mail: [LKP] [intel_idle] bf5a9506c5: 12.6% improvement
Test Scenario: iterations: 4, disk: 1SSD, nr_threads: 4, fs: ext4, fs2: nfsv4, filesize: 8K, test_size: 20G, sync_method: fsyncBeforeClose, nr_directories: 16d, nr_files_per_directory: 256fpd, cpufreq_governor: performance
Test Machine: lkp-hsw-ep4
Status: currently not merged

Test Indicator: reaim.jobs_per_min
Mail: [LKP] [mm/lru] 3f8db6a891: -35.2% regression
Test Scenario: runtime: 300s, nr_task: 1000t, test: mem_rtns_1, cpufreq_governor: performance
Test Machine: lkp-ivb-2ep1
Status: currently not merged, no response from author yet

Test Indicator: unixbench.score
Mail: [LKP] [mm/lru] 5470521fac: -2.3% regression
Test Scenario: runtime: 300s, nr_task: 1, test: shell8, cpufreq_governor: performance
Test Machine: lkp-skl-fpga01
Status: currently not merged

Test Indicator: unixbench.score
Mail: [LKP] [mm/memcg] bcc1e930b6: -1.4% regression
Test Scenario: runtime: 300s, nr_task: 30%, test: shell1, cpufreq_governor: performance
Test Machine: lkp-ivb-d01
Status: currently not merged

Test Indicator: vm-scalability.median
Mail: [LKP] 3e05ad861b: -86.3% regression
Test Scenario: thp_enabled: never, thp_defrag: never, nr_task: 8, nr_pmem: 4, priority: 1, test: swap-w-seq, cpufreq_governor: performance
Test Machine: lkp-hsw-4ex1
Status: currently not merged, no response from author yet

Test Indicator: will-it-scale.per_thread_ops
Mail: [LKP] [xdp] 332f22a60e: -11.4% regression
Test Scenario: nr_task: 100%, mode: thread, test: dup1, cpufreq_governor: performance
Test Machine: lkp-knm01
Status: currently not merged, no response from author yet

Test Indicator: will-it-scale.per_thread_ops
Mail: [LKP] [locking/qspinlock] ae9a1c0ae4: 132.6% improvement
Test Scenario: nr_task: 100%, mode: thread, test: unlink2, cpufreq_governor: performance
Test Machine: lkp-skl-fpga01
Status: currently not merged

    4.2 aim7.jobs-per-min

aim7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of a multiuser system.

 

      4.2.1 Scenario: disk_wrt test

 

Commit 0bc395a878 was reported to cause a -50.4% regression in aim7.jobs-per-min compared with v5.5-rc1.

 

Correlated commit: 0bc395a878 ("mm/lru: replace pgdat lru_lock with lruvec lock")
Branch: alexshi/lru-next
Report: [LKP] [mm/lru] 0bc395a878: -50.4% regression
Test scenario: test: disk_wrt, load: 8000, cpufreq_governor: performance
Test machine: lkp-bdw-ep6
Status: currently not merged, no response from author yet

 

    4.3 filebench.sum_bytes_mb/s

Filebench is a file system and storage benchmark that can generate a large variety of workloads. Unlike typical benchmarks, it is extremely flexible and lets users specify an application's I/O behavior using its extensive Workload Model Language (WML). Users can either describe desired workloads from scratch or use (with or without modifications) workload personalities shipped with Filebench (e.g., mail-, web-, file-, and database-server workloads). Filebench is equally good for micro- and macro-benchmarking, quick to set up, and relatively easy to use.

      4.3.1 Scenario: fileserver test

 

 

Commit bbffdbb4a4 was reported to cause a -47.7% regression in filebench.sum_bytes_mb/s compared with v5.5-rc4.

 

Correlated commit: bbffdbb4a4 ("xfs: deferred inode inactivation")
Branch: djwong-xfs/repair-hard-problems
Report: [LKP] [xfs] bbffdbb4a4: -47.7% regression
Test scenario: disk: 1HDD, fs: xfs, test: fileserver.f, cpufreq_governor: performance
Test machine: lkp-hsw-d01
Status: currently not merged, no response from author yet


 

    4.4 will-it-scale.per_thread_ops

Will-it-scale takes a test case and runs it from 1 through to n parallel copies to see whether the test case scales. It builds both process-based and thread-based variants of each test in order to see any differences between the two.

      4.4.1 Scenario: thread dup1

 

 

Commit 332f22a60e was reported to cause a -11.4% regression in will-it-scale.per_thread_ops compared with v5.5-rc1.

 

Correlated commit: 332f22a60e ("xdp: Remove map_to_flush and map swap detection")
Branch: alaahl/for-upstream
Report: [LKP] [xdp] 332f22a60e: -11.4% regression
Test scenario: nr_task: 100%, mode: thread, test: dup1, cpufreq_governor: performance
Test machine: lkp-knm01
Status: currently not merged, no response from author yet

 

  5. Latest Release Performance Comparison

 

This section gives some information about the performance differences among kernel releases, especially between v5.5 and v5.4. There are 50+ performance benchmarks running in 0-Day CI; we selected 9 benchmarks that historically showed the most regressions/improvements reported by 0-Day CI, run with some typical configurations/parameters. For some of the regressions in this comparison, 0-Day did not successfully bisect the change, so no related report was sent out during the release development period, but they are still worth checking. The root causes of the regressions are not covered in this section.

 

In the following figures, the value on the Y-axis is the relative performance number; the v5.4 result is used as the baseline (performance number 100).
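For a 'bigger is better' indicator, that normalization is simply the v5.5 result divided by the v5.4 result, scaled to 100:

```python
def relative_score(v5_5_value, v5_4_value):
    """Scale a v5.5 result so that the v5.4 result for the same test is 100.

    Values above 100 indicate an improvement, values below 100 a regression
    (for metrics where bigger is better).
    """
    return 100.0 * v5_5_value / v5_4_value

# Hypothetical raw numbers: relative_score(5346, 5000) == 106.92,
# i.e. a 6.92% improvement over the v5.4 baseline.
```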

    5.1 Test suite: vm-scalability

vm-scalability exercises functions and regions of the mm subsystem of the Linux kernel. The tests below show typical results.
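For instance, the anon-w-rand and anon-cow-seq cases referenced in this report stress anonymous memory. A stripped-down Python analogue of an anonymous sequential write workload (purely illustrative, not vm-scalability's code):

```python
import mmap

def anon_seq_write(size_mb=512):
    """Touch every page of an anonymous private mapping once, sequentially,
    exercising the kernel's anonymous page-fault and allocation paths."""
    page = mmap.PAGESIZE
    region = mmap.mmap(-1, size_mb * 1024 * 1024)   # anonymous, zero-filled mapping
    for offset in range(0, len(region), page):
        region[offset:offset + 1] = b"\xab"         # first write to each page faults it in
    pages = len(region) // page
    region.close()
    return pages
```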

 

(Figures: vm-scalability Test 1 and Test 2, relative performance vs. v5.4)

Here are the test configurations and performance summaries for the above tests:

vm-scalability Test 1
test machine: model: Skylake, cpu_number: 104, memory: 192G
runtime: 300s
size: 512G
vm-scalability test parameter: test case: anon-w-rand
performance summary: vm-scalability.median on kernel v5.5 has a 6.92% improvement compared with v5.4

vm-scalability Test 2
test machine: model: Haswell-EP, brand: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, cpu_number: 72, memory: 256G
runtime: 300s
size: 8T
vm-scalability test parameter: test case: anon-cow-seq-hugetlb
performance summary: vm-scalability.throughput on kernel v5.5 has a 9.43% improvement compared with v5.4

 

    5.2 Test suite: will-it-scale

Will-it-scale takes a test case and runs it from 1 through to n parallel copies to see whether the test case scales. It builds both process-based and thread-based variants of each test in order to see any differences between the two.

 

(Figures: will-it-scale Tests 1-4, relative performance vs. v5.4)

Here are the parameters and performance summaries for the above tests:

will-it-scale Test 1
test machine: model: Cascade Lake, brand: Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz, cpu_number: 192, memory: 192G
nr_task: 16
will-it-scale test parameter: mode: process, test case: unlink1
summary: will-it-scale.per_process_ops on kernel v5.5 has a -4.03% regression compared with v5.4

will-it-scale Test 2
test machine: model: Cascade Lake, brand: Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz, cpu_number: 192, memory: 192G
nr_task: 16
will-it-scale test parameter: mode: process, test case: mmap1
summary: will-it-scale.per_process_ops on kernel v5.5 has a 3.82% improvement compared with v5.4

will-it-scale Test 3
test machine: model: Cascade Lake, brand: Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz, cpu_number: 192, memory: 192G
nr_task: 100%
will-it-scale test parameter: mode: process, test case: poll1
summary: will-it-scale.per_process_ops on kernel v5.5 has a 4.09% improvement compared with v5.4

will-it-scale Test 4
test machine: model: Ivy Bridge, brand: Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz, cpu_number: 8, memory: 16G
nr_task: 16
will-it-scale test parameter: mode: thread, test case: mmap1
summary: will-it-scale.per_thread_ops on kernel v5.5 is almost the same as in v5.4

 

    5.3 Test suite: unixbench

UnixBench is a system benchmark that provides a basic indicator of the performance of a Unix-like system.

 

(Figures: Unixbench Test 1 and Test 2, relative performance vs. v5.4)

Here are the test configurations and performance summaries for the above tests:

Unixbench Test 1
test machine: model: Cascade Lake, brand: Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz, cpu_number: 192, memory: 192G
runtime: 300s
nr_task: 1
unixbench test parameter: test case: execl
performance summary: unixbench.score on kernel v5.5 has a -3.18% regression compared with v5.4

Unixbench Test 2
test machine: model: Ivy Bridge, brand: Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz, cpu_number: 8, memory: 16G
runtime: 300s
nr_task: 30%
unixbench test parameter: test case: shell1
performance summary: unixbench.score on kernel v5.5 has a 15.76% improvement compared with v5.4

 

    5.4 Test suite: reaim

reaim updates and improves the existing open-source AIM 7 benchmark. aim7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of a multiuser system.

 

(Figure: reaim Test 1, relative performance vs. v5.4)

Here are the test configuration and performance summary for the above test:

reaim Test 1
test machine: model: Broadwell-EP, brand: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz, cpu_number: 88, memory: 128G
runtime: 300s
nr_task: 100%
reaim test parameter: test case: short
performance summary: reaim.jobs_per_min on kernel v5.5 has an 11.92% improvement compared with v5.4

    5.5 Test suite: pigz

pigz, which stands for Parallel Implementation of GZip, is a fully functional replacement for gzip that exploits multiple processors and multiple cores to the hilt when compressing data.
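A hypothetical miniature of that approach (not pigz itself): split the input into independent blocks and compress them on a process pool, then concatenate the results. Concatenated gzip members still form a valid multi-member gzip stream, although real pigz is considerably smarter about sharing compression state between blocks.

```python
import gzip
from concurrent.futures import ProcessPoolExecutor

def parallel_gzip(data: bytes, block_size=512 * 1024, workers=None) -> bytes:
    """Compress independent blocks on a process pool and concatenate the results.

    Each block becomes its own gzip member, so the output is still a valid
    (multi-member) gzip stream and can be decompressed with gunzip.
    """
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(gzip.compress, blocks))

# Usage sketch (hypothetical file names):
#   with open("input.bin", "rb") as src, open("output.gz", "wb") as dst:
#       dst.write(parallel_gzip(src.read()))
```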

(Figure: pigz Test 1, relative performance vs. v5.4)

Here are the test configuration and performance summary for the above test:

pigz Test 1
test machine: model: Ivy Bridge, brand: Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz, cpu_number: 4, memory: 8G
nr_threads: 100%
pigz test parameter: blocksize: 512K
performance summary: pigz.throughput on kernel v5.5 is almost the same as in v5.4

    5.6 Test suite: netperf

Netperf is a benchmark that can be used to measure the performance of many different types of networking. It provides tests for both unidirectional throughput and end-to-end latency.
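As a rough illustration of what a request/response test such as TCP_RR measures (a simplified stand-in, not netperf's implementation): bounce a 1-byte request and reply over a single loopback connection and count transactions per second.

```python
import socket
import threading
import time

def echo_server(listener):
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(1)
            if not data:
                break
            conn.sendall(data)

def tcp_rr(duration=3.0):
    """One-byte request/response transactions over loopback, like TCP_RR."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.create_connection(srv.getsockname())
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    transactions = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        cli.sendall(b"x")     # request
        cli.recv(1)           # response
        transactions += 1
    cli.close()
    srv.close()
    return transactions / duration

if __name__ == "__main__":
    print(f"{tcp_rr():.0f} transactions/sec")
```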

 

(Figures: netperf Test 1 and Test 2, relative performance vs. v5.4)

Here are the test configurations and performance summaries for the above tests:

netperf Test 1
test machine: model: Cascade Lake, brand: Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz, cpu_number: 192, memory: 192G
disable_latency_stats: 1
set_nic_irq_affinity: 1
runtime: 300s
nr_threads: 25%
ip: ipv4
netperf test parameter: test case: TCP_RR
performance summary: netperf.Throughput_tps on kernel v5.5 has a -6.86% regression compared with v5.4

netperf Test 2
test machine: model: Ivy Bridge, brand: Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz, cpu_number: 4, memory: 8G
disable_latency_stats: 1
set_nic_irq_affinity: 1
runtime: 300s
nr_threads: 200%
ip: ipv4
netperf test parameter: test case: TCP_SENDFILE, send_size: 10K
performance summary: netperf.Throughput_Mbps on kernel v5.5 has a 3.6% improvement compared with v5.4

 

    5.7 Test suite: hackbench

Hackbench is both a benchmark and a stress test for the Linux kernel scheduler. Its main job is to create a specified number of pairs of schedulable entities (either threads or traditional processes) which communicate via either sockets or pipes, and to time how long it takes for each pair to send data back and forth.
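A much-simplified, hypothetical Python rendition of that pattern (the real hackbench is a C program and far more aggressive): create sender/receiver pairs over socketpair() and time how long the whole message exchange takes.

```python
import socket
import time
from multiprocessing import Process

MSG = b"x" * 100          # hackbench-style small message
LOOPS = 10_000            # messages per pair

def sender(sock):
    for _ in range(LOOPS):
        sock.sendall(MSG)
    sock.close()

def receiver(sock):
    remaining = LOOPS * len(MSG)
    while remaining:
        remaining -= len(sock.recv(4096))
    sock.close()

def run(pairs=10):
    """Time how long `pairs` sender/receiver pairs take to exchange their messages."""
    procs = []
    for _ in range(pairs):
        a, b = socket.socketpair()        # ipc: socket; os.pipe() would model ipc: pipe
        procs.append(Process(target=sender, args=(a,)))
        procs.append(Process(target=receiver, args=(b,)))
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"{run():.2f} seconds for all pairs")
```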

(Figures: hackbench Tests 1-4, relative performance vs. v5.4)

Here are the test configurations and performance summaries for the above tests:

hackbench Test 1
test machine: model: Coffee Lake, brand: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz, cpu_number: 16, memory: 32G
disable_latency_stats: 1
nr_task: 100%
hackbench test parameter: mode: process, ipc: pipe
performance summary: hackbench.throughput on kernel v5.5 has a -36.08% regression compared with v5.4

hackbench Test 2
test machine: model: Ivy Bridge, brand: Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz, cpu_number: 8, memory: 16G
disable_latency_stats: 1
nr_task: 1600%
hackbench test parameter: mode: process, ipc: pipe, iterations: 12
performance summary: hackbench.throughput on kernel v5.5 has a -8.52% regression compared with v5.4

hackbench Test 3
test machine: model: Haswell-EP, brand: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, cpu_number: 72, memory: 256G
disable_latency_stats: 1
nr_task: 50%
hackbench test parameter: mode: process, ipc: socket, iterations: 12
performance summary: hackbench.throughput on kernel v5.5 has a 19.88% improvement compared with v5.4

hackbench Test 4
test machine: model: Haswell-EP, brand: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, cpu_number: 72, memory: 256G
disable_latency_stats: 1
nr_task: 50%
hackbench test parameter: mode: threads, ipc: socket
performance summary: hackbench.throughput on kernel v5.5 has a 53.63% improvement compared with v5.4

    5.8 Test suite: fio

Fio (Flexible I/O Tester) was originally written to avoid the hassle of writing special test-case programs when testing a specific workload, either for performance reasons or to find/reproduce a bug.
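The write-bandwidth indicator used below (fio.write_bw_MBps) is conceptually just bytes written per second for the configured access pattern. A heavily simplified Python sketch of a 4k random-write measurement (our own synchronous illustration; fio itself drives the libaio engine asynchronously):

```python
import os
import random
import time

def rand_write_bw(path, file_mb=64, block=4096, seconds=5.0):
    """Issue 4k random writes into a preallocated file and return MB/s written."""
    size = file_mb * 1024 * 1024
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.ftruncate(fd, size)                # preallocate the file's logical size
    payload = os.urandom(block)
    written = 0
    start = time.perf_counter()
    deadline = start + seconds
    while time.perf_counter() < deadline:
        offset = random.randrange(size // block) * block   # block-aligned offset
        os.pwrite(fd, payload, offset)
        written += block
    os.fsync(fd)
    elapsed = time.perf_counter() - start
    os.close(fd)
    return written / (1024 * 1024) / elapsed
```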

(Figures: fio Test 1 and Test 2, relative performance vs. v5.4)

Here are the test configurations and performance summaries for the above tests:

fio Test 1
test machine: model: Cascade Lake, brand: Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz, cpu_number: 96, memory: 256G
runtime: 200s
file system: ext4
mount_option: no requirement
disk: 2pmem
boot_params: bp1_memmap: 104G!8G, bp2_memmap: 104G!132G
nr_task: 50%
time_based: tb
fio test parameter: fio-setup-basic: rw: randwrite, bs: 4k, ioengine: libaio, test_size: 200G
performance summary: fio.write_bw_MBps on kernel v5.5 has a -3.91% regression compared with v5.4

fio Test 2
test machine: model: Cascade Lake, brand: Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz, cpu_number: 96, memory: 256G
runtime: 200s
file system: ext4
mount_option: dax
disk: 2pmem
boot_params: bp1_memmap: 104G!8G, bp2_memmap: 104G!132G
nr_task: 50%
time_based: tb
fio test parameter: fio-setup-basic: rw: randwrite, bs: 4k, ioengine: libaio, test_size: 200G
performance summary: fio.write_bw_MBps on kernel v5.5 has a 73.55% improvement compared with v5.4

 

    5.9 Test suite: ebizzy

ebizzy is designed to generate a workload resembling common web application server workloads. It is highly threaded, has a large in-memory working set, and allocates and deallocates memory frequently.
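A toy Python rendition of that behavior (illustrative only; the real ebizzy is a multithreaded C program): repeatedly reallocate chunks of a large working set and scan them.

```python
import random
import threading
import time

def worker(stop, results, idx, chunk_size=256 * 1024, chunks=64):
    """Allocate, scan, and free memory chunks in a loop, roughly as ebizzy does."""
    pool = [bytearray(chunk_size) for _ in range(chunks)]
    ops = 0
    while not stop.is_set():
        i = random.randrange(chunks)
        pool[i] = bytearray(chunk_size)   # free and reallocate one chunk
        _ = pool[i].count(0)              # touch/scan the chunk, like ebizzy's search
        ops += 1
    results[idx] = ops

def run(nr_threads=8, seconds=2.0):
    stop = threading.Event()
    results = [0] * nr_threads
    threads = [threading.Thread(target=worker, args=(stop, results, i))
               for i in range(nr_threads)]
    for t in threads:
        t.start()
    time.sleep(seconds)
    stop.set()
    for t in threads:
        t.join()
    return sum(results) / seconds         # aggregate operations per second

if __name__ == "__main__":
    print(f"{run():.0f} ops/sec")
```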

(Figure: ebizzy Test 1, relative performance vs. v5.4)

Here are the test configuration and performance summary for the above test:

ebizzy Test 1
test machine: model: Haswell, brand: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz, cpu_number: 8, memory: 8G
nr_threads: 200%
iterations: 100x
ebizzy test parameter: duration: 10s
performance summary: ebizzy.throughput on kernel v5.5 is almost the same as in v5.4


 

  6. Test Machines

    6.1 IVB Desktop

model: Ivy Bridge
brand: Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz
cpu number: 8
memory: 16G

model: Ivy Bridge
brand: Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz
cpu number: 4
memory: 8G

 

    6.2 SKL SP

model: Skylake
brand: Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz
cpu number: 80
memory: 64G

 

    6.3 BDW EP

model: Broadwell-EP
brand: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
cpu number: 88
memory: 128G

 

    6.4 HSW EP

model: Haswell-EP
brand: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
cpu number: 72
memory: 128G

 

    6.5 IVB EP

model: Ivy Bridge-EP
brand: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
cpu number: 40
memory: 384G

model: Ivytown Ivy Bridge-EP
brand: Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz
cpu number: 48
memory: 64G

 

    6.6 HSX EX

model: Brickland Haswell-EX
brand: Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz
cpu number: 144
memory: 512G