
DESCRIPTION

Each data point represents performance relative to the benchmark baseline (default/untuned Firefly performance, normalized to 1 for all four IO patterns). A score above 1 means that version outperforms Firefly on that workload; a score below 1 means it underperforms.
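The normalization above can be sketched as a simple ratio against the Firefly numbers. This is a minimal illustration with made-up values; the workload names and figures are assumptions, not the measured data.

```python
# Hypothetical sketch: normalize raw results against the Firefly baseline
# so the baseline plots as 1.0 for every IO pattern.
firefly_baseline = {"seq_read": 1200.0, "seq_write": 800.0,
                    "rand_read": 45000.0, "rand_write": 16000.0}  # made-up values
infernalis_tuned = {"seq_read": 1500.0, "seq_write": 920.0,
                    "rand_read": 60000.0, "rand_write": 20000.0}  # made-up values

normalized = {pattern: infernalis_tuned[pattern] / firefly_baseline[pattern]
              for pattern in firefly_baseline}
# Any score above 1.0 indicates a gain over Firefly on that pattern.
```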

 

HARDWARE CONFIGURATION:

  • HW configuration is shown under [Ceph Performance(default/untuned)].
  • UP setup/server stands for 4x 1U servers (storage nodes) + 4x 2U servers (client nodes) + 1x 2U server (RGW node)
  • DP setup/server stands for 4x 2U servers (storage nodes) + 4x 2U servers (client nodes) + 1x 2U server (RGW node)

SOFTWARE CONFIGURATION:

  • OS and kernel: Ubuntu 14.04 (Trusty), kernel 3.13.0
  • Ceph version: Infernalis, 9.2.0
  • 4 x 10 HDDs as OSD devices, readahead 2048, write cache on
  • 4 x 2 SSDs as journal devices
  • haproxy in front of radosgw; 5 radosgw processes listening on 5 different ports
  • [global]
        auth_service_required = none
        auth_cluster_required = none
        auth_client_required = none
        cluster_network = 10Gb nic
        public_network = 10Gb nic
        osd_mount_options = rw,noatime,inode64,logbsize=256k
        osd_mkfs_type = xfs 
        mon_pg_warn_max_per_osd = 1500
    [client]
        rbd_cache = false
    [client.radosgw.rgw-*]
        rgw cache enabled = true
        rgw cache lru size = 100000
        rgw thread pool size = 256
        rgw enable ops log = false
        rgw frontends = civetweb port=7485
        rgw override bucket index max shards = 0
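The haproxy setup mentioned above (one frontend fanning traffic out to 5 radosgw civetweb instances) might look roughly like the following sketch. The config above only shows port=7485 for one instance; the other port numbers, bind address, and backend names here are illustrative assumptions.

```
# Hypothetical haproxy.cfg sketch: round-robin across 5 radosgw civetweb ports.
frontend rgw_frontend
    bind *:80
    default_backend rgw_backend

backend rgw_backend
    balance roundrobin
    # Ports other than 7485 are assumed for illustration.
    server rgw1 127.0.0.1:7481 check
    server rgw2 127.0.0.1:7482 check
    server rgw3 127.0.0.1:7483 check
    server rgw4 127.0.0.1:7484 check
    server rgw5 127.0.0.1:7485 check
```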

DEFAULT/UNTUNED CONFIGURATION (APPLIED TO DATA MARKED 'DEFAULT'):

  • rbd pool pg_num = 2000, replica size = 2

TUNED CONFIGURATION (APPLIED TO DATA MARKED 'TUNED'):

  • rbd pool pg_num = 8192, replica size = 2
  • mount the omap directory of each OSD on an SSD partition
  • turn off all debug logging in ceph.conf
  • [global]
        mon_pg_warn_max_per_osd = 1500
        ms_dispatch_throttle_bytes = 1048576000
        objecter_inflight_op_bytes = 1048576000
        objecter_inflight_ops = 10240
        throttler_perf_counter = false
    [osd]
        osd_op_threads = 20
        filestore_queue_max_ops = 500
        filestore_queue_max_bytes = 1048576000
        filestore_queue_committing_max_ops = 500
        filestore_queue_committing_max_bytes = 1048576000
        journal_max_write_entries = 1000
        journal_queue_max_ops = 3000
        journal_max_write_bytes = 1048576000
        journal_queue_max_bytes = 1048576000
        filestore_max_sync_interval = 10
        filestore_merge_threshold = 20
        filestore_split_multiple = 2
        osd_enable_op_tracker = false
        filestore_wbthrottle_enable = false
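"Turn off all debug logging" in the tuned list is typically achieved with a block like the following in ceph.conf. This is a common sketch, not the authors' verbatim config; the exact subsystem list is an assumption.

```
[global]
    debug_lockdep = 0/0
    debug_context = 0/0
    debug_crush = 0/0
    debug_buffer = 0/0
    debug_timer = 0/0
    debug_objecter = 0/0
    debug_rados = 0/0
    debug_rbd = 0/0
    debug_osd = 0/0
    debug_optracker = 0/0
    debug_filestore = 0/0
    debug_journal = 0/0
    debug_ms = 0/0
    debug_monc = 0/0
    debug_tp = 0/0
    debug_auth = 0/0
    debug_finisher = 0/0
    debug_heartbeatmap = 0/0
    debug_perfcounter = 0/0
    debug_throttle = 0/0
    debug_mon = 0/0
    debug_paxos = 0/0
    debug_rgw = 0/0
```

Setting each subsystem to 0/0 disables both the log level and the in-memory log level, which removes logging overhead from the data path during benchmarking.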

PERFORMANCE RESULT, QEMU RBD

[Figure: infernalis_rbd / infernalis_rbd_graph - normalized QEMU RBD performance]

Y-axis: normalized bandwidth (MB/s) and throughput (IOPS); X-axis: releases with/without tuning.
The performance score for each workload is normalized to the bandwidth/throughput of the Firefly release. Higher is better.

 

PERFORMANCE RESULT, RGW

[Figure: infernalis_object / infernalis_object_graph - normalized RGW performance]

Y-axis: normalized bandwidth (MB/s) and throughput (IOPS); X-axis: releases with/without tuning.
The performance score for each workload is normalized to the bandwidth/throughput of the Firefly release. Higher is better.