
An Out-of-Band Method to Harness Intel® Resource Director Technology Power in Virtualization Environment

By Tony Su, Huaqiang Wang on Feb 26, 2020

Motivation

More and more customers require virtual machine (VM) prioritization to assure the QoS (Quality of Service) of workloads running in their VMs, and want to leverage Intel® Resource Director Technology (Intel® RDT) on Intel® architecture. This document details how to harness Intel RDT features in a libvirt environment with the KVM hypervisor, which can be regarded as an out-of-band (OOB) method. OpenStack* Intel RDT enabling is still in progress; once it is completed, we hope to have an in-band method to use Intel RDT.

This document assumes readers have an understanding of Intel RDT concepts, essential hypervisors/virtual machines, as well as Linux*.

Warning: some commands in this document require 'root' privilege and are shown with a # prompt. Double-check what you type and make sure you understand the result before pressing the Enter key.

Terminology

Below is a list of acronyms frequently used in this document.

Term        Description
Intel RDT   Intel Resource Director Technology
CMT         Cache Monitoring Technology
MBM         Memory Bandwidth Monitoring
L3 CAT      L3 Cache Allocation Technology
L3 CDP      L3 Code and Data Prioritization
L2 CAT      L2 Cache Allocation Technology
MBA         Memory Bandwidth Allocation
VM          Virtual Machine

Model VM

To simulate a real-world VM in a cloud environment, and for illustration purposes, we plan to create a VM with two unbalanced NUMA nodes and 10 vCPUs. NUMA node0 has four vCPUs pinned to physical CPUs (say, CPU14-17) and uses two cache ways from L3 cache (id=0); NUMA node1 has six floating vCPUs (floating over a list of host CPUs) and uses four cache ways from L3 cache (id=1). Finally, 128MB of memory is allocated to each node. We will use the cirros OS as the boot image and will not use a network.

With this in mind, we will start our journey…

Provision VM

It is not easy, especially the first time, to successfully create a virtual machine that leverages Intel RDT. Let's start by preparing the environment. Then we'll define our VM. Finally, we'll create and run the VM and check its Intel RDT usage information.

Hardware Readiness Checking

The following is a list of mainstream CPUs with Intel RDT technology. Please make sure you are working on a machine that uses a processor on the list. The more recent the CPU, the better.

CPU                                               CMT  MBM  L3 CAT  L3 CDP  MBA
Intel® Xeon® Processor E5 v3 (Haswell)             Y    N    Y       N       N
Intel® Xeon® Processor D                           Y    Y    Y       N       N
Intel® Xeon® Processor E3 v4 (Broadwell)           N    N    Y       N       N
Intel® Xeon® Processor E5 v4 (Broadwell)           Y    Y    Y       Y       N
Intel® Xeon® Scalable Processor (Cascade Lake)     Y    Y    Y       Y       Y

* For L2 CAT information, please refer to intel/intel-cmt-cat (user space software for Intel® Resource Director Technology) and Intel® Resource Director Technology Extensions: Introducing the L2 Code and Data Prioritization Feature for Future Intel Atom® Processors.

By filtering fields from the lscpu output, we get the CPU type and model information, which you can check against the table above to determine Intel RDT readiness. For example, the output below shows an Intel® Xeon® Gold 6248, a Scalable processor that supports Intel RDT technology.

$ lscpu | grep -E 'Architecture|Vendor ID|CPU family|Model|Model name|Stepping'
Architecture: x86_64
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
 

BIOS setting checking

Although there are no special BIOS settings for Intel RDT, "Intel® Virtualization Technology" must be enabled in the BIOS in order to use the KVM hypervisor. (Your BIOS menus may differ; look for "VT-d", "Virtualization Technology", or something similar.)

    Linux kernel checking

    Hypervisor (KVM) checking

    Make sure the kvm module has been loaded into the Linux kernel.

    $ lsmod | grep kvm
    kvm_intel             188688  12
    kvm                   636931  1 kvm_intel
     

If the modules are not listed, check the CPU flags for the svm or vmx flag.

      $ lscpu | grep -E 'svm|vmx'
      Flags: fpu … ds_cpl vmx smx …
       

         Next, load the kvm and kvm_intel modules manually.

    # modprobe kvm
    # modprobe kvm_intel

          More troubleshooting for loading KVM: https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html

          Intel RDT support checking

Use the commands listed below to check which Intel RDT features your Linux kernel supports. If a feature is supported, the corresponding flag will be listed; for example, the cat_l3 flag shows that your computer has the L3 CAT feature.

Intel RDT Feature   Checking Command
Intel RDT           $ lscpu | grep rdt_a
CMT                 $ lscpu | grep -E 'cqm_llc|cqm_occup_llc'
MBM                 $ lscpu | grep -E 'cqm_mbm_total|cqm_mbm_local'
L3 CAT              $ lscpu | grep cat_l3
L3 CDP              $ lscpu | grep cdp_l3
MBA                 $ lscpu | grep mba
           

          Here is a sample output from one Intel RDT-enabled computer.

          $ lscpu | grep -E 'cat_l3|mba'
          Flags: fpu … epb cat_l3 … ssbd mba ibrs …

            If you are sure that your computer has the Intel RDT feature, but there are NO RDT flags shown with the lscpu command, then one of the following is true:

            • Your kernel might be too old (probably earlier than v4.10)
            • Intel RDT might have been disabled when you built your kernel

            Refer to the Intel RDT feature kernel-readiness table below for the correct kernel version.

Intel RDT feature enabled     Kernel version
L3 CAT, L3 CDP, L2 CAT        4.10
MBA                           4.12
CMT, MBM                      4.14
L2 CDP                        4.16
MBA (software controller)     4.18

             

            If you have to re-build the kernel, make sure the CONFIG_X86_CPU_RESCTRL option is enabled. We recommend using the latest kernel for full Intel RDT feature support.

             

Tip 1: During kernel build configuration, if you cannot find the CONFIG_X86_CPU_RESCTRL option, look for CONFIG_INTEL_RDT_A or CONFIG_INTEL_RDT instead, which are the names used in different kernel versions. Otherwise, you must use a more recent (4.10+) kernel source.

            Tip 2: Even if your kernel supports Intel RDT, some features can still be disabled during kernel boot. If this happens, after the resctrl file system is mounted (see the section below for resctrl file system details), you might not be able to find the feature you want to use. In this situation, pass the rdt parameter to the kernel in the grub menu and try to explicitly enable or disable some features.

            For example, the following rdt parameter enables CAT L3 (by l3cat) and disables MBA (by !mba – prefix an exclamation point (!) before mba).

            linuxefi /boot/vmlinuz.x86_64 root=UUID=50…fe ro rdt=l3cat,!mba

              The full list of parameters:

              cmt, mbmtotal, mbmlocal, l3cat, l3cdp, l2cat, l2cdp, mba

Reboot your machine after changing the grub configuration for the new kernel parameters to take effect.
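For a persistent change, the rdt= option is usually added to the GRUB configuration rather than typed in the boot menu. The sketch below assumes a GRUB2 layout (/etc/default/grub and grub2-mkconfig, or update-grub on Debian/Ubuntu); file locations vary by distribution, so it demonstrates the edit on a scratch copy first.

```shell
# Sketch (assumes GRUB2; paths are distro-dependent): append an rdt= option
# to the kernel command line. Demonstrated on a scratch copy; run the same
# sed against /etc/default/grub on a real system, then regenerate grub.cfg.
cfg=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="ro quiet"' > "$cfg"
# Append 'rdt=l3cat,!mba' (enable L3 CAT, disable MBA) inside the quotes.
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"/\1 rdt=l3cat,!mba"/' "$cfg"
cat "$cfg"   # GRUB_CMDLINE_LINUX="ro quiet rdt=l3cat,!mba"
# On a real system: grub2-mkconfig -o /boot/grub2/grub.cfg (or update-grub)
```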

Tip 3: In some Linux distributions, although the kernel version is older, Intel RDT features may have been back-ported.

              $ uname -a
Linux new2 3.10.0-1062.9.1.el7.x86_64 #1 SMP ... x86_64 x86_64 GNU/Linux
               
              $ lscpu | grep cat_l3
              Flags: fpu vme … epb cat_l3 cdp_l3 … flush_l1d

                Install libvirt packages

                Here we use the CentOS package as an example. You may want to check your Linux distribution and find the corresponding packages and installation method.

                # yum install -y libvirt-client virt-install virt-manager

After installation, we have virsh, virt-install, and virt-manager (GUI) available.

                  Use the following table to check Intel RDT readiness in your libvirt version.

                  $ virsh --version

Intel RDT Feature   libvirt Version
L3 CAT / CDP        v4.1.0
MBA                 v4.7.0
CMT                 v4.10.0
MBM                 v6.0.0
                     

Now that provisioning is done, we can proceed to equip our virtual machine with Intel RDT features.

                    Prepare VM

                    Get cache allocation granularity

We specify the cache allocation size in the libvirt VM definition XML file, so we first need to find the cacheway size on the machine. There are two ways to get this information.

                    1. Filter the size from virsh capabilities output.

From the following sample, we can see the control granularity is 2304 KiB, which is the cacheway size. Other information is also useful, such as id, level, type, size, and cpus.

                    $ virsh capabilities
                        <capabilities>
                        ...
                            <cache>
                                <bank id='0' level='3' type='both' size='25344' unit='KiB' cpus='0-17,36-53'>
                                    <control granularity='2304' unit='KiB' type='both' maxAllocs='16'/>
                                </bank>
                                <bank id='1' level='3' type='both' size='25344' unit='KiB' cpus='18-35,54-71'>
                                    <control granularity='2304' unit='KiB' type='both' maxAllocs='16'/>
                                </bank>
                            </cache>
                    ...
                    </capabilities>

2. An alternative method is to get the cache ID and type from sysfs and compute the cacheway size manually.

First, find the cache ID to be allocated to the VM and confirm the cache type.

    $ cat /sys/bus/cpu/devices/cpu{14,15,16,17}/cache/index3/id
    0
    0
    0
    0

    $ cat /sys/bus/cpu/devices/cpu{14,15,16,17}/cache/index3/type
    Unified
    Unified
    Unified
    Unified

Next, read the L3 cache size and the number of ways. Take cpu17's L3 cache as an example:

    $ ls /sys/bus/cpu/devices/cpu17/cache/index3/
    coherency_line_size  level           physical_line_partition  shared_cpu_map  type
    id                   number_of_sets  shared_cpu_list          size            ways_of_associativity

    $ cat /sys/bus/cpu/devices/cpu17/cache/index3/size
    25344K

    $ cat /sys/bus/cpu/devices/cpu17/cache/index3/ways_of_associativity
    11

Finally, calculate the cacheway size of the L3 cache; the result exactly matches the virsh capabilities output:

    Cacheway_size = L3_size / ways_of_associativity = 25344KB / 11 = 2304KB
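The same calculation can be scripted with shell arithmetic. The snippet below hardcodes the sample values read from sysfs above (25344K, 11 ways); substitute the values from your own machine's /sys/bus/cpu/devices/cpuN/cache/index3/ files.

```shell
# Cacheway size = total L3 size / ways_of_associativity.
# Sample host values; read yours from sysfs (size, ways_of_associativity).
l3_size_kb=25344
ways=11
cacheway_kb=$(( l3_size_kb / ways ))
echo "cacheway size: ${cacheway_kb} KB"   # prints: cacheway size: 2304 KB
```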
                           

                            Mount resctrl filesystem

                            Before using libvirt to create an Intel RDT-enabled virtual machine, the resctrl filesystem must be mounted.

                            # mount -t resctrl resctrl /sys/fs/resctrl
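Optionally, the mount can be made persistent across reboots with an /etc/fstab entry. The line below is a typical form (an assumption, not from the original setup); verify it against your distribution's documentation.

```
# /etc/fstab -- mount the resctrl filesystem at boot
resctrl  /sys/fs/resctrl  resctrl  defaults  0  0
```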

                              Once the resctrl filesystem is mounted, check the Intel RDT information under the /sys/fs/resctrl directory.

                              $ cd /sys/fs/resctrl/
                              $ ls -F
                              cpus  cpus_list  info/  schemata  tasks
                               

                               

    $ cat schemata
    L3:0=7ff;1=7ff
    MB:0=100;1=100

From the schemata file, find the following:

• The first line: L3 cache information - 7ff means all 11 ways of the L3 cache are usable (one bit per cache way).
• The second line: MB (memory bandwidth information) - 100 means 100% of the memory bandwidth is usable.
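The masks in schemata are contiguous bitmasks in which each bit selects one cache way. As a quick sanity check (an illustrative sketch, not part of the setup), a mask for n ways can be computed with shell arithmetic:

```shell
# A capacity mask with n low bits set selects n cache ways: (1 << n) - 1.
ways_mask() { printf '%x\n' $(( (1 << $1) - 1 )); }
ways_mask 11   # prints 7ff (all 11 ways, as in the default schemata)
ways_mask 4    # prints f (four ways)
```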

                              Reserve L3 cache ways for VM

For the new VM, reserve four cache ways from each L3 cache (id=0 and id=1) by executing the command below in /sys/fs/resctrl (change 7ff to 7f0 in the first line).

                              # cat <<EOF >schemata
                              > L3:0=7f0;1=7f0
                              > MB:0=100;1=100
                              > EOF
                               

After the command executes, the L3 line becomes 7f0; thus, four ways have been reserved on each cache.

                                # cat schemata
                                  L3:0=7f0;1=7f0
                                  MB:0=100;1=100
                                 

                                  Prepare OS image (cirros image)

We will use the cirros image as our virtual machine boot image and save it under a folder (for example, /tmp).

                                  $ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

                                    We used the full path of the image in the VM XML definition.

<domain>
                                      <devices>
                                        <disk type="file" device="disk">
                                          <driver name="qemu" type="qcow2" cache="none"/>
                                          <source file="/tmp/cirros-0.4.0-x86_64-disk.img"/>
                                          <target dev="vda" bus="virtio"/>
                                        </disk>
                                        ...
                                      </devices>
                                    </domain>
                                     

                                      Generate VM domain XML file

                                      For our purpose, the critical elements are numatune, vcpu/cpu and cputune. numatune defines memory cells, vcpu/cpu is for CPU topology, and cputune contains processor pinning and CAT allocation information.

In the numatune element, we define two memory nodes (i.e., cells): cell0 and cell1.

                                      <domain type="kvm">
                                      ...
                                         <numatune>
                                           <memory mode="strict" nodeset="0-1"/>
                                           <memnode cellid="0" mode="strict" nodeset="0"/>
                                           <memnode cellid="1" mode="strict" nodeset="1"/>
                                         </numatune>
                                      ...
                                      </domain>
                                       

In the vcpu/cpu element, we define two NUMA nodes with 10 CPUs: node0 has four CPUs and node1 has six. The host-passthrough mode ensures that the VM has the same CPU type and cache information as the host machine.

                                        <domain>
                                        ...
                                          <vcpu>10</vcpu>
                                          <cpu match="exact" mode="host-passthrough">
                                            <cache mode="passthrough"/>
                                            <topology sockets="2" cores="5" threads="1"/>
                                            <numa>
                                              <cell id="0" cpus="0-3" memory="131072"/>
                                              <cell id="1" cpus="4-9" memory="131072"/>
                                            </numa>
                                          </cpu>
                                        ...
                                        </domain>
                                         

In the cputune element, the vcpupin sub-elements pin vcpu0-3 to physical CPUs 14-17, respectively, and leave vcpu4-9 floating on the host CPUs.

                                          Two cachetune elements are used to allocate L3 cache to the virtual machine, which means the virtual machine exclusively uses the allocated L3 cache without interference by noisy neighbors.

                                          <domain>
                                          ...
                                          <cputune>
                                              <vcpupin vcpu="0" cpuset="14"/>
                                              <vcpupin vcpu="1" cpuset="15"/>
                                              <vcpupin vcpu="2" cpuset="16"/>
                                              <vcpupin vcpu="3" cpuset="17"/>
                                              <cachetune vcpus="0-3">
                                                <cache id="0" level="3" type="both" size="4608" unit="KiB"/>
                                              </cachetune>
                                              <cachetune vcpus="4-9">
                                                <cache id="1" level="3" type="both" size="9216" unit="KiB"/>
                                              </cachetune>
                                            </cputune>
                                          ...
                                          </domain>
                                           
The two cachetune elements allocate:

1. 4608 KiB (two cache ways in L3 cache 0) to vcpu 0-3, i.e., the first NUMA node, and
2. 9216 KiB (four cache ways in L3 cache 1) to vcpu 4-9, i.e., the second NUMA node.
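Note that cachetune sizes must be whole multiples of the cacheway size reported by virsh capabilities (2304 KiB on our sample host). A quick arithmetic check of the two allocations:

```shell
# cachetune sizes are multiples of the host cacheway size (2304 KiB here).
cacheway_kb=2304
echo $(( 2 * cacheway_kb ))   # prints 4608  (vcpus 0-3, two ways)
echo $(( 4 * cacheway_kb ))   # prints 9216  (vcpus 4-9, four ways)
```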

                                          The full VM XML is shown below for your reference.

                                          Note: Some elements must be changed to reflect your real environment before you can use it to create a real virtual machine.

                                          <domain type="kvm">
                                            <uuid>93a35a15-d900-423c-a0aa-f8097da0228c</uuid>
                                            <name>rdt-vm-manual</name>
                                            <memory>262144</memory>
                                            <numatune>
                                              <memory mode="strict" nodeset="0-1"/>
                                              <memnode cellid="0" mode="strict" nodeset="0"/>
                                              <memnode cellid="1" mode="strict" nodeset="1"/>
                                            </numatune>
                                           
                                            <vcpu>10</vcpu>
                                            <cpu match="exact" mode="host-passthrough">
                                              <cache mode="passthrough"/>
                                              <topology sockets="2" cores="5" threads="1"/>
                                              <numa>
                                                <cell id="0" cpus="0-3" memory="131072"/>
                                                <cell id="1" cpus="4-9" memory="131072"/>
                                              </numa>
                                            </cpu>
                                            <cputune>
                                              <vcpupin vcpu="0" cpuset="14"/>
                                              <vcpupin vcpu="1" cpuset="15"/>
                                              <vcpupin vcpu="2" cpuset="16"/>
                                              <vcpupin vcpu="3" cpuset="17"/>
                                              <cachetune vcpus="0-3">
                                                <cache id="0" level="3" type="both" size="4608" unit="KiB"/>
                                              </cachetune>
                                              <cachetune vcpus="4-9">
                                                <cache id="1" level="3" type="both" size="9216" unit="KiB"/>
                                              </cachetune>
                                            </cputune>
                                           
                                            <sysinfo type="smbios">
                                              <system>
                                                <entry name="manufacturer">Wonderful Manufacturer</entry>
                                                <entry name="product">Wonderful Virtual Machine </entry>
                                                <entry name="version">1.0.0</entry>
                                                <entry name="serial">93a35a15-d900-423c-a0aa-f8097da0228c</entry>
                                                <entry name="uuid">93a35a15-d900-423c-a0aa-f8097da0228c</entry>
                                                <entry name="family">Virtual Machine</entry>
                                              </system>
                                            </sysinfo>
                                            <os>
                                              <type machine="pc">hvm</type>
                                              <boot dev="hd"/>
                                              <smbios mode="sysinfo"/>
                                            </os>
                                            <features>
                                              <acpi/>
                                              <apic/>
                                            </features>
                                            <devices>
                                              <disk type="file" device="disk">
                                                <driver name="qemu" type="qcow2" cache="none"/>
                                                <source file="/tmp/cirros-0.4.0-x86_64-disk.img"/>
                                                <target dev="vda" bus="virtio"/>
                                              </disk>
                                              <graphics type="vnc" autoport="yes" listen="0.0.0.0"/>
                                              <video>
                                                <model type="cirrus"/>
                                              </video>
                                              <memballoon model="virtio">
                                                <stats period="10"/>
                                              </memballoon>
                                            </devices>
                                          </domain>

                                            Create a VM

                                            Now, create a VM with our XML file.

                                            # virsh create rdt_inst.xml
                                            Domain rdt-vm-manual created from rdt_inst.xml
                                             

If you get an error, check the XML file contents and format carefully, or refer to the Troubleshooting section for common causes and possible solutions.

                                              Check the VM

                                              # virt-manager

                                                This command starts a GUI manager so we can check the new VM information.

                                                 

Double-click the VM name (rdt-vm-manual) in the virt-manager main window to open the VM console; you can then log in and check the VM information. Here, the lscpu command lists:

1. Two NUMA nodes (sockets): socket 0 has four CPUs (0-3) and socket 1 has six CPUs (4-9).
2. The CPU model has been passed through from the host (Gold 6139 CPU).
3. The hypervisor is KVM.
4. The L3 cache size is 25344K.

Note: Don't be confused by the L3 cache size shown in the VM. CAT does not limit the cache size; it defines which cache ways in the host L3 cache are exclusively allocated to the VM.

If we go to the /sys/fs/resctrl directory, we can see two new folders have been created, one for each VM NUMA node.

    $ ls -F /sys/fs/resctrl
    cpus       info/                             qemu-27-rdt-vm-manual-vcpus_4-9/  tasks
    cpus_list  qemu-27-rdt-vm-manual-vcpus_0-3/  schemata

From the command below, we can see node0 (vcpus 0-3) is using two cache ways of L3 (id=0): 003 in hex is 11 in binary, i.e., two bits set, and each bit specifies one cache way.

                                                $ cat /sys/fs/resctrl/qemu-27-rdt-vm-manual-vcpus_0-3/schemata
                                                    L3:0=003;1=7f0
                                                    MB:0=100;1=100

Similarly, we can check that node1 (vcpus 4-9) is using four cache ways in L3 (id=1), as 00f in hex is 1111 in binary.

                                                  $ cat /sys/fs/resctrl/qemu-27-rdt-vm-manual-vcpus_4-9/schemata
                                                      L3:0=7f0;1=00f
                                                  MB:0=100;1=100
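Reading these hex masks by eye is error-prone; counting the set bits gives the number of allocated cache ways directly. The helper below is an illustrative sketch, not part of the libvirt workflow:

```shell
# Count the set bits in a hex schemata mask; each bit is one cache way.
count_ways() {
    local n=$((16#$1)) c=0
    while [ "$n" -gt 0 ]; do
        c=$(( c + (n & 1) ))
        n=$(( n >> 1 ))
    done
    echo "$c"
}
count_ways 003   # prints 2 (two ways on L3 id=0)
count_ways 00f   # prints 4 (four ways on L3 id=1)
```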
                                                   

                                                    Monitor with CMT

                                                    If your libvirt version is 4.10+, you can monitor cache allocation status with CMT. Adding monitor elements in the VM definition XML file activates the CMT monitor mechanism. (The XML below is from another machine with a newer libvirt version, so it is slightly different from the sample above.)

                                                       <cachetune vcpus='0-3' id='vcpus_0-3'>
                                                          <cache id='0' level='3' type='both' size='5632' unit='KiB'/>
                                                          <monitor level='3' vcpus='0-3'/>
                                                        </cachetune>
                                                        <cachetune vcpus='4-9' id='vcpus_4-9'>
                                                          <cache id='1' level='3' type='both' size='11' unit='MiB'/>
                                                          <monitor level='3' vcpus='4-9'/>
                                                    </cachetune>
                                                     

The virsh domstats command prints the statistics, including per-bank cache occupancy in bytes. For demonstration, we purposely pinned the first NUMA node's vCPUs, so their occupancy appears only on L3 cache id0 (and none on id1), and left the second NUMA node's vCPUs floating, so their occupancy appears on both L3 cache id0 and id1.

                                                       
                                                      # virsh domstats --cpu-total rdt-vm-manual
                                                      Domain: 'rdt-vm-manual'
                                                        cpu.time=3252251523
                                                        cpu.user=290000000
                                                        cpu.system=1080000000
                                                      cpu.cache.monitor.count=2
                                                      cpu.cache.monitor.0.name=vcpus_0-3
                                                      cpu.cache.monitor.0.vcpus=0-3
                                                      cpu.cache.monitor.0.bank.count=2
                                                      cpu.cache.monitor.0.bank.0.id=0
                                                      cpu.cache.monitor.0.bank.0.bytes=5406720
                                                      cpu.cache.monitor.0.bank.1.id=1
                                                      cpu.cache.monitor.0.bank.1.bytes=0
                                                      cpu.cache.monitor.1.name=vcpus_4-9
                                                      cpu.cache.monitor.1.vcpus=4-9
                                                      cpu.cache.monitor.1.bank.count=2
                                                      cpu.cache.monitor.1.bank.0.id=0
                                                      cpu.cache.monitor.1.bank.0.bytes=720896
                                                      cpu.cache.monitor.1.bank.1.id=1
                                                      cpu.cache.monitor.1.bank.1.bytes=8200192
                                                       

                                                        Summary

We successfully created a virtual machine with CAT enabled and vCPU pinning. We learned how to enable KVM, design the VM XML definition file with CAT elements, and check CAT information after VM creation. This step-by-step tutorial should help readers harness Intel RDT in a virtualization environment and integrate it with cloud orchestration software such as OpenStack. If you run into trouble while creating the VM, the Troubleshooting section below lists common causes and possible solutions.

                                                        * For OpenStack RDT Spec status, please refer to https://blueprints.launchpad.net/nova/+spec/libvirt-pqos

                                                         

                                                        Troubleshooting

                                                        Resource control not supported?

                                                        # virsh create rdt_inst.xml
                                                        error: Failed to create domain from rdt_inst.xml
                                                        error: unsupported configuration: Resource control is not supported on this host
                                                         

Possible reason and solution:

1. Does the CPU support Intel RDT?
2. Does the kernel support Intel RDT? Is Intel RDT enabled during boot?
3. Did you forget to mount the resctrl filesystem?

# mount -t resctrl resctrl /sys/fs/resctrl
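The three checks above can be scripted. The sketch below wraps them in a small helper; the helper name `check_flag` and the demo file are our own for illustration, while `cat_l3`, `/proc/filesystems`, and `/proc/mounts` are the standard Linux markers for L3 CAT and the resctrl filesystem.

```shell
# Hedged sketch: look for RDT support markers in a given file.
check_flag() {  # usage: check_flag <file> <pattern> <label>
  grep -q "$2" "$1" && echo "$3: yes" || echo "$3: NO"
}

# On a real host:
#   check_flag /proc/cpuinfo     cat_l3          "CPU L3 CAT"
#   check_flag /proc/filesystems resctrl         "resctrl filesystem"
#   check_flag /proc/mounts      /sys/fs/resctrl "resctrl mounted"

# Demo with a sample cpuinfo flags line:
printf 'flags : ... rdt_a cat_l3 cdp_l3 mba ...\n' > /tmp/cpuinfo.sample
check_flag /tmp/cpuinfo.sample cat_l3 "CPU L3 CAT"   # → CPU L3 CAT: yes
```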

Didn’t see L3 or MB in the /sys/fs/resctrl/schemata file?

$ cat /sys/fs/resctrl/schemata
  MB:0=100;1=100
                                                           

                                                            Possible reason and solution:

Enable Intel RDT during boot; refer to the Linux Kernel Checking section.
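Enabling Intel RDT at boot typically means adding the `rdt=` kernel parameter; the option names below are the ones documented for the Linux resctrl interface (pick the ones your CPU supports). For example, on a GRUB-based distribution:

```shell
# /etc/default/grub (a sketch; keep your existing options and append rdt=)
GRUB_CMDLINE_LINUX="... rdt=cmt,mbmtotal,mbmlocal,l3cat,l3cdp,mba"

# Then regenerate the GRUB config and reboot, e.g.:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```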

cachetune element not supported?

                                                            # virsh create rdt_inst.xml
                                                            error: Failed to create domain from rdt_inst.xml
                                                            error: unsupported configuration: cachetune is only supported for KVM domains
                                                             

                                                              Possible reason and solution:

Change the domain type in the VM XML definition file to kvm, as shown below.

                                                              <domain type="kvm">
                                                              </domain>
                                                               

                                                                Not enough room for VM?

                                                                # virsh create  rdt_inst.xml
                                                                error: Failed to create domain from rdt_inst.xml
                                                                error: unsupported configuration: Not enough room for allocation of 9437184 bytes for level 3 cache 0 scope type 'both'
                                                                 

Possible reason and solution:

1. The cache size specified is too large. Check that the size and unit are correct.
2. You forgot to reserve cacheways in the schemata file, or not enough cacheways were reserved there.

  <cachetune vcpus="0">
     <cache id="0" level="3" type="both" size="??" unit="KiB??"/>
  </cachetune>

$ cat /sys/fs/resctrl/schemata
      L3:0=7ff;1=7ff
  MB:0=100;1=100

Refer to the Reserve L3 Cacheways for VM section.
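The bitmask for the default resctrl group can be computed with shell arithmetic. The sketch below assumes an 11-way L3 cache (full mask 0x7ff, as in the schemata shown above) and reserves 3 ways for VM cachetune by shrinking the default group's mask; the variable names are our own.

```shell
# Sketch: shrink the default group's L3 mask to leave ways free for VMs.
total_ways=11           # full mask 0x7ff, per the schemata above
reserve=3               # ways to leave free for VM cachetune allocations
default_mask=$(( (1 << (total_ways - reserve)) - 1 ))
printf 'L3:0=%x;1=%x\n' "$default_mask" "$default_mask"   # → L3:0=ff;1=ff

# On a real host, write the reduced mask into the default schemata
# (root required), e.g. to reserve ways only on cache id 0:
#   echo "L3:0=ff;1=7ff" > /sys/fs/resctrl/schemata
```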

                                                                  Unsupported configuration - not divisible by granularity

                                                                  # virsh create rdt_inst.xml
                                                                  error: Failed to create domain from rdt_inst.xml
                                                                  error: unsupported configuration: Cache allocation of size 1048576 is not divisible by granularity 2359296
                                                                   

                                                                    Possible reason and solution:

The specified size in bytes is not a multiple of the size of one cacheway (i.e., the granularity). Refer to the Get Cacheway Size section for details.

                                                                    <cachetune vcpus="0">
                                                                       <cache id="0" level="3" type="both" size="???" unit="KiB"/>
                                                                    </cachetune>
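A valid size is the requested size rounded up to a whole number of cacheways. A minimal sketch, using the values from the error message above (1048576 bytes requested, 2359296-byte granularity):

```shell
# Round a requested allocation up to a whole number of cacheways.
requested=1048576      # bytes asked for in the XML
gran=2359296           # one cacheway, from the error message
rounded=$(( (requested + gran - 1) / gran * gran ))
echo "use size=$(( rounded / 1024 )) unit=KiB"   # → use size=2304 unit=KiB
```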

                                                                      Reference

1. Intel RDT Introduction: https://www.intel.com/content/www/us/en/architecture-and-technology/resource-director-technology.html
                                                                      2. User Interface for Resource Control Feature: https://www.kernel.org/doc/Documentation/x86/intel_rdt_ui.txt
                                                                      3. libvirt Domain XML format: https://libvirt.org/formatdomain.html#elementsCPUTuning
                                                                      4. Using Hypervisor KVM: https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html

                                                                       

                                                                      Disclaimers

                                                                      Intel and Xeon are trademarks of Intel Corporation or its subsidiaries.