Virtualization Advances for Performance, Efficiency, and Data Protection

BY 01 Staff ON May 30, 2014

By Will Auld, principal engineer in Intel's virtualization and cloud team 

As virtualization has become mainstream in modern data centers, many business and IT decision makers are contemplating its potential benefits, both in the near term and into the future. This article contributes to that effort, with a bit of background about how the industry got to where we are with open source virtualization, as demonstrated first in the Xen* project and then with the Kernel-based Virtual Machine (KVM).

While this discussion is just one relatively narrow view of the larger story, it offers context to questions and decisions about virtualization’s importance and potential for organizations of all types and sizes.

First, a (Brief) Bit of History

For the purposes of this discussion, virtualization is the ability of a computer system to allow multiple tasks to run at the same time, dividing its resources among multiple virtual images of the underlying system. Each task runs in a virtual machine (VM), with its own OS that is referred to as a “guest OS.” The virtual machine monitor (VMM), which may also be known as a hypervisor, manages the presentation of underlying physical resources, giving the appearance to each VM as if it were running directly on the hardware itself.

The challenges and requirements involved in virtualization are illustrated by the timeline of technical developments that began with IBM making the first commercial use of virtualization in the 1960s.

  • 1974: Definition of processor requirements. In “Formal Requirements for Virtualizable Third Generation Architectures,” Gerald Popek and Robert Goldberg identified a standard set of processor capabilities needed to efficiently support full virtualization. IBM’s POWER* and Sun Microsystems’ SPARC* architectures met those requirements, whereas the x86 architecture did not.
  • 1999: VMware hosted hypervisor for x86. VMware used binary translation to help work around x86 deficits (according to the Popek and Goldberg definition) in terms of instruction handling.
  • 2003: Xen publicly downloadable. As an alternative to full virtualization, Xen defined an interface for new drivers to be written for specific OS components. While this approach—paravirtualization—requires some parts of the OS to be rewritten for Xen, removing the need to trap and emulate I/O devices allows for system-wide optimizations that greatly increase performance.

Changing the Game: Full Virtualization on x86 Architecture

As virtualization technologies such as those from VMware and the Xen project continued to develop, Intel was engineering x86-based processors that would be fully capable of efficient, full virtualization according to the Popek and Goldberg model. Those features were introduced as Intel® Virtualization Technology for x86 architecture (Intel® VT-x) in 2006. The new capabilities were documented in the Intel Technology Journal article, “Intel Virtualization Technology: Hardware Support for Efficient Processor Virtualization.” The following advances in the virtualization ecosystem also emerged that year.

  • Enhancement of Xen for Intel VT-x. The effort to include full virtualization in Xen—also known as a Hardware Virtual Machine (HVM)—was documented in the Intel Technology Journal article, “Extending Xen with Intel Virtualization Technology.”
  • Introduction of KVM. Software vendor Qumranet introduced KVM, its Linux* kernel-based VMM, which implemented full virtualization based on Intel VT-x.

Since those developments in 2006, x86 processors have been capable of efficiently implementing full virtualization. Open source solutions include Xen, with both para-virtualized (PV) and fully virtualized (HVM) guests, as well as KVM, with fully virtualized guests. Successive generations of Intel processors have added features to improve virtualization capabilities even further. Intel contributes code and other enablement efforts to help open-source solutions realize the full benefit of the latest hardware features.

Efficient Control Separation between the VMM and Guest OSs

The processor capabilities defined by the Popek and Goldberg model for efficient full virtualization support straightforward VMM designs, which reduce virtualization overhead and increase performance. A key requirement of this model is that instructions with global effects on the machine state must be privileged instructions.

Prior to the introduction of Intel VT-x, a significant number of x86 instructions did not conform to that requirement. In addition, a number of non-privileged instructions could reveal critical system state that a VMM may need to virtualize, with the potential to confuse software operating under the assumption that it was in full control of the machine.
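The Popek and Goldberg criterion can be stated compactly: the set of sensitive instructions (those that read or alter global machine state) must be a subset of the privileged instructions (those that trap when executed outside the most privileged mode). The following Python sketch checks that condition for an illustrative, hand-picked sample of x86 instructions—POPF and SMSW are well-known pre-VT-x offenders, but the sets here are examples, not the complete architectural lists.

```python
# Popek-Goldberg criterion: every sensitive instruction must also be
# privileged. Instruction sets below are an illustrative sample only.
SENSITIVE = {"HLT", "LGDT", "POPF", "SMSW", "MOV_CR0"}
PRIVILEGED = {"HLT", "LGDT", "MOV_CR0"}  # POPF and SMSW do not trap in user mode on pre-VT-x x86

def violates_popek_goldberg(sensitive, privileged):
    """Return the sensitive instructions that fail to trap (sorted)."""
    return sorted(sensitive - privileged)

print(violates_popek_goldberg(SENSITIVE, PRIVILEGED))  # → ['POPF', 'SMSW']
```

Any non-empty result means the architecture cannot be fully virtualized by simple trap-and-emulate, which is exactly the gap Intel VT-x closed.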

Intel VT-x addressed the problem of non-privileged instructions accessing privileged state by creating a new environment (mode) for the operation of guest software under the control of a VMM. Virtual Machine Extensions (VMX) mode allows for running virtualized environments, and it consists of two sub-modes: root operation and non-root operation (that is, guest operation). The VMM uses root operation for its own work and non-root operation for the guest’s work. To get from one operating state to the other, two transitions were also added: VM entry and VM exit.

VM entry and VM exit are available only in VMX mode: VM entry transitions from root operation to non-root operation, and VM exit transitions back from non-root operation to root operation. Root operation is very similar to operation outside VMX mode, with a few extensions. Non-root operation, however, differs markedly from the normal x86 environment, because many instructions and other events that would not normally cause a fault on x86 do so in this state. Rather than being directed to the usual handler, these faults cause a VM exit from the guest to the VMM, enabling the VMM to maintain executive control over the virtualized environment.
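As a rough illustration (not Intel's implementation), this entry/exit control flow can be modeled in a few lines of Python: the guest runs directly until it reaches an instruction that faults in non-root operation, at which point a VM exit hands control to the VMM, which emulates the instruction and re-enters the guest. The instruction names and the EXITING set are illustrative choices, not the architectural list.

```python
# Toy model of the VMX loop: guest runs until a "faulting" instruction
# forces a VM exit; the VMM emulates it and resumes via VM entry.
EXITING = {"CPUID", "INVD", "RDMSR"}      # illustrative exit-causing instructions

def run_guest(program):
    """Execute guest instructions; return (retired, exit_reason)."""
    retired = []
    for insn in program:
        if insn in EXITING:
            return retired, insn          # VM exit: hand control to the VMM
        retired.append(insn)              # executes directly on hardware
    return retired, None                  # guest ran to completion

def vmm_run(program):
    """Drive the guest to completion; return how many VM exits occurred."""
    exits = 0
    pos = 0
    while pos < len(program):
        retired, reason = run_guest(program[pos:])
        pos += len(retired)
        if reason is None:
            break
        exits += 1                        # VMM emulates the exiting instruction
        pos += 1                          # then resumes the guest (VM entry)
    return exits

print(vmm_run(["ADD", "CPUID", "MOV", "INVD", "SUB"]))  # → 2
```

Each round trip through `vmm_run`'s loop corresponds to one VM exit plus one VM entry, which is why minimizing exits (the topic of the next section) matters so much for performance.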

Reduced Overhead: Flexible Control over VM Entry and Exit

While the faulting behavior described above is functionally conservative, it must be controlled to some extent to avoid excessive VM exits, which degrade performance. For this reason, the VMX architects designed flexibility into the environment, providing a measure of control over the generation of these faults. For example, the INVD (invalidate internal caches) instruction always faults, while INVLPG (invalidate TLB entry) faults only when the VMM configures it to do so. Similarly, the VMM has complete control over which events in non-root operation cause VM exits.

The VMM manages VM entry and VM exit operations with the help of a special data structure called the Virtual Machine Control Structure (VMCS). This structure contains areas for guest state and host state, as well as execution controls, where the VMM defines how interrupts, exceptions, accesses to specific I/O space, and accesses to certain Model-Specific Registers (MSRs) should be handled. On VM entry, the guest state is loaded into the machine from the guest state area of the VMCS; upon VM exit, the guest state is stored into this area and the host state is loaded into the machine from the host state area. Note that the host state is not stored into the host area on VM entry.

The stored guest state does not include a complete snapshot of the system state for the guest, but rather includes only a small subset of this state. It does include all of the important hidden state in the processor that cannot be accessed or updated using current x86 capabilities. State that can be handled more efficiently by the VMM is not included in the VMCS.
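The VMCS load/store semantics can be summarized in a small sketch (the structure and the single `RIP` field are illustrative stand-ins, not the real VMCS layout). Note how the asymmetry from the text is preserved: guest state is saved on exit, but host state is never saved on entry.

```python
# Toy VMCS: guest state loads on VM entry and stores on VM exit;
# host state loads on VM exit but is never stored on VM entry.
class VMCS:
    def __init__(self, guest_state, host_state):
        self.guest_area = dict(guest_state)
        self.host_area = dict(host_state)

def vm_entry(vmcs):
    return dict(vmcs.guest_area)          # CPU now holds guest state

def vm_exit(vmcs, cpu_state):
    vmcs.guest_area = dict(cpu_state)     # save guest state into the VMCS
    return dict(vmcs.host_area)           # load host state into the CPU

vmcs = VMCS({"RIP": 0x1000}, {"RIP": 0xFFFF0000})
cpu = vm_entry(vmcs)
cpu["RIP"] = 0x1004                       # guest makes progress
cpu = vm_exit(vmcs, cpu)
print(hex(vmcs.guest_area["RIP"]), hex(cpu["RIP"]))  # → 0x1004 0xffff0000
```

After the exit, the guest's progress is captured in the guest area and the CPU is back in the VMM's (host) context, mirroring the transitions described above.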

More Efficient Mapping between Virtual and Physical Address Space

An example of the performance-enhancing Intel VT features added in later generations of Intel processors is Extended Page Tables (EPT). Without EPT, the hypervisor had to intercept every access by the guest to its page tables in order to maintain a shadow set of tables.

The guest regards the address space presented to it by the hypervisor as if it were physical address space, so the hardware page-table walker must be able to translate from the address space actually in use (guest virtual) to the real physical address space. To support that mapping, the hypervisor constructs a set of shadow page tables, which the hardware uses on the guest’s behalf. All page faults and all guest accesses to the page tables must cause VM exits so the hypervisor can keep the shadow tables consistent.

EPT provides a second level of page tables that are managed by the hypervisor on behalf of the guest, while the guest manages its tables without any interference from the hypervisor. As always, the guest creates its page tables to map from its virtual address space to its physical address space. The hypervisor manages a set of page tables that map from the guest physical to the actual physical address space. During guest operation (VMX non-root operation) when a virtual address is used, both levels of page tables are walked (or the equivalent thereof) to resolve the physical address.

  • First, the guest virtual address is used while walking the guest’s page table.
  • The result from this walk is used as the input to the second-level walk, in which the EPT finds the actual physical address.

With EPT, there is no need to VM exit when the guest is modifying its page tables. EPT removes a large source of VM exits and the associated code path, significantly improving performance.
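The two-level walk in the steps above can be sketched as follows. The single-entry, dictionary-based "page tables" keyed by page number are a simplification of the real multi-level structures, chosen only to show how the two translations compose.

```python
# Illustrative two-level translation: the guest page table maps
# guest-virtual to guest-physical pages, and the EPT maps
# guest-physical to host-physical pages.
PAGE = 4096

def translate(gva, guest_pt, ept):
    """Resolve a guest virtual address to a host physical address."""
    gpa_page = guest_pt[gva // PAGE]      # first walk: guest's page table
    hpa_page = ept[gpa_page]              # second walk: hypervisor's EPT
    return hpa_page * PAGE + gva % PAGE   # re-attach the page offset

guest_pt = {0x10: 0x2A}   # guest virtual page 0x10 → guest physical page 0x2A
ept = {0x2A: 0x7F}        # guest physical page 0x2A → host physical page 0x7F
print(hex(translate(0x10123, guest_pt, ept)))  # → 0x7f123
```

Because the guest owns `guest_pt` outright, it can update its mappings without any VM exit; the hypervisor only ever touches the EPT side.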

The Business Benefits of Robust Virtualization

By enabling businesses to run applications in VMs on top of a hypervisor, virtualization offers a range of benefits, based on capabilities that include the following:

  • Guests can be run side-by-side with other guests
  • Guests can be stopped and started at will
  • Guests can be replicated or a snapshot can be saved
  • Guests can be moved among physical hosts

One key usage model of virtualization is the ability to isolate software stacks from each other on the same machine. Traditionally, IT has isolated workloads by running them on dedicated machines, protecting each from interference and security risks posed by the others. Virtualization enables each workload, along with its own OS, to run in its own VM, isolated from other guests. This allows multiple workloads to run on the same physical machine while remaining isolated from one another.

Another usage model with clear business benefits is the ability to consolidate workloads from multiple servers onto a single physical host. As systems have become more powerful, workloads running on dedicated hardware often under-utilized the available resources. By consolidating such workloads onto a smaller number of physical hosts, virtualization helps reduce requirements in areas such as physical space, power, management, and equipment. Moreover, IT organizations, business units, and even individual users can run multiple OSs such as Linux and Microsoft Windows* on the same machine.

Finally, virtualization is a key building block for cloud computing, which greatly facilitates allocating and de-allocating systems for various tasks, adjusting those allocations on demand. VMs are far better suited to this type of usage than bare-metal machines.

Conclusion

Ongoing advances in virtualization are enabled by Intel VT-x and the software ecosystem, including contributions by the Intel Open Source Technology Center to projects such as Xen and KVM. As performance, efficiency, and data protection for virtualized workloads increases, the proportion of business workloads suited to virtualization continues to grow. As virtualized usage models become increasingly ubiquitous, businesses benefit in terms of cost-efficiency, agility, and capacity for innovation.