
Enabling OVS-DPDK in OpenStack

01 Staff

Network Functions Virtualization (NFV) is an initiative to virtualize network services currently provided by proprietary, dedicated hardware. Today, NFV agendas are predominantly pursued by communication service providers (CoSPs) and, in many ways, can be seen as a natural evolution of existing telco deployments. Through NFV, CoSPs aim to bring the benefits of the cloud already realized in the enterprise sector, such as lower cost, improved agility and flexibility, and reduced time to market, to their own industry. Open vSwitch working in conjunction with DPDK is central to creating a viable NFV stack.


Figure 1: An NFV platform

This stack is based on the same open source software, defined and developed by organizations such as ETSI NFV and the Open Platform for Network Functions Virtualisation (OPNFV), that you will find deployed in data centres around the globe. It uses OpenStack in the role of Virtual Infrastructure Manager (VIM), while QEMU/KVM, Open vSwitch (OVS), and Linux all contribute essential functionality. However, the OVS used in this stack is not your everyday OVS: it is a DPDK-accelerated OVS, designed to provide the kind of performance and low latency expected of a virtual switch (vSwitch) in an NFV deployment.

Figure 2: The focus is on accelerating packet processing

What is OVS with DPDK?

OVS is the most popular vSwitch in the OpenStack world. OVS is an open source implementation of a distributed, virtual, multilayer switch. The Data Plane Development Kit (DPDK) is a library for enabling fast, userspace packet processing. DPDK applications bypass the Linux network stack and can provide significant performance boosts. DPDK makes networking applications faster and more deterministic, and OVS is just such an application: integrating DPDK enables OVS to serve as the vSwitch for production NFV deployments. Today, OVS with DPDK (OVS-DPDK) provides a number of technical and performance capabilities to meet that objective.
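In OVS, the datapath is selected per bridge. As a rough sketch of what a DPDK-accelerated setup looks like at the command line (assuming an OVS build with DPDK support; the bridge and port names here are illustrative):

```shell
# Create a bridge backed by the userspace (netdev) datapath rather than
# the default kernel datapath. Requires OVS built with DPDK enabled.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Attach a DPDK-backed physical NIC and a vhost-user port for a guest.
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
```

With this in place, packets flow between the NIC and the guest entirely in userspace, which is where the performance gains come from.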


Of course, having a production-quality, open source implementation of a distributed virtual multilayer switch with DPDK acceleration is great, but it is of limited use if that vSwitch cannot be consumed by an OpenStack distribution. OpenStack's networking-as-a-service project, neutron, must be able to use this functionality for it to be really useful. This is where networking-ovs-dpdk steps in.

Networking-ovs-dpdk is a collection of deployment scripts that enables OVS-DPDK-based deployments. The project started by providing a custom, out-of-tree implementation of a DPDK-enabled version of the OVS agent and ML2 driver, along with DevStack-based deployment scripts. These components allow users to attach DPDK-backed vHost User ports to their instances, which can provide up to a 10x performance improvement over standard OVS ports.

OpenStack Liberty marked a new phase of development to enable OVS-DPDK. During the Liberty cycle, a feature was upstreamed that allowed the standard OVS agent to manage datapath types. This feature had multiple benefits: not only did it allow the user to choose between the standard or the DPDK-accelerated datapath in OVS, but it also resolved packet loss issues around patch port connections when using multiple datapath types. (Hint: you cannot use patch ports between bridges with disparate datapath types.) This feature negated the need for a custom, DPDK-enabled version of the OVS agent, allowing the custom agent to be dropped from the networking-ovs-dpdk project. As a result, the amount of out-of-tree code needed to use DPDK acceleration in an OpenStack deployment was minimized. The addition of Puppet deployment modules reduced this further. In addition, an accelerated security group driver was provided so users could take advantage of this newfound performance without needing to disable security groups.
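The patch-port caveat above means both ends of a patch-port pair must live on bridges with the same datapath type. For a DPDK deployment that might look like the following (bridge and port names are illustrative):

```shell
# Both bridges use the netdev (userspace) datapath, so a patch-port pair
# between them is valid; mixing netdev and system bridges here would not be.
ovs-vsctl add-br br-int -- set bridge br-int datapath_type=netdev
ovs-vsctl add-br br-phy -- set bridge br-phy datapath_type=netdev

# Create the two ends of the patch-port pair, each peering with the other.
ovs-vsctl add-port br-int patch-phy -- set Interface patch-phy type=patch options:peer=patch-int
ovs-vsctl add-port br-phy patch-int -- set Interface patch-int type=patch options:peer=patch-phy
```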

Figure 4: The mechanism driver, the OVS agent, and the OVSDB server, as of the Mitaka release

The project finally evolved to its current state during the latest OpenStack release cycle, Mitaka. Shortly before Mitaka development began in earnest, OVSDB, the central configuration database and interface for Open vSwitch, gained some very useful functionality. In OVS 2.3, OVSDB gained the ability to report the capabilities of the vSwitch, such as the interface and datapath types supported by the OVS instance. In Mitaka, the OVS agent was extended to capture this information and report it to neutron. This information can now be harnessed by the upstream OVS ML2 driver, which gained the ability to choose the correct binding type for an interface and bind accordingly. This key feature eliminated the need for the out-of-tree ML2 driver and allowed Linux distributions to start providing OVS-DPDK packages. More importantly, it means that the use of DPDK acceleration with OVS, once configured, is transparent to the guest. This allows users to get much faster virtual switching in OpenStack for virtually zero effort.
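This capability reporting can be inspected directly from OVSDB on any host. On a DPDK-enabled build you would expect to see `netdev` among the datapath types and `dpdkvhostuser` among the interface types:

```shell
# List the datapath types this OVS instance supports
# (e.g. system, netdev on a DPDK-enabled build).
ovs-vsctl get Open_vSwitch . datapath_types

# List the interface types this OVS instance supports
# (e.g. patch, dpdk, dpdkvhostuser, ...).
ovs-vsctl get Open_vSwitch . iface_types
```

These are the same values the OVS agent reads and reports to neutron so the ML2 driver can pick the right binding type.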


Figure 5: OVSDB to Neutron

Other projects

The OVS agent is not the only way to control and configure hosts running OVS: projects like OpenDaylight (ODL) or Open Virtual Network for Open vSwitch (OVN), among others, can also be used. In the case of ODL, dynamic use of DPDK vHost User ports may also be available; how this works is covered in more detail in other blogs and webcasts. In short, ODL support relies on the same OVSDB feature to determine available datapath and interface types. OVN, on the other hand, supports a configurable interface type. However, this is an all-or-nothing affair: either DPDK vHost User interfaces are used for all instances on all hosts, or the standard OVS interface type is.


Figure 6: The mechanism driver, the ODL agent, and the OVSDB server, as part of networking-odl

Getting started

A few steps are required to enable OVS-DPDK on a host:

  1. Install OVS with DPDK.

    This means meeting both hardware and software requirements: for example, your host must support huge pages and have NICs supported by DPDK. Assuming these are in place, OVS-DPDK can be installed by following the INSTALL.dpdk file provided with OVS. OVS-DPDK is also available as a tech preview in RHEL OSP 8, in the Cloud Archive for Ubuntu 14.04, and in the main archives for Ubuntu 16.04.

  2. Configure neutron.

    Files like ml2_conf.ini must be modified to configure the datapath types to use. Refer to the upstream documentation for more information.

  3. Start the neutron OVS agent.
  4. Configure VMs to use large pages.

    At the moment, large pages (that is, huge-page-backed guest memory) are required for all instances with DPDK vHost User ports. This requires huge pages to be configured on the host, but is otherwise a trivial operation.

    $ openstack flavor set FLAVOR_NAME hw:mem_page_size=large
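Put together, the steps above might look like the following on a host. This is a sketch, not a definitive recipe: the huge page count, the config file path, the `[ovs]` section name, and the agent service name vary by distribution and release, so check the upstream documentation for your deployment.

```shell
# 1. Reserve huge pages for DPDK (illustrative: 1024 x 2 MB pages)
#    and make them available via hugetlbfs.
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /dev/hugepages
mount -t hugetlbfs nodev /dev/hugepages

# 2. Point the OVS agent at the userspace (DPDK) datapath. The exact
#    file and section names follow the upstream docs for your release.
cat >> /etc/neutron/plugins/ml2/ml2_conf.ini <<'EOF'
[ovs]
datapath_type = netdev
EOF

# 3. (Re)start the neutron OVS agent so it picks up the new datapath type.
systemctl restart neutron-openvswitch-agent

# 4. Back instances with large pages via the flavor.
openstack flavor set FLAVOR_NAME hw:mem_page_size=large
```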

Once the above is configured, DPDK vHost User ports can be used in neutron like any other port.

$ neutron port-create $NETWORK_ID
$ nova interface-attach --port-id $PORT_ID $INSTANCE_ID


This makes it incredibly easy to get started with DPDK in OpenStack deployments. We will continue to provide DevStack and Puppet scripts as part of the networking-ovs-dpdk project to make things even easier.

