
KubeCon + CloudNativeCon Europe Virtual
Aug 17, 2020 to Aug 20, 2020
Tags: Kata Containers, open source, cloud native, deep learning, persistent memory, Cloud Computing, Data Center, device plugins


As CNCF’s flagship conference, KubeCon + CloudNativeCon Europe 2020 - Virtual will bring together more than 20,000 technologists from thriving open source communities to further education around cloud native computing. Maintainers and end-users of CNCF’s hosted projects – including Kubernetes, Prometheus, Envoy, CoreDNS, containerd, Fluentd, Jaeger, Vitess, TUF, OpenTracing, gRPC, CNI, Notary, NATS, Linkerd, Helm, Rook, Harbor, etcd, Open Policy Agent, CRI-O, TiKV, CloudEvents, Falco, Argo, and Dragonfly – along with other cloud-native technologies will gather virtually from August 17-20, sharing insights around and encouraging participation in this fast-growing ecosystem.

Why is Intel here?

With more than 15,000 software engineers, Intel is committed to furthering open cloud innovation. We are in a unique position to bring together key industry players to address the complexity of building for diverse architectures and workloads and enable faster deployments of new innovations at cloud scale. Software is a key technology pillar for Intel to fully realize the advancements in architecture, process, memory, interconnect and security. 

We are continuing to shape the development of Kubernetes by addressing the limitations for low-latency workloads, defining the architecture and requirements for accelerator device plugins, and enabling new features in networking applications. 

Our goal at this event is to raise awareness of Intel’s contributions to open source projects, and to demonstrate the benefits of Intel® architecture across device, edge, and cloud. 

  • Raise awareness of and promote the innovative open source projects that Intel contributes to and optimizes for Intel architecture.

  • Show users new ways to use Intel accelerators in cloud services for better performance or enhanced security.

  • Help Kubernetes and other open source projects fully leverage platform capabilities, driving innovation to support rapidly changing demands.


Sessions

  • Mikko Ylinen and Ismo Puustinen: August 19, 13:45 – 14:20 CEST (4:45 – 5:20 AM PDT)

  • Conor Nolan, Intel, and Victor Pickard, Red Hat: August 19, 16:55 – 17:30 CEST (7:55 – 8:30 AM PDT)

  • Srini Addepalli, Intel, with VMware, Linux Foundation, Ericsson, and Vodafone: August 19, 17:40 – 18:15 CEST (8:40 – 9:15 AM PDT)

  • Samuel Ortiz: August 20, 13:00 – 13:35 CEST (4:00 – 4:35 AM PDT)

  • Killian Muldoon, Intel, and Tom Golway, HP: August 20, 14:30 – 15:05 CEST (5:30 – 6:05 AM PDT)

  • Marlow Weston, Intel, with Microsoft, Illumio, and InGuardians: August 20, 17:20 – 17:55 CEST (8:20 – 8:55 AM PDT)


Demos

Blazing Storage Performance with NVMeoF and Intel® Optane™ SSD
Technologies: Intel® Omni-Path Architecture, Intel® Optane™ SSD, VROC

Non-Volatile Memory Express over Fabrics (NVMeoF) is becoming one of the most disruptive technologies for data center storage, designed to deliver high-throughput, low-latency NVMe SSD technology over a network fabric. This demo shows how to use Intel® Omni-Path Architecture and the Intel® SSD Data Center Family with open source NVMeoF projects in a cloud native environment using Kubernetes*, and demonstrates that near-native performance can be achieved.

Integrated Cloud Native NFV & App Stack: End-to-End Cloud Native Platform for Edge Computing
Technologies: Intel® Xeon® Scalable processors, Intel® QuickAssist Technology, Intel GPU, OpenNESS, Edge Computing, Cloud Native

The edge transformation journey to cloud native presents many challenges. We introduce the Integrated Cloud Native NFV and App Stack project (ICN), explaining what ICN is and its purpose. We will show an SDEWAN demo from the latest ICN release, which demonstrates how ICN resolves the challenges of getting edge applications to communicate with each other.

DARS + DLRS: A Data-Driven Approach for Intracranial Hemorrhage Detection
Technologies: DLRS, DARS, Intel® Advanced Vector Extensions 512

Intracranial hemorrhage is diagnosed by examining CT scan images for key markers of hemorrhage, a job well suited to an AI solution. We created an AI pipeline using the System Stacks for Linux* OS, a purpose-built collection of containers that provide integrated, optimized, and tuned AI frameworks. The demo is deployed as a secure cloud service accessed through a web interface, which enables upload of CT scan images from multiple endpoints. Training and inference are distributed and orchestrated by Kubernetes*.

Envoy HTTP Compression with Intel® QuickAssist Technology in a Kubernetes* Cluster
Technologies: Intel® QuickAssist Technology, Kata Containers*

This demo is an early preview of work to offload HTTP compression to an Intel® QuickAssist Technology (Intel® QAT) card. The entire workload is secured with Kata Containers* and uses an Nginx* web server with an Envoy* sidecar proxy. The virtual functions of the Intel® QAT card are enabled, and SR-IOV is used to pass the VFs into the Kata Container more securely.
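
As a rough sketch of how such a workload could be declared, the manifest below runs a pod under a Kata runtime class and requests one QAT virtual function through a device plugin resource. The runtime class name (kata) and resource name (qat.intel.com/generic) follow common conventions of the Kata Containers and Intel QAT device plugin projects, but are assumptions here and may differ from the demo's actual setup.

```yaml
# Hypothetical sketch: an Envoy proxy pod isolated with Kata Containers
# that claims one QAT virtual function via the device plugin.
# runtimeClassName and the resource name are assumed conventions.
apiVersion: v1
kind: Pod
metadata:
  name: envoy-qat-demo
spec:
  runtimeClassName: kata            # run the workload in a lightweight VM
  containers:
    - name: envoy
      image: envoyproxy/envoy:v1.15.0
      resources:
        limits:
          qat.intel.com/generic: 1  # one QAT VF passed through via SR-IOV
```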

Scheduling for Workload Performance on Intel® Architecture
Technologies: NFD, Intel® Advanced Vector Extensions 512, Intel® QuickAssist Technology, Intel® FPGA, Intel processors

Node Feature Discovery (NFD) is a Kubernetes* add-on that detects and advertises the hardware and software capabilities of each node, which can be used to facilitate intelligent workload scheduling. This demo shows users how easy it is to use NFD to discover and schedule on advanced Intel processor features like Intel® Advanced Vector Extensions 512 or VNNI. Beyond processor features, NFD can help schedule workloads on nodes with specific PCIe cards, SR-IOV networking, SSD storage, NUMA topology, and much more.
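
As an illustration of the pattern described above, a pod can select nodes by the labels NFD publishes. The label used below (feature.node.kubernetes.io/cpu-cpuid.AVX512F) follows NFD's documented cpuid label convention, though the exact label set depends on the NFD version deployed.

```yaml
# Sketch: schedule a pod only onto nodes where Node Feature Discovery
# has advertised AVX-512 Foundation support on the CPU.
apiVersion: v1
kind: Pod
metadata:
  name: avx512-workload
spec:
  nodeSelector:
    feature.node.kubernetes.io/cpu-cpuid.AVX512F: "true"
  containers:
    - name: app
      image: registry.example.com/avx512-app:latest  # hypothetical image
```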

Accelerating Cloud Workloads in Kubernetes* with the Intel® FPGA Device Plugin
Technologies: Intel® FPGA, Accelerators

See how the Intel® FPGA device plugin and the Intel® FPGA Programmable Acceleration Card D5005, with an OpenCL™ matrix multiplication bitstream design, can accelerate an English letter recognition workload in a multi-node Kubernetes* cluster by 185x.
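
A hypothetical sketch of how a workload might request the accelerator: the Intel FPGA device plugin advertises accelerator functions as extended resources that pods claim through resource limits. The resource name below is illustrative only; the real name encodes the specific region or accelerator function programmed on the card.

```yaml
# Sketch: claim one FPGA accelerator function from the Intel FPGA
# device plugin. The resource name is a placeholder, not the demo's
# actual resource; it varies per card and plugin mode.
apiVersion: v1
kind: Pod
metadata:
  name: fpga-matmul-demo
spec:
  containers:
    - name: matmul
      image: registry.example.com/opencl-matmul:latest  # hypothetical image
      resources:
        limits:
          fpga.intel.com/af-example: 1  # illustrative resource name
```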

Accelerate Redis* in Kubernetes* with PMEM-CSI and Intel® Optane™ Persistent Memory
Technologies: Intel® Optane™ Persistent Memory

Our PMEM-CSI (container storage interface) driver allows you to use Intel® Optane™ Persistent Memory for your Kubernetes* workloads, providing both non-volatile storage and fast data movement. In this demo, we explore how PMEM-CSI can accelerate Redis* in a Kubernetes* cluster.
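
To give a flavor of how Redis might consume persistent memory through the driver, the claim below requests a volume from a PMEM-CSI storage class. The class name (pmem-csi-sc-ext4) follows the example classes shipped with the PMEM-CSI project, but is an assumption about this demo's deployment.

```yaml
# Sketch: a PersistentVolumeClaim backed by Intel Optane Persistent
# Memory via the PMEM-CSI driver. The storage class name is assumed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pmem-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: pmem-csi-sc-ext4
```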


Power-Driven Scheduling and Scaling with CPU Telemetry in Kubernetes*
Technologies: Networking, 5G, OpenNESS

Self-healing and self-optimizing network infrastructures require a combination of fine-grained visibility and control of platform features to help ensure an optimum quality of service (QoS). One such automation use case is platform power monitoring and overload detection. This demo is built on the Container Bare Metal Reference Architecture, a standardized Kubernetes* deployment that exposes platform features for increased and deterministic network performance. Platform telemetry insights are used to detect a potential platform overload scenario when workloads are operating under compute-intensive conditions. When overloaded nodes are identified, automatic scaling combined with intelligent scheduling is triggered, provisioning additional workloads and ensuring their placement on nodes with capacity for additional work. The audience will take away an understanding of the role of telemetry insights and orchestration in network platform automation, and their necessity for infrastructure and service management.


Take the 2-minute survey using the link in our virtual booth! Your input is very valuable to us!


*Other names and brands may be claimed as the property of others.