
KubeCon + CloudNativeCon North America

San Diego, California
November 18–21, 2019

Intel is very excited to participate as a Diamond Sponsor! 

The Cloud Native Computing Foundation’s flagship conference gathers adopters and technologists from leading open source and cloud native communities in San Diego, California from November 18-21, 2019. Join Kubernetes, Prometheus, Envoy, CoreDNS, containerd, Fluentd, OpenTracing, gRPC, CNI, Jaeger, Notary, TUF, Vitess, NATS, Linkerd, Helm, Rook, Harbor, etcd, Open Policy Agent, CRI-O, and TiKV as the community gathers for four days to further the education and advancement of cloud native computing. 

Kubernetes and other cloud native technologies enable higher-velocity software development at lower cost than traditional infrastructure. Cloud native – orchestrating containers as part of a microservices architecture – is a departure from traditional application design. The Cloud Native Computing Foundation is helping to build a map through this new terrain, and KubeCon + CloudNativeCon is where the community comes together to share its expertise on this formerly uncharted but increasingly popular territory.

Who should attend? Application Developers, IT Operations, Technical Management, Executive Leadership, Product Managers and Product Marketing, End Users, Service Providers, CNCF Contributors, and anyone looking to learn more about cloud native!
Join Arjan van de Ven, Intel Fellow and Linux and Data-Centric Software Architect, for his keynote "Modernizing Virtualization Technology for Cloud Native Computing"!

Intel @ KubeCon + CloudNativeCon

  • Keynote: Tuesday, November 19 at 5:41PM
  • Join us at Booth #D2 in the Sponsor Showcase.
  • Check out some exciting demos over all 3 days at the Intel booth! Details below...


Intel Sessions

  • Hardware-based KMS Plug-in to Protect Secrets in Kubernetes – Raghu Yeluri & Haidong Xia – Tuesday, Nov. 19, 11:50AM – 6E Upper Level
  • PodOverhead: Accounting for Greater Cluster Stability – Eric Ernst – Tuesday, Nov. 19, 2:25PM – 1AB Mezzanine Level
  • Birds of a Feather: SODA: The Path to Data Autonomy – Anjaneya "Reddy" Chagam – Tuesday, Nov. 19, 4:25PM – 6D Upper Level
  • Panel: Is Service Mesh Ready for Edge-Native Applications? – Srini Addepalli – Tuesday, Nov. 19, 4:25PM – 14AB Mezzanine Level
  • Keynote: Modernizing Virtualization Technology for Cloud Native Computing – Arjan van de Ven – Tuesday, Nov. 19, 5:41PM – Exhibit Hall AB
  • Kubeflow: Multi-Tenant, Self-Serve, Accelerated Platform for Practitioners – Kam Kasravi – Wednesday, Nov. 20, 3:20PM – 17AB Mezzanine Level
  • Mitigating Noisy Neighbors: Advanced Container Resource Management – Alexander Kanevskiy – Wednesday, Nov. 20, 5:20PM – 28ABCDE Upper Level
  • Running High-performance User-space Packet Processing Apps in Kubernetes – Abdul Halim – Thursday, Nov. 21, 2:25PM – 5AB Upper Level



Intel Demos

  • Kata Containers* Advanced Use Cases + Initial Showing of Cloud Hypervisor: Watch a Kubernetes* cluster on Clear Linux* OS schedule pods that run offloaded, accelerated crypto functions and Cloud Hypervisor in a more secure way.
  • Accelerating Data Analytics on Kubernetes* Infrastructure with Intel® Optane™ DC Persistent Memory: This demo shows how a Kubernetes/OpenShift* infrastructure with Intel® Optane™ DC Persistent Memory in App Direct Mode can improve data analytics workload performance and TCO.
  • StarlingX* for Edge Computing Demo Suite: This collection of interactive demos shows how StarlingX* can orchestrate AI and NFV workloads across the industrial, retail, and telco industries.
  • Intelligent Power Management for NFV Enabled by AI: This reference design showcases an AI-based method for hardware resource scaling in NFV, achieving up to 40% savings in power consumption with Intel SpeedStep technology.
  • Horizontally Scalable Video Inference on the Clear Linux* OS Stack + Intel® Xeon® Scalable Processor Cloud Edge Server: This elastic, cloud native Kubernetes* design on the Clear Linux* OS stack utilizes Intel® Xeon® Scalable Processor edge servers and scales automatically based on the performance requirements of AI workloads.
  • The Future is Serverless AI: See a doodle come to life as a fully detailed picture, delivered by the Deep Learning Reference Stack in a serverless, function-as-a-service way in both the cloud and at the edge.
  • Topology Manager & Data to Insights to TAS + Closed Loop Automation: This demo showcases Topology Manager, the brand-new component available in the Kubernetes* 1.16 release, and its benchmark results for performance-sensitive workloads. Also learn how Closed Loop Automation and Telemetry Aware Scheduling enable optimal workload placement and save admins the overhead of manually profiling, monitoring, and moving workloads.
  • Kubernetes* Cluster Ease of Deployment: An interactive user interface with one-click installation provisions Kubernetes* on Intel® architecture and enables Enhanced Platform Awareness to improve performance and determinism for NFV.
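For context on the Topology Manager demo above: in Kubernetes 1.16, Topology Manager shipped as an alpha kubelet feature, switched on with a feature gate and a policy setting. A minimal configuration sketch (field names from the upstream KubeletConfiguration API; the single-numa-node policy shown here is one of several available policies, chosen for illustration):

```yaml
# KubeletConfiguration fragment enabling the alpha Topology Manager
# in Kubernetes 1.16 (requires the TopologyManager feature gate).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  TopologyManager: true
# Policy options: none | best-effort | restricted | single-numa-node
topologyManagerPolicy: single-numa-node
```

With single-numa-node, the kubelet only admits performance-sensitive pods whose CPU and device resources can all be aligned to a single NUMA node.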


Intel Office Hours - Coming Soon!