
Open Cloud AI Reference Design using Intel Products

Author: Yuntong Jin

Intel's Artificial Intelligence Solutions and Development Strategy

In this early stage of the Artificial Intelligence (AI) computing era, Intel is collaborating with partners across developer, academic, and software ecosystems to accelerate the transition to the AI-driven computing of the future.

Intel's AI strategy is to provide customers with end-to-end AI solutions encompassing hardware, software, and platforms. We have developed a wide range of hardware and software product portfolios for various target customer groups to build AI ecosystems. As a global leader in computing hardware, Intel supports many types of AI workloads, from hybrid to specialized use cases, and from cloud to devices.

The following figure shows an overview of the Intel AI portfolio. The paragraphs below the figure provide more detail about the portfolio categories.

Intel AI Portfolio

Technology

  • In data centers and workstations, Intel® Xeon® processors are the optimal choice for handling AI computing workloads such as data analysis, machine learning, and most deep learning. The Intel® Nervana™ Neural Network Processor (NNP) can help speed up highly parallel workloads such as deep learning training and inference. Intel® FPGA technology can serve as a highly customized deep learning inference solution to meet the requirements of low-latency, low-power scenarios.
  • On the edge and device side, Intel® FPGAs are widely used in media and vision systems that are constrained by streaming latency. The Intel® Movidius™ VPU can provide excellent inference throughput with extremely low power consumption to support vision and inference on IoT sensors, PCs, and other end products. Intel® GNA IP technology enables ultra-low-power speech inference at the device level. Intel® Mobileye™ technology is an automated driving inference platform that provides a complete set of product solutions for automated driving vehicles.

Libraries

Intel has developed a series of AI software libraries and tools to ensure the same AI experience on all Intel hardware engines while maximizing consistency, interoperability, and security:

  • Intel® Data Analytics Acceleration Library (Intel® DAAL) is optimized for classic machine learning algorithms.
  • Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) is a high performance computing library optimized for deep learning algorithms.
  • OpenVINO™ toolkit is a library for inference operations on multiple hardware platforms, such as CPU, VPU, GPU, and FPGA; a minimal usage sketch follows this list.
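
To make this concrete, the following is a minimal Python inference sketch for the OpenVINO™ toolkit. It is a sketch only: the model files (model.xml/model.bin) are placeholders for a network already converted to OpenVINO IR format, and exact class and method names vary across OpenVINO releases (this assumes the IECore-style inference engine API).

    import numpy as np
    from openvino.inference_engine import IECore  # OpenVINO Python inference engine

    ie = IECore()
    # Load a network already converted to OpenVINO IR format
    # (model.xml/model.bin are placeholder file names).
    net = ie.read_network(model="model.xml", weights="model.bin")
    input_name = next(iter(net.input_info))

    # Compile the network for a target device: "CPU", "GPU",
    # "MYRIAD" (Movidius VPU), or an FPGA plugin name.
    exec_net = ie.load_network(network=net, device_name="CPU")

    # Run inference on a dummy input shaped like the network input.
    shape = net.input_info[input_name].input_data.shape
    result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})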

Frameworks

Intel has directly optimized mainstream deep learning frameworks on the basis of these software libraries and tools. Users can choose TensorFlow*, MXNet*, Caffe*, PaddlePaddle*, and other frameworks to build and process their AI workloads based on their own needs, without worrying about the performance of various neural network topologies on different AI hardware engines. At the same time, to reduce the coupling between the deep learning frameworks and the computing hardware engines, the Intel® nGraph™ compiler defines a deep learning intermediate representation (IR) layer and provides multi-pass graph optimization and extensive support for various hardware engines.
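
As an illustration, with an Intel-optimized TensorFlow* build the model code itself stays unchanged; Intel® MKL-DNN kernels are used transparently, and the main user-visible tuning is thread placement. The following is a minimal TensorFlow 1.x sketch with illustrative thread counts:

    import tensorflow as tf  # assumes an Intel-optimized TensorFlow 1.x build

    # MKL-DNN kernels are picked up transparently; users typically only
    # tune thread placement. The counts below are illustrative.
    config = tf.ConfigProto(
        inter_op_parallelism_threads=2,   # independent ops run in parallel
        intra_op_parallelism_threads=16)  # threads per op, e.g. one per core

    x = tf.random_normal([128, 1024])
    w = tf.Variable(tf.random_normal([1024, 256]))
    y = tf.nn.relu(tf.matmul(x, w))  # executed by MKL-DNN-optimized kernels

    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        out = sess.run(y)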

Partnerships

In addition to hardware, software, and tools, Intel is expanding the scope of AI applications through partnership programs such as Intel® AI Builders and Intel® AI Academy. Intel and its partner ecosystems, for example, provide prebuilt solutions for many segments and vertical markets, including health care, finance, retail, government, energy, transportation, and industry.

Finally, Intel will continue to drive AI computing to the forefront of the next decade, consistently explore the boundaries of AI applications, and unleash the full potential of AI through deepening investments, including funding cutting-edge academic research, strengthening internal research and development, and increasing investments in innovative pioneers.

Reference Design Architecture

This section describes an artificial intelligence (AI) cloud architecture based on Intel® technologies, which adopts an end-to-end multi-tier model and integrates Intel's hardware and software, alongside open source AI and cloud computing technologies. The structure of the reference design is shown in the following figure.

Intel AI Cloud Structure

AI Edge Devices

For edge devices, the architecture uses Intel's advanced AI accelerators: the Intel® Movidius™ Neural Compute Stick (NCS) and FPGAs. The Intel® Movidius™ NCS is a miniature deep learning hardware accelerator that can provide real-time AI analysis on edge nodes and accelerate deep learning inference applications. The NCS currently supports several neural networks, such as GoogLeNet, and developers can also deploy self-developed network models. FPGAs provide the lowest-level acceleration, implemented at the chip level. An FPGA accelerator can also be used on edge nodes to perform deep learning inference efficiently while integrating various interfaces and customized computing and transmission structures; its performance approaches, and sometimes exceeds, that of dedicated deep learning graphics cards.
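
As a concrete illustration, a typical NCS inference loop using the Movidius™ NCSDK v1 Python API (mvnc) looks roughly like the sketch below. The compiled graph file name and the dummy input are placeholders, and API details differ in NCSDK v2:

    import numpy as np
    from mvnc import mvncapi as mvnc  # NCSDK v1 Python bindings

    # Find and open the first attached Neural Compute Stick.
    devices = mvnc.EnumerateDevices()
    device = mvnc.Device(devices[0])
    device.OpenDevice()

    # Load a graph previously compiled with the NCSDK compiler
    # ("graph" is a placeholder file name).
    with open("graph", mode="rb") as f:
        graph_blob = f.read()
    graph = device.AllocateGraph(graph_blob)

    # Run inference: the input must be a float16 tensor in the shape
    # the network expects (a zero "image" stands in for real data here).
    img = np.zeros((224, 224, 3), dtype=np.float16)
    graph.LoadTensor(img, "user object")
    output, user_obj = graph.GetResult()

    graph.DeallocateGraph()
    device.CloseDevice()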

AI Framework

To give developers more choices, the reference design supports multiple deep learning frameworks. The open source framework TensorFlow* and Intel® Optimization for Caffe* are used as the AI learning and training platforms.

TensorFlow* is an AI learning system developed by Google* that can be used for machine learning on images, perception, and language. TensorFlow can transfer complex data structures into neural network systems and supports building and editing data models. TensorFlow also supports distributed computing across heterogeneous devices; a minimal cluster sketch follows this paragraph. Caffe* is an open source deep learning framework for image tasks developed by the Berkeley Vision and Learning Center (BVLC). Intel® Optimization for Caffe* is based on the original Caffe with extensive optimization for Intel architecture; it uses CPU resources efficiently and supports multi-node distributed computing.
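
The following is a minimal TensorFlow 1.x sketch of such a distributed cluster. The host names and ports are hypothetical, and in practice each process in the cluster runs this code with its own job_name and task_index:

    import tensorflow as tf  # TensorFlow 1.x distributed API

    # Hypothetical two-worker, one-parameter-server cluster.
    cluster = tf.train.ClusterSpec({
        "ps":     ["node0.example.com:2222"],
        "worker": ["node1.example.com:2222",
                   "node2.example.com:2222"],
    })

    # Each process starts a server for its own role in the cluster.
    server = tf.train.Server(cluster, job_name="worker", task_index=0)

    # Variables live on the parameter server; compute ops on this worker.
    with tf.device("/job:ps/task:0"):
        w = tf.Variable(tf.zeros([1024, 256]))
    with tf.device("/job:worker/task:0"):
        y = tf.matmul(tf.random_normal([8, 1024]), w)

    with tf.Session(server.target) as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y).shape)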

Cloud Datacenter

In the cloud datacenter, the reference design adopts an IaaS architecture solution based on open source technology as follows:

  • Resource management and scheduling is coordinated through OpenStack and Kubernetes*.
  • Computing uses physical machines, KVM virtual machines, and Docker* containers.
  • Storage is based on Ceph and GlusterFS* distributed storage clusters.
  • Networking is built on Intel® Omni-Path 100G high-speed interconnects and adopts an SDN (software-defined networking) scheme.
  • Network virtualization and high-speed transmission use Neutron and Calico.

Unified management of computing, storage, and network nodes provides efficient and highly flexible IaaS cloud services for the required resources.
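
To illustrate how such an IaaS layer is consumed, the sketch below provisions a KVM instance through the OpenStack Python SDK (openstacksdk). The cloud name, image, flavor, and network names are placeholders for values defined in the operator's environment:

    import openstack  # openstacksdk

    # Connect using credentials from a clouds.yaml entry
    # ("mycloud" is a placeholder cloud name).
    conn = openstack.connect(cloud="mycloud")

    # Look up a placeholder image, flavor, and network.
    image = conn.compute.find_image("centos-7")
    flavor = conn.compute.find_flavor("m1.large")
    network = conn.network.find_network("private")

    # Boot a KVM instance for an AI training workload.
    server = conn.compute.create_server(
        name="ai-training-node",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}])
    server = conn.compute.wait_for_server(server)
    print(server.status)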

Intel Hardware Platforms

The reference design employs Intel's cutting-edge hardware technologies (shown in blue in the diagram), including the Intel® Xeon® processor series, FPGAs, Intel® Omni-Path Technology, Ethernet SFP+ adapters, Intel® SSDs, and more.

Hardware platforms are centrally managed using open source IaaS technology, which is optimized for Intel platforms, thus achieving high performance, high efficiency, and high utilization.

Related Information

For further reading, refer to this white paper, which describes the architecture, design details, and configuration for an AI reference design based on Intel® products. The reference design was developed for the China National AI Innovation Competition.

Acknowledgements

Jin Yuntong, Hu Xiaopao, Yih Leong Sun, and Du Yongfeng from Intel organized the design of this reference architecture.

Ding Jianfeng, manager of Intel's Open Source Technology Center team; Michael Kadera; Senalka McDonald; Wang Qing; Zhang Zhibin of the Intel China Strategic Cooperation and Innovation Business Unit (Ecosystem Development Office); Intel DCG; and Li Hua, CTO of AWCloud Software Co., Ltd., strongly supported this reference architecture.

Chen Ke, Li Xiaoyan, Feng Shaohe, Shang Dehao, Yang Yanguo, Wen Wei, Xia Lei, Xu Maorong, Xu Yihua, Wang Ruxin, Zhang Xin, and Hu Chao from Intel and AWCloud Software Co., Ltd. participated in the design of this reference architecture.

AWCloud Software Co., Ltd. contributed to the reference deployment case of the China AI Open Platform.

Liang Bing from Intel wrote a case study of the China AI Open Platform. 
