
OpenVINO™ toolkit helps developers and data scientists accelerate the development of high-performance computer vision and AI applications. It enables deep learning inference and easy heterogeneous execution across many types of Intel® platforms (CPU, Intel® Processor Graphics).

It includes:

  • Deep Learning Deployment Toolkit – This toolkit allows developers to deploy pretrained deep learning models through a high-level C++ Inference Engine API integrated with application logic (see the sketch after this list). This open source version comprises two components, the Model Optimizer and the Inference Engine, as well as CPU, GPU, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo along with 100+ open source and public models trained using popular frameworks such as Caffe, TensorFlow, MXNet, and ONNX.

     Join now on GitHub

     Model Optimizer Developer Guide  |  Inference Engine Developer Guide


  • Open Model Zoo – This repository includes 20+ pre-trained and optimized deep learning models and many samples to expedite development and deliver high-performance deep learning inference on Intel® processors. Use these models for development and production deployment without having to search for or train your own.

      Download now from GitHub

      To run pre-trained models from the Open Model Zoo on Intel® FPGAs and Intel® Movidius™ VPUs, you must use the Intel® Distribution of OpenVINO™ toolkit.
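As an illustration of the Inference Engine C++ API mentioned above, the following is a minimal sketch of loading an Intermediate Representation (IR) model produced by the Model Optimizer and compiling it for a device plugin. It assumes the classic InferenceEngine::Core API shipped with the open source toolkit; the file names and the device string are placeholders, and real applications would also fill input blobs and read output blobs around the Infer() call.

      #include <inference_engine.hpp>

      int main() {
          // The Core object discovers the available device plugins (CPU, GPU, ...).
          InferenceEngine::Core core;

          // Read a model in OpenVINO IR format, as produced by the Model Optimizer.
          // "model.xml" / "model.bin" are placeholder paths.
          InferenceEngine::CNNNetwork network =
              core.ReadNetwork("model.xml", "model.bin");

          // Compile the network for a device plugin; "CPU" could also be "GPU",
          // or a heterogeneous target such as "HETERO:GPU,CPU".
          InferenceEngine::ExecutableNetwork execNetwork =
              core.LoadNetwork(network, "CPU");

          // Create an inference request and run inference; in a real application,
          // input blobs are populated first and output blobs are read afterward.
          InferenceEngine::InferRequest request = execNetwork.CreateInferRequest();
          request.Infer();

          return 0;
      }

Changing only the device string is what makes execution heterogeneous: the same compiled application can target the CPU and GPU plugins, or split a network across both with the HETERO plugin.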