OpenVINO™ toolkit, short for Open Visual Inference and Neural network Optimization toolkit, provides developers with improved neural network performance on a variety of Intel® processors and helps them unlock cost-effective, real-time vision applications. The toolkit enables deep learning inference and easy heterogeneous execution across multiple Intel® platforms (CPU, Intel® Processor Graphics), supporting deployments from cloud architectures to edge devices. This open source distribution gives the developer community the flexibility to innovate on deep learning and AI solutions.

OpenVINO is unique due to its software openness and flexibility as well as its extensive catalog of deep learning models. In addition to the binary distribution, the OpenVINO toolkit is available as open source under the Apache* 2.0 license. Third parties can add support for hardware of their choice by writing a plugin, leveraging OpenVINO infrastructure such as the Model Optimizer and the Inference Engine. No other toolkit offers such flexibility. OpenVINO supports a comprehensive set of deep learning models out of the box: about 40 public models and about 40 Intel pre-trained models are available through the Open Model Zoo.
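As a sketch of how these pieces fit together: the Model Optimizer converts a trained model into the Intermediate Representation (IR), and the Inference Engine loads the IR onto a device plugin. The model paths and input data below are hypothetical placeholders, and the snippet assumes the OpenVINO Python bindings are installed.

```python
# Step 1 (command line): the Model Optimizer converts a trained model
# into the Intermediate Representation (IR), a pair of files:
#   model.xml (topology) and model.bin (weights).
#
# Step 2: the Inference Engine loads the IR onto a device plugin.
# The HETERO plugin tries the GPU first and falls back to the CPU
# for any layer the GPU plugin does not support.
device = "HETERO:GPU,CPU"

try:
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model="model.xml", weights="model.bin")
    exec_net = ie.load_network(network=net, device_name=device)
    # input_name = next(iter(net.input_info))
    # result = exec_net.infer({input_name: input_data})
except Exception as exc:  # sketch only: runs without OpenVINO or the model files
    print("sketch not executed:", exc)
```

The same device string works for any supported pair of plugins; listing devices in priority order is how heterogeneous execution is requested.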

OpenVINO™ toolkit contains:

• Deep Learning Deployment Toolkit
• Open Model Zoo

The Intel® Distribution of OpenVINO™ toolkit is also available with additional, proprietary support for Intel® FPGAs, Intel® Movidius™ Neural Compute Stick, and Intel® Gaussian Mixture Model - Neural Network Accelerator (Intel® GMM-GNA), and provides optimized traditional computer vision libraries (OpenCV*, OpenVX*) and media encode/decode functions. To learn more and download this free commercial product, visit:
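Because each accelerator is exposed through its own device plugin, a program can discover at run time which devices are present. A minimal sketch, assuming the OpenVINO Python bindings are installed:

```python
# List the device plugins the Inference Engine can see on this machine.
# Typical names include CPU, GPU, MYRIAD (Neural Compute Stick),
# FPGA, and GNA, depending on the distribution and installed drivers.
devices = []
try:
    from openvino.inference_engine import IECore

    devices = IECore().available_devices
except ImportError:
    pass  # OpenVINO Python bindings not installed; sketch only

print(devices)
```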

OpenVINO is a trademark of Intel Corporation or its subsidiaries in the U.S. and/or other countries.