Deep Neural Networks (DNNs) are at the cutting edge of machine learning. Today this approach is used for image recognition, video and natural language processing, and complex visual understanding problems such as autonomous driving. This is an area of active innovation, with new algorithms and software tools being published on a monthly basis. DNNs are very demanding in terms of compute resources, so performance-efficient implementations of these algorithms are critical for both academia and industry.

We recently introduced the Intel® Deep Learning Deployment Toolkit, as part of the Intel® Computer Vision SDK, to give application developers an easy way to use the deep learning acceleration capabilities of Intel® Processor Graphics. Now we would like to take our collaboration with the developer community to a new level with the launch of clDNN, an open source performance library for deep learning. We invite researchers, framework developers, and production software developers to work together on accelerating deep learning workloads on Intel architectures.

So what is it?

clDNN is a library of DNN performance primitives optimized for Intel® HD Graphics and Intel® Iris® Pro Graphics. This is a set of highly optimized building blocks intended to accelerate compute-intensive parts of deep learning inference applications. The library is distributed as source code via a GitHub* repository and is licensed under the Apache 2.0 license. The library is implemented in C++ and provides both C++ and C APIs, which allows the functionality to be used from a wide range of high-level languages, such as Python or Java.
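To give a feel for how the building blocks fit together, here is a minimal sketch of setting up inference with the C++ API: a topology of primitives is built, compiled into a network for the target GPU, and then executed. The header paths, primitive names, and constructor signatures below are assumptions based on the patterns used in the library's examples and may differ between versions, so please check them against the GitHub repository.

```cpp
// Illustrative sketch only: header paths and exact signatures are assumptions
// based on clDNN's published examples and may vary between library versions.
#include <api/CPP/engine.hpp>
#include <api/CPP/memory.hpp>
#include <api/CPP/topology.hpp>
#include <api/CPP/input_layout.hpp>
#include <api/CPP/softmax.hpp>
#include <api/CPP/network.hpp>

int main() {
    // Create an engine that targets the Intel GPU through OpenCL.
    cldnn::engine engine;

    // Allocate an input buffer: 1 batch, 10 feature values, f32, bfyx layout.
    cldnn::memory input = cldnn::memory::allocate(
        engine, { cldnn::data_types::f32, cldnn::format::bfyx, { 1, 10, 1, 1 } });

    // Describe the graph: an input followed by a softmax primitive.
    cldnn::topology topology;
    topology.add(cldnn::input_layout("input", input.get_layout()));
    topology.add(cldnn::softmax("prob", "input"));

    // Compile the topology for the target device and run inference.
    cldnn::network network(engine, topology);
    network.set_input_data("input", input);
    auto outputs = network.execute();

    // Read back the result produced by the "prob" primitive.
    auto output = outputs.at("prob").get_memory();
    auto result = output.pointer<float>();
    return 0;
}
```

The same flow is exposed through the C API, which is what makes bindings for higher-level languages straightforward to build.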