Deep Neural Network Library (DNNL)

Deep learning is at the cutting edge of applied artificial intelligence. Today this approach is used for image recognition, video and natural language processing, and complex visual understanding problems such as autonomous driving. It is an area of active innovation where new algorithms and software tools are published on a monthly basis. Deep neural networks (DNNs) are very demanding in terms of compute resources, and performance-efficient implementations of their algorithms are critical for both academia and industry.

Here at Intel we work in close collaboration with leading academic and industry partners to solve the architectural challenges, both in hardware and software, for Intel's upcoming multicore/many-core compute platforms. To help innovators tackle the complexities of machine learning, we make these performance optimizations available to developers through software development tools, including the Deep Neural Network Library (DNNL) and the Intel Data Analytics Acceleration Library (Intel DAAL), and through application software, including OpenVINO and Intel-optimized builds of TensorFlow, PyTorch, and MXNet.

So what is it?

DNNL is a library of deep learning functions optimized for Intel processors and Intel Processor Graphics. The library provides efficient implementations of compute-intensive functions common in deep learning applications and frameworks, exposed through C and C++ APIs. The implementations are highly optimized and take advantage of the latest hardware features, including Intel DL Boost and low-precision data types such as bfloat16, fp16, and int8. The library is distributed as source code via a GitHub repository and is licensed under the Apache 2.0 license.
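
To give a feel for the programming model, here is a minimal sketch using the DNNL 1.x C++ API (dnnl.hpp). It creates a CPU engine and stream, allocates a small f32 tensor, and runs a forward-inference ReLU primitive over it. The tensor shape and fill value are arbitrary choices for illustration, not anything mandated by the library.

    #include <cstring>
    #include <vector>
    #include "dnnl.hpp"

    using namespace dnnl;

    int main() {
        // An engine encapsulates a compute device; a stream is an
        // execution context on that engine.
        engine eng(engine::kind::cpu, 0);
        stream strm(eng);

        // Describe a 1x3x8x8 f32 tensor in NCHW layout and allocate
        // library-managed memory objects for source and destination.
        memory::desc md({1, 3, 8, 8}, memory::data_type::f32,
                        memory::format_tag::nchw);
        memory src(md, eng), dst(md, eng);

        // Fill the source buffer; the CPU engine exposes a plain host pointer.
        std::vector<float> data(1 * 3 * 8 * 8, -1.0f);
        std::memcpy(src.get_data_handle(), data.data(),
                    data.size() * sizeof(float));

        // Create a forward-inference ReLU primitive and execute it.
        eltwise_forward::desc relu_d(prop_kind::forward_inference,
                                     algorithm::eltwise_relu, md,
                                     /*alpha=*/0.f, /*beta=*/0.f);
        eltwise_forward relu(eltwise_forward::primitive_desc(relu_d, eng));
        relu.execute(strm, {{DNNL_ARG_SRC, src}, {DNNL_ARG_DST, dst}});
        strm.wait();

        return 0;
    }

The same engine/stream/memory/primitive pattern applies to heavier primitives such as convolution, and the library picks an implementation tuned to the CPU features it detects at run time. Note that later releases (after the rename to oneDNN) reworked the primitive-descriptor creation step, so this snippet tracks the 1.x API described here.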