Deep Neural Network Library (DNNL)  1.1.0
Performance library for Deep Learning
DNNL Documentation

Deep Neural Network Library (DNNL) is an open-source performance library for deep learning applications. The library includes basic building blocks for neural networks optimized for Intel Architecture Processors and Intel Processor Graphics. DNNL is intended for developers of deep learning applications and frameworks who are interested in improving application performance on Intel CPUs and GPUs.

Building and Linking

Programming Model

Primitives

Compute-intensive operations:

Memory-bandwidth-limited operations:

Data manipulation:

Performance Benchmarking and Inspection

Advanced topics

Examples

Topic          | Engine  | C++ API                                           | C API
---------------|---------|---------------------------------------------------|------------------------------------
Tutorials      | CPU/GPU | Getting started                                   |
               | CPU/GPU | Memory format propagation                         |
               | CPU/GPU | Performance Profiling Example                     |
               | CPU/GPU | Reorder between CPU and GPU engines               | Reorder between CPU and GPU engines
               | GPU     | Getting started on GPU with OpenCL extensions API |
f32 inference  | CPU/GPU | CNN f32 inference example                         | CNN f32 inference example
               | CPU     | RNN f32 inference example                         |
int8 inference | CPU/GPU | CNN int8 inference example                        |
               | CPU     | RNN int8 inference example                        |
f32 training   | CPU/GPU | CNN f32 training example                          |
               | CPU     |                                                   | CNN f32 training example
               | CPU/GPU | RNN f32 training example                          |
bf16 training  | CPU     | CNN bf16 training example                         |