Features
Ease-of-use Python API
With only two or three lines added to your original code, Intel® Extension for PyTorch* provides simple frontend Python APIs and utilities to get performance optimizations such as graph optimization and operator optimization.
Check the API Documentation for details of API functions. Examples are also available.
Note
The package name used when you import Intel® Extension for PyTorch* changed from intel_pytorch_extension (for versions 1.2.0 through 1.9.0) to intel_extension_for_pytorch (for versions 1.10.0 and later). Use the correct package name for the version you are using.
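For illustration, a minimal inference sketch of the typical usage pattern (assuming version 1.10.0 or later; the torchvision ResNet-50 model and the random input are only placeholders):

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex  # package name for 1.10.0 and later

# Placeholder model; any eager-mode model can be used here
model = models.resnet50(pretrained=True)
model.eval()

# One extra line applies the extension's optimizations to the model
model = ipex.optimize(model)

with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))
```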
Here are detailed discussions of specific feature topics, summarized in the rest of this document:
ISA Dynamic Dispatching
Intel® Extension for PyTorch* features dynamic dispatching functionality to automatically adapt execution binaries to the most advanced instruction set available on your machine.
For more detailed information, check ISA Dynamic Dispatching.
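As a rough, Linux-only way to see which instruction sets your machine exposes (this is not part of the extension's API; the dispatcher selects the most advanced supported level automatically):

```python
# Read the CPU feature flags reported by the kernel (Linux only)
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()

# A few ISA levels the dynamic dispatcher can target, from older to newer
for isa in ("avx2", "avx512f", "avx512_bf16", "amx_bf16"):
    print(isa, "supported" if isa in flags else "not supported")
```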
Channels Last
Compared with the default NCHW memory format, the channels_last (NHWC) memory format can further accelerate convolutional neural networks. In Intel® Extension for PyTorch*, the NHWC memory format has been enabled for most key CPU operators, though not all of them have been accepted and merged into the PyTorch master branch yet.
For more detailed information, check Channels Last.
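A minimal sketch of switching a model and its input to channels_last before applying the extension (the tiny convolutional model is only a stand-in):

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Stand-in convolutional model; any CNN would do
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).eval()

# Convert both the model and its input to the channels_last (NHWC) memory format
model = model.to(memory_format=torch.channels_last)
model = ipex.optimize(model)

x = torch.randn(1, 3, 224, 224).to(memory_format=torch.channels_last)
with torch.no_grad():
    y = model(x)
```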
Auto Mixed Precision (AMP)
The low precision data type BFloat16 has been natively supported on 3rd Generation Intel® Xeon® Scalable Processors (aka Cooper Lake) with the AVX512 instruction set. It will also be supported on the next generation of Intel® Xeon® Scalable Processors with the Intel® Advanced Matrix Extensions (Intel® AMX) instruction set, providing a further performance boost. Auto Mixed Precision (AMP) with BFloat16 for CPU and BFloat16 optimization of operators have been enabled in Intel® Extension for PyTorch* and partially upstreamed to the PyTorch master branch; the remaining optimizations will land in PyTorch master through PRs that are being submitted and reviewed.
For more detailed information, check Auto Mixed Precision (AMP).
BFloat16 computation can be conducted on platforms with the AVX512 instruction set. On platforms with the AVX512 BF16 instructions, there is an additional performance boost.
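A minimal BFloat16 inference sketch combining ipex.optimize with PyTorch's CPU autocast (the model is a stand-in):

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Stand-in model
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).eval()

# Prepare the model for BFloat16 inference
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast():
    y = model(x)
```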
Graph Optimization
Compared to eager mode, graph mode in PyTorch normally yields better performance from optimization methodologies such as operator fusion. Intel® Extension for PyTorch* provides further optimizations in graph mode: to further optimize TorchScript performance, it supports transparent fusion of frequently used operator patterns such as Conv2D+ReLU and Linear+ReLU. We recommend taking advantage of Intel® Extension for PyTorch* with TorchScript. You may wish to run with the torch.jit.trace() function first, since it generally works better with Intel® Extension for PyTorch* than the torch.jit.script() function. More detailed information can be found at the pytorch.org website.
For more detailed information, check Graph Optimization.
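A minimal TorchScript sketch along those lines (the Conv2D+ReLU model is a stand-in for a pattern that can be fused):

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# Conv2D + ReLU is one of the patterns graph optimization can fuse transparently
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).eval()
model = ipex.optimize(model)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    # torch.jit.trace generally works better with the extension than torch.jit.script
    traced = torch.jit.trace(model, x)
    traced = torch.jit.freeze(traced)
    y = traced(x)
```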
Operator Optimization
Intel® Extension for PyTorch* also optimizes operators and implements several customized operators for performance boosts. A few ATen operators are replaced by their optimized counterparts in Intel® Extension for PyTorch* via the ATen registration mechanism. Some customized operators are implemented for several popular topologies; for instance, ROIAlign and NMS are used in Mask R-CNN. To improve the performance of these topologies, Intel® Extension for PyTorch* also optimizes these customized operators.
- class ipex.nn.FrozenBatchNorm2d(num_features: int, eps: float = 1e-05)
BatchNorm2d where the batch statistics and the affine parameters are fixed.
- Parameters
num_features (int) – C from an expected input of size (N, C, H, W)
- Shape
Input: (N, C, H, W)
Output: (N, C, H, W) (same shape as input)
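A brief usage sketch of the class documented above (shapes are illustrative):

```python
import torch
import intel_extension_for_pytorch as ipex

# Batch statistics and affine parameters are fixed; useful when fine-tuning
# detection models such as Mask R-CNN
bn = ipex.nn.FrozenBatchNorm2d(num_features=64)

x = torch.randn(2, 64, 28, 28)  # (N, C, H, W) with C == num_features
y = bn(x)                       # same shape as the input
```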
- ipex.nn.functional.interaction(*args)
Get the interaction feature between different kinds of features (like gender or hobbies), used in the DLRM model.
For now, only the “dot” interaction from the DLRM GitHub repo is optimized. It uses the dot product to represent the interaction feature between two features.
For example, if feature 1 is “Man”, represented by [0.1, 0.2, 0.3], and feature 2 is “Likes playing football”, represented by [-0.1, 0.3, 0.2], then the dot interaction feature is ([0.1, 0.2, 0.3] * [-0.1, 0.3, 0.2]^T) = -0.01 + 0.06 + 0.06 = 0.11.
- Parameters
*args – Multiple tensors which represent different features
- Shape
Input: N tensors of shape (B, D), where N is the number of different kinds of features, B is the batch size, and D is the feature size
Output: (B, D + N*(N-1)/2)
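A brief usage sketch of the function documented above (the batch size, feature size, and number of sparse features are illustrative DLRM-style values):

```python
import torch
import intel_extension_for_pytorch as ipex

B, D = 128, 16                                   # batch size and feature size
dense = torch.randn(B, D)                        # dense feature
sparse = [torch.randn(B, D) for _ in range(26)]  # embedded sparse features

# Dot interaction across all N = 27 features, as used in DLRM
out = ipex.nn.functional.interaction(dense, *sparse)
print(out.shape)  # roughly (B, D + N*(N-1)/2) for the dot interaction
```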
Optimizer Optimization
Optimizers are one of the key parts of training workloads. Intel® Extension for PyTorch* brings two types of optimizations to optimizers: 1. Operator fusion for the computation in the optimizers. 2. SplitSGD for BF16 training, which reduces the memory footprint of the master weights by half.
For more detailed information, check Optimizer Fusion and Split SGD.
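A minimal BF16 training sketch in which the optimizer is passed to ipex.optimize so that the optimizer-related optimizations can be applied (the model, data, and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

model = nn.Linear(64, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Passing the optimizer enables optimizer-related optimizations such as
# operator fusion and, with dtype=torch.bfloat16, split master weights
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

criterion = nn.MSELoss()
x, target = torch.randn(8, 64), torch.randn(8, 10)

optimizer.zero_grad()
with torch.cpu.amp.autocast():
    loss = criterion(model(x), target)
loss.backward()
optimizer.step()
```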
Runtime Extension
Intel® Extension for PyTorch* Runtime Extension provides PyTorch frontend APIs for users to get finer-grained control of the thread runtime. It provides:
- Multi-stream inference via the Python frontend module MultiStreamModule.
- Spawning asynchronous tasks from both the Python and C++ frontends.
- Configuring core bindings for OpenMP threads from both the Python and C++ frontends.
Note
Intel® Extension for PyTorch* Runtime Extension is still in the experimental stage. The API is subject to change. More detailed descriptions are available in the API Documentation.
For more detailed information, check Runtime Extension.
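A hedged sketch of multi-stream inference with the experimental runtime APIs (core IDs and the number of streams are illustrative and machine-dependent):

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).eval()
x = torch.randn(2, 3, 224, 224)

with torch.no_grad():
    traced = torch.jit.freeze(torch.jit.trace(model, x))

    # Bind the module to a pool of cores and run two inference streams on it
    cpu_pool = ipex.cpu.runtime.CPUPool(core_ids=[0, 1, 2, 3])
    multi_stream_model = ipex.cpu.runtime.MultiStreamModule(
        traced, num_streams=2, cpu_pool=cpu_pool
    )
    y = multi_stream_model(x)
```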
INT8 Quantization
Intel® Extension for PyTorch* has built-in quantization recipes to deliver good statistical accuracy for most popular DL workloads including CNN, NLP and recommendation models.
For more detailed information, check INT8 Quantization.
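The quantization frontend has evolved across releases; as one hedged sketch, recent releases expose a prepare/convert flow roughly like the following (the stand-in model, calibration data, and default_static_qconfig recipe are assumptions to verify against your installed version):

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex
from intel_extension_for_pytorch.quantization import prepare, convert

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).eval()
example_input = torch.randn(1, 3, 224, 224)

# Static quantization with the extension's default recipe
qconfig = ipex.quantization.default_static_qconfig
prepared = prepare(model, qconfig, example_inputs=example_input)

# Calibration: run a few representative batches through the prepared model
for _ in range(4):
    prepared(torch.randn(1, 3, 224, 224))

quantized = convert(prepared)
with torch.no_grad():
    traced = torch.jit.trace(quantized, example_input)
    traced = torch.jit.freeze(traced)
    y = traced(example_input)
```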