Intel® Neural Compressor¶
An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, ONNX Runtime, and MXNet)
Intel® Neural Compressor, formerly known as Intel® Low Precision Optimization Tool, is an open-source Python library running on Intel CPUs and GPUs that delivers unified interfaces across multiple deep-learning frameworks for popular network compression techniques such as quantization, pruning, and knowledge distillation. The tool supports automatic accuracy-driven tuning strategies to help users quickly find the best quantized model. It also implements several weight-pruning algorithms to generate pruned models that meet a predefined sparsity goal, and it supports knowledge distillation from a teacher model to a student model. Intel® Neural Compressor is a critical AI software component of the Intel® oneAPI AI Analytics Toolkit.
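The accuracy-driven tuning idea can be illustrated with a minimal sketch: candidate configurations are tried in order, and the first one whose accuracy stays within a relative tolerance of the fp32 baseline is accepted. All names below are illustrative, not actual Neural Compressor APIs.

```python
# Minimal sketch of an accuracy-driven tuning loop. Candidate configs are
# tried in order; the first whose accuracy stays within a relative tolerance
# of the fp32 baseline wins. All names here are illustrative, not actual
# Neural Compressor APIs.

def accuracy_driven_tuning(baseline_acc, candidates, evaluate, rel_tol=0.01):
    """Return (config, accuracy) for the first acceptable candidate."""
    for config in candidates:
        acc = evaluate(config)
        # Accept if the accuracy drop is at most rel_tol (relative).
        if acc >= baseline_acc * (1.0 - rel_tol):
            return config, acc
    return None, None  # no candidate met the accuracy goal

# Toy data: simulated accuracies for three hypothetical quantization configs.
simulated = {"int8_all": 0.70, "int8_skip_first_layer": 0.745, "bf16": 0.749}
best, best_acc = accuracy_driven_tuning(
    0.75, ["int8_all", "int8_skip_first_layer", "bf16"], simulated.get)
print(best, best_acc)  # int8_skip_first_layer 0.745
```

The real tuner explores a much richer space (per-op precisions, calibration settings, framework-specific options), but the accept/reject criterion follows the same accuracy-goal pattern.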
Visit the Intel® Neural Compressor online document website at: https://intel.github.io/neural-compressor.
Installation¶
Prerequisites¶
Python version: 3.7, 3.8, 3.9, 3.10
Install on Linux¶
Release binary install
# install stable basic version from pip
pip install neural-compressor
# Or install stable full version from pip (including GUI)
pip install neural-compressor-full
Nightly binary install
git clone https://github.com/intel/neural-compressor.git
cd neural-compressor
pip install -r requirements.txt
# install nightly basic version from pip
pip install -i https://test.pypi.org/simple/ neural-compressor
# Or install nightly full version from pip (including GUI)
pip install -i https://test.pypi.org/simple/ neural-compressor-full
More installation methods can be found in the Installation Guide. Please check out our FAQ for more details.
Getting Started¶
Quantization with Python API¶
# A TensorFlow Example
pip install tensorflow
# Prepare fp32 model
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb
import tensorflow as tf
from neural_compressor.experimental import Quantization, common
quantizer = Quantization()
quantizer.model = './mobilenet_v1_1.0_224_frozen.pb'
dataset = quantizer.dataset('dummy', shape=(1, 224, 224, 3))
quantizer.calib_dataloader = common.DataLoader(dataset)
quantizer.fit()
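During `quantizer.fit()`, the calibration dataloader is used to observe activation ranges, from which quantization parameters are derived. A toy sketch of the underlying affine int8 mapping (illustrative math only, not Neural Compressor internals):

```python
# Toy affine int8 quantization: derive scale/zero-point from an observed
# calibration range, then map a float value to int8. Illustrative math only,
# not Neural Compressor internals.

def quant_params(xmin, xmax, qmin=-128, qmax=127):
    """Compute (scale, zero_point) so [xmin, xmax] maps onto [qmin, qmax]."""
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must include zero
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Quantize a single float to a clamped int8 value."""
    return max(qmin, min(qmax, round(x / scale + zero_point)))

scale, zp = quant_params(0.0, 2.55)   # scale = 2.55 / 255 ≈ 0.01, zp = -128
q = quantize(1.0, scale, zp)          # round(1.0 / 0.01 - 128) = -28
```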
Quantization with JupyterLab Extension¶
Search for jupyter-lab-neural-compressor in the Extension Manager in JupyterLab and install it with one click.
System Requirements¶
Validated Hardware Environment¶
Intel® Neural Compressor supports CPUs based on Intel 64 architecture or compatible processors:¶
Intel Xeon Scalable processor (formerly Skylake, Cascade Lake, Cooper Lake, and Ice Lake)
Future Intel Xeon Scalable processor (code name Sapphire Rapids)
Intel® Neural Compressor supports GPUs built on Intel’s Xe architecture:¶
Intel® Neural Compressor quantized models are scalable for broad devices:¶
Examples of ONNX INT8 models quantized by Intel® Neural Compressor and verified for accuracy on Intel/AMD/ARM CPUs and an NVIDIA GPU:
INC quantized models | Intel ICX | NV A100 (CUDA Execution Provider) | AMD Milan | ARM Graviton2
---|---|---|---|---
ResNet50 QDQ | 74% | 74% | 73% | 74%
BERT-base QDQ | 86% | 85% | 86% | 86%
Note: For more examples validated on AWS, please check the extension list.
Validated Software Environment¶
OS version: CentOS 8.4, Ubuntu 20.04
Python version: 3.7, 3.8, 3.9, 3.10
Framework | TensorFlow | Intel TensorFlow | PyTorch | IPEX | ONNX Runtime | MXNet
---|---|---|---|---|---|---
Version | 2.10.0, 2.9.1, 2.8.2 | 2.10.0, 2.9.1, 2.8.0 | 1.12.1+cpu, 1.11.0+cpu, 1.10.0+cpu | 1.12.0, 1.11.0, 1.10.0 | 1.12.1, 1.11.0, 1.10.0 | 1.8.0, 1.7.0, 1.6.0
Note: Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to enable oneDNN optimizations if you are using TensorFlow v2.6 to v2.8. oneDNN is enabled by default starting with TensorFlow v2.9.
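For example, in the shell session that will run the TensorFlow workload:

```shell
# Needed only for TensorFlow v2.6 to v2.8; oneDNN is on by default from v2.9.
export TF_ENABLE_ONEDNN_OPTS=1
```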
Validated Models¶
Intel® Neural Compressor validated 420+ examples for quantization, with a geomean performance speedup of 2.2x (up to 4.2x) on VNNI while minimizing accuracy loss. Over 30 pruning and knowledge distillation samples are also available. More details on validated models are available here.
Documentation¶
Selected Publications¶
Meet the Innovation of Intel AI Software: Intel® Extension for TensorFlow* (Oct 2022)
PyTorch* Inference Acceleration with Intel® Neural Compressor (Oct 2022)
Neural Coder, a new plug-in for Intel Neural Compressor, was covered by Twitter, LinkedIn, and Intel Developer Zone from Intel, and by Twitter and LinkedIn from Hugging Face. (Oct 2022)
Intel Neural Compressor successfully landed on the GCP, AWS, and Azure marketplaces. (Oct 2022)
Alibaba Cloud and Intel Neural Compressor Deliver Better Productivity for PyTorch Users (Sep 2022)
Efficient Text Classification with Intel Neural Compressor (Sep 2022)
View our full publication list.
Additional Content¶
Hiring¶
We are actively hiring. Send your resume to inc.maintainers@intel.com if you are interested in model compression techniques.