Releases

2.5.0

We are excited to announce the release of Intel® Extension for PyTorch* 2.5.0+cpu which accompanies PyTorch 2.5. This release mainly brings you support for Llama 3.2, optimizations for the newly launched Intel® Xeon® 6 P-core platform, GPTQ/AWQ format support, and the latest optimizations to push better performance for LLM models. This release also includes a set of bug fixes and small optimizations. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try this release and give us feedback so we can further improve the product.

Highlights

  • Llama 3.2 support

Meta has newly released Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B) and lightweight, text-only models (1B and 3B). Intel® Extension for PyTorch* has supported Llama 3.2 since its launch date through an early release version, and now supports it in this official release.
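
To give a flavor of the usage, the sketch below runs a Llama 3.2 text model with BF16 inference through ipex.llm.optimize, following the same pattern as the ipex.llm.optimize example in the 2.2.0 section below. The model identifier and generation parameters are placeholders for illustration, not part of this release note.

    import torch
    import intel_extension_for_pytorch as ipex
    import transformers

    # Placeholder checkpoint; any Llama 3.2 text model you have access to follows the same flow.
    model_id = "meta-llama/Llama-3.2-3B-Instruct"

    model = transformers.AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).eval()
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

    # Apply the LLM-specific optimizations for BF16 inference.
    model = ipex.llm.optimize(model, dtype=torch.bfloat16)

    inputs = tokenizer("What is Intel Extension for PyTorch?", return_tensors="pt")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))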

  • Optimization for Intel® Xeon® 6

Intel® Xeon® 6 processors deliver new degrees of performance with more cores, a choice of microarchitecture, additional memory bandwidth, and exceptional input/output (I/O) across a range of workloads. Intel® Extension for PyTorch* provides dedicated optimizations on this new processor family for features such as Multiplexed Rank DIMM (MRDIMM) and the SNC=3 scenario.

  • Large Language Model (LLM) optimization: Intel® Extension for PyTorch* provides more feature support for weight-only quantization, including GPTQ/AWQ format support, symmetric quantization of activations and weights, and chunked prefill/prefix prefill support in the LLM module API. These features enable better adoption of community model weights and provide better performance for low-precision scenarios. This release also extends the optimized models to include the newly published Llama 3.2 vision models. A full list of optimized models can be found at LLM optimization.
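
    As a hedged sketch of how a community GPTQ/AWQ checkpoint might be plugged in, the snippet below follows the weight-only quantization examples from the Intel® Extension for PyTorch* LLM documentation; the get_weight_only_quant_qconfig_mapping helper and the low_precision_checkpoint argument of ipex.llm.optimize are assumptions drawn from those examples, not something this note specifies, so verify them against the documentation for your installed version.

    import torch
    import intel_extension_for_pytorch as ipex
    import transformers

    # Hypothetical paths; replace with your FP model and its GPTQ/AWQ low-precision state dict.
    model = transformers.AutoModelForCausalLM.from_pretrained("path/to/model", torch_dtype=torch.float).eval()
    low_precision_checkpoint = torch.load("path/to/gptq_or_awq_checkpoint.pt")

    # Assumed helper for a weight-only quantization config; see the IPEX LLM docs for exact arguments.
    qconfig = ipex.quantization.get_weight_only_quant_qconfig_mapping()

    model = ipex.llm.optimize(
        model,
        quantization_config=qconfig,
        low_precision_checkpoint=low_precision_checkpoint,
    )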

  • Bug fixes and other optimizations

    • Optimized the performance of the IndirectAccessKVCacheAttention kernel #3185 #3209 #3214 #3218 #3248

    • Fixed the Segmentation fault in the IndirectAccessKVCacheAttention kernel #3246

    • Fixed the correctness issue in the PagedAttention kernel for Llama-68M-Chat-v1 #3307

    • Fixed the support in ipex.llm.optimize to ensure model.generate returns the correct output type when return_dict_in_generate is set to True. #3333

    • Optimized the performance of the Flash Attention kernel #3291

    • Upgraded oneDNN to v3.6 #3305

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v2.4.0+cpu...v2.5.0+cpu

2.4.0

We are excited to announce the release of Intel® Extension for PyTorch* 2.4.0+cpu which accompanies PyTorch 2.4. This release mainly brings you support for Llama 3.1, basic support for LLM serving frameworks like vLLM/TGI, and a set of optimizations to push better performance for LLM models. This release also extends the list of optimized LLM models to a broader level and includes a set of bug fixes and small optimizations. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try this release and give us feedback so we can further improve the product.

Highlights

  • Llama 3.1 support

Meta has newly released Llama 3.1 with new features like longer context length (128K) support. Intel® Extension for PyTorch* has supported Llama 3.1 since its launch date through an early release version, and now supports it in this official release.

  • Serving framework support

Typical LLM serving frameworks, including vLLM and TGI, can now work with Intel® Extension for PyTorch*, which provides optimized performance on Intel® Xeon® Scalable processors. Besides integrating LLM serving frameworks with the ipex.llm module-level APIs, we continue to optimize the performance and quality of the underlying Intel® Extension for PyTorch* operators such as paged attention and flash attention. We also added new support in the ipex.llm module-level APIs for 4-bit AWQ quantization based on weight-only quantization, and for distributed communication with shared-memory optimization.

  • Large Language Model (LLM) optimization:

Intel® Extension for PyTorch* further optimized the performance of the weight-only quantization kernels, enabled more fusion pattern variants for LLMs, and extended the optimized models to include whisper, falcon-11b, Qwen2, and, of course, Llama 3.1. A full list of optimized models can be found at LLM optimization.

  • Bug fixes and other optimizations

    • Fixed the quantization with auto-mixed-precision (AMP) mode of Qwen-7b #3030

    • Fixed the illegal memory access issue in the Flash Attention kernel #2987

    • Re-structured the paths of LLM example scripts #3080

    • Upgraded oneDNN to v3.5.2 #3143

    • Misc fix and enhancement #3079 #3116

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v2.3.0+cpu...v2.4.0+cpu

2.3.100

Highlights

  • Added the optimization for Phi-3: #2883

  • Fixed the state_dict method patched by ipex.optimize to support DistributedDataParallel #2910

  • Fixed the linking issue in CPPSDK #2911

  • Fixed the ROPE kernel for cases where the batch size is larger than one #2928

  • Upgraded deepspeed to v0.14.3 to include the support for Phi-3 #2985

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v2.3.0+cpu...v2.3.100+cpu

2.3.0

We are excited to announce the release of Intel® Extension for PyTorch* 2.3.0+cpu which accompanies PyTorch 2.3. This release mainly brings you a new Large Language Model (LLM) feature, the module-level LLM optimization API, which provides module-level optimizations for commonly used LLM modules and functionalities, and targets optimizing customized LLM modeling for scenarios like private models, self-customized models, and LLM serving frameworks. This release also extends the list of optimized LLM models to a broader level and includes a set of bug fixes and small optimizations. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try this release and give us feedback so we can further improve the product.

Highlights

  • Large Language Model (LLM) optimization

    Intel® Extension for PyTorch* provides a new feature, the module-level LLM optimization API, which offers module-level optimizations for commonly used LLM modules and functionalities. LLM creators can use this API set to replace the related parts in their models by themselves to reach peak performance.

    In general, there are three categories of module-level LLM optimization APIs:

    • Linear post-op APIs

    # using module init and forward
    ipex.llm.modules.linearMul
    ipex.llm.modules.linearGelu
    ipex.llm.modules.linearNewGelu
    ipex.llm.modules.linearAdd
    ipex.llm.modules.linearAddAdd
    ipex.llm.modules.linearSilu
    ipex.llm.modules.linearSiluMul
    ipex.llm.modules.linear2SiluMul
    ipex.llm.modules.linearRelu
    
    • Attention related APIs

    # using module init and forward
    ipex.llm.modules.RotaryEmbedding
    ipex.llm.modules.RMSNorm
    ipex.llm.modules.FastLayerNorm
    ipex.llm.modules.VarlenAttention
    ipex.llm.modules.PagedAttention
    ipex.llm.modules.IndirectAccessKVCacheAttention
    
    # using as functions
    ipex.llm.functional.rotary_embedding
    ipex.llm.functional.rms_norm
    ipex.llm.functional.fast_layer_norm
    ipex.llm.functional.indirect_access_kv_cache_attention
    ipex.llm.functional.varlen_attention
    
    • Generation related APIs

    # using for optimizing huggingface generation APIs with prompt sharing
    ipex.llm.generation.hf_beam_sample
    ipex.llm.generation.hf_beam_search
    ipex.llm.generation.hf_greedy_search
    ipex.llm.generation.hf_sample
    

    A more detailed introduction on how to apply this API set, along with example code walking you through it, can be found here.
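
    As a flavor of how the functional APIs could slot into a custom model, here is a minimal sketch that delegates a hand-written RMSNorm forward to ipex.llm.functional.rms_norm. The (hidden_states, weight, eps) argument order is an assumption based on the API names listed above, so check it against the linked examples.

    import torch
    import intel_extension_for_pytorch as ipex

    class MyRMSNorm(torch.nn.Module):
        """RMSNorm whose forward is delegated to the IPEX fused kernel."""
        def __init__(self, hidden_size, eps=1e-6):
            super().__init__()
            self.weight = torch.nn.Parameter(torch.ones(hidden_size))
            self.eps = eps

        def forward(self, hidden_states):
            # Assumed signature: rms_norm(hidden_states, weight, eps).
            return ipex.llm.functional.rms_norm(hidden_states, self.weight, self.eps)

    x = torch.randn(1, 32, 4096)
    print(MyRMSNorm(4096)(x).shape)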

  • Bug fixes and other optimizations

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v2.2.0+cpu...v2.3.0+cpu

2.2.0

We are excited to announce the release of Intel® Extension for PyTorch* 2.2.0+cpu which accompanies PyTorch 2.2. This release mainly brings in our latest optimizations for Large Language Models (LLMs), including a new dedicated API set (ipex.llm), a new capability for auto-tuning the accuracy recipe for LLMs, and a broader list of optimized LLM models, together with a set of bug fixes and small optimizations. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try this release and give us feedback so we can further improve the product.

Highlights

  • Large Language Model (LLM) optimization

    Intel® Extension for PyTorch* provides a new dedicated module, ipex.llm, to host Large Language Model (LLM) specific APIs. With ipex.llm, Intel® Extension for PyTorch* provides comprehensive LLM optimization across various popular data types including FP32/BF16/INT8/INT4. Specifically for low precision, both SmoothQuant and weight-only quantization are supported for various scenarios. Users can also run Intel® Extension for PyTorch* with Tensor Parallel to fit multi-rank or multi-node scenarios and get even better performance.

    A typical API under this new module is ipex.llm.optimize, which is designed to optimize transformer-based models within frontend Python modules, with a particular focus on Large Language Models (LLMs). It provides optimizations both model-wise and content-generation-wise. ipex.llm.optimize is an upgraded API that replaces the previous ipex.optimize_transformers and brings you a more consistent LLM experience and performance. Below is a simple example of ipex.llm.optimize for FP32 or BF16 inference:

    import torch
    import intel_extension_for_pytorch as ipex
    import transformers
    
    model = transformers.AutoModelForCausalLM.from_pretrained(model_name_or_path).eval()
    
    dtype = torch.float # or torch.bfloat16
    model = ipex.llm.optimize(model, dtype=dtype)
    
    model.generate(YOUR_GENERATION_PARAMS)
    

    More examples of this API can be found at LLM optimization API.

    Besides the new optimization API for LLM inference, Intel® Extension for PyTorch* also provides a new capability for users to auto-tune a good quantization recipe for running SmoothQuant INT8 with good accuracy. SmoothQuant is a popular method for improving the accuracy of INT8 quantization. The new auto-tune API allows automatic global alpha tuning, as well as automatic layer-by-layer alpha tuning provided by Intel® Neural Compressor, for the best INT8 accuracy. More details can be found at the SmoothQuant Recipe Tuning API Introduction.

    Intel® Extension for PyTorch* newly optimized many more LLM models, including more Llama2 variants like llama2-13b/llama2-70b, encoder-decoder models like T5, code generation models like starcoder/codegen, and others such as Baichuan, Baichuan2, ChatGLM2, ChatGLM3, mistral, mpt, and dolly. A full list of optimized models can be found at LLM Optimization.

  • Bug fixes and other optimizations

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v2.1.100+cpu...v2.2.0+cpu

2.1.100

Highlights

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v2.1.0+cpu...v2.1.100+cpu

2.1.0

Highlights

  • Large Language Model (LLM) optimization (Experimental): Intel® Extension for PyTorch* provides many LLM-specific optimizations in this new release. At the operator level, we provide a highly efficient GEMM kernel to speed up the Linear layer and customized operators to reduce the memory footprint. To better trade off performance and accuracy, different low-precision solutions, e.g., SmoothQuant for INT8 and weight-only quantization for INT4 and INT8, are also enabled. Besides, tensor parallelism can be adopted to get lower latency for LLMs.

    A new API function, ipex.optimize_transformers, is designed to optimize transformer-based models within frontend Python modules, with a particular focus on Large Language Models (LLMs). It provides optimizations both model-wise and content-generation-wise. You just need to invoke the ipex.optimize_transformers function instead of the ipex.optimize function to apply all optimizations transparently. More detailed information can be found at the Large Language Model optimizations overview.

    Specifically, this new release includes support for SmoothQuant and weight-only quantization (both INT8 and INT4 weights) to provide better performance and accuracy for low-precision scenarios.

    A typical usage of this new feature is quite simple as below:

    import torch
    import intel_extension_for_pytorch as ipex
    ...
    model = ipex.optimize_transformers(model, dtype=dtype)
    
  • torch.compile backend optimization with PyTorch Inductor (Experimental): We optimized Intel® Extension for PyTorch* to leverage PyTorch Inductor's capability when working as a backend of torch.compile. This better utilizes torch.compile's power of graph capture and Inductor's scalable fusion capability, while keeping the customized optimizations from Intel® Extension for PyTorch*.

  • Performance optimization of static quantization under dynamic shapes: We optimized the static quantization performance of Intel® Extension for PyTorch* for dynamic shapes. The usage is the same as the workflow for static shapes, while inputs of variable shapes can be provided at runtime.
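
    To make "the same workflow as static shapes" concrete, the sketch below reuses the static quantization recipe shown later in the 1.13.0 section and simply feeds inputs with different batch sizes at runtime; the toy model and shapes are illustrative only.

    import torch
    import intel_extension_for_pytorch as ipex

    model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
    example_inputs = (torch.randn(8, 64),)

    # Static quantization flow, identical to the fixed-shape workflow.
    qconfig = ipex.quantization.default_static_qconfig
    prepared = ipex.quantization.prepare(model, qconfig, example_inputs=example_inputs)
    for _ in range(4):
        prepared(torch.randn(8, 64))  # calibration
    quantized = ipex.quantization.convert(prepared)

    with torch.no_grad():
        traced = torch.jit.freeze(torch.jit.trace(quantized, example_inputs))
        # Inputs of variable shape at runtime.
        for batch in (1, 4, 16):
            traced(torch.randn(batch, 64))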

  • Bug fixes and other optimizations

    • Optimized the runtime memory usage #1563

    • Fixed the excessive size of the saved model #1677 #1688

    • Supported shared parameters in ipex.optimize #1664

    • Enabled the optimization of LARS fusion #1695

    • Supported dictionary input in ipex.quantization.prepare #1682

    • Updated oneDNN to v3.3 #2137

2.0.100

Highlights

  • Enhanced the functionality of Intel® Extension for PyTorch as a backend of torch.compile: #1568 #1585 #1590

  • Fixed the Stable Diffusion fine-tuning accuracy issue #1587 #1594

  • Fixed the ISA check on old hypervisor based VM #1513

  • Addressed the excessive memory usage in weight prepack #1593

  • Fixed the weight prepack of convolution when padding_mode is not 'zeros' #1580

  • Optimized the INT8 LSTM performance #1566

  • Fixed TransNetV2 calibration failure #1564

  • Fixed BF16 RNN-T inference when AVX512_CORE_VNNI ISA is used #1592

  • Fixed the ROIAlign operator #1589

  • Enabled execution on designated numa nodes with launch script #1517

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v2.0.0+cpu...v2.0.100+cpu

2.0.0

We are pleased to announce the release of Intel® Extension for PyTorch* 2.0.0-cpu which accompanies PyTorch 2.0. This release mainly brings in our latest optimizations for NLP (BERT), support for PyTorch 2.0's hero API, torch.compile, as one of its backends, together with a set of bug fixes and small optimizations.

Highlights

  • Fast BERT optimization (Experimental): Intel introduced a new technique to speed up BERT workloads. Intel® Extension for PyTorch* integrated this implementation, which benefits BERT models, especially in training. A new API, ipex.fast_bert, is provided to try this new optimization. More detailed information can be found at Fast Bert Feature.
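
    A minimal sketch of trying this optimization is below; it assumes ipex.fast_bert takes the model plus an optional dtype, so check the Fast Bert Feature page for the exact arguments in your version. The Hugging Face model and toy inputs are illustrative.

    import torch
    import intel_extension_for_pytorch as ipex
    from transformers import BertModel

    model = BertModel.from_pretrained("bert-base-uncased").eval()

    # Assumed call pattern for the experimental fast BERT path.
    model = ipex.fast_bert(model, dtype=torch.bfloat16)

    input_ids = torch.randint(0, 30522, (1, 128))  # toy token ids
    with torch.no_grad():
        outputs = model(input_ids)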

  • MHA optimization with Flash Attention: Intel optimized the MHA module with the Flash Attention technique, inspired by the Stanford paper. This reduces memory consumption for LLMs and also provides better inference performance for models like BERT and Stable Diffusion.

  • Work with torch.compile as a backend (Experimental): PyTorch 2.0 introduces a new feature, torch.compile, to speed up PyTorch execution. We've enabled Intel® Extension for PyTorch* as a backend of torch.compile, which can leverage this new PyTorch API's power of graph capture and provide additional optimizations based on these graphs. The usage of this new feature is quite simple, as shown below:

import torch
import intel_extension_for_pytorch as ipex
...
model = ipex.optimize(model)
model = torch.compile(model, backend='ipex')
  • Bug fixes and other optimizations

    • Supported RMSNorm which is widely used in the t5 model of huggingface #1341

    • Optimized InstanceNorm #1330

    • Fixed the quantization of LSTM #1414 #1473

    • Fixed the correctness issue of unpacking non-contiguous Linear weight #1419

    • oneDNN update #1488

Known Issues

Please check at Known Issues webpage.

1.13.100

Highlights

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v1.13.0+cpu...v1.13.100+cpu

1.13.0

We are pleased to announce the release of Intel® Extension for PyTorch* 1.13.0-cpu which accompanies PyTorch 1.13. This release is highlighted by quite a few usability features that help users get good performance and accuracy on CPU with less effort. We also added a couple of performance features, as always. Check out the feature summary below.

  • Usability Features

  1. Automatic channels last format conversion: Channels last conversion is now applied automatically to PyTorch modules with ipex.optimize by default. Users don’t have to explicitly convert input and weight for CV models.

  2. Code-free optimization (experimental): ipex.optimize is automatically applied to PyTorch modules without the need of code changes when the PyTorch program is started with the Intel® Extension for PyTorch* launcher via the new --auto-ipex option.

  3. Graph capture mode of ipex.optimize (experimental): A new boolean flag graph_mode (default off) was added to ipex.optimize; when turned on, it converts the eager-mode PyTorch module into graph(s) to get the best of graph optimization.

  4. INT8 quantization accuracy autotune (experimental): A new quantization API ipex.quantization.autotune was added to refine the default Intel® Extension for PyTorch* quantization recipe via autotuning algorithms for better accuracy.

  5. Hypertune (experimental) is a new tool added on top of the Intel® Extension for PyTorch* launcher to automatically identify good configurations for the best throughput via hyper-parameter tuning.

  6. ipexrun: The counterpart of torchrun, a shortcut added for invoking the Intel® Extension for PyTorch* launcher.

  • Performance Features

  1. Packed MKL SGEMM landed as the default kernel option for FP32 Linear, bringing up to a 20% geomean speedup for real-time NLP tasks.

  2. The DL compiler is now turned on by default with oneDNN fusion and gives an additional performance boost for INT8 models.

Highlights

  • Automatic channels last format conversion: Channels last conversion is now applied to PyTorch modules automatically with ipex.optimize by default for both training and inference scenarios. Users don’t have to explicitly convert input and weight for CV models.

    import intel_extension_for_pytorch as ipex
    # No need to do explicitly format conversion
    # m = m.to(format=torch.channels_last)
    # x = x.to(format=torch.channels_last)
    # for inference
    m = ipex.optimize(m)
    m(x)
    # for training
    m, optimizer = ipex.optimize(m, optimizer)
    m(x)
    
  • Code-free optimization (experimental): ipex.optimize is automatically applied to PyTorch modules without the need of code changes when the PyTorch program is started with the Intel® Extension for PyTorch* launcher via the new --auto-ipex option.

    Example: QA case in HuggingFace

    # original command
    ipexrun --use_default_allocator --ninstance 2 --ncore_per_instance 28 run_qa.py \
      --model_name_or_path bert-base-uncased --dataset_name squad --do_eval \
      --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 \
      --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/
    
    # automatically apply bfloat16 optimization (--auto-ipex --dtype bfloat16)
    ipexrun --use_default_allocator --ninstance 2 --ncore_per_instance 28 --auto_ipex --dtype bfloat16 run_qa.py \
      --model_name_or_path bert-base-uncased --dataset_name squad --do_eval \
      --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 \
      --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/
    
  • Graph capture mode of ipex.optimize (experimental): A new boolean flag graph_mode (default off) was added to ipex.optimize; when turned on, it converts the eager-mode PyTorch module into graph(s) to get the best of graph optimization. Under the hood, it combines the goodness of both TorchScript tracing and TorchDynamo to get as large a graph scope as possible. Currently, it only supports FP32 and BF16 inference; INT8 inference and training support are under way.

    import intel_extension_for_pytorch as ipex
    model = ...
    model.load_state_dict(torch.load(PATH))
    model.eval()
    optimized_model = ipex.optimize(model, graph_mode=True)
    
  • INT8 quantization accuracy autotune (experimental): A new quantization API ipex.quantization.autotune was added to refine the default Intel® Extension for PyTorch* quantization recipe via autotuning algorithms for better accuracy. This is an optional API to invoke (after prepare and before convert) for scenarios where the accuracy of the default Intel® Extension for PyTorch* quantization recipe cannot meet the requirement. The current implementation is powered by Intel® Neural Compressor.

    import intel_extension_for_pytorch as ipex
    # Calibrate the model
    qconfig = ipex.quantization.default_static_qconfig
    calibrated_model = ipex.quantization.prepare(model_to_be_calibrated, qconfig, example_inputs=example_inputs)
    for data in calibration_data_set:
        calibrated_model(data)
    # Autotune the model
    calib_dataloader = torch.utils.data.DataLoader(...)
    def eval_func(model):
        # Return accuracy value
        ...
        return accuracy
    tuned_model = ipex.quantization.autotune(
                     calibrated_model, calib_dataloader, eval_func,
                     sampling_sizes=[100], accuracy_criterion={'relative': 0.01}, tuning_time=0
                  )
    # Convert the model to jit model
    quantized_model = ipex.quantization.convert(tuned_model)
    with torch.no_grad():
        traced_model = torch.jit.trace(quantized_model, example_input)
        traced_model = torch.jit.freeze(traced_model)
    # Do inference
    y = traced_model(x)
    
  • Hypertune (experimental) is a new tool added on top of the Intel® Extension for PyTorch* launcher to automatically identify good configurations for the best throughput via hyper-parameter tuning.

    python -m intel_extension_for_pytorch.cpu.launch.hypertune --conf_file <your_conf_file> <your_python_script> [args]
    

Known Issues

Please check at Known Issues webpage.

1.12.300

Highlights

  • Optimize BF16 MHA fusion to avoid transpose overhead to boost BERT-* BF16 performance #992

  • Remove 64bytes alignment constraint for FP32 and BF16 AddLayerNorm fusion #992

  • Fix INT8 RetinaNet accuracy issue #1032

  • Fix Cat.out issue that does not update the out tensor (#1053) #1074

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v1.12.100...v1.12.300

1.12.100

This is a patch release to fix the AVX2 issue that blocks running on non-AVX512 platforms.

1.12.0

We are excited to bring you the release of Intel® Extension for PyTorch* 1.12.0-cpu, tightly following the PyTorch 1.12 release. In this release, we matured the automatic INT8 quantization and made it a stable feature. We stabilized the runtime extension and brought a MultiStreamModule feature to further boost throughput in offline inference scenarios. We also brought various enhancements in operators and graph fusion that benefit the performance of a broad set of workloads.

Highlights include:

  • Automatic INT8 quantization became a stable feature, baked into a well-tuned default quantization recipe, supporting both static and dynamic quantization and a wide range of calibration algorithms.

  • Runtime Extension, featuring MultiStreamModule, became a stable feature that can further enhance throughput in offline inference scenarios.

  • More optimizations in graph and operators to improve the performance of a broad set of models; examples include, but are not limited to, wav2vec, T5, and Albert.

  • A pre-built experimental binary with the oneDNN Graph Compiler turned on delivers additional performance gains for BERT, ALBERT, and RoBERTa in INT8 inference.

Highlights

  • Matured the automatic INT8 quantization feature, baked into a well-tuned default quantization recipe. We improved the user experience and provided a wide range of calibration algorithms like Histogram, MinMax, MovingAverageMinMax, etc. Meanwhile, we polished static quantization with better flexibility and enabled dynamic quantization as well. Compared to the previous version, the brief changes are as follows. Refer to the tutorial page for more details.

    v1.11.0-cpu:
    import intel_extension_for_pytorch as ipex
    # Calibrate the model
    qconfig = ipex.quantization.QuantConf(qscheme=torch.per_tensor_affine)
    for data in calibration_data_set:
        with ipex.quantization.calibrate(qconfig):
            model_to_be_calibrated(data)
    qconfig.save('qconfig.json')
    # Convert the model to jit model
    conf = ipex.quantization.QuantConf('qconfig.json')
    with torch.no_grad():
        traced_model = ipex.quantization.convert(model, conf, example_input)
    # Do inference
    y = traced_model(x)

    v1.12.0-cpu:
    import intel_extension_for_pytorch as ipex
    # Calibrate the model
    qconfig = ipex.quantization.default_static_qconfig # Histogram calibration algorithm and
    calibrated_model = ipex.quantization.prepare(model_to_be_calibrated, qconfig, example_inputs=example_inputs)
    for data in calibration_data_set:
        calibrated_model(data)
    # Convert the model to jit model
    quantized_model = ipex.quantization.convert(calibrated_model)
    with torch.no_grad():
        traced_model = torch.jit.trace(quantized_model, example_input)
        traced_model = torch.jit.freeze(traced_model)
    # Do inference
    y = traced_model(x)
    
  • Runtime Extension, featuring MultiStreamModule, became a stable feature. In this release, we enhanced the heuristic rule to further improve throughput in offline inference scenarios. Meanwhile, we also provide ipex.cpu.runtime.MultiStreamModuleHint to customize how to split the input into streams and concatenate the output from each stream.

    v1.11.0-cpu:
    import intel_extension_for_pytorch as ipex
    # Create CPU pool
    cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)
    # Create multi-stream model
    multi_Stream_model = ipex.cpu.runtime.MultiStreamModule(model, num_streams=2, cpu_pool=cpu_pool)

    v1.12.0-cpu:
    import intel_extension_for_pytorch as ipex
    # Create CPU pool
    cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)
    # Optional
    multi_stream_input_hint = ipex.cpu.runtime.MultiStreamModuleHint(0)
    multi_stream_output_hint = ipex.cpu.runtime.MultiStreamModuleHint(0)
    # Create multi-stream model
    multi_Stream_model = ipex.cpu.runtime.MultiStreamModule(model, num_streams=2, cpu_pool=cpu_pool,
      multi_stream_input_hint,   # optional
      multi_stream_output_hint ) # optional
    
  • Polished ipex.optimize to accept input shape information, from which it deduces the optimal memory layout for better kernel efficiency.

    v1.11.0-cpu:
    import intel_extension_for_pytorch as ipex
    model = ...
    model.load_state_dict(torch.load(PATH))
    model.eval()
    optimized_model = ipex.optimize(model, dtype=torch.bfloat16)

    v1.12.0-cpu:
    import intel_extension_for_pytorch as ipex
    model = ...
    model.load_state_dict(torch.load(PATH))
    model.eval()
    optimized_model = ipex.optimize(model, dtype=torch.bfloat16, sample_input=input)
    
  • Provided more optimizations in graph and operations

    • Fuse Adam to improve training performance #822

    • Enable Normalization operators to support channels-last 3D #642

    • Support Deconv3D to serve most models and implement most fusions like Conv

    • Enable LSTM to support static and dynamic quantization #692

    • Enable Linear to support dynamic quantization #787

    • Fusions.

      • Fuse Add + Swish to accelerate FSI Riskful model #551

      • Fuse Conv + LeakyReLU #589

      • Fuse BMM + Add #407

      • Fuse Concat + BN + ReLU #647

      • Optimize Convolution1D to support channels last memory layout and fuse GeLU as its post operation. #657

      • Fuse Einsum + Add to boost Alphafold2 #674

      • Fuse Linear + Tanh #711

Known Issues

  • RuntimeError: Overflow when unpacking long occurs when a tensor's min/max value exceeds the int range while performing INT8 calibration. Please customize the QConfig to use the min-max calibration method, as sketched below.
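
    A hedged sketch of such a customization is below; it uses the standard torch.ao.quantization observers rather than anything specific to this release, and the observer arguments are illustrative rather than a prescribed recipe.

    import torch
    from torch.ao.quantization import MinMaxObserver, PerChannelMinMaxObserver, QConfig
    import intel_extension_for_pytorch as ipex

    # Min-max observers avoid the histogram-based calibration that can hit the overflow.
    qconfig = QConfig(
        activation=MinMaxObserver.with_args(dtype=torch.quint8, qscheme=torch.per_tensor_affine),
        weight=PerChannelMinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_channel_symmetric),
    )

    model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
    prepared = ipex.quantization.prepare(model, qconfig,
                                         example_inputs=(torch.randn(1, 3, 32, 32),))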

  • When calibrating with quantize_per_tensor and benchmarking with 1 OpenMP* thread, results might be incorrect with large tensors (find more detailed info here). Editing your code following the pseudocode below can work around this issue if you do need to explicitly set OMP_NUM_THREADS=1 for benchmarking. However, there could be a performance regression if the oneDNN graph compiler prototype feature is utilized.

    Workaround pseudocode:

    # perform convert/trace/freeze with omp_num_threads > 1(N)
    torch.set_num_threads(N)
    prepared_model = prepare(model, input)
    converted_model = convert(prepared_model)
    traced_model = torch.jit.trace(converted_model, input)
    freezed_model = torch.jit.freeze(traced_model)
    # run freezed model to apply optimization pass
    freezed_model(input)
    
    # benchmarking with omp_num_threads = 1
    torch.set_num_threads(1)
    run_benchmark(freezed_model, input)
    
  • Low performance with INT8 support for dynamic shapes: The support for dynamic shapes in the Intel® Extension for PyTorch* INT8 integration is still a work in progress. When the input shapes are dynamic, for example inputs of variable image sizes in an object detection task or variable sequence lengths in NLP tasks, the Intel® Extension for PyTorch* INT8 path may slow down model inference. In this case, use stock PyTorch INT8 functionality. Note: When using the Runtime Extension feature, if the batch size cannot be divided evenly by the number of streams, the mini-batch sizes on the streams are not equivalent and scripts may run into this issue.

  • BF16 AMP (auto-mixed-precision) runs abnormally with the extension on AVX2-only machines if the topology contains Conv, Matmul, Linear, and BatchNormalization.

  • The MultiStreamModule in the runtime extension doesn't support DLRM inference, since the input of DLRM (EmbeddingBag specifically) can't simply be batch split.

  • The MultiStreamModule in the runtime extension has poor RNN-T inference performance compared with native throughput mode. Only part of the RNN-T model (joint_net specifically) can be JIT traced into a graph. However, in one batch inference, joint_net is invoked multiple times, which increases the overhead of MultiStreamModule for input batch splitting, thread synchronization, and output concatenation.

  • Incorrect Conv and Linear results if the number of OMP threads is changed at runtime: The oneDNN memory layout depends on the number of OMP threads, which requires the caller to detect changes in the number of OMP threads; this release has not implemented that yet.

  • Low throughput with DLRM FP32 training: A 'Sparse Add' PR is pending review. The issue will be fixed when the PR is merged.

  • If inference is done with a custom function, conv+bn folding feature of the ipex.optimize() function doesn’t work.

    import torch
    import intel_extension_for_pytorch as ipex
    class Module(torch.nn.Module):
        def __init__(self):
            super(Module, self).__init__()
            self.conv = torch.nn.Conv2d(1, 10, 5, 1)
            self.bn = torch.nn.BatchNorm2d(10)
            self.relu = torch.nn.ReLU()
        def forward(self, x):
            x = self.conv(x)
            x = self.bn(x)
            x = self.relu(x)
            return x
        def inference(self, x):
            return self.forward(x)
    if __name__ == '__main__':
        m = Module()
        m.eval()
        m = ipex.optimize(m, dtype=torch.float32, level="O0")
        d = torch.rand(1, 1, 112, 112)
        with torch.no_grad():
          m.inference(d)
    

    This is a PyTorch FX limitation. You can avoid this error by calling m = ipex.optimize(m, level="O0"), which doesn’t apply ipex optimization, or disable conv+bn folding by calling m = ipex.optimize(m, level="O1", conv_bn_folding=False).

1.11.200

Highlights

  • Enable more fused operators to accelerate particular models.

  • Fuse Convolution and LeakyReLU (#648)

  • Support torch.einsum and fuse it with add (#684)

  • Fuse Linear and Tanh (#685)

  • In addition to the original installation methods, this release provides Docker installation from DockerHub.

  • Provided evaluation wheel packages that could boost performance for selected topologies on top of the oneDNN graph compiler prototype feature. NOTE: This is still at an early development stage and not fully mature yet, but feel free to reach out through GitHub issues if you have any suggestions.

Full Changelog

1.11.0

We are excited to announce the Intel® Extension for PyTorch* 1.11.0-cpu release, tightly following the PyTorch 1.11 release. In extension 1.11, we focused on continually improving the out-of-box (OOB) user experience and performance. Highlights include:

  • Support a single binary with runtime dynamic dispatch based on AVX2/AVX512 hardware ISA detection

  • Support install binary from pip with package name only (without the need of specifying the URL)

  • Provide the C++ SDK installation to facilitate ease of C++ app development and deployment

  • Add more optimizations, including graph fusions for speeding up Transformer-based models and CNN, etc

  • Reduce the binary size for both the PIP wheel and C++ SDK (2X to 5X reduction from the previous version)

Highlights

  • Combined the AVX2 and AVX512 binaries into a single binary that automatically dispatches to different implementations based on hardware ISA detection at runtime. The typical use case is serving a data center that mixes AVX2-only and AVX512 platforms. Compared to the previous version, there is no need to deploy different ISA-specific binaries anymore.

    NOTE: The extension uses the oneDNN library as the backend. However, the BF16 and INT8 operator sets and features differ between AVX2 and AVX512. Refer to the oneDNN documentation for more details.

    When one input is of type u8, and the other one is of type s8, oneDNN assumes the user will choose the quantization parameters so no overflow/saturation occurs. For instance, a user can use u7 [0, 127] instead of u8 for the unsigned input, or s7 [-64, 63] instead of the s8 one. It is worth mentioning that this is required only when the Intel AVX2 or Intel AVX512 Instruction Set is used.

  • The extension wheel packages have been uploaded to pypi.org. Users can directly install the extension with pip/pip3 without explicitly specifying the binary location URL.

v1.10.100-cpu:
python -m pip install intel_extension_for_pytorch==1.10.100 -f https://software.intel.com/ipex-whl-stable

v1.11.0-cpu:
pip install intel_extension_for_pytorch
  • Compared to the previous version, this release provides a dedicated installation file for the C++ SDK. The installation file automatically detects the PyTorch C++ SDK location and installs the extension C++ SDK files to the PyTorch C++ SDK. The user does not need to manually add the extension C++ SDK source files and CMake to the PyTorch SDK. In addition to that, the installation file reduces the C++ SDK binary size from ~220MB to ~13.5MB.

v1.10.100-cpu:
intel-ext-pt-cpu-libtorch-shared-with-deps-1.10.0+cpu.zip (220M)
intel-ext-pt-cpu-libtorch-cxx11-abi-shared-with-deps-1.10.0+cpu.zip (224M)

v1.11.0-cpu:
libintel-ext-pt-1.11.0+cpu.run (13.7M)
libintel-ext-pt-cxx11-abi-1.11.0+cpu.run (13.5M)
  • Add more optimizations, including more custom operators and fusions.

    • Fuse the QKV linear operators as a single Linear to accelerate the Transformer*(BERT-*) encoder part - #278.

    • Remove Multi-Head-Attention fusion limitations to support the 64bytes unaligned tensor shape. #531

    • Fold the binary operator to Convolution and Linear operator to reduce computation. #432 #438 #602

    • Replace the out-of-place operators with their corresponding in-place versions to reduce the memory footprint. The extension currently supports operators including silu, sigmoid, tanh, hardsigmoid, hardswish, relu6, relu, selu, and softmax. #524

    • Fuse the Concat + BN + ReLU as a single operator. #452

    • Optimize Conv3D for both imperative and JIT by enabling NHWC and pre-packing the weight. #425

  • Reduce the binary size. The C++ SDK is reduced from ~220MB to ~13.5MB while the wheel package is reduced from ~100MB to ~40MB.

  • Update oneDNN and oneDNN graph to 2.5.2 and 0.4.2 respectively.

What’s Changed

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v1.10.100...v1.11.0

1.10.100

This release is meant to fix the following issues:

  • Resolve the issue that the PyTorch Tensor Expression(TE) did not work after importing the extension.

  • Wrapped BatchNorm (BN) as another operator to break TE's BN-related fusions, because the BatchNorm implementation of PyTorch Tensor Expression cannot achieve the same performance as PyTorch ATen BN.

  • Update the documentation

    • Fix the INT8 quantization example issue #205

    • Polish the installation guide

1.10.0

Intel® Extension for PyTorch* 1.10 is built on top of PyTorch 1.10. In this release, we polished the front-end APIs, which are now simpler, more stable, and more straightforward. Following the PyTorch community's recommendation, we changed the underlying device from XPU to CPU. With this change, the model and tensors do not need to be converted to the extension device to get performance improvements, which simplifies the model changes.

Besides that, we continuously optimized Transformer* and CNN models by fusing more operators and applying NHWC. We measured the 1.10 performance on TorchVision and Hugging Face models. As expected, 1.10 speeds up both model zoos.

Highlights

  • Changed the package name to intel_extension_for_pytorch from the original package name intel_pytorch_extension. This change aims to avoid any potential legal issues.

v1.9.0-cpu:
import intel_pytorch_extension as ipex

v1.10.0-cpu:
import intel_extension_for_pytorch as ipex
  • The underlying device is changed from the extension-specific device (XPU) to the standard CPU device, aligning with the PyTorch CPU device design regardless of the dispatch mechanism and operator registration mechanism. This means the model does not need to be converted to the extension device explicitly.

v1.9.0-cpu:

import torch
import torchvision.models as models

# Import the extension
import intel_pytorch_extension as ipex

resnet18 = models.resnet18(pretrained = True)

# Explicitly convert the model to the extension device
resnet18_xpu = resnet18.to(ipex.DEVICE)

v1.10.0-cpu:
import torch
import torchvision.models as models

# Import the extension
import intel_extension_for_pytorch as ipex

resnet18 = models.resnet18(pretrained = True)
  • Compared to v1.9.0, v1.10.0 follows the PyTorch AMP API (torch.cpu.amp) to support auto mixed precision. torch.cpu.amp provides convenient automatic data type conversion at runtime. Currently, torch.cpu.amp only supports torch.bfloat16, which is the default lower-precision floating point data type when torch.cpu.amp is enabled. torch.cpu.amp primarily benefits Intel CPUs with BFloat16 instruction set support.

import torch
class SimpleNet(torch.nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.conv = torch.nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=(1, 1), bias=False)

    def forward(self, x):
        return self.conv(x)
v1.9.0-cpu:

# Import the extension
import intel_pytorch_extension as ipex

# Automatically mix precision
ipex.enable_auto_mixed_precision(mixed_dtype = torch.bfloat16)

model = SimpleNet().eval()
x = torch.rand(64, 64, 224, 224)
with torch.no_grad():
    model = torch.jit.trace(model, x)
    model = torch.jit.freeze(model)
    y = model(x)

v1.10.0-cpu:
# Import the extension
import intel_extension_for_pytorch as ipex

model = SimpleNet().eval()
x = torch.rand(64, 64, 224, 224)
with torch.cpu.amp.autocast(), torch.no_grad():
    model = torch.jit.trace(model, x)
    model = torch.jit.freeze(model)
    y = model(x)
  • The 1.10 release provides INT8 calibration as an experimental feature; it only supports post-training static quantization for now. Compared to 1.9.0, the front-end APIs for quantization are more straightforward and easier to use.

import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv = nn.Conv2d(10, 10, 3)

    def forward(self, x):
        x = self.conv(x)
        return x

model = MyModel().eval()

# user dataset for calibration.
xx_c = [torch.randn(1, 10, 28, 28) for i in range(2)]
# user dataset for validation.
xx_v = [torch.randn(1, 10, 28, 28) for i in range(20)]
  • Calibration

v1.9.0-cpu:

# Import the extension
import intel_pytorch_extension as ipex

# Convert the model to the Extension device
model = model.to(ipex.DEVICE)

# Create a configuration file to save quantization parameters.
conf = ipex.AmpConf(torch.int8)
with torch.no_grad():
    for x in xx_c:
        # Run the model under calibration mode to collect quantization parameters
        with ipex.AutoMixPrecision(conf, running_mode='calibration'):
            y = model(x.to(ipex.DEVICE))
# Save the configuration file
conf.save('configure.json')

v1.10.0-cpu:
# Import the extension
import intel_extension_for_pytorch as ipex

conf = ipex.quantization.QuantConf(qscheme=torch.per_tensor_affine)
with torch.no_grad():
    for x in xx_c:
        with ipex.quantization.calibrate(conf):
            y = model(x)

conf.save('configure.json')
  • Inference

v1.9.0-cpu:

# Import the extension
import intel_pytorch_extension as ipex

# Convert the model to the Extension device
model = model.to(ipex.DEVICE)
conf = ipex.AmpConf(torch.int8, 'configure.json')
with torch.no_grad():
    for x in xx_v:
        with ipex.AutoMixPrecision(conf, running_mode='inference'):
            y = model(x.to(ipex.DEVICE))

v1.10.0-cpu:
# Import the extension
import intel_extension_for_pytorch as ipex

conf = ipex.quantization.QuantConf('configure.json')

with torch.no_grad():
    trace_model = ipex.quantization.convert(model, conf, example_input)
    for x in xx_v:
        y = trace_model(x)
  • This release introduces the optimize API at the Python front end to optimize the model and optimizer for training. The new API supports both FP32 and BF16, for both inference and training.
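
A small sketch of the training usage is below, mirroring the pattern shown in the 1.13.0 highlights above; the toy model, optimizer, and loss are placeholders.

import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()

# For training, pass the optimizer as well; both come back optimized.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
with torch.cpu.amp.autocast():
    loss = criterion(model(x), y)
loss.backward()
optimizer.step()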

  • Runtime Extension (Experimental) provides a runtime CPU pool API to bind threads to cores. It also features async tasks. Note: the Intel® Extension for PyTorch* Runtime extension is still in the experimental stage; the API is subject to change. More detailed descriptions are available in the extension documentation.
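
A hedged sketch of the CPU pool and async task usage follows; the pin context manager and the Task wrapper (including get() on the returned future) are recalled from the runtime extension documentation, so treat the exact names as assumptions and confirm them there.

import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(64, 64).eval()
x = torch.randn(4, 64)

# Bind computation to the cores of NUMA node 0 (assumed API, see the runtime extension docs).
cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)
with ipex.cpu.runtime.pin(cpu_pool):
    y = model(x)

# Submit an asynchronous task on the same pool (assumed API, see the runtime extension docs).
task = ipex.cpu.runtime.Task(model, cpu_pool)
y_future = task(x)
y_async = y_future.get()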

Known Issues

  • The omp_set_num_threads function fails to change the number of OpenMP threads for oneDNN operators if it was set before.

    The omp_set_num_threads function is provided in Intel® Extension for PyTorch* to change the number of threads used with OpenMP. However, it fails to change the number of OpenMP threads if it was set before.

    pseudo-code:

    omp_set_num_threads(6)
    model_execution()
    omp_set_num_threads(4)
    same_model_execution_again()
    

    Reason: The oneDNN primitive descriptor stores the OMP number of threads, and the current oneDNN integration caches the primitive descriptor in IPEX. So if we use the runtime extension with oneDNN-based PyTorch/IPEX operations, the runtime extension fails to change the number of OMP threads being used.

  • Low performance with INT8 support for dynamic shapes

    The support for dynamic shapes in Intel® Extension for PyTorch* INT8 integration is still work in progress. When the input shapes are dynamic, for example, inputs of variable image sizes in an object detection task or of variable sequence lengths in NLP tasks, the Intel® Extension for PyTorch* INT8 path may slow down the model inference. In this case, use stock PyTorch INT8 functionality.

  • Low throughput with DLRM FP32 Train

    A ‘Sparse Add’ PR is pending review. The issue will be fixed when the PR is merged.

What’s Changed

Full Changelog: https://github.com/intel/intel-extension-for-pytorch/compare/v1.9.0...v1.10.0+cpu-rc3

1.9.0

What’s New

  • Rebased the Intel Extension for Pytorch from PyTorch-1.8.0 to the official PyTorch-1.9.0 release.

  • Support binary installation.

    python -m pip install torch_ipex==1.9.0 -f https://software.intel.com/ipex-whl-stable

  • Support the C++ library. Third-party apps can link the Intel-Extension-for-PyTorch C++ library to enable particular optimizations.

1.8.0

What’s New

  • Rebased the Intel Extension for PyTorch from PyTorch 1.7.0 to the official PyTorch 1.8.0 release. The new XPU device type has been added into PyTorch 1.8.0 (49786), so there is no need to patch PyTorch to enable Intel Extension for PyTorch anymore.

  • Upgraded the oneDNN from v1.5-rc to v1.8.1

  • Updated the README file to add the sections to introduce supported customized operators, supported fusion patterns, tutorials, and joint blogs with stakeholders

1.2.0

What’s New

  • We rebased the Intel Extension for PyTorch from PyTorch 1.5-rc3 to the official PyTorch 1.7.0 release. It brings performance improvements with the new PyTorch 1.7 support.

  • Device name was changed from DPCPP to XPU.

    We changed the device name from DPCPP to XPU to align with the future Intel GPU product for heterogeneous computation.

  • Enabled the launcher for end users.

  • We enabled the launch script that helps users launch the program for training and inference, and automatically sets up the strategy for multi-threading, multi-instance execution, and the memory allocator. Refer to the launch script comments for more details.

Performance Improvement

  • This upgrade provides better INT8 optimization with refined auto mixed-precision API.

  • More operators are optimized for INT8 inference and BF16 training of some key workloads, like MaskRCNN, SSD-ResNet34, DLRM, and RNN-T.

Others

  • Bug fixes

    • This upgrade fixes the issue that saving the model trained by Intel extension for PyTorch caused errors.

    • This upgrade fixes the issue that Intel extension for PyTorch was slower than pytorch proper for Tacotron2.

  • New custom operators

    This upgrade adds several custom operators: ROIAlign, RNN, FrozenBatchNorm, nms.

  • Optimized operators/fusion

    This upgrade optimizes several operators: tanh, log_softmax, upsample, and embedding_bag, and enables INT8 linear fusion.

  • Performance

    The release has daily automated testing for the supported models: ResNet50, ResNext101, Huggingface Bert, DLRM, Resnext3d, MaskRCNN, and SSD-ResNet34. With the extension imported, it can bring up to 2x INT8 over FP32 inference performance improvements on 3rd Gen Intel Xeon Scalable processors (formerly codenamed Cooper Lake).

Known issues

  • Multi-node training still encounters hang issues after several iterations. The fix will be included in the next official release.

1.1.0

What’s New

  • Added optimization for training with FP32 data type & BF16 data type. All the optimized FP32/BF16 backward operators include:

    • Conv2d

    • Relu

    • Gelu

    • Linear

    • Pooling

    • BatchNorm

    • LayerNorm

    • Cat

    • Softmax

    • Sigmoid

    • Split

    • Embedding_bag

    • Interaction

    • MLP

  • More fusion patterns are supported and validated in the release, see table:

Fusion Patterns   Release
Conv + Sum        v1.0
Conv + BN         v1.0
Conv + Relu       v1.0
Linear + Relu     v1.0
Conv + Eltwise    v1.1
Linear + Gelu     v1.1
  • Add docker support

  • [Alpha] Multi-node training with oneCCL support.

  • [Alpha] INT8 inference optimization.

Performance

  • The release has daily automated testing for the supported models: ResNet50, ResNext101, Huggingface Bert, DLRM, Resnext3d, and Transformer. With the extension imported, it can bring up to 1.2x to 1.7x BF16 over FP32 training performance improvements on 3rd Gen Intel Xeon Scalable processors (formerly codenamed Cooper Lake).

Known issue

  • Some workloads may crash after several iterations on the extension with jemalloc enabled.

1.0.2

  • Rebase torch CCL patch to PyTorch 1.5.0-rc3

1.0.1-Alpha

  • Static link oneDNN library

  • Check AVX512 build option

  • Fix the issue that enable_auto_optimization could not be invoked normally

1.0.0-Alpha

What’s New

  • Auto Operator Optimization

    Intel Extension for PyTorch will automatically optimize the operators of PyTorch when its Python package is imported. It will significantly improve computation performance if the input tensor and the model are converted to the extension device.

  • Auto Mixed Precision: Currently, the extension supports bfloat16, which streamlines the work to enable a bfloat16 model. The feature is controlled by enable_auto_mix_precision. If you enable it, the extension will run operators with bfloat16 automatically to accelerate the computation.

Performance Result

We collected the performance data of some models on the Intel Cooper Lake platform with 1 socket and 28 cores. Intel Cooper Lake introduced AVX512 BF16 instructions that can improve bfloat16 computation significantly. The details are as follows (the data are speedup ratios and the baseline is upstream PyTorch).

Model        Imperative - Operator Injection   Imperative - Mixed Precision   JIT - Operator Injection   JIT - Mixed Precision
RN50         2.68                              5.01                           5.14                       9.66
ResNet3D     3.00                              4.67                           5.19                       8.39
BERT-LARGE   0.99                              1.40                           N/A                        N/A

We also measured the performance of ResNeXt101, Transformer-FB, DLRM, and YOLOv3 with the extension. We observed that the performance could be significantly improved by the extension as expected.

Known issue

  • #10 All data types have not been registered for DPCPP

  • #37 MaxPool can’t get nan result when input’s value is nan

NOTE

The extension supports PyTorch v1.5.0-rc3. Support for other PyTorch versions is a work in progress.