Blogs & Publications
Get Started with Intel® Extension for PyTorch* on GPU | Intel Software, Mar 2023
Accelerate PyTorch* INT8 Inference with New “X86” Quantization Backend on X86 CPUs, Mar 2023
Accelerating PyTorch Transformers with Intel Sapphire Rapids, Part 1, Jan 2023
Scaling inference on CPUs with TorchServe, PyTorch Conference, Dec 2022
What is New in Intel Extension for PyTorch, PyTorch Conference, Dec 2022
Accelerating PyTorch Deep Learning Models on Intel XPUs, Dec 2022
Introducing the Intel® Extension for PyTorch* for GPUs, Dec 2022
PyTorch Stable Diffusion Using Hugging Face and Intel Arc, Nov 2022
PyTorch 1.13: New Potential for AI Developers to Enhance Model Performance and Accuracy, Nov 2022
Easy Quantization in PyTorch Using Fine-Grained FX, Sep 2022
Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16, Aug 2022
Accelerating PyTorch Vision Models with Channels Last on CPU, Aug 2022
One-Click Enabling of Intel Neural Compressor Features in PyTorch Scripts, Aug 2022
PyTorch Inference Acceleration with Intel® Neural Compressor, Jun 2022
Accelerating PyTorch with Intel® Extension for PyTorch, May 2022
Grokking PyTorch Intel CPU performance from first principles (Part 1), Apr 2022
Grokking PyTorch Intel CPU performance from first principles (Part 2), Apr 2022
KT Optimizes Performance for Personalized Text-to-Speech, Nov 2021
Accelerating PyTorch distributed fine-tuning with Intel technologies, Nov 2021
Scaling up BERT-like model Inference on modern CPU - Part 1, Apr 2021
Scaling up BERT-like model Inference on modern CPU - Part 2, Nov 2021
Optimizing DLRM by using PyTorch with oneCCL Backend, Feb 2021
Accelerate PyTorch with IPEX and oneDNN using Intel BF16 Technology, Feb 2021 (Note: the APIs mentioned in this article are deprecated)
Intel and Facebook* collaborate to boost PyTorch* CPU performance, Apr 2019
Intel and Facebook* Collaborate to Boost Caffe*2 Performance on Intel CPUs, Apr 2017