Installation Guide

System Requirements

Category            Content
Compiler            GCC 10 is recommended
Operating System    CentOS 7, RHEL 8, Rocky Linux 8.5, Ubuntu newer than 18.04
Python              See the prebuilt wheel files availability matrix below
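
To confirm that the compiler on your machine meets the recommended GCC 10, you can check its version before installing:

gcc --version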

Install PyTorch

Make sure PyTorch is installed so that the extension will work properly. For each PyTorch release, we have a corresponding release of the extension. Here are the PyTorch versions that we support and the mapping relationship:

Please install the CPU version of PyTorch through its official channel. For more details, refer to pytorch.org.
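
For example, a pip command for a CPU-only PyTorch build typically looks like the following; the exact command and version for your platform are listed on pytorch.org, and the version shown here is only illustrative, so pick the one that matches your extension release:

python -m pip install torch==1.12.1+cpu --extra-index-url https://download.pytorch.org/whl/cpu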


Note:

For extension versions earlier than 1.8.0, a patch has to be manually applied to the PyTorch source code. Check that version’s installation guide.

Starting from 1.8.0, compiling PyTorch from source is no longer required. If you still want to compile PyTorch yourself, follow these installation instructions, and make sure to check out the correct PyTorch version according to the table above.
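
For instance, checking out a matching PyTorch tag before building from source could look like the sketch below; the tag shown is only an example, so substitute the version that corresponds to your extension release:

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# example tag only; check out the PyTorch version matching your extension release
git checkout v1.12.1
git submodule sync
git submodule update --init --recursive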


Install via wheel file

Prebuilt wheel files availability matrix for Python versions

Extension Version   Python 3.6   Python 3.7   Python 3.8   Python 3.9   Python 3.10
1.12.100                         ✔️           ✔️           ✔️           ✔️
1.12.0                           ✔️           ✔️           ✔️           ✔️
1.11.200                         ✔️           ✔️           ✔️           ✔️
1.11.0                           ✔️           ✔️           ✔️           ✔️
1.10.100            ✔️           ✔️           ✔️           ✔️
1.10.0              ✔️           ✔️           ✔️           ✔️
1.9.0               ✔️           ✔️           ✔️           ✔️
1.8.0                            ✔️

Note: Intel® Extension for PyTorch* has a PyTorch version requirement. Check the mapping table above.

Starting from 1.11.0, you can install the package with the normal pip command:

python -m pip install intel_extension_for_pytorch

Alternatively, you can install the latest version with the following command:

python -m pip install intel_extension_for_pytorch --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/

For prebuilt wheel files with oneDNN Graph Compiler, use the following command to perform the installation:

python -m pip install intel_extension_for_pytorch --extra-index-url https://pytorch-extension.intel.com/release-whl/dev/cpu/us/

Note: For versions before 1.10.0, use the package name torch_ipex rather than intel_extension_for_pytorch.

Note: To install a package with a specific version, run the following command:

python -m pip install <package_name>==<version_name> --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
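
For example, to install the 1.12.100 wheel listed in the availability matrix above:

python -m pip install intel_extension_for_pytorch==1.12.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/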

Install via source compilation

git clone --recursive https://github.com/intel/intel-extension-for-pytorch
cd intel-extension-for-pytorch
git checkout v1.12.100

# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive

python setup.py install
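
Whichever installation method you use, a quick sanity check is to import both packages and print their versions, the same check used for the Docker containers below:

python -c "import torch; import intel_extension_for_pytorch as ipex; print('torch:', torch.__version__,' ipex:',ipex.__version__)"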

Install via Docker container

Build Docker container from Dockerfile

Run the following commands to build the pip-based deployment container:

$ cd docker
$ DOCKER_BUILDKIT=1 docker build -f Dockerfile.pip -t intel-extension-for-pytorch:pip .
$ docker run --rm intel-extension-for-pytorch:pip python -c "import torch; import intel_extension_for_pytorch as ipex; print('torch:', torch.__version__,' ipex:',ipex.__version__)"

Run the following commands to build the conda-based development container:

$ cd docker
$ DOCKER_BUILDKIT=1 docker build -f Dockerfile.conda -t intel-extension-for-pytorch:conda .
$ docker run --rm intel-extension-for-pytorch:conda python -c "import torch; import intel_extension_for_pytorch as ipex; print('torch:', torch.__version__,' ipex:',ipex.__version__)"

Get Docker container from DockerHub

Pre-built docker images are available at DockerHub.

Run the following command to pull the image to your local machine.

docker pull intel/intel-optimized-pytorch:latest
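
As with the locally built images, you can run a quick import check against the pulled image; this assumes the prebuilt image ships with the extension installed:

docker run --rm intel/intel-optimized-pytorch:latest python -c "import torch; import intel_extension_for_pytorch as ipex; print('torch:', torch.__version__,' ipex:',ipex.__version__)"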

Install C++ SDK

Usage: For versions newer than 1.11.0, download the run file above that matches your scenario, run the following command to install it, and follow the C++ example.

bash <libintel-ext-pt-name>.run install <libtorch_path>
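
For example, assuming a downloaded run file named libintel-ext-pt-cpu-1.12.100.run (the actual file name depends on the package you downloaded) and libtorch unpacked at ./libtorch:

bash libintel-ext-pt-cpu-1.12.100.run install ./libtorch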

You can get the full usage help message by running the run file alone, as shown in the following command.

bash <libintel-ext-pt-name>.run

Usage: For versions before 1.11.0, download the zip file above that matches your scenario, unzip it, and follow the C++ example.