Overview
This guide shows how to build an Intel® Extension for TensorFlow* PyPI package from source and install it on Ubuntu 22.04 (64-bit).
Normally, you would install the latest released version of Intel® Extension for TensorFlow* with a pip install
command. There are times, though, when you might need to build from source code:
You want the latest features from the development branch.
You want to develop a feature or contribute to Intel® Extension for TensorFlow*.
You want to verify your code changes.
Requirements
Hardware Requirements
Verified Hardware Platforms:
Intel® CPU (Xeon, Core)
Intel® Arc™ Graphics (experimental)
Common Requirements
Install Bazel
To build Intel® Extension for TensorFlow*, install Bazel 5.3.0. Refer to install Bazel.
Here are the recommended commands:
$ wget https://github.com/bazelbuild/bazel/releases/download/5.3.0/bazel-5.3.0-installer-linux-x86_64.sh
$ bash bazel-5.3.0-installer-linux-x86_64.sh --user
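With the --user flag, the installer places the bazel executable in $HOME/bin. If that directory is not already on your PATH, add it for the current shell:
$ export PATH="$PATH:$HOME/bin"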
Check that Bazel is installed successfully and is version 5.3.0:
$ bazel --version
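If the installation succeeded, the output reports the pinned version, for example:
bazel 5.3.0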
Download Source Code
$ git clone https://github.com/intel/intel-extension-for-tensorflow.git intel-extension-for-tensorflow
$ cd intel-extension-for-tensorflow/
Create a Conda Environment
Install Conda.
Create a Virtual Environment
$ conda create -n itex_build python=3.10
$ conda activate itex_build
Note: Python versions 3.9 through 3.11 are supported.
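You can confirm that the environment is active and uses the expected interpreter:
$ python --version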
Install TensorFlow
Install TensorFlow 2.15.0, and refer to Install TensorFlow for details.
$ pip install tensorflow==2.15.0
Check TensorFlow was installed successfully and is version 2.15.0:
$ python -c "import tensorflow as tf;print(tf.__version__)"
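The command should print the installed version:
2.15.0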
Optional Requirements for CPU Build Only
Install Clang-17 compiler
The Intel® Extension for TensorFlow* CPU build uses Clang-17 as its default compiler instead of GCC; you can switch back to GCC during the configure step. Clang-17 can be installed through apt on Ubuntu, or built from source on other systems. Check https://apt.llvm.org/ for more details.
# Ubuntu 22.04
# Add in /etc/apt/sources.list
deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy main
deb-src http://apt.llvm.org/jammy/ llvm-toolchain-jammy main
# Clang-17
deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-17 main
deb-src http://apt.llvm.org/jammy/ llvm-toolchain-jammy-17 main
$ wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
$ sudo apt update
$ sudo apt-get install clang-17 lldb-17 lld-17
$ sudo apt-get install libomp-17-dev
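To confirm the compiler is available, check its version (the exact patch level may differ):
$ clang-17 --version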
To build Clang-17 from source, use the following commands from https://llvm.org/docs/CMake.html:
# CMake 3.20.0 or newer is required
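# Assumes the LLVM 17 sources are already checked out (e.g., an llvmorg-17.x tag of https://github.com/llvm/llvm-project) and that path/to/llvm-17/ points at the llvm/ subdirectory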
$ mkdir mybuilddir
$ cd mybuilddir
$ cmake -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_PROJECTS="clang;lldb;lld;openmp" path/to/llvm-17/
$ cmake --build . --parallel 100
$ cmake --build . --target install
Extra Requirements for XPU Build Only
Install Intel GPU Driver
Install the Intel GPU Driver on the build server; it is needed to build with GPU support and AOT (ahead-of-time compilation).
Refer to Install Intel GPU driver for details.
Note:
Make sure to install developer runtime packages before building Intel® Extension for TensorFlow*.
AOT (Ahead-of-time compilation)
AOT is a compile-time option that reduces GPU kernel initialization time at startup by generating binary code for a specified hardware platform during the build. AOT makes the installation package larger but improves startup performance.
Without AOT, Intel® Extension for TensorFlow* is translated to binary code for the local hardware platform at startup. That can prolong startup time to several minutes or more when using a GPU.
For more information, refer to Use AOT for Integrated Graphics (Intel GPU).
Install oneAPI Base Toolkit
We recommend you install the oneAPI Base Toolkit using sudo (or as the root user) to the system directory /opt/intel/oneapi.
The following commands assume the oneAPI Base Toolkit is installed in /opt/intel/oneapi. If you installed it in some other folder, update the oneAPI path as appropriate.
Refer to Install oneAPI Base Toolkit Packages for details.
The oneAPI Base Toolkit provides the compiler and libraries needed by Intel® Extension for TensorFlow*.
Enable oneAPI components:
$ source /opt/intel/oneapi/compiler/latest/env/vars.sh
$ source /opt/intel/oneapi/mkl/latest/env/vars.sh
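To confirm the compiler environment is active, a quick check (icpx is the Intel DPC++/C++ compiler shipped with the toolkit):
$ icpx --version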
Build Intel® Extension for TensorFlow* PyPI
Configure
Configure For CPU
Configure the system build by running the ./configure
command at the root of your cloned Intel® Extension for TensorFlow* source tree.
$ ./configure
First, choose n to build for CPU only; then choose the compiler: Y for Clang or n for GCC. Refer to the Configure Example for CPU.
Configure For XPU
Configure the system build by running the ./configure
command at the root of your cloned Intel® Extension for TensorFlow* source tree. This script prompts you for the location of Intel® Extension for TensorFlow* dependencies and asks for additional build configuration options (path to DPC++ compiler, for example).
$ ./configure
Choose Y for Intel GPU support. Refer to the Configure Example for XPU.
Specify the location of the compiler (DPC++). The default is /opt/intel/oneapi/compiler/latest/linux/, the default install path; press Enter to confirm it. If your install path is different, enter the correct compiler (DPC++) path.
Specify the Ahead of Time (AOT) Compilation Platforms.
Default is ‘’, which means no AOT.
Fill in one or more device-type strings for the target hardware platforms, such as ats-m150 or acm-g11 (separate multiple targets with commas). Here is the list of GPUs we've verified:
GPU | device type |
---|---|
Intel® Data Center GPU Flex Series 170 | ats-m150 |
Intel® Data Center GPU Flex Series 140 | ats-m75 |
Intel® Data Center GPU Max Series | pvc |
Intel® Arc™ A730M | acm-g10 |
Intel® Arc™ A380 | acm-g11 |
Refer to the Available GPU Platforms section at the end of the Ahead of Time Compilation document for more device types, or create an issue to ask for support.
To get the full list of supported device types, use the OpenCL™ Offline Compiler (OCLOC) tool (installed as part of the GPU driver). Run the following command and look for the -device <device_type> field in the output:
$ ocloc compile --help
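For example, to narrow the help text down to the device flag (a convenience one-liner, not required):
$ ocloc compile --help 2>&1 | grep -i device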
Choose whether to build with oneMKL support. We recommend choosing y. The default path is /opt/intel/oneapi/mkl/latest, the default install location; press Enter to confirm it. If your install path is different, enter the correct oneMKL path.
Build Source Code
For CPU:
$ bazel build -c opt --config=cpu //itex/tools/pip_package:build_pip_package
For XPU:
$ bazel build -c opt --config=xpu //itex/tools/pip_package:build_pip_package
Create the Pip Package
$ bazel-bin/itex/tools/pip_package/build_pip_package WHL/
This generates two wheels in the WHL directory:
intel_extension_for_tensorflow-*.whl
intel_extension_for_tensorflow_lib-*.whl
The intel_extension_for_tensorflow_lib wheel version differentiates between the CPU and XPU builds:
CPU version identifier is {ITEX_VERSION}.0
GPU version identifier is {ITEX_VERSION}.1 (deprecated; duplicate of the XPU version)
XPU version identifier is {ITEX_VERSION}.2
For example:
ITEX version | ITEX-lib CPU version | ITEX-lib XPU version |
---|---|---|
1.x.0 | 1.x.0.0 | 1.x.0.2 |
Install the Package
$ pip install ./intel_extension_for_tensorflow*.whl
or
$ pip install ./intel_extension_for_tensorflow-*.whl
$ pip install ./intel_extension_for_tensorflow_lib-*.whl
The installed files are located under path/to/site-packages/:
├── intel_extension_for_tensorflow
│   ├── libitex_common.so
│   └── python
│       └── _pywrap_itex.so
├── intel_extension_for_tensorflow_lib
├── tensorflow
├── tensorflow-plugins
│   ├── libitex_cpu.so # for CPU build
│   └── libitex_gpu.so # for XPU build
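After installation, a quick sanity check confirms the extension loads (the printed version will depend on your build):
$ python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"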
Additional
Configure Example for CPU
Here is example output and interaction you’d see while running the ./configure
script:
You have bazel 5.3.0 installed.
Python binary path: /path/to/envs/itex_build/bin/python
Found possible Python library paths:
['/path/to/envs/itex_build/lib/python3.9/site-packages']
Do you wish to build Intel® Extension for TensorFlow* with GPU support? [Y/n]: n
No GPU support will be enabled for Intel® Extension for TensorFlow*.
Do you want to use Clang to build ITEX host code? [Y/n]:
Clang will be used to compile ITEX host code.
Please specify the path to clang executable. [Default is /usr/lib/llvm-17/bin/clang]:
You have Clang 17.0.5 installed.
Only CPU support is available for Intel® Extension for TensorFlow*.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=cpu # Build Intel® Extension for TensorFlow* with CPU support.
Configuration finished
Configure Example For XPU
Here is example output and interaction you’d see while running the ./configure
script:
You have bazel 5.3.0 installed.
Python binary path: /path/to/envs/itex_build/bin/python
Found possible Python library paths:
['/path/to/envs/itex_build/lib/python3.9/site-packages']
Do you wish to build Intel® Extension for TensorFlow* with GPU support? [Y/n]:y
GPU support will be enabled for Intel® Extension for TensorFlow*.
Please specify the location where DPC++ is installed. [Default is /opt/intel/oneapi/compiler/latest/linux/]: /path/to/DPC++
Please specify the Ahead of Time(AOT) compilation platforms, separate with "," for multi-targets. [Default is ]: ats-m150
Do you wish to build Intel® Extension for TensorFlow* with MKL support? [y/N]:y
MKL support will be enabled for Intel® Extension for TensorFlow*.
Please specify the MKL toolkit folder. [Default is /opt/intel/oneapi/mkl/latest]: /path/to/oneMKL
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=xpu # Build Intel® Extension for TensorFlow* with GPU support.
NOTE: XPU mode, which supports both CPU and GPU, is disabled. "--config=xpu" only supports GPU, which is the same as "--config=gpu".
Configuration finished