Troubleshooting

General Usage

  • Problem: FP64 data type is unsupported on current platform.

  • Problem: Runtime error invalid device pointer occurs if import horovod.torch as hvd is executed before import intel_extension_for_pytorch.

    • Cause: Intel® Optimization for Horovod* uses utilities provided by Intel® Extension for PyTorch*. The improper import order causes Intel® Extension for PyTorch* to be unloaded before Intel® Optimization for Horovod* at the end of the execution and triggers this error.

    • Solution: Do import intel_extension_for_pytorch before import horovod.torch as hvd.
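
    A minimal sketch of the correct import order, guarded so it degrades gracefully on machines where one of the packages is not installed:

    ```python
    # Correct order: intel_extension_for_pytorch first, then horovod.torch,
    # so the extension stays loaded until Horovod tears down at exit.
    try:
        import intel_extension_for_pytorch  # noqa: F401  -- must come first
        import horovod.torch as hvd         # noqa: F401
        imported = True
    except ImportError as exc:
        imported = False
        print(f"missing dependency: {exc.name}")
    ```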

  • Problem: Number of dpcpp devices should be greater than zero.

    • Cause: This error may occur when you use Intel® Extension for PyTorch* in a conda environment. Conda ships its own libstdc++.so dynamic library, which may conflict with the one shipped with the OS.

    • Solution: Add the path of the OS copy of libstdc++.so to the LD_PRELOAD environment variable.
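
    A sketch of the workaround, assuming the distribution keeps its libstdc++ under /usr/lib (the exact path varies by distro and architecture):

    ```shell
    # Find the OS copy of libstdc++ (path is distro-dependent) and preload it
    # so it takes precedence over the copy bundled with conda.
    OS_LIBSTDCXX=$(find /usr/lib -name 'libstdc++.so.6' 2>/dev/null | head -n 1)
    export LD_PRELOAD=${OS_LIBSTDCXX}
    echo "LD_PRELOAD=${LD_PRELOAD}"
    ```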

  • Problem: -997 runtime error when running some AI models on Intel® Arc™ Graphics family.

    • Cause: Some -997 runtime errors are actually out-of-memory errors. Because Intel® Arc™ Graphics GPUs have less device memory than the Intel® Data Center GPU Flex Series 170 and Intel® Data Center GPU Max Series, running some AI models on them may trigger out-of-memory errors, which are most likely reported as a -997 runtime error. This is expected. Memory usage optimization is a work in progress that will allow Intel® Arc™ Graphics GPUs to support more AI models.

  • Problem: Building from source for Intel® Arc™ A-Series GPUs fails on WSL2 without any error thrown.

    • Cause: Your system probably does not have enough RAM, so the Linux kernel's Out-of-Memory (OOM) killer was invoked. You can verify this by running dmesg in a WSL2 terminal.

    • Solution: If the OOM killer did kill the build process, try increasing the swap size of WSL2 and/or decreasing the number of parallel build jobs with the environment variable MAX_JOBS (by default it equals the number of logical CPU cores, so setting MAX_JOBS to 1 is a very conservative approach that slows the build down considerably).
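
    For example, the parallelism can be capped before rebuilding (the halving heuristic is an assumption; tune it to your available RAM):

    ```shell
    # Cap parallel compile jobs to lower peak memory; half the logical cores
    # is a middle ground before falling back to the very conservative
    # MAX_JOBS=1.
    export MAX_JOBS=$(( ($(nproc) + 1) / 2 ))
    echo "building with MAX_JOBS=${MAX_JOBS}"
    # WSL2 swap can be raised in %UserProfile%\.wslconfig on the Windows side:
    #   [wsl2]
    #   swap=16GB
    ```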

  • Problem: Some workloads terminate with an error CL_DEVICE_NOT_FOUND after some time on WSL2.

    • Cause: This issue is due to the TDR feature on Windows.

    • Solution: Try increasing TDRDelay in your Windows Registry to a large value, such as 20 seconds (the default is 2 seconds), and reboot.
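
    For reference, the value lives under the GraphicsDrivers registry key; a sketch of setting it from an elevated Windows command prompt (the TdrDelay value name is an assumption based on standard Windows TDR settings; back up your registry first):

    ```
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 20 /f
    ```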

  • Problem: RuntimeError: Can’t add devices across platforms to a single context. -33 (PI_ERROR_INVALID_DEVICE).

    • Cause: This occurs when you run Intel® Extension for PyTorch* in a Windows environment where an Intel® discrete GPU and an integrated GPU co-exist, and the integrated GPU, which is not supported by Intel® Extension for PyTorch*, is wrongly identified as the first GPU platform.

    • Solution: Disable the integrated GPU in your environment as a workaround. In the long term, the Intel® Graphics Driver will always enumerate the discrete GPU as the first device, so that Intel® Extension for PyTorch* can provide the fastest device to framework users in such co-existence scenarios.
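
    To check which GPU is enumerated first, the visible XPU devices can be listed; a guarded sketch that degrades gracefully on machines without PyTorch or an XPU:

    ```python
    # List the XPU devices PyTorch can see; on an affected system the
    # integrated GPU would show up at index 0 ahead of the discrete GPU.
    devices = []
    try:
        import torch
        if hasattr(torch, "xpu") and torch.xpu.is_available():
            devices = [torch.xpu.get_device_name(i)
                       for i in range(torch.xpu.device_count())]
    except ImportError:
        pass
    print(devices or "no XPU device visible")
    ```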

  • Problem: RuntimeError: Failed to load the backend extension: intel_extension_for_pytorch. You can disable extension auto-loading with TORCH_DEVICE_BACKEND_AUTOLOAD=0.

    • Cause: This happens when you import a third-party library such as Transformers before import torch, and the third-party library depends on torch and therefore implicitly auto-loads intel_extension_for_pytorch, which introduces a circular import.

    • Solution: Disable extension auto-loading with TORCH_DEVICE_BACKEND_AUTOLOAD=0.
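
    A sketch of the workaround; alternatively, importing torch before the third-party library also avoids the circular import:

    ```shell
    # Turn off backend auto-loading for this run; PyTorch will then not import
    # intel_extension_for_pytorch implicitly while the third-party library
    # imports torch.
    export TORCH_DEVICE_BACKEND_AUTOLOAD=0
    echo "TORCH_DEVICE_BACKEND_AUTOLOAD=${TORCH_DEVICE_BACKEND_AUTOLOAD}"
    # python your_script.py  # hypothetical workload
    ```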

Library Dependencies

  • Problem: Cannot find oneMKL library when building Intel® Extension for PyTorch* without oneMKL.

    /usr/bin/ld: cannot find -lmkl_sycl
    /usr/bin/ld: cannot find -lmkl_intel_ilp64
    /usr/bin/ld: cannot find -lmkl_core
    /usr/bin/ld: cannot find -lmkl_tbb_thread
    dpcpp: error: linker command failed with exit code 1 (use -v to see invocation)
    
    • Cause: This linker issue may occur when PyTorch* is built with the oneMKL library but Intel® Extension for PyTorch* is built without it.

    • Solution: Resolve the issue by setting:

      export USE_ONEMKL=OFF
      export MKL_DPCPP_ROOT=${HOME}/intel/oneapi/mkl/latest
      

    Then clean build Intel® Extension for PyTorch*.

  • Problem: Undefined symbol: mkl_lapack_dspevd. Intel MKL FATAL ERROR: cannot load libmkl_vml_avx512.so.2 or libmkl_vml_def.so.2.

    • Cause: This issue may occur when Intel® Extension for PyTorch* is built with the oneMKL library and PyTorch* is not built with any MKL library. The oneMKL kernel may incorrectly run on the CPU backend and trigger this issue.

    • Solution: Resolve the issue by installing the oneMKL library from conda:

      conda install mkl
      conda install mkl-include
      

    Then clean build PyTorch*.

  • Problem: OSError: libmkl_intel_lp64.so.2: cannot open shared object file: No such file or directory.

    • Cause: The wrong MKL library is picked up when multiple MKL libraries exist on the system.

    • Solution: Preload oneMKL with:

      export LD_PRELOAD=${MKL_DPCPP_ROOT}/lib/intel64/libmkl_intel_lp64.so.2:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_intel_ilp64.so.2:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_gnu_thread.so.2:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_core.so.2:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_sycl.so.2
      

      If you continue seeing similar issues for other shared object files, add the corresponding files under ${MKL_DPCPP_ROOT}/lib/intel64/ to LD_PRELOAD. Note that the suffix of the libraries may change (e.g., from .1 to .2) if more than one oneMKL library is installed on the system.

  • Problem: Issues related to MPI environment variable configuration occur when running distributed tasks.

    • Cause: The MPI environment variables are not configured correctly.

    • Solution: Run conda deactivate and then conda activate to set the correct MPI environment variables automatically.

      conda deactivate
      conda activate
      
  • Problem: Runtime error related to the C++ compiler when using torch.compile: RuntimeError: Failed to find C++ compiler. Please specify via CXX environment variable.

    • Cause: The DPC++/C++ Compiler is not installed or not activated correctly.

    • Solution: Install the DPC++/C++ Compiler and activate it with the following commands.

      # {dpcpproot} is the DPC++ compiler root path, i.e. where you installed oneAPI DPC++; usually /opt/intel/oneapi/compiler/latest or ~/intel/oneapi/compiler/latest
      source {dpcpproot}/env/vars.sh
      
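    After sourcing vars.sh, it can be verified that a C++ compiler is visible to torch.compile (a sketch; icpx is the oneAPI DPC++/C++ compiler driver, and the g++ fallback is an assumption for systems without oneAPI):

    ```shell
    # Prefer the oneAPI compiler when it is on PATH; otherwise fall back to
    # g++, and export CXX so torch.compile can find it.
    if command -v icpx >/dev/null 2>&1; then
        export CXX=icpx
    else
        export CXX=g++
    fi
    echo "CXX=${CXX}"
    ```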
  • Problem: LoweringException: ImportError: cannot import name ‘intel’ from ‘triton._C.libtriton’

    • Cause: Installing Triton causes pytorch-triton-xpu to stop working.

    • Solution: Resolve the issue with the following commands:

      pip list | grep triton
      # If triton related packages are listed, remove them
      pip uninstall triton
      pip uninstall pytorch-triton-xpu
      # Reinstall correct version of pytorch-triton-xpu
      pip install --pre pytorch-triton-xpu==3.1.0+91b14bf559  --index-url https://download.pytorch.org/whl/nightly/xpu
      
  • Problem: ERROR: Cannot install dpcpp-cpp-rt and torch==2.6.0 because these package versions have conflicting dependencies.

    • Cause: intel-extension-for-pytorch v2.6.10+xpu uses Intel DPC++ Compiler 2025.0.4 to pick up a crucial bug fix in the unified runtime, while torch v2.6.0+xpu is pinned to 2025.0.2, so PyTorch and intel-extension-for-pytorch cannot be installed in one pip command.

    • Solution: Install PyTorch and intel-extension-for-pytorch with separate commands.

      python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu
      python -m pip install intel-extension-for-pytorch==2.6.10+xpu oneccl_bind_pt==2.6.0+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
      
  • Problem: ERROR: pip’s dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.

    torch 2.6.0+xpu requires intel-cmplr-lib-rt==2025.0.2, but you have intel-cmplr-lib-rt 2025.0.4 which is incompatible.
    torch 2.6.0+xpu requires intel-cmplr-lib-ur==2025.0.2, but you have intel-cmplr-lib-ur 2025.0.4 which is incompatible.
    torch 2.6.0+xpu requires intel-cmplr-lic-rt==2025.0.2, but you have intel-cmplr-lic-rt 2025.0.4 which is incompatible.
    torch 2.6.0+xpu requires intel-sycl-rt==2025.0.2, but you have intel-sycl-rt 2025.0.4 which is incompatible.
    
    • Cause: intel-extension-for-pytorch v2.6.10+xpu uses Intel DPC++ Compiler 2025.0.4 to pick up a crucial bug fix in the unified runtime, while torch v2.6.0+xpu is pinned to 2025.0.2.

    • Solution: Ignore the error; torch v2.6.0+xpu is in fact compatible with Intel DPC++ Compiler 2025.0.4.

  • Problem: RuntimeError: oneCCL: ze_handle_manager.cpp:226 get_ptr: EXCEPTION: unknown memory type, when executing DLRMv2 BF16 training on 4 cards Intel® Data Center GPU Max platform.

    • Cause: The issue exists in the default SYCL path of oneCCL 2021.14, which uses two IPC exchanges.

    • Solution: Use export CCL_ATL_TRANSPORT=ofi as a workaround.
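
    A sketch of the workaround as part of a launch (the mpirun command line is hypothetical):

    ```shell
    # Switch oneCCL's transport to OFI, avoiding the default SYCL path that
    # performs the failing IPC exchanges.
    export CCL_ATL_TRANSPORT=ofi
    echo "CCL_ATL_TRANSPORT=${CCL_ATL_TRANSPORT}"
    # mpirun -n 4 python train_dlrmv2.py  # hypothetical 4-card launch
    ```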

Performance Issue

  • Problem: Extended durations for data transfers from the host system to the device (H2D) and from the device back to the host system (D2H).

    • Cause: Absence of certain Dynamic Kernel Module Support (DKMS) packages on Ubuntu 22.04 or earlier versions.

    • Solution: On Ubuntu 22.04 or earlier, follow all the recommended installation procedures, including those labeled as optional; these steps install the missing DKMS packages and ensure your system functions optimally. The Kernel Mode Driver (KMD) package that addresses this issue has been integrated into the Linux kernel for Ubuntu 23.04 and later releases.