Installation#

Intel® Query Processing Library (Intel® QPL) does not currently provide a binary distribution and can be built only from source.

System Requirements#

Intel® QPL supports only the Intel 64 platform.

Software Path Requirements#

Execution using the Software Path imposes no specific system requirements beyond resolving the Build Prerequisites. Intel® QPL relies on a run-time kernel dispatcher and a CPUID check to choose the best available implementation.

Minimal tested platform
x86-64 CPU with Intel® Streaming SIMD Extensions 4.2 support (Intel® microarchitecture code name Nehalem).
Recommended requirements for better performance
x86-64 CPU with Intel® Advanced Vector Extensions 512 support (Intel® microarchitecture code name Skylake (Server) or later).
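To check ahead of time which code path the dispatcher can take on a given machine, you can inspect the CPU feature flags the kernel exposes. This is a quick sketch for Linux; the flag names sse4_2 and avx512f are the standard /proc/cpuinfo names, not Intel QPL-specific:

```shell
# Report whether the minimal (SSE 4.2) and recommended (AVX-512)
# CPU features for the Software Path are present on this machine.
if grep -qm1 'sse4_2' /proc/cpuinfo; then
    echo "SSE 4.2: present (minimal requirement met)"
else
    echo "SSE 4.2: missing"
fi
if grep -qm1 'avx512f' /proc/cpuinfo; then
    echo "AVX-512: present (recommended for best performance)"
else
    echo "AVX-512: missing (the dispatcher will select an older code path)"
fi
```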

Hardware Path Requirements#

Execution using the Hardware Path is available only on Linux* OS.

Additionally, the operating system must meet the following requirements:

  • Linux kernel version 5.18 or later is required for using the first generation of Intel® In-Memory Analytics Accelerator (Intel® IAA).

  • Linux kernel version 6.3 or later is required for using the second generation of Intel® IAA.

  • Intel® Virtualization Technology for Directed I/O (VT-d) must be enabled through the BIOS menu. For setup details, refer to the “Intel® In-Memory Analytics Accelerator (Intel® IAA) User Guide”.
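The kernel requirement above can be checked with uname. The following is a small sketch that parses the running kernel version and compares it against the 5.18 minimum for first-generation Intel® IAA:

```shell
# Parse major.minor from `uname -r` and compare against 5.18.
kernel=$(uname -r)
major=${kernel%%.*}
rest=${kernel#*.}
minor=${rest%%[^0-9]*}
if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 18 ]; }; then
    echo "Kernel $kernel meets the 5.18 minimum for first-generation Intel IAA"
else
    echo "Kernel $kernel is older than 5.18; the Hardware Path is unavailable"
fi
```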

Accelerator Configuration#

An Intel® IAA device can be configured with the libaccel-config library, which can be found in the intel/idxd-config repository.

Intel® QPL provides a script to configure Intel® IAA. Call one of the following commands, depending on whether you are working from the Intel® QPL source tree or from the Intel® QPL installed directory:

python3 <qpl-library>/tools/scripts/accel_conf.py --load=<path to config file>
python3 <install-dir>/share/QPL/scripts/accel_conf.py --load=<path to config file>

Configuration files can be found in either <qpl-library>/tools/configs/ or <install-dir>/share/QPL/configs/. Their names follow the format <# nodes>n<# devices>d<# engines>e<# of workqueues>w-s.conf or <# nodes>n<# devices>d<# engines>e<# of workqueues>w-s-n<which node>.conf.
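As an illustration of this naming scheme (the exact file name below is hypothetical; check the configs/ directory for the files actually shipped), a file named 1n1d1e1w-s.conf would describe 1 node, 1 device, 1 engine, and 1 work queue in the shared (-s) variant, and would be loaded from the source tree as:

```shell
python3 <qpl-library>/tools/scripts/accel_conf.py \
    --load=<qpl-library>/tools/configs/1n1d1e1w-s.conf
```

Running this requires an Intel® IAA device and elevated permissions, as described below.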

Alternatively, users can configure the device directly with accel-config:

accel-config load-config -c <config file>
accel-config enable-device <device>
accel-config enable-wq <device>/<wq>

Attention

Either sudo privileges or elevated permissions are required to configure an Intel® IAA instance.

If the user is non-root, a sysadmin with root privileges should grant access, for example by creating a dedicated user that owns the device nodes:

useradd iax
passwd iax
chown -R iax /dev/iax
su iax

Starting with accel-config version 3.2, all devices are configured with the dsadev group, so users can run Intel® QPL without root privileges if they belong to that group.
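For example, with accel-config 3.2 or later, a sysadmin could add an existing user to the dsadev group instead of creating a dedicated iax user. The usermod invocation below is a sketch; it requires root, and the new group membership takes effect at the next login:

```shell
# Add the current user to the dsadev group created by accel-config >= 3.2.
sudo usermod -aG dsadev "$USER"
# Confirm membership (re-login first, or start a subshell with `newgrp dsadev`).
id -nG "$USER"
```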

Building the Library#

Prerequisites#

Before building Intel® QPL, install and set up the following tools:

Linux* OS:

  • nasm 2.15.0 or higher (e.g., can be obtained from https://www.nasm.us)

  • A C++ compiler with C++17 standard support, for instance GCC 8.2 or later (or Clang 12.0.1 or later for building fuzz tests)

  • Universally Unique ID library uuid-dev version 2.35.2 or higher

  • CMake* version 3.16.3 or higher. If Intel QPL is built with -DSANITIZE_THREADS=ON, use CMake* version 3.23 or higher (see Available Build Options)

  • GNU Make

Additionally, libaccel-config library version 4.0 or higher may be required when building and running Intel QPL with certain build options (see Available Build Options for more details). Refer to accel-config releases for the latest version.

Attention

Currently, the accelerator configuration library officially offers only a dynamic version, libaccel-config.so. By default, Intel QPL loads libaccel-config.so dynamically with dlopen, but static loading can be enabled using the build option -DDYNAMIC_LOADING_LIBACCEL_CONFIG=OFF (see the Available Build Options section). The default dynamic loading is recommended, because in that case libaccel-config.so will not be a compile-time dependency, and if the application uses only the Software Path, libaccel-config.so will not be a runtime dependency. The static loading option is provided as an alternative to users who may have concerns with using dynamic loading in their applications.

Attention

See the Available Build Options section for additional requirements on libaccel-config under different conditions.

Windows* OS:

Available Build Options#

Intel QPL supports the following build options:

  • -DSANITIZE_MEMORY=[ON|OFF] - Enables memory sanitizing (OFF by default).

  • -DSANITIZE_THREADS=[ON|OFF] - Enables threads sanitizing (OFF by default).

Attention

Options -DSANITIZE_THREADS=ON and -DSANITIZE_MEMORY=ON are incompatible and cannot be used in the same build.

Attention

If Intel QPL is built with -DSANITIZE_THREADS=ON, use CMake* version 3.23 or higher to avoid an issue with finding the pthread library in FindThreads.

  • -DLOG_HW_INIT=[ON|OFF] - Enables hardware initialization log (OFF by default).

  • -DEFFICIENT_WAIT=[ON|OFF] - Enables usage of efficient wait instructions (OFF by default).

  • -DLIB_FUZZING_ENGINE=[ON|OFF] - Enables fuzz testing (OFF by default).

  • -DQPL_BUILD_EXAMPLES=[OFF|ON] - Enables building library examples (ON by default). For more information on existing examples, see Low-Level C API Examples.

  • -DQPL_BUILD_TESTS=[OFF|ON] - Enables building library testing and benchmarks frameworks (ON by default). For more information on library testing, see Library Testing section. For information on benchmarking the library, see Benchmarks Framework Guide.

  • -DDYNAMIC_LOADING_LIBACCEL_CONFIG=[OFF|ON] - Enables loading the accelerator configuration library (libaccel-config) dynamically with dlopen (ON by default).

Attention

If Intel QPL is built with -DDYNAMIC_LOADING_LIBACCEL_CONFIG=ON, which is the default value, libaccel-config will be loaded dynamically with lazy binding, which means that if the application uses only the Software Path, the user does not need to have libaccel-config installed. If the Hardware Path is used, the user has to either place libaccel-config in /usr/lib64/ or specify the location of libaccel-config in LD_LIBRARY_PATH for the dynamic loader to find it.
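For example, if libaccel-config.so is installed in a non-standard location, it can be made visible to the dynamic loader like this (the path below is illustrative):

```shell
# Point the dynamic loader at a custom libaccel-config location.
export LD_LIBRARY_PATH=/opt/accel-config/lib64:$LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```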

Attention

If Intel QPL is built with -DDYNAMIC_LOADING_LIBACCEL_CONFIG=OFF, which is the non-default value, libaccel-config will be linked at build time instead of loaded with dlopen, making it a dependency at both compile time and runtime. In this case, the libaccel-config library must be added to the link line (-laccel-config) when building an application with Intel QPL. The user has to either place libaccel-config in /usr/lib64/ or specify its location (for example, using LD_LIBRARY_PATH and LIBRARY_PATH). Since there may be different versions of libaccel-config on a system, it is advised to create a symbolic link from libaccel-config.so to libaccel-config.so.1 to avoid potential compatibility issues.
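With this option, building an application looks roughly like the following sketch; the compiler choice, source file name, and paths are illustrative, assuming an installation under <install_dir> and libaccel-config in /usr/lib64/:

```shell
# Link against both Intel QPL and libaccel-config.
g++ -std=c++17 -I<install_dir>/include app.cpp \
    -L<install_dir>/lib64 -lqpl -laccel-config -o app

# If only the versioned library is present, create the compatibility symlink.
sudo ln -s /usr/lib64/libaccel-config.so.1 /usr/lib64/libaccel-config.so
```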

Build Steps#

To build Intel QPL (by default this includes building the examples, tests, and benchmarks framework as well), complete the following steps:

  1. Make sure that System Requirements are met and all the tools from the Prerequisites section are available in your environment.

  2. Clone git sources using the following command:

    git clone --recursive https://github.com/intel/qpl.git <qpl_library>
    

Attention

--recursive is required for downloading sub-module dependencies for testing and benchmarking Intel QPL.

Attention

To build Intel QPL from the GitHub release package (.tar, .tgz) or without downloading sub-module dependencies for testing and benchmarking, use -DQPL_BUILD_TESTS=OFF.

  3. Build the library and tests by executing the following commands in <qpl_library>:

    Linux* OS:

    mkdir build
    cd build
    cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=<install_dir> ..
    cmake --build . --target install
    

    Windows* OS:

    mkdir build
    cd build
    cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=<install_dir> -G "NMake Makefiles" ..
    cmake --build . --target install
    
  4. The resulting library will be available in the folder <install_dir>/lib/.

Installed Package Structure#

├── bin
├── include
│   └── qpl
│       ├── c_api
│       └── qpl.h
├── lib or lib64
│   ├── cmake
│   └── libqpl.a
└── share/QPL
    ├── configs
    └── scripts

Executables for the tests and benchmarks framework are available in the bin/ folder.

Configuration files and scripts for Intel® IAA (see Accelerator Configuration for more details) are available in the share/QPL/ folder.

Examples are located in <qpl_library>/build/examples/.

Intel QPL can be easily integrated into other CMake projects once installed. Use -DCMAKE_PREFIX_PATH to point to the existing installation and add the following lines to your CMakeLists.txt:

find_package(QPL CONFIG REQUIRED)
target_link_libraries(app_name QPL::qpl)
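A complete minimal CMakeLists.txt for a consuming project could look like this (the project and source file names are illustrative):

```cmake
cmake_minimum_required(VERSION 3.16)
project(app_name CXX)

# Locate the installed Intel QPL package; configure the project with
# -DCMAKE_PREFIX_PATH=<install_dir> so find_package can see it.
find_package(QPL CONFIG REQUIRED)

add_executable(app_name main.cpp)
target_link_libraries(app_name QPL::qpl)
```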

Building the Documentation#

Prerequisites#

To build the offline version of the documentation, the following tools must be installed:

  • Doxygen 1.8.17 or higher (e.g., with apt install doxygen)

  • Python 3.8.5 or higher (e.g., with apt install python3.X)

  • Sphinx 7.2.6 or higher (e.g., with pip3 install sphinx)

  • sphinx_book_theme 1.1.2 or higher (e.g., with pip3 install sphinx-book-theme)

  • Breathe 4.35.0 or higher (e.g., with pip3 install breathe)

Attention

To avoid incompatibility between Breathe, sphinx_book_theme, and Sphinx versions, use the requirements.txt file to install a guaranteed-compatible combination of components:

pip3 install -r <qpl_library>/doc/requirements.txt

Build Steps#

To generate the full offline documentation from sources, use the following command:

/bin/bash <qpl_library>/doc/_get_docs.sh

Attention

A Linux* OS shell (or a Windows* OS shell alternative) is required to run the documentation build script.

After the generation process completes, open the <qpl_library>/doc/build/html/index.html file.