Quick start

The quick start sample below shows how to compute a CRC checksum on data using Intel® Data Mover Library (Intel® DML). See the CRC Generation operation for more information.

/*******************************************************************************
 * Copyright (C) 2021 Intel Corporation
 *
 * SPDX-License-Identifier: MIT
 ******************************************************************************/

#include <dml/dml.hpp>
#include <iostream>
#include <string>
#include <vector>

constexpr auto string  = "Calculate CRC value for this string...\n";

template <typename path>
int execute_crc() {
    std::cout << "Starting dml::crc example...\n";
    std::cout << string;

    // Prepare data
    auto crc_seed = std::uint32_t(0u);
    auto src      = std::basic_string<std::uint8_t>(reinterpret_cast<const std::uint8_t *>(string));

    // Run operation
    auto result = dml::execute<path>(dml::crc, dml::make_view(src), crc_seed);

    // Check result
    if (result.status == dml::status_code::ok) {
        std::cout << "Finished successfully. Calculated CRC is: 0x" << std::hex << result.crc_value << std::dec << std::endl;
    }
    else {
        std::cout << "Failure occurred." << std::endl;
        return -1;
    }

    return 0;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        std::cout << "Missing the execution path as the first parameter. "
                  << "Use hardware_path, software_path or automatic_path." << std::endl;
        return 1;
    }

    std::string path = argv[1];
    if (path == "hardware_path") {
        std::cout << "Executing using dml::hardware path" << std::endl;
        return execute_crc<dml::hardware>();
    }
    else if (path == "software_path") {
        std::cout << "Executing using dml::software path" << std::endl;
        return execute_crc<dml::software>();
    }
    else if (path == "automatic_path") {
        std::cout << "Executing using dml::automatic path" << std::endl;
        return execute_crc<dml::automatic>();
    }
    else {
        std::cout << "Unrecognized value for parameter. "
                  << "Use hardware_path, software_path or automatic_path." << std::endl;
        return 1;
    }
}

To build the library and all the examples, including the one above, follow the steps at Building the Library. The compiled examples will then be located in <dml_library>/build/examples/.

To run the example on the Hardware Path, use:

sudo ./crc_example hardware_path

Attention

With the Hardware Path, the user must either place the libaccel-config library in /usr/lib64/ or specify the location of libaccel-config in LD_LIBRARY_PATH for the dynamic loader to find it.

Attention

Hardware Path requires first configuring Intel® Data Streaming Accelerator (Intel® DSA). See Accelerator Configuration.

Attention

The High-Level API currently doesn’t offer a way to set the NUMA node for execution, so the library auto-detects the NUMA node of the calling process and uses Intel® Data Streaming Accelerator (Intel® DSA) device(s) located on the same node. Refer to the NUMA support for High-Level API section for more details.

When using the Low-Level API, the user can also specify job->numa_id and set a matching numactl policy to ensure that the calling process runs on the same NUMA node as specified with numa_id. Refer to the NUMA support for Low-Level API section for more details.

In both cases, it is the user’s responsibility to configure the accelerator and ensure device availability on the desired NUMA node.

Similarly, you can specify software_path for host execution or automatic_path for automatic dispatching (the library chooses the path based on accelerator availability and other internal heuristics).