neural_compressor.common.benchmark

Benchmark API for Intel Neural Compressor.

Functions

get_linux_numa_info()

Collect NUMA/socket information on Linux systems.

get_windows_numa_info()

Collect socket information on Windows systems, where NUMA info is not available.

dump_numa_info()

Fetch NUMA info, dump the stats to the shell, and return numa_info.

parse_str2list(cpu_ranges)

Parse '0-4,7,8' into the machine-readable list [0,1,2,3,4,7,8].

format_list2str(cpus)

Format [0,1,2,3,4,7,8] back into the human-readable string '0-4,7,8'.

get_reversed_numa_info(numa_info)

Reverse numa_info, mapping each CPU back to its NUMA node.

get_numa_node(core_list, reversed_numa_info)

Return the NUMA node(s) used by the current core_list.

set_cores_for_instance(args, numa_info)

Set cores for each instance based on the input args.

generate_prefix(args, core_list)

Generate the command prefix with numactl (Linux) or start (Windows) command.

run_multi_instance_command(args, ...)

Build and launch commands for multiple instances via subprocess.

summary_latency_throughput(logfile_dict)

Get the summary of the benchmark.

benchmark()

Benchmark API interface.

Module Contents

neural_compressor.common.benchmark.get_linux_numa_info()[source]

Collect NUMA/socket information on Linux systems.

Returns:

{numa_index: {"physical_cpus": "xxx", "logical_cpus": "xxx"}}
E.g.:

numa_info = {
    0: {"physical_cpus": "0-23", "logical_cpus": "0-23,48-71"},
    1: {"physical_cpus": "24-47", "logical_cpus": "24-47,72-95"},
}

Return type:

numa_info (dict)

neural_compressor.common.benchmark.get_windows_numa_info()[source]

Collect socket information on Windows systems, where NUMA info is not available.

Returns:

{numa_index: {"physical_cpus": "xxx", "logical_cpus": "xxx"}}
E.g.:

numa_info = {
    0: {"physical_cpus": "0-23", "logical_cpus": "0-23,48-71"},
    1: {"physical_cpus": "24-47", "logical_cpus": "24-47,72-95"},
}

Return type:

numa_info (dict)

neural_compressor.common.benchmark.dump_numa_info()[source]

Fetch NUMA info, dump the stats to the shell, and return numa_info.

Returns:

{numa_node_index: list of Physical CPUs in this numa node, …}

Return type:

numa_info (dict)

neural_compressor.common.benchmark.parse_str2list(cpu_ranges)[source]

Parse '0-4,7,8' into the machine-readable list [0,1,2,3,4,7,8].

neural_compressor.common.benchmark.format_list2str(cpus)[source]

Format [0,1,2,3,4,7,8] back into the human-readable string '0-4,7,8'.
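These two helpers are inverses of each other. A minimal sketch of how such parsing and formatting could work (the function bodies below are illustrative, not the library's actual implementation):

```python
def parse_str2list(cpu_ranges: str) -> list:
    """Expand a CPU range string like '0-4,7,8' into [0, 1, 2, 3, 4, 7, 8]."""
    cpus = []
    for part in cpu_ranges.split(","):
        if "-" in part:
            start, end = part.split("-")
            cpus.extend(range(int(start), int(end) + 1))
        else:
            cpus.append(int(part))
    return cpus


def format_list2str(cpus: list) -> str:
    """Collapse [0, 1, 2, 3, 4, 7, 8] back into '0-4,7,8'.

    Runs of fewer than three consecutive CPUs are kept as single numbers.
    """
    parts = []
    i = 0
    while i < len(cpus):
        j = i
        while j + 1 < len(cpus) and cpus[j + 1] == cpus[j] + 1:
            j += 1
        if j - i >= 2:  # a run long enough to abbreviate as 'start-end'
            parts.append(f"{cpus[i]}-{cpus[j]}")
        else:
            parts.extend(str(c) for c in cpus[i:j + 1])
        i = j + 1
    return ",".join(parts)
```

With these sketches, parse_str2list('0-4,7,8') yields [0, 1, 2, 3, 4, 7, 8], and format_list2str maps that list back to '0-4,7,8'.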

neural_compressor.common.benchmark.get_reversed_numa_info(numa_info)[source]

Reverse numa_info, mapping each CPU back to its NUMA node.

neural_compressor.common.benchmark.get_numa_node(core_list, reversed_numa_info)[source]

Return the NUMA node(s) used by the current core_list.
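Together, these two helpers let the benchmark map an arbitrary core list back to the NUMA node(s) it touches. A simplified sketch, assuming numa_info holds a plain CPU list per node (the real dict also distinguishes physical and logical CPUs):

```python
def get_reversed_numa_info(numa_info: dict) -> dict:
    """Invert {node: [cpus]} into {cpu: node} for O(1) node lookups."""
    reversed_info = {}
    for node, cpus in numa_info.items():
        for cpu in cpus:
            reversed_info[cpu] = node
    return reversed_info


def get_numa_node(core_list: list, reversed_numa_info: dict) -> set:
    """Return the set of NUMA nodes used by the given cores."""
    return {reversed_numa_info[cpu] for cpu in core_list}


# Example with two 4-core nodes:
numa_info = {0: [0, 1, 2, 3], 1: [4, 5, 6, 7]}
reversed_info = get_reversed_numa_info(numa_info)
print(get_numa_node([1, 5], reversed_info))  # cores 1 and 5 span both nodes
```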

neural_compressor.common.benchmark.set_cores_for_instance(args, numa_info)[source]

Set cores for each instance based on the input args.

All use cases are listed below, where a = num_instance, b = num_cores_per_instance, and c = cores:
  • no a, b, c: a=1, c=numa:0

  • no a, b: a=1, c=c

  • no a, c: a=numa:0/b, c=numa:0

  • no b, c: a=a, c=numa:0

  • no a: a=numa:0/b, c=c

  • no b: a=a, c=c

  • no c: a=a, c=a*b

  • a, b, c: a=a, c=a*b

Parameters:
  • args (argparse) – arguments for setting different configurations

  • numa_info (dict) – {numa_node_index: list of Physical CPUs in this numa node, …}

Returns:

{"instance_index": ["node_index", "cpu_index", num_cpu]}

Return type:

core_list_per_instance (dict)
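For the default case where num_instance and cores are both known, the cores can simply be split into equal contiguous chunks, one per instance. A simplified sketch (the helper name and splitting policy are assumptions, not the library's exact logic):

```python
def split_cores_evenly(cores: list, num_instance: int) -> list:
    """Divide a flat core list into num_instance contiguous, equal chunks;
    leftover cores (when the division is not exact) are dropped."""
    per_instance = len(cores) // num_instance
    return [
        cores[i * per_instance:(i + 1) * per_instance]
        for i in range(num_instance)
    ]


# 8 cores shared by 2 instances -> 4 cores each
print(split_cores_evenly(list(range(8)), 2))
```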

neural_compressor.common.benchmark.generate_prefix(args, core_list)[source]

Generate the command prefix with numactl (Linux) or start (Windows) command.

Parameters:
  • args (argparse) – arguments for setting different configurations

  • core_list – ["node_index", "cpu_index", num_cpu]

Returns:

The command prefix with the specific core list, for Linux or Windows.

Return type:

command_prefix (str)
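On Linux the prefix is built around numactl, which binds an instance's memory and execution to the chosen node and cores. A minimal sketch using numactl's -m (membind) and -C (physcpubind) flags; the exact flags the library emits may differ:

```python
def generate_numactl_prefix(node_index: int, cpu_range: str) -> str:
    """Build a Linux command prefix that binds memory allocation to
    node_index and pins execution to the cores in cpu_range."""
    return f"numactl -m {node_index} -C {cpu_range}"


# Pin an instance to node 0, cores 0-23:
print(generate_numactl_prefix(0, "0-23"))  # numactl -m 0 -C 0-23
```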

neural_compressor.common.benchmark.run_multi_instance_command(args, core_list_per_instance, raw_cmd)[source]

Build and launch commands for multiple instances via subprocess.

Parameters:
  • args (argparse) – arguments for setting different configurations

  • core_list_per_instance (dict) – {"instance_index": ["node_index", "cpu_index", num_cpu]}

  • raw_cmd (str) – script.py and parameters for this script

neural_compressor.common.benchmark.summary_latency_throughput(logfile_dict)[source]

Get the summary of the benchmark.
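A sketch of what such a summary could look like: latency is averaged across instances and throughput is summed. The log-line format assumed below ('Latency: <float>' and 'Throughput: <float>') is hypothetical, not the library's documented output:

```python
import re


def summarize_logs(logfile_dict: dict) -> tuple:
    """Aggregate benchmark results across all instance log files.

    Assumes (hypothetically) that each log contains lines like
    'Latency: 12.3 ms' and 'Throughput: 456.7 samples/sec'.
    Returns (average latency, total throughput).
    """
    latencies, throughputs = [], []
    for path in logfile_dict.values():
        with open(path) as f:
            text = f.read()
        latency = re.search(r"Latency:\s*([\d.]+)", text)
        throughput = re.search(r"Throughput:\s*([\d.]+)", text)
        if latency:
            latencies.append(float(latency.group(1)))
        if throughput:
            throughputs.append(float(throughput.group(1)))
    avg_latency = sum(latencies) / len(latencies) if latencies else None
    total_throughput = sum(throughputs) if throughputs else None
    return avg_latency, total_throughput
```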

neural_compressor.common.benchmark.benchmark()[source]

Benchmark API interface.