neural_compressor.experimental.benchmark

Benchmarking: measure the model performance with the objective settings.

Module Contents

Classes

Benchmark

Benchmark class is used to evaluate the model performance with the objective settings.

Functions

set_env_var(env_var, value[, overwrite_existing])

Set the specified environment variable.

set_all_env_var(conf[, overwrite_existing])

Set all the environment variables with the configuration dict.

get_architecture()

Get the architecture name of the system.

get_threads_per_core()

Get the threads per core.

get_threads()

Get the list of threads.

get_physical_ids()

Get the list of sockets.

get_core_ids()

Get the list of core IDs.

get_bounded_threads(core_ids, threads, sockets)

Return the list of thread IDs that instances will be bound to.

neural_compressor.experimental.benchmark.set_env_var(env_var, value, overwrite_existing=False)

Set the specified environment variable.

The variable is only set in two cases: 1. the variable does not exist yet; 2. the variable already exists but the overwrite_existing parameter is set to True.
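A minimal sketch of the overwrite behavior described above; the variable name NC_EXAMPLE_VAR is hypothetical and used only for illustration:

    import os
    from neural_compressor.experimental.benchmark import set_env_var

    set_env_var('NC_EXAMPLE_VAR', '1')                           # set: variable does not exist yet
    set_env_var('NC_EXAMPLE_VAR', '2')                           # ignored: exists and overwrite_existing is False
    set_env_var('NC_EXAMPLE_VAR', '3', overwrite_existing=True)  # overwritten
    print(os.environ['NC_EXAMPLE_VAR'])                          # prints '3'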

neural_compressor.experimental.benchmark.set_all_env_var(conf, overwrite_existing=False)

Set all the environment variables with the configuration dict.

Note that Neural Compressor only uses physical cores.

neural_compressor.experimental.benchmark.get_architecture()

Get the architecture name of the system.

neural_compressor.experimental.benchmark.get_threads_per_core()

Get the threads per core.

neural_compressor.experimental.benchmark.get_threads()

Get the list of threads.

neural_compressor.experimental.benchmark.get_physical_ids()

Get the list of sockets.

neural_compressor.experimental.benchmark.get_core_ids()

Get the list of core IDs.

neural_compressor.experimental.benchmark.get_bounded_threads(core_ids, threads, sockets)

Return the list of thread IDs that instances will be bound to.
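A short sketch that ties the topology helpers above together; all returned values depend on the host system, so the printed output is not reproducible:

    from neural_compressor.experimental.benchmark import (
        get_architecture, get_threads, get_physical_ids,
        get_core_ids, get_bounded_threads)

    # Host topology (system-dependent values).
    arch = get_architecture()
    threads = get_threads()
    sockets = get_physical_ids()
    core_ids = get_core_ids()

    # Thread IDs that benchmark instances will be bound to.
    bounded = get_bounded_threads(core_ids, threads, sockets)
    print(arch, bounded)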

class neural_compressor.experimental.benchmark.Benchmark(conf_fname_or_obj=None)

Bases: object

Benchmark class is used to evaluate the model performance with the objective settings.

Users can use the data that they configured in YAML. NOTICE: the neural_compressor Benchmark class runs the original command in a sub-process; because this depends on the user's code, it may also run unnecessary code.
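A minimal usage sketch, assuming a YAML benchmark configuration in conf.yaml and a saved model at ./model.pb (both hypothetical paths), and that the evaluator is invoked as a callable with the mode string:

    from neural_compressor.experimental import Benchmark, common

    # Hypothetical configuration and model paths, for illustration only.
    evaluator = Benchmark('conf.yaml')
    evaluator.model = common.Model('./model.pb')

    # 'performance' mode binds instances to specific cores via numactl
    # and reports the measured model performance.
    evaluator('performance')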

property results

Get the results of benchmarking.

property b_dataloader

Get the dataloader for the benchmarking.

property b_func

Getting b_func is not supported.

property model

Get the model.

property metric

Getting metric is not supported.

property postprocess

Getting postprocess is not supported.

summary_benchmark()

Get the summary of the benchmark.

config_instance()

Configure the multi-instance commands and trigger the benchmark in sub-processes.

generate_prefix(core_list)

Generate the command prefix with numactl.

Parameters:

core_list – a list of core indexes bound to specific instances
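A hypothetical illustration of the kind of prefix this builds; the exact flags are an assumption based on the numactl binding described above, not the verbatim output of generate_prefix:

    # Hypothetical sketch of a numactl binding prefix for one instance.
    core_list = [0, 1, 2, 3]
    prefix = 'OMP_NUM_THREADS={} numactl --localalloc --physcpubind={}'.format(
        len(core_list), ','.join(str(c) for c in core_list))
    print(prefix)
    # OMP_NUM_THREADS=4 numactl --localalloc --physcpubind=0,1,2,3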

run_instance(mode)

Run the instance with the configuration.

Parameters:
mode – 'performance' or 'accuracy'. 'performance' mode runs benchmarking with numactl on specific cores and instances set by the user config and returns model performance; 'accuracy' mode runs benchmarking with full cores and returns model accuracy.
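A hypothetical continuation of the usage sketch above, showing the 'accuracy' mode and the documented results property:

    # Full-core run that evaluates model accuracy.
    evaluator('accuracy')

    # Retrieve the recorded benchmarking results.
    print(evaluator.results)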