neural_compressor.benchmark

The benchmark module evaluates model performance.

Functions

set_env_var(env_var, value[, overwrite_existing])

Set the specified environment variable.

set_all_env_var(conf[, overwrite_existing])

Set all the environment variables with the configuration dict.

get_architecture()

Get the architecture name of the system.

get_threads_per_core()

Get the number of threads per core.

get_threads()

Get the list of threads.

get_physical_ids()

Get the list of physical (socket) IDs.

get_core_ids()

Get the list of core IDs.

get_bounded_threads(core_ids, threads, sockets)

Return the list of thread IDs to bind instances to.

run_instance(model, conf[, b_dataloader, b_func])

Run the instance with the configuration.

generate_prefix(core_list)

Generate the command prefix with numactl.

call_one(cmd, log_file)

Execute one command for one instance in one thread and dump the log (for Windows).

config_instance(raw_cmd)

Configure the multi-instance commands and trigger the benchmark with subprocesses.

summary_benchmark()

Get the summary of the benchmark.

profile(→ None)

Execute profiling for benchmark configuration.

benchmark_with_raw_cmd(raw_cmd[, conf])

Benchmark the model performance with the raw command.

fit(model, conf[, b_dataloader, b_func])

Benchmark the model performance with the configuration.

Module Contents

neural_compressor.benchmark.set_env_var(env_var, value, overwrite_existing=False)[source]

Set the specified environment variable.

The variable is set in only two cases: 1. the variable does not exist; 2. the variable already exists but overwrite_existing is set to True.
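
A minimal usage sketch of the two cases above; the variable name and values are illustrative:

# Set an env var only if absent, then force-overwrite it
from neural_compressor.benchmark import set_env_var

set_env_var("NC_ENV_CONF", "1")                           # case 1: set only if not present
set_env_var("NC_ENV_CONF", "2", overwrite_existing=True)  # case 2: overwrite existing value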

neural_compressor.benchmark.set_all_env_var(conf, overwrite_existing=False)[source]

Set all the environment variables with the configuration dict.

Note that Neural Compressor uses physical cores only.
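
A hedged sketch of typical usage, assuming conf is built from BenchmarkConfig with the fields shown in the examples below; whether a plain dict is also accepted here is not confirmed by this page:

# Export benchmark settings as environment variables (sketch)
from neural_compressor.benchmark import set_all_env_var
from neural_compressor.config import BenchmarkConfig

conf = BenchmarkConfig(cores_per_instance=4, num_of_instance=2)
set_all_env_var(conf)  # derives environment variables from the configuration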

neural_compressor.benchmark.get_architecture()[source]

Get the architecture name of the system.

neural_compressor.benchmark.get_threads_per_core()[source]

Get the number of threads per core.

neural_compressor.benchmark.get_threads()[source]

Get the list of threads.

neural_compressor.benchmark.get_physical_ids()[source]

Get the list of physical (socket) IDs.

neural_compressor.benchmark.get_core_ids()[source]

Get the list of core IDs.

neural_compressor.benchmark.get_bounded_threads(core_ids, threads, sockets)[source]

Return the list of thread IDs to bind instances to.
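
The topology helpers above can be combined to compute the bindable thread IDs. A minimal sketch using only the signatures documented on this page; the exact shape of the returned lists is an assumption:

# Gather CPU topology and derive the thread IDs to bind instances to
from neural_compressor.benchmark import (
    get_bounded_threads,
    get_core_ids,
    get_physical_ids,
    get_threads,
)

threads = get_threads()        # per-thread IDs
sockets = get_physical_ids()   # per-thread socket (physical) IDs
core_ids = get_core_ids()      # per-thread core IDs
print(get_bounded_threads(core_ids, threads, sockets))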

neural_compressor.benchmark.run_instance(model, conf, b_dataloader=None, b_func=None)[source]

Run the instance with the configuration.

Parameters:
  • model (object) – The model to be benchmarked.

  • conf (BenchmarkConfig) – The benchmark configuration, containing the accuracy goal, tuning objective, preferred calibration and quantization tuning space, etc.

  • b_dataloader – The framework-specific dataloader used for benchmarking.

  • b_func – A customized benchmark function. If a dataloader is passed, b_func is not needed.
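
A hedged usage sketch, assuming an int8 model file and an eval_dataloader defined elsewhere (both illustrative, mirroring the fit example below):

# Run a single benchmark instance with a dataloader (sketch)
from neural_compressor.benchmark import run_instance
from neural_compressor.config import BenchmarkConfig

conf = BenchmarkConfig(iteration=100, cores_per_instance=4, num_of_instance=1)
run_instance(model='./int8.pb', conf=conf, b_dataloader=eval_dataloader)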

neural_compressor.benchmark.generate_prefix(core_list)[source]

Generate the command prefix with numactl.

Parameters:

core_list – a list of core indices to bind the instance to
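
For illustration, the prefix for cores 0-3 would look roughly like the line in the comment below; the exact flags are an assumption and depend on the platform and on numactl being available:

from neural_compressor.benchmark import generate_prefix

prefix = generate_prefix([0, 1, 2, 3])
print(prefix)  # e.g. something like: numactl --localalloc --physcpubind=0,1,2,3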

neural_compressor.benchmark.call_one(cmd, log_file)[source]

Execute one command for one instance in one thread and dump the log (for Windows).

neural_compressor.benchmark.config_instance(raw_cmd)[source]

Configure the multi-instance commands and trigger the benchmark with subprocesses.

Parameters:

raw_cmd – the raw command used for the benchmark
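
A minimal sketch, assuming the raw command is a benchmark script invocation (the script name is illustrative):

from neural_compressor.benchmark import config_instance

config_instance("python test.py")  # launches one subprocess per configured instance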

neural_compressor.benchmark.summary_benchmark()[source]

Get the summary of the benchmark.

neural_compressor.benchmark.profile(model, conf, b_dataloader) → None[source]

Execute profiling for benchmark configuration.

Parameters:
  • model – The model to be profiled.

  • conf – The benchmark configuration, containing the accuracy goal, tuning objective, preferred calibration and quantization tuning space, etc.

  • b_dataloader – The framework-specific dataloader used for profiling.

Returns:

None
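
A hedged usage sketch, mirroring the fit example below (model path and dataloader are illustrative):

# Profile a model under a benchmark configuration (sketch)
from neural_compressor.benchmark import profile
from neural_compressor.config import BenchmarkConfig

conf = BenchmarkConfig(iteration=100, cores_per_instance=4, num_of_instance=1)
profile(model='./int8.pb', conf=conf, b_dataloader=eval_dataloader)  # returns None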

neural_compressor.benchmark.benchmark_with_raw_cmd(raw_cmd, conf=None)[source]

Benchmark the model performance with the raw command.

Parameters:
  • raw_cmd (string) – The command to be benchmarked.

  • conf (BenchmarkConfig) – The benchmark configuration, containing the accuracy goal, tuning objective, preferred calibration and quantization tuning space, etc.

Example:

# Run benchmark according to config
from neural_compressor.benchmark import benchmark_with_raw_cmd
from neural_compressor.config import BenchmarkConfig

conf = BenchmarkConfig(iteration=100, cores_per_instance=4, num_of_instance=7)
benchmark_with_raw_cmd("test.py", conf)

neural_compressor.benchmark.fit(model, conf, b_dataloader=None, b_func=None)[source]

Benchmark the model performance with the configuration.

Parameters:
  • model (object) – The model to be benchmarked.

  • conf (BenchmarkConfig) – The benchmark configuration, containing the accuracy goal, tuning objective, preferred calibration and quantization tuning space, etc.

  • b_dataloader – The framework-specific dataloader used for benchmarking.

  • b_func – A customized benchmark function. If a dataloader is passed, b_func is not needed.

Example:

# Run benchmark according to config
from neural_compressor.benchmark import fit
from neural_compressor.config import BenchmarkConfig

conf = BenchmarkConfig(iteration=100, cores_per_instance=4, num_of_instance=7)
fit(model='./int8.pb', conf=conf, b_dataloader=eval_dataloader)  # eval_dataloader defined by the user