neural_compressor.contrib.strategy.sigopt

The SigOpt Tuning Strategy provides support for the quantization process.

Module Contents

Classes

SigOptTuneStrategy

The tuning strategy using SigOpt HPO search in the tuning space.

class neural_compressor.contrib.strategy.sigopt.SigOptTuneStrategy(model, conf, q_dataloader, q_func=None, eval_dataloader=None, eval_func=None, dicts=None, q_hooks=None)

Bases: neural_compressor.strategy.strategy.TuneStrategy

The tuning strategy using SigOpt HPO search in the tuning space.

Parameters:
  • model (object) – The FP32 model specified for low precision tuning.

  • conf (Conf) – The Conf class instance initialized from user yaml config file.

  • q_dataloader (generator) – Data loader for calibration; mandatory for post-training quantization. It is iterable and should yield a tuple (input, label) for a calibration dataset that contains labels, or (input, _) for a label-free calibration dataset. The input may be an object, list, tuple, or dict, depending on the user's implementation, as long as it can be fed to the model as input. A minimal sketch of such a data loader is shown after this parameter list.

  • q_func (function, optional) – Reserved for future use.

  • eval_dataloader (generator, optional) – Data loader for evaluation. It is iterable and should yield a tuple of (input, label). The input may be an object, list, tuple, or dict, depending on the user's implementation, as long as it can be fed to the model as input. The label should be accepted as input by the supported metrics. If this parameter is not None, the user needs to specify pre-defined evaluation metrics through the configuration file and set the "eval_func" parameter to None. The tuner then combines the model, eval_dataloader, and the pre-defined metrics to run the evaluation process.

  • eval_func (function, optional) –

    The evaluation function provided by the user. It takes the model as its parameter; the evaluation dataset and metrics should be encapsulated inside the function implementation, and it returns a higher-is-better scalar accuracy value.

    The pseudo code should be something like:

    def eval_func(model):
        input, label = dataloader()
        output = model(input)
        accuracy = metric(output, label)
        return accuracy

  • dicts (dict, optional) – The dict containing resume information. Defaults to None.
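
As referenced in the q_dataloader description above, here is a minimal, illustrative sketch of a label-free calibration data loader. The batch count, batch size, input shape, and random data are assumptions for the example only, not part of the SigOptTuneStrategy API.

    import numpy as np

    def calib_dataloader(num_batches=10, batch_size=1):
        # Yield (input, label) pairs; the label is unused for label-free calibration.
        for _ in range(num_batches):
            inputs = np.random.rand(batch_size, 3, 224, 224).astype(np.float32)
            yield inputs, None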

params_to_tune_configs(params)

Convert the given parameters into tuning configurations.

next_tune_cfg()

Yield the tuning config to traverse, generated by the concrete strategy according to the last tuning result.

get_acc_target(base_acc)

Get the tuning target of the accuracy criterion.
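
For intuition, a minimal sketch of how an accuracy target can be derived from the baseline accuracy, assuming a relative or absolute accuracy criterion. The argument names and default tolerance are assumptions for this example and may not match the strategy's internal fields.

    def acc_target_sketch(base_acc, criterion="relative", tolerable_loss=0.01):
        # Illustrative only: compute the lowest acceptable accuracy.
        if criterion == "relative":
            # Allow a relative drop, e.g. 1% of the FP32 baseline accuracy.
            return base_acc * (1 - tolerable_loss)
        # Otherwise treat the tolerable loss as an absolute drop.
        return base_acc - tolerable_loss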

traverse()

The main traverse logic, which can be overridden by concrete strategies that need more hooks.

This is the SigOpt version of traverse, with additional constraints passed to the HPO experiment.
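
As a rough illustration of the suggestion/observation loop that a SigOpt-driven traverse typically runs, written against the classic SigOpt Connection client; evaluate_tuning_config, the budget, and the loop structure are assumptions for this sketch, not the strategy's internal code.

    from sigopt import Connection  # classic SigOpt client

    def hpo_loop_sketch(api_token, experiment_id, evaluate_tuning_config, budget=30):
        # Illustrative only: ask SigOpt for assignments, evaluate, report back.
        conn = Connection(client_token=api_token)
        for _ in range(budget):
            # Ask SigOpt for the next set of parameter assignments to try.
            suggestion = conn.experiments(experiment_id).suggestions().create()
            # Map the assignments to a tuning config and measure the resulting
            # accuracy (evaluate_tuning_config is a hypothetical helper here).
            accuracy = evaluate_tuning_config(suggestion.assignments)
            # Report the observed metric so SigOpt can refine its search.
            conn.experiments(experiment_id).observations().create(
                suggestion=suggestion.id, value=accuracy
            )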

create_exp(acc_target)

Set the config for the experiment.
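
A hedged sketch of what creating such an experiment can look like with the classic SigOpt Connection client. The experiment name, search parameter, API token, and observation budget are placeholders for this example; acc_target is used here as a metric threshold so suggestions are steered toward configurations that keep accuracy above the target.

    from sigopt import Connection  # classic SigOpt client

    def create_exp_sketch(acc_target, api_token="SIGOPT_API_TOKEN"):
        # Illustrative only: create a SigOpt experiment whose accuracy metric
        # should stay above acc_target (e.g. the value from get_acc_target).
        conn = Connection(client_token=api_token)
        experiment = conn.experiments().create(
            name="neural-compressor-tuning",  # placeholder experiment name
            parameters=[
                # Placeholder search dimension; the real strategy derives its
                # parameters from the quantization tuning space.
                dict(name="op0_precision_index", type="int",
                     bounds=dict(min=0, max=3)),
            ],
            metrics=[
                dict(name="accuracy", objective="maximize", threshold=acc_target),
            ],
            observation_budget=30,  # placeholder budget
        )
        return conn, experiment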