:py:mod:`neural_compressor.contrib.strategy.sigopt`
===================================================

.. py:module:: neural_compressor.contrib.strategy.sigopt

.. autoapi-nested-parse::

   The SigOpt Tuning Strategy provides support for the quantization process.



Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   neural_compressor.contrib.strategy.sigopt.SigOptTuneStrategy




.. py:class:: SigOptTuneStrategy(model, conf, q_dataloader, q_func=None, eval_dataloader=None, eval_func=None, dicts=None, q_hooks=None)

   Bases: :py:obj:`neural_compressor.strategy.strategy.TuneStrategy`

   The tuning strategy using SigOpt HPO search in the tuning space.

   :param model: The FP32 model specified for low precision tuning.
   :type model: object
   :param conf: The Conf class instance initialized from the user yaml
                config file.
   :type conf: Conf
   :param q_dataloader: Data loader for calibration, mandatory for
                        post-training quantization.
                        It is iterable and should yield a tuple (input,
                        label) for a calibration dataset containing labels,
                        or (input, _) for a label-free calibration
                        dataset. The input could be an object, list, tuple or
                        dict, depending on the user implementation, as long
                        as it can be taken as model input. A minimal sketch
                        of such a loader follows this parameter list.
   :type q_dataloader: generator
   :param q_func: Reserved for future use.
   :type q_func: function, optional
   :param eval_dataloader: Data loader for evaluation. It is iterable
                           and should yield a tuple of (input, label).
                           The input could be an object, list, tuple or dict,
                           depending on the user implementation, as long as
                           it can be taken as model input. The label should
                           be usable as input to the supported metrics. If
                           this parameter is not None, the user needs to
                           specify pre-defined evaluation metrics through the
                           configuration file and should set the "eval_func"
                           parameter to None. The tuner will combine the
                           model, eval_dataloader and pre-defined metrics to
                           run the evaluation process.
   :type eval_dataloader: generator, optional
   :param eval_func: The evaluation function provided by the user.
                     This function takes the model as its parameter; the
                     evaluation dataset and metrics should be encapsulated
                     in the function implementation, and it outputs a
                     higher-is-better accuracy scalar value.

                     The pseudo code should be something like::

                         def eval_func(model):
                             input, label = dataloader()
                             output = model(input)
                             accuracy = metric(output, label)
                             return accuracy
   :type eval_func: function, optional
   :param dicts: The dict containing resume information.
                 Defaults to None.
   :type dicts: dict, optional
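
   A minimal, self-contained sketch of callables that satisfy the contracts
   described above is shown below; the toy data, array shapes and every name
   in it are illustrative assumptions rather than part of this module.

   .. code-block:: python

      # Hypothetical sketch of the documented q_dataloader / eval_func
      # contracts; the toy data, shapes and names are assumptions only.
      import numpy as np

      rng = np.random.default_rng(0)
      toy_data = [(rng.random((1, 4), dtype=np.float32), i % 2) for i in range(8)]

      def calib_dataloader():
          """Label-free calibration loader: yields (input, _) tuples."""
          for inputs, _ in toy_data:
              yield inputs, None

      def eval_func(model):
          """Encapsulates data and metric; returns a higher-is-better scalar."""
          correct = 0
          for inputs, label in toy_data:
              output = model(inputs)                    # the model must be callable
              correct += int(np.argmax(output) == label)
          return correct / len(toy_data)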

   .. py:method:: params_to_tune_configs(params)

      Map the parameters suggested by SigOpt to a tuning configuration.


   .. py:method:: next_tune_cfg()

      Yield the tuning configs to traverse; concrete strategies generate them according to the last tuning result.


   .. py:method:: get_acc_target(base_acc)

      Get the tuning target of the accuracy criterion.
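
      For intuition only, a common way to derive such a target from a baseline
      accuracy is sketched below; the relative/absolute switch and the 0.01
      tolerance are assumed defaults, not the verified behavior of this method.

      .. code-block:: python

         # Hedged sketch: deriving an accuracy target from a baseline.
         # The tolerance and the relative/absolute switch are assumptions.
         def acc_target(base_acc, relative=True, tolerance=0.01):
             if relative:
                 return base_acc * (1 - tolerance)  # allow a 1% relative drop
             return base_acc - tolerance            # allow an absolute drop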


   .. py:method:: traverse()

      The main traverse logic, which can be overridden by a concrete strategy that needs more hooks.

      This is the SigOpt version of traverse, which sets additional constraints for the HPO.


   .. py:method:: create_exp(acc_target)

      Create the SigOpt experiment and set its configuration using the given accuracy target.
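
      As a rough illustration of what a constrained experiment definition can
      look like with the SigOpt client, see the sketch below; the metric names,
      the constraint strategy and the latency objective are assumptions about
      how an accuracy target could be encoded, not the exact configuration used
      by this strategy.

      .. code-block:: python

         # Illustrative only: one plausible way to encode an accuracy
         # threshold as a SigOpt constraint metric while optimizing latency.
         from sigopt import Connection

         def create_exp_sketch(conn, acc_target):
             return conn.experiments().create(
                 name="quantization tuning",
                 parameters=[{"name": "cfg_index", "type": "int",
                              "bounds": {"min": 0, "max": 9}}],
                 metrics=[
                     {"name": "accuracy", "strategy": "constraint",
                      "threshold": acc_target},
                     {"name": "latency", "objective": "minimize"},
                 ],
                 observation_budget=10,
             )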