:py:mod:`neural_compressor.contrib.strategy.tpe`
================================================

.. py:module:: neural_compressor.contrib.strategy.tpe

.. autoapi-nested-parse::

   Define the tuning strategy that uses TPE search in the tuning space.



Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   neural_compressor.contrib.strategy.tpe.TpeTuneStrategy




.. py:class:: TpeTuneStrategy(model, conf, q_dataloader, q_func=None, eval_dataloader=None, eval_func=None, dicts=None, q_hooks=None)

   Bases: :py:obj:`neural_compressor.strategy.strategy.TuneStrategy`

   The tuning strategy using TPE search in the tuning space.

   :param model: The FP32 model specified for low precision tuning.
   :type model: object
   :param conf: The Conf class instance initialized from the user yaml
                config file.
   :type conf: Conf
   :param q_dataloader: Data loader for calibration, mandatory for
                        post-training quantization. It is iterable and
                        should yield a tuple (input, label) for a
                        calibration dataset containing labels, or
                        (input, _) for a label-free calibration
                        dataset. The input can be an object, list,
                        tuple, or dict, depending on the user
                        implementation, and it is taken directly as the
                        model input. A minimal sketch of such a loader
                        follows the parameter list.
   :type q_dataloader: generator
   :param q_func: Reserved for future use.
   :type q_func: function, optional
   :param eval_dataloader: Data loader for evaluation. It is iterable
                           and should yield a tuple of (input, label).
                           The input can be an object, list, tuple, or
                           dict, depending on the user implementation,
                           and it is taken directly as the model input.
                           The label should be accepted as input by the
                           supported metrics. If this parameter is not
                           None, the user needs to specify pre-defined
                           evaluation metrics through the configuration
                           file and should set the ``eval_func``
                           parameter to None. The tuner will combine
                           the model, eval_dataloader, and pre-defined
                           metrics to run the evaluation process.
   :type eval_dataloader: generator, optional
   :param eval_func: The evaluation function provided by the user.
                     This function takes the model as its parameter;
                     the evaluation dataset and metrics should be
                     encapsulated in the function implementation, and
                     it outputs a higher-is-better accuracy scalar
                     value.

                     The pseudo code should be something like::

                         def eval_func(model):
                             input, label = dataloader()
                             output = model(input)
                             accuracy = metric(output, label)
                             return accuracy

                     A runnable sketch follows the parameter list.
   :type eval_func: function, optional
   :param dicts: The dict containing resume information.
                 Defaults to None.
   :type dicts: dict, optional
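
   A minimal, runnable sketch of a calibration loader and an
   ``eval_func`` matching the shapes described above. Everything here
   (the synthetic tensors, batch shapes, and accuracy metric) is an
   illustrative assumption, not part of this API::

       import torch

       # Hypothetical stand-in data; real code would wrap actual datasets.
       calib_samples = [(torch.randn(1, 3, 224, 224), torch.tensor([0]))
                        for _ in range(8)]
       eval_samples = list(calib_samples)

       def q_dataloader():
           # Yield (input, label) pairs; yield (input, None) for a
           # label-free calibration dataset.
           for image, label in calib_samples:
               yield image, label

       def eval_func(model):
           # Dataset and metric are encapsulated here; the return value
           # is a higher-is-better scalar accuracy.
           correct = total = 0
           with torch.no_grad():
               for image, label in eval_samples:
                   pred = model(image).argmax(dim=-1)
                   correct += int((pred == label).sum())
                   total += label.numel()
           return correct / total

   In practice these would be passed as the ``q_dataloader`` and
   ``eval_func`` arguments when constructing ``TpeTuneStrategy``, and
   the tuning itself is driven by :py:meth:`traverse`.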

   .. py:method:: traverse()

      TPE traverse logic.


   .. py:method:: add_loss_to_tuned_history_and_find_best(tuning_history_list)

      Add the loss to the tuned history and find the best record.


   .. py:method:: object_evaluation(tune_cfg, model)

      Check if the config was already evaluated.


   .. py:method:: calculate_loss(acc_diff, lat_diff, config)

      Calculate the loss from the accuracy and latency differences.
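
      One plausible shape for such a loss, shown purely as an
      illustration; the weighting scheme and ``config`` keys here are
      assumptions, not the library's actual formula::

          def calculate_loss(acc_diff, lat_diff, config):
              # Hypothetical weights: penalize accuracy drop, reward
              # latency improvement.
              acc_weight = config.get("acc_weight", 1.0)
              lat_weight = config.get("lat_weight", 0.1)
              return acc_weight * max(acc_diff, 0.0) - lat_weight * lat_diff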


   .. py:method:: stop(timeout, trials_count)

      Check whether traversal of the tuning space should stop, i.e. whether the accuracy goal is met or the timeout is reached.

      :returns: True if traversal should stop, otherwise False.
      :rtype: bool
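
      A purely illustrative sketch of such a check; the names,
      signature, and trial budget below are assumptions rather than
      the actual implementation::

          import time

          def should_stop(start_time, timeout, trials_count, max_trials,
                          best_acc, acc_goal):
              if best_acc >= acc_goal:                  # accuracy goal met
                  return True
              if time.time() - start_time >= timeout:   # timeout reached
                  return True
              return trials_count >= max_trials         # budget exhausted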