:py:mod:`neural_compressor.mix_precision`
=========================================

.. py:module:: neural_compressor.mix_precision

.. autoapi-nested-parse::

   Mixed precision for Neural Compressor.



Module Contents
---------------


Functions
~~~~~~~~~

.. autoapisummary::

   neural_compressor.mix_precision.fit



.. py:function:: fit(model, config=None, eval_func=None, eval_dataloader=None, eval_metric=None, **kwargs)

   Generate a low-precision (mixed-precision) model across multiple framework backends.

   :param model: For a TensorFlow model, it can be a path to a
                 frozen pb file, a loaded graph_def object, or
                 a path to a ckpt/SavedModel folder.
                 For a PyTorch model, it is a torch.nn.Module
                 instance.
                 For an MXNet model, it is an mxnet.symbol.Symbol
                 or gluon.HybridBlock instance.
   :type model: object
   :param config: The path to the YAML configuration file or a
                  QuantConf object containing the accuracy goal,
                  tuning objective, and preferred calibration &
                  quantization tuning space, etc.
   :type config: string or obj
   :param eval_func: The evaluation function provided by the user.
                     It takes the model as its only parameter; the
                     evaluation dataset and metrics should be
                     encapsulated inside this function, which must
                     return a higher-is-better scalar accuracy value.
   :type eval_func: function, optional
   :param eval_dataloader: Data loader for evaluation. It is iterable
                           and should yield (input, label) tuples.
                           The input can be an object, list, tuple, or
                           dict, depending on the user implementation,
                           and must be consumable as model input.
                           The label must be usable as input to the
                           supported metrics. If this parameter is
                           not None, the user needs to specify
                           pre-defined evaluation metrics through the
                           configuration file and set the "eval_func"
                           parameter to None. The tuner will combine
                           the model, eval_dataloader, and pre-defined
                           metrics to run the evaluation process.
   :type eval_dataloader: generator, optional
   :param eval_metric: An Accuracy object that measures the metric
                       for quantization.
   :type eval_metric: obj, optional

   :returns: A MixedPrecision object that generates a low-precision model across various DL frameworks.

   :raises AssertionError:
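
   **Example**

   The following is a minimal usage sketch based on the parameters documented
   above. The ``MixedPrecisionConfig`` import and the contents of ``eval_func``
   are illustrative assumptions; the exact configuration class and evaluation
   logic depend on the installed Neural Compressor version and the user's
   validation data.

   .. code-block:: python

      import torch
      from neural_compressor import MixedPrecisionConfig  # assumed config class
      from neural_compressor.mix_precision import fit

      # Toy PyTorch model and a tiny synthetic validation set, purely for illustration.
      model = torch.nn.Sequential(
          torch.nn.Linear(128, 64),
          torch.nn.ReLU(),
          torch.nn.Linear(64, 10),
      )
      val_inputs = torch.randn(32, 128)
      val_labels = torch.randint(0, 10, (32,))

      def eval_func(model):
          # Encapsulates the evaluation data and metric, returning a single
          # higher-is-better scalar (accuracy), as required by fit.
          model.eval()
          with torch.no_grad():
              preds = model(val_inputs).argmax(dim=1)
          return (preds == val_labels).float().mean().item()

      config = MixedPrecisionConfig()  # accuracy goal / tuning space left at defaults
      converted_model = fit(model=model, config=config, eval_func=eval_func)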