:py:mod:`neural_compressor.utils.pytorch`
=========================================

.. py:module:: neural_compressor.utils.pytorch

.. autoapi-nested-parse::

   PyTorch utilities.



Module Contents
---------------


Functions
~~~~~~~~~

.. autoapisummary::

   neural_compressor.utils.pytorch.load



.. py:function:: load(checkpoint_dir=None, model=None, history_cfg=None, **kwargs)

   Execute the quantization process on the specified model.

   :param checkpoint_dir: The checkpoint folder. 'best_configure.yaml' and
                          'best_model_weights.pt' must be present in this directory.
                          The 'checkpoint' directory is located under the workspace
                          folder, and the workspace folder is defined in the
                          configuration YAML file.
   :type checkpoint_dir: dir/file/dict
   :param model: The FP32 model to be quantized.
   :type model: object
   :param history_cfg: Configurations from the history.snapshot file.
   :type history_cfg: object
   :param \*\*kwargs: Contains a custom configuration dict and other options.
   :type \*\*kwargs: dict

   :returns: The quantized model.
   :rtype: object
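
   Example (a minimal sketch; the checkpoint path and the torchvision model
   below are illustrative assumptions, not part of this API):

   .. code-block:: python

      import torchvision.models as models
      from neural_compressor.utils.pytorch import load

      # The original FP32 model that was quantized during tuning
      # (resnet18 is used here purely for illustration).
      fp32_model = models.resnet18(pretrained=True)

      # './saved' is a hypothetical 'checkpoint' folder under the workspace,
      # expected to contain 'best_configure.yaml' and 'best_model_weights.pt'.
      quantized_model = load(checkpoint_dir='./saved', model=fp32_model)
      quantized_model.eval()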