neural_compressor.utils.pytorch

PyTorch utilities.

Module Contents

Functions

load([checkpoint_dir, model, history_cfg])

Execute the quantization process on the specified model.

neural_compressor.utils.pytorch.load(checkpoint_dir=None, model=None, history_cfg=None, **kwargs)

Execute the quantization process on the specified model.

Parameters:
  • checkpoint_dir (dir/file/dict) – The checkpoint directory. It must contain 'best_configure.yaml' and 'best_model_weights.pt'. The 'checkpoint' directory lives under the workspace folder, which is defined in the configuration YAML file.

  • model (object) – The FP32 model to be quantized.

  • history_cfg (object) – Configurations loaded from the history.snapshot file.

  • **kwargs (dict) – Additional keyword arguments, e.g., a custom configuration dict.

Returns:

The quantized model.

Return type:

(object)
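
A minimal usage sketch, assuming a previous quantization run has already written 'best_configure.yaml' and 'best_model_weights.pt' into './saved_results' (the path and the toy FP32 model below are placeholders, not part of this API reference):

    import torch
    from neural_compressor.utils.pytorch import load

    # Placeholder FP32 model; replace with the model that was originally quantized.
    fp32_model = torch.nn.Sequential(
        torch.nn.Linear(16, 8),
        torch.nn.ReLU(),
        torch.nn.Linear(8, 2),
    )

    # Hypothetical checkpoint directory produced by an earlier quantization run;
    # it is expected to contain 'best_configure.yaml' and 'best_model_weights.pt'.
    int8_model = load(checkpoint_dir='./saved_results', model=fp32_model)

    # The returned object is the quantized model, ready for inference.
    int8_model.eval()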