neural_compressor.experimental.model_conversion¶
Helps convert one model format to another.
Module Contents¶
Classes¶
ModelConversion class is used to convert one model format to another.
- class neural_compressor.experimental.model_conversion.ModelConversion(conf_fname_or_obj=None)¶
ModelConversion class is used to convert one model format to another.
Currently, Neural Compressor only supports converting a quantization-aware-trained (QAT) TensorFlow model to a default quantized model.
- The typical usage is:

```python
from neural_compressor.experimental import ModelConversion, common

conversion = ModelConversion()
conversion.source = 'QAT'
conversion.destination = 'default'
conversion.model = '/path/to/saved_model'
q_model = conversion()
```
- Parameters:
conf_fname_or_obj (string or obj) – Optional. The path to the YAML configuration file, or a Conf object, containing the model conversion and evaluation settings if they are not specified in code.
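For orientation, a conversion YAML might look like the sketch below. This is a hypothetical fragment, not a verified schema; check the exact keys against Neural Compressor's configuration documentation before use.

```yaml
# Hypothetical model-conversion config; verify keys against the
# Neural Compressor YAML schema before relying on them.
model:
  name: my_model          # assumed model name
  framework: tensorflow
model_conversion:
  source: QAT
  destination: default
```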
- property source¶
Return source.
- property destination¶
Return destination.
- property eval_dataloader¶
Return eval dataloader.
- property model¶
Return model.
- property metric¶
Return metric.
- property postprocess¶
Return postprocess.
- property eval_func¶
Return eval_func.
- dataset(dataset_type, *args, **kwargs)¶
Return dataset.
- Parameters:
dataset_type – dataset type
- Returns:
dataset class
- Return type:
class
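The dataset() method acts as a factory: it resolves a dataset_type string to a registered dataset class and instantiates it with the remaining arguments. The sketch below illustrates that registry pattern in plain Python; the names (DATASET_REGISTRY, register_dataset, DummyDataset) are illustrative and are not Neural Compressor's internal API.

```python
# Minimal sketch of a string-keyed dataset factory, assuming a registry
# maps each dataset_type name to a dataset class. Illustrative only;
# not Neural Compressor's actual internals.
DATASET_REGISTRY = {}

def register_dataset(name):
    """Class decorator that records a dataset class under `name`."""
    def decorator(cls):
        DATASET_REGISTRY[name] = cls
        return cls
    return decorator

@register_dataset("dummy")
class DummyDataset:
    """Yields zero-filled samples of a fixed shape."""
    def __init__(self, shape, size=10):
        self.shape = shape
        self.size = size

    def __len__(self):
        return self.size

    def __getitem__(self, idx):
        return [0.0] * self.shape[-1]

def dataset(dataset_type, *args, **kwargs):
    """Look up `dataset_type` in the registry and build an instance."""
    return DATASET_REGISTRY[dataset_type](*args, **kwargs)

ds = dataset("dummy", shape=(4, 3), size=5)
print(len(ds))  # 5
```

A registry keyed by name keeps the factory open for extension: adding a new dataset type only requires registering a new class, with no change to the dataset() call site.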