neural_compressor.experimental.model_conversion
Helps convert one model format to another.
Module Contents
Classes
ModelConversion: class used to convert one model format to another.
- class neural_compressor.experimental.model_conversion.ModelConversion(conf_fname_or_obj=None)[source]
ModelConversion class is used to convert one model format to another.
Currently, Neural Compressor only supports converting a quantization-aware training (QAT) TensorFlow model to a default quantized model.
- The typical usage is:

      from neural_compressor.experimental import ModelConversion, common

      conversion = ModelConversion()
      conversion.source = 'QAT'
      conversion.destination = 'default'
      conversion.model = '/path/to/saved_model'
      q_model = conversion()
- Parameters:
conf_fname_or_obj (string or obj) – Optional. The path to a YAML configuration file, or a Conf object, containing model conversion and evaluation settings not otherwise specified in code.
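As a rough sketch, a YAML file passed via conf_fname_or_obj might look like the fragment below. The top-level keys (model, model_conversion) and their fields are illustrative assumptions based on typical Neural Compressor configuration files; consult the project's YAML schema for the authoritative layout.

```yaml
# Hypothetical conversion config; field names are illustrative assumptions.
model:
  name: example_model        # assumed: identifier for the model under conversion
  framework: tensorflow      # QAT-to-default conversion targets TensorFlow

model_conversion:
  source: QAT                # matches conversion.source in the code example
  destination: default       # matches conversion.destination in the code example
```

Such a file would then be passed at construction time, e.g. ModelConversion('conversion.yaml'), instead of setting source and destination attributes in code.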