neural_compressor.tensorflow.quantization.quantize
==================================================

.. py:module:: neural_compressor.tensorflow.quantization.quantize

.. autoapi-nested-parse::

   Intel Neural Compressor TensorFlow quantization base API.


Functions
---------

.. autoapisummary::

   neural_compressor.tensorflow.quantization.quantize.need_apply
   neural_compressor.tensorflow.quantization.quantize.quantize_model
   neural_compressor.tensorflow.quantization.quantize.quantize_model_with_single_config


Module Contents
---------------

.. py:function:: need_apply(configs_mapping: Dict[Tuple[str, callable], neural_compressor.common.base_config.BaseConfig], algo_name)

   Check whether the given algorithm should be applied.


.. py:function:: quantize_model(model: Union[str, tensorflow.keras.Model, neural_compressor.tensorflow.utils.BaseModel], quant_config: Union[neural_compressor.common.base_config.BaseConfig, list], calib_dataloader: Callable = None, calib_iteration: int = 100, calib_func: Callable = None)

   The main entry to quantize a model.

   :param model: an FP32 model to be quantized.
   :param quant_config: a single quantization configuration or a list of them.
   :param calib_dataloader: a data loader for calibration.
   :param calib_iteration: the number of calibration iterations.
   :param calib_func: the function used for calibration; should be a substitute for calib_dataloader
                      when the built-in calibration function of INC does not work for model inference.
   :returns: the quantized model.
   :rtype: q_model


.. py:function:: quantize_model_with_single_config(q_model: neural_compressor.tensorflow.utils.BaseModel, quant_config: neural_compressor.common.base_config.BaseConfig, calib_dataloader: Callable = None, calib_iteration: int = 100, calib_func: Callable = None)

   Quantize a model using a single configuration.

   :param q_model: a model wrapped by the INC TF model class.
   :param quant_config: a quantization configuration.
   :param calib_dataloader: a data loader for calibration.
   :param calib_iteration: the number of calibration iterations.
   :param calib_func: the function used for calibration; should be a substitute for calib_dataloader
                      when the built-in calibration function of INC does not work for model inference.
   :returns: the quantized model.
   :rtype: q_model
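

Example
-------

A minimal sketch of calling ``quantize_model``, assuming the ``StaticQuantConfig`` exported by
``neural_compressor.tensorflow`` and a user-supplied calibration dataloader. The toy dataloader
class, the Keras model, the input shape, and the batch/iteration counts below are illustrative
assumptions, not part of this module.

.. code-block:: python

   import numpy as np
   import tensorflow as tf

   from neural_compressor.tensorflow import StaticQuantConfig, quantize_model


   class CalibDataloader:
       """Toy calibration dataloader yielding (inputs, labels) batches.

       Assumption: any iterable with a ``batch_size`` attribute that yields
       (inputs, labels) pairs can serve as ``calib_dataloader``.
       """

       def __init__(self, batch_size=1, num_batches=10, input_shape=(224, 224, 3)):
           self.batch_size = batch_size
           self.num_batches = num_batches
           self.input_shape = input_shape

       def __iter__(self):
           for _ in range(self.num_batches):
               inputs = np.random.rand(self.batch_size, *self.input_shape).astype(np.float32)
               labels = np.zeros(self.batch_size, dtype=np.int64)
               yield inputs, labels


   # An FP32 Keras model; per the signature above, a saved-model path or an
   # INC BaseModel wrapper should also be accepted.
   fp32_model = tf.keras.applications.MobileNetV2(weights=None)

   # Default post-training static quantization configuration.
   quant_config = StaticQuantConfig()

   q_model = quantize_model(
       fp32_model,
       quant_config,
       calib_dataloader=CalibDataloader(batch_size=1),
       calib_iteration=10,
   )

``calib_func`` is the alternative to ``calib_dataloader``: a callable that runs inference on the
model with representative data, for cases where the built-in calibration loop cannot drive the
model directly.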