neural_compressor.tensorflow.quantization.utils.quantize_graph.qat.quantize_helper

QAT Quantize Helper Class.

Module Contents

Functions

init_quantize_config(model[, quantize_recipe])

Initialize quantization config at the beginning of QAT process.

qat_clone_function(layer)

Wrap or leave the given layer based on the quantize config object's parameters.

neural_compressor.tensorflow.quantization.utils.quantize_graph.qat.quantize_helper.init_quantize_config(model, quantize_recipe=None)[source]

Initialize quantization config at the beginning of QAT process.

Parameters:
  • model (tf.keras.Model) – The pre-optimized model to be quantized.

  • quantize_recipe (dict) – A dict that decides whether given layers should be quantized.

Returns:

A QuantizeConfig instance used to decide whether a specific layer should be quantized.

Return type:

config (QuantizeConfig)
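The exact schema of quantize_recipe is not documented here; as an illustration only, the following pure-Python sketch assumes a recipe that maps layer names to a "quantize" flag and shows how a config object built at the start of the QAT pass could answer per-layer quantization queries. All class and function bodies below are hypothetical stand-ins, not the library's actual implementation:

```python
# Hypothetical stand-in for the library's QuantizeConfig; the real class
# is returned by neural_compressor's init_quantize_config.
class QuantizeConfig:
    def __init__(self, quantize_recipe=None):
        # Assumed recipe shape: {layer_name: {"quantize": bool}}
        self.quantize_recipe = quantize_recipe or {}

    def is_quantizable(self, layer_name):
        # Layers absent from the recipe default to being quantized.
        return self.quantize_recipe.get(layer_name, {}).get("quantize", True)


def init_quantize_config(model_layers, quantize_recipe=None):
    # Sketch: build one config object governing the whole QAT process.
    return QuantizeConfig(quantize_recipe)


config = init_quantize_config(
    ["conv1", "dense1"],
    quantize_recipe={"dense1": {"quantize": False}},
)
print(config.is_quantizable("conv1"))   # True  (not in recipe, default)
print(config.is_quantizable("dense1"))  # False (recipe opts it out)
```

In the library, the returned config is then consulted once per layer while the model is cloned for QAT.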

neural_compressor.tensorflow.quantization.utils.quantize_graph.qat.quantize_helper.qat_clone_function(layer)[source]

Wrap or leave the given layer based on the quantize config object's parameters.

Parameters:

layer (tf.keras.layers.Layer) – The input Keras layer.

Returns:

The layer wrapped by the QuantizeWrapper class, or the original layer if it is not to be quantized.

Return type:

wrapped_layer (QuantizeWrapper)
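To make the "wrap or leave" dispatch concrete, here is a minimal pure-Python sketch of the pattern. The Layer, QuantizeWrapper, and QUANTIZABLE names below are hypothetical stand-ins; in the real library the decision comes from the QuantizeConfig created by init_quantize_config, and the wrapper inserts fake-quantization around the layer:

```python
# Hypothetical stand-ins illustrating the dispatch qat_clone_function performs.
class Layer:
    def __init__(self, name):
        self.name = name


class QuantizeWrapper:
    # Stand-in for the library's QuantizeWrapper: it holds the original
    # layer and would add fake-quantization ops around its computation.
    def __init__(self, layer):
        self.layer = layer


QUANTIZABLE = {"conv1"}  # assumed decision source; really the QuantizeConfig


def qat_clone_function(layer):
    # Wrap layers the config marks as quantizable; leave others untouched.
    if layer.name in QUANTIZABLE:
        return QuantizeWrapper(layer)
    return layer


wrapped = qat_clone_function(Layer("conv1"))
left = qat_clone_function(Layer("flatten"))
print(type(wrapped).__name__)  # QuantizeWrapper
print(type(left).__name__)     # Layer
```

A function of this shape is typically supplied as the clone_function argument to tf.keras.models.clone_model, so that every layer of the model is visited and selectively wrapped during the QAT rewrite.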