neural_compressor.tensorflow.quantization.utils.quantize_graph.qdq.fuse_qdq_bn

Quantize FusedBatchNormV3 to an int8 op.

Module Contents

Classes

FuseNodeStartWithFusedBatchNormV3

Quantize FusedBatchNormV3 to the int8 op _QuantizedFusedBatchNorm.

class neural_compressor.tensorflow.quantization.utils.quantize_graph.qdq.fuse_qdq_bn.FuseNodeStartWithFusedBatchNormV3(**kwargs)

Quantize FusedBatchNormV3 to the int8 op _QuantizedFusedBatchNorm.
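The sketch below is a minimal, illustrative example of the kind of graph rewrite this class performs: replacing a FusedBatchNormV3 node in a TensorFlow GraphDef with a node whose op is _QuantizedFusedBatchNorm. It is not the library's actual implementation; the attribute name "Tinput" and the rewrite function itself are assumptions added for illustration only.

import copy

from tensorflow.core.framework import attr_value_pb2, graph_pb2, types_pb2


def rewrite_fused_batch_norm(graph_def: graph_pb2.GraphDef) -> graph_pb2.GraphDef:
    """Illustrative sketch: return a new GraphDef in which every FusedBatchNormV3
    node is replaced by a node with op _QuantizedFusedBatchNorm (hypothetical
    rewrite, not the FuseNodeStartWithFusedBatchNormV3 internals)."""
    new_graph = graph_pb2.GraphDef()
    for node in graph_def.node:
        new_node = copy.deepcopy(node)
        if node.op == "FusedBatchNormV3":
            # Swap the op type to the quantized variant named in the docstring.
            new_node.op = "_QuantizedFusedBatchNorm"
            # Hypothetical attribute marking the quantized input dtype as qint8.
            new_node.attr["Tinput"].CopyFrom(
                attr_value_pb2.AttrValue(type=types_pb2.DT_QINT8)
            )
        new_graph.node.append(new_node)
    return new_graph

In the real fuser, the rewrite is driven by the QDQ pattern around the node (QuantizeV2/Dequantize neighbors) and by the configuration passed through **kwargs, which this sketch does not model.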