neural_compressor.adaptor.tf_utils.quantize_graph.qdq.fuse_qdq_bn
Quantize FusedBatchNormV3 to int8 op.
Module Contents
Classes
| FuseNodeStartWithFusedBatchNormV3 | Quantize FusedBatchNormV3 to int8 op _QuantizedFusedBatchNorm. |
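To illustrate what "quantize FusedBatchNormV3 to an int8 op" means numerically, the sketch below implements a toy int8 batch-norm path in pure Python. This is not neural_compressor's implementation (the real fuser rewrites TensorFlow graph nodes into a `_QuantizedFusedBatchNorm` kernel); the function names, scales, and epsilon here are illustrative assumptions.

```python
# Illustrative sketch only: the idea behind replacing a float
# FusedBatchNormV3 with an int8 _QuantizedFusedBatchNorm.
import math

def batch_norm(x, gamma, beta, mean, var, eps=1e-3):
    """Float reference: y = gamma * (x - mean) / sqrt(var + eps) + beta."""
    return [gamma * (v - mean) / math.sqrt(var + eps) + beta for v in x]

def quantize(x, scale, zp=0):
    """Affine quantization of floats into the int8 range [-128, 127]."""
    return [max(-128, min(127, round(v / scale) + zp)) for v in x]

def dequantize(q, scale, zp=0):
    """Map int8 values back to floats."""
    return [(v - zp) * scale for v in q]

def quantized_batch_norm(q_in, in_scale, gamma, beta, mean, var,
                         out_scale, eps=1e-3):
    """int8 in -> int8 out: dequantize, apply BN, requantize.
    A real fused kernel folds gamma / sqrt(var + eps) into the
    rescale factor instead of round-tripping through float."""
    x = dequantize(q_in, in_scale)
    y = batch_norm(x, gamma, beta, mean, var, eps)
    return quantize(y, out_scale)

# Toy data and BN parameters (assumed values for the demo).
x = [0.5, -1.0, 2.0, 0.0]
gamma, beta, mean, var = 1.5, 0.1, 0.25, 1.0
in_scale, out_scale = 2.0 / 127, 3.0 / 127

ref = batch_norm(x, gamma, beta, mean, var)
q = quantized_batch_norm(quantize(x, in_scale), in_scale,
                         gamma, beta, mean, var, out_scale)
approx = dequantize(q, out_scale)
# The int8 path tracks the float reference within quantization error.
print(max(abs(a - b) for a, b in zip(ref, approx)))
```

The point of the fusion is that normalization and (de)quantization collapse into one integer op with a single rescale, avoiding a separate float batch-norm node between quantize/dequantize pairs in the QDQ graph.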