neural_compressor.tensorflow.quantization.utils.quantize_graph.qdq.optimize_qdq

Fuse the DQ + OP + Q pattern and convert the FP32 op to INT8.

Module Contents

Classes

OptimizeQDQGraph

Apply the DQ + OP + Q fusion pattern.

class neural_compressor.tensorflow.quantization.utils.quantize_graph.qdq.optimize_qdq.OptimizeQDQGraph(input_graph, input_node_names, output_node_names, op_wise_config, op_wise_sequences, device, fake_quant=False, new_api=False, performance_only=False, itex_mode=False)[source]

Apply the DQ + OP + Q fusion pattern.
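The constructor signature above suggests a usage pattern like the following. This is a minimal sketch: the `build_op_wise_config` helper, the per-op tuple layout, and the `do_transform()` method name are illustrative assumptions, not confirmed API; only the constructor parameters are taken from the signature documented here.

```python
# Minimal usage sketch -- neural_compressor and TensorFlow are assumed to be
# installed. The op_wise_config tuple layout and the do_transform() method
# name below are assumptions for illustration.

def build_op_wise_config(quantizable_nodes):
    # Map each quantizable node name to an example per-op tuple:
    # (per_channel, algorithm, is_asymmetric, weight_bit) -- layout assumed.
    return {name: (False, "minmax", False, 7.0) for name in quantizable_nodes}

op_wise_config = build_op_wise_config(["conv2d", "matmul"])

# With a frozen FP32 GraphDef in hand, the fusion pass would be driven
# roughly as follows (method name and return value assumed):
#
#   optimizer = OptimizeQDQGraph(
#       input_graph=fp32_graph_def,
#       input_node_names=["input"],
#       output_node_names=["output"],
#       op_wise_config=op_wise_config,
#       op_wise_sequences=fusion_sequences,  # supported fusion patterns
#       device="cpu",
#   )
#   int8_graph_def = optimizer.do_transform()
```

The resulting graph would carry INT8 versions of the fused ops in place of the original FP32 DQ + OP + Q subgraphs, per the class description above.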