neural_compressor.utils.export.tf2onnx

Helper functions to export model from TensorFlow to ONNX.

Functions

tf_to_fp32_onnx(graph_def, save_path[, opset_version, ...])

Export an FP32 TensorFlow model to an FP32 ONNX model using the tf2onnx tool.

tf_to_int8_onnx(int8_model, save_path[, ...])

Export an INT8 TensorFlow model to an INT8 ONNX model.

Module Contents

neural_compressor.utils.export.tf2onnx.tf_to_fp32_onnx(graph_def, save_path, opset_version=14, input_names=None, output_names=None, inputs_as_nchw=None)[source]

Export an FP32 TensorFlow model to an FP32 ONNX model using the tf2onnx tool.

Parameters:
  • graph_def (GraphDef) – FP32 graph_def to convert.

  • save_path (str) – path to save the ONNX model.

  • opset_version (int, optional) – opset version. Defaults to 14.

  • input_names (list, optional) – input names. Defaults to None.

  • output_names (list, optional) – output names. Defaults to None.

  • inputs_as_nchw (list, optional) – input names whose layout should be transposed from NHWC to NCHW. Defaults to None.
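A minimal usage sketch for the FP32 export (the wrapper, helper, and file names below are illustrative, not part of the library; TensorFlow and neural-compressor must be installed to actually run the export):

```python
def tensor_names(node_names):
    # tf2onnx addresses graph inputs/outputs by tensor name ("node:0"),
    # so append the ":0" output index to bare node names.
    return [n if ":" in n else n + ":0" for n in node_names]

def export_frozen_fp32(pb_path, onnx_path, inputs, outputs, nchw_inputs=None):
    # Hypothetical wrapper: load a frozen GraphDef and export it to ONNX.
    import tensorflow as tf  # deferred import: only needed for the real export
    from neural_compressor.utils.export.tf2onnx import tf_to_fp32_onnx

    graph_def = tf.compat.v1.GraphDef()
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())

    tf_to_fp32_onnx(
        graph_def,
        save_path=onnx_path,
        opset_version=14,
        input_names=tensor_names(inputs),
        output_names=tensor_names(outputs),
        inputs_as_nchw=tensor_names(nchw_inputs) if nchw_inputs else None,
    )

# Example call (file and node names are hypothetical):
# export_frozen_fp32("frozen_fp32.pb", "model_fp32.onnx", ["input"], ["Identity"])
```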

neural_compressor.utils.export.tf2onnx.tf_to_int8_onnx(int8_model, save_path, opset_version: int = 14, input_names=None, output_names=None, inputs_as_nchw=None)[source]

Export an INT8 TensorFlow model to an INT8 ONNX model.

Parameters:
  • int8_model (TensorFlow ITEX QDQ model) – INT8 model to convert.

  • save_path (str) – path to save the ONNX model.

  • opset_version (int, optional) – opset version. Defaults to 14.

  • input_names (list, optional) – input names. Defaults to None.

  • output_names (list, optional) – output names. Defaults to None.

  • inputs_as_nchw (list, optional) – input names whose layout should be transposed from NHWC to NCHW. Defaults to None.
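The INT8 export follows the same pattern, except the first argument is a quantized model object (an ITEX QDQ TensorFlow model, e.g. the result of a prior neural-compressor quantization step) rather than a raw GraphDef. A hedged sketch, with an illustrative wrapper name:

```python
def export_quantized_int8(q_model, onnx_path, inputs, outputs, nchw_inputs=None):
    # q_model is expected to be a TensorFlow ITEX QDQ model produced by an
    # upstream quantization step (that step is assumed, not shown here).
    from neural_compressor.utils.export.tf2onnx import tf_to_int8_onnx

    tf_to_int8_onnx(
        q_model,
        save_path=onnx_path,
        opset_version=14,
        input_names=inputs,
        output_names=outputs,
        inputs_as_nchw=nchw_inputs,
    )

# Example call (names are hypothetical):
# export_quantized_int8(q_model, "model_int8.onnx", ["input:0"], ["Identity:0"])
```

Keeping opset_version at 14 or above matters here because the QDQ (QuantizeLinear/DequantizeLinear) pattern this export relies on needs a sufficiently recent ONNX opset.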