neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils

Utilities for converting a TensorFlow model to an ONNX model.

Module Contents

Classes

SeqType

Wrap around TensorProto.* to signify a tensor sequence of a given type.

Functions

set_name(name)

Set op name for inserted ops.

find_opset(opset)

Find opset.

assert_error(bool_val, error_msg, *args)

Raise an error with the given message if the condition does not hold.

map_numpy_to_onnx_dtype(np_dtype)

Map numpy dtype to ONNX dtype.

map_onnx_to_numpy_type(onnx_type)

Map ONNX dtype to numpy dtype.

add_port_to_name(name[, nr])

Map node output number to name.

get_tensorflow_node_attr(node, name)

Parse tensorflow node attribute.

get_tensorflow_tensor_shape(tensor)

Get shape from tensorflow tensor.

get_tensorflow_node_shape_attr(node)

Get shape from tensorflow attr "shape".

map_tensorflow_dtype(dtype)

Convert tensorflow dtype to ONNX.

get_tensorflow_tensor_data(tensor)

Get data from tensorflow tensor.

convert_tensorflow_tensor_to_onnx(tensor[, name])

Convert tensorflow tensor to onnx tensor.

read_tensorflow_node_attrs(node)

Read tensorflow node attribute names.

infer_onnx_shape_dtype(node, opset_version, ...[, ...])

Infer shapes and dtypes for outputs of the node.

make_onnx_shape(shape)

A shape containing -1 is not valid in ONNX; replace each -1 dimension with a generated symbolic name.

make_onnx_inputs_outputs(name, elem_type, shape, **kwargs)

Wrapper for creating onnx graph inputs or outputs.

save_protobuf(path, message[, as_text])

Save ONNX protobuf file.

is_onnx_domain(domain)

Check whether the given domain is the ONNX domain.

is_list_or_tuple(obj)

Check whether the object is a list or a tuple.

are_shapes_equal(src, dest)

Check whether two shapes are equal.

get_subgraphs_from_onnx(model_proto)

Return an iterator over the graphs/subgraphs of a model (using DFS).

initialize_name_counter(model_proto)

Avoid name conflicts by initializing the counter used by make_name based on the provided model.

get_index_from_strided_slice_of_shape(node, ...)

Return the index of the dimension that the strided slice reads from the shape node, or None.

compute_const_folding_using_tf(g, const_node_values, ...)

Find nodes with constant inputs and compute their values using TF.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.set_name(name)[source]

Set op name for inserted ops.
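
A minimal usage sketch, assuming set_name returns a fresh, counter-suffixed name on each call (the exact format of the generated name is an implementation detail):

    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # Generate distinct names for two Cast ops inserted during conversion.
    # The returned strings are expected to be unique; the suffix scheme is
    # an implementation detail, not guaranteed by this signature.
    cast_name_1 = tf2onnx_utils.set_name("Cast")
    cast_name_2 = tf2onnx_utils.set_name("Cast")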

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.find_opset(opset)[source]

Find opset.
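
A hedged sketch of typical use, assuming that None (or 0) falls back to the module's preferred default opset while an explicit value is returned as the opset to target:

    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # Assumed behavior: None/0 resolves to a default opset; an explicit
    # value is passed through as the opset to target.
    default_opset = tf2onnx_utils.find_opset(None)
    target_opset = tf2onnx_utils.find_opset(14)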

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.assert_error(bool_val, error_msg, *args)[source]

Raise an error with the given message if the condition does not hold.
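
A hedged sketch, assuming the first argument is a condition and that a falsy value raises an exception whose message is %-formatted with *args:

    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    opset = 13
    # A truthy first argument is assumed to be a no-op; a falsy one is
    # assumed to raise with the %-formatted message below.
    tf2onnx_utils.assert_error(opset >= 10, "opset %s is too old, at least 10 is required", opset)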

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.map_numpy_to_onnx_dtype(np_dtype)[source]

Map numpy dtype to ONNX dtype.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.map_onnx_to_numpy_type(onnx_type)[source]

Map ONNX dtype to numpy dtype.
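
A round-trip sketch of the two dtype mappings; whether the numpy side accepts a dtype instance, a dtype class, or both is an assumption here:

    import numpy as np
    from onnx import TensorProto
    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # numpy -> ONNX: float32 is expected to map to TensorProto.FLOAT.
    onnx_dtype = tf2onnx_utils.map_numpy_to_onnx_dtype(np.dtype("float32"))

    # ONNX -> numpy: the reverse lookup for TensorProto.FLOAT.
    np_dtype = tf2onnx_utils.map_onnx_to_numpy_type(TensorProto.FLOAT)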

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.add_port_to_name(name, nr=0)[source]

Map node output number to name.
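
A sketch assuming the usual TensorFlow "name:port" convention for node outputs; the exact returned string is an assumption:

    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # Output 0 is the default port.
    first_output = tf2onnx_utils.add_port_to_name("conv1")         # assumed "conv1:0"
    second_output = tf2onnx_utils.add_port_to_name("split", nr=1)  # assumed "split:1"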

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.get_tensorflow_node_attr(node, name)[source]

Parse tensorflow node attribute.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.get_tensorflow_tensor_shape(tensor)[source]

Get shape from tensorflow tensor.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.get_tensorflow_node_shape_attr(node)[source]

Get shape from tensorflow attr “shape”.
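
A combined sketch for the three TensorFlow inspection helpers above, assuming they accept a tf.Operation / tf.Tensor in the same way the corresponding tf2onnx utilities do:

    import tensorflow as tf
    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # Build a tiny graph with a partially known input shape.
    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.placeholder(tf.float32, shape=[None, 224, 224, 3], name="input")

    node = graph.get_operation_by_name("input")
    tensor = graph.get_tensor_by_name("input:0")

    dtype_attr = tf2onnx_utils.get_tensorflow_node_attr(node, "dtype")  # the placeholder's dtype attr
    tensor_shape = tf2onnx_utils.get_tensorflow_tensor_shape(tensor)    # e.g. [None, 224, 224, 3]
    shape_attr = tf2onnx_utils.get_tensorflow_node_shape_attr(node)     # shape read from the "shape" attr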

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.map_tensorflow_dtype(dtype)[source]

Convert tensorflow dtype to ONNX.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.get_tensorflow_tensor_data(tensor)[source]

Get data from tensorflow tensor.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.convert_tensorflow_tensor_to_onnx(tensor, name='')[source]

Convert tensorflow tensor to onnx tensor.
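
A sketch for get_tensorflow_tensor_data and convert_tensorflow_tensor_to_onnx, assuming both operate on a TensorFlow TensorProto such as the "value" attr of a Const node:

    import tensorflow as tf
    from tensorflow.python.framework import tensor_util
    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # A TensorFlow TensorProto, as it would appear in a Const node's "value" attr.
    tf_tensor = tensor_util.make_tensor_proto([1.0, 2.0, 3.0], dtype=tf.float32)

    np_data = tf2onnx_utils.get_tensorflow_tensor_data(tf_tensor)  # expected: numpy array [1., 2., 3.]
    onnx_tensor = tf2onnx_utils.convert_tensorflow_tensor_to_onnx(tf_tensor, name="const_0")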

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.read_tensorflow_node_attrs(node)[source]

Read tensorflow node attribute names.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.infer_onnx_shape_dtype(node, opset_version, input_shapes, input_dtypes, initializers=None)[source]

Infer shapes and dtypes for outputs of the node.

Sometimes, shape inference needs the values of node’s inputs, so initializers are used.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.make_onnx_shape(shape)[source]

A shape containing -1 is not valid in ONNX; replace each -1 dimension with a generated symbolic name.
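
A sketch; the generated symbolic name for the unknown dimension is an implementation detail:

    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # The -1 (unknown) dimension is replaced with a generated symbolic
    # dimension name so the shape is representable in ONNX.
    onnx_shape = tf2onnx_utils.make_onnx_shape([-1, 224, 224, 3])
    # assumed result: something like ["unk__1", 224, 224, 3]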

class neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.SeqType(tensor_dtype)[source]

Wrap around TensorProto.* to signify a tensor sequence of a given type.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.make_onnx_inputs_outputs(name, elem_type, shape, **kwargs)[source]

Wrapper for creating onnx graph inputs or outputs.

Parameters:
  • name – Text

  • elem_type – TensorProto.DataType

  • shape – Optional[Sequence[int]]
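
A sketch assuming the wrapper returns an ONNX ValueInfoProto built from the given name, element type, and shape:

    from onnx import TensorProto
    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # Describe a float32 graph input with a fixed NHWC shape.
    graph_input = tf2onnx_utils.make_onnx_inputs_outputs(
        "input_0", TensorProto.FLOAT, [1, 224, 224, 3]
    )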

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.save_protobuf(path, message, as_text=False)[source]

Save ONNX protobuf file.
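
A sketch that builds a tiny model with the standard onnx.helper API and saves it; as_text=True is assumed to write the textual protobuf format:

    from onnx import TensorProto, helper
    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # A minimal single-node Identity model.
    inp = helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 3])
    out = helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 3])
    node = helper.make_node("Identity", ["x"], ["y"])
    graph = helper.make_graph([node], "demo", [inp], [out])
    model = helper.make_model(graph)

    tf2onnx_utils.save_protobuf("model.onnx", model)                 # binary protobuf
    tf2onnx_utils.save_protobuf("model.pbtxt", model, as_text=True)  # assumed text format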

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.is_onnx_domain(domain)[source]

Check whether the given domain is the ONNX domain.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.is_list_or_tuple(obj)[source]

Check whether the object is a list or a tuple.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.are_shapes_equal(src, dest)[source]

Check whether two shapes are equal.
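
A combined sketch for the three small predicates above (is_onnx_domain, is_list_or_tuple, are_shapes_equal); the commented results are assumptions based on the docstrings:

    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    # The empty/default domain is assumed to count as the standard ONNX domain.
    tf2onnx_utils.is_onnx_domain("")               # assumed True
    tf2onnx_utils.is_onnx_domain("com.microsoft")  # assumed False

    tf2onnx_utils.is_list_or_tuple((1, 2, 3))  # True
    tf2onnx_utils.is_list_or_tuple("abc")      # False

    # Assumed element-wise comparison, so a list and a tuple with the same
    # dimensions compare equal.
    tf2onnx_utils.are_shapes_equal([1, 224, 224, 3], (1, 224, 224, 3))  # assumed True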

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.get_subgraphs_from_onnx(model_proto)[source]

Return an iterator over the graphs/subgraphs of a model (using DFS).
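
A sketch iterating the main graph and nested subgraphs of a model; the file path is a placeholder:

    import onnx
    from neural_compressor.adaptor.tf_utils.graph_rewriter.onnx import tf2onnx_utils

    model = onnx.load("model.onnx")  # placeholder path for an existing ONNX model

    # The iterator is expected to yield the top-level graph and any nested
    # subgraphs (e.g. If/Loop bodies), discovered depth-first.
    for graph in tf2onnx_utils.get_subgraphs_from_onnx(model):
        print(graph.name, len(graph.node))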

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.initialize_name_counter(model_proto)[source]

Avoid name conflicts by initializing the counter used by make_name based on the provided model.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.get_index_from_strided_slice_of_shape(node, outputs_to_values)[source]

Return the index of the dimension that the strided slice reads from the shape node, or None.

neural_compressor.adaptor.tf_utils.graph_rewriter.onnx.tf2onnx_utils.compute_const_folding_using_tf(g, const_node_values, graph_outputs)[source]

Find nodes with constant inputs and compute their values using TF.