neural_compressor.tensorflow.quantization.utils.utility

TensorFlow utility helper functions.

Functions

read_graph(in_graph[, in_graph_is_binary])

Reads input graph file as GraphDef.

write_graph(out_graph_def, out_graph_file)

Write an output GraphDef to a file.

is_ckpt_format(model_path)

Check whether the model_path is in ckpt format.

is_saved_model_format(model_path)

Check whether the model_path is in saved_model format.

get_tensor_by_name(graph, name[, try_cnt])

Get the tensor by name.

iterator_sess_run(sess, iter_op, feed_dict, output_tensor)

Run a graph that has an iterator integrated into it.

collate_tf_preds(results)

Collate the prediction results.

get_input_output_node_names(graph_def)

Get the input node name and output node name of the graph_def.

fix_ref_type_of_graph_def(graph_def)

Fix ref type of the graph_def.

strip_unused_nodes(graph_def, input_node_names, ...)

Strip unused nodes of the graph_def.

strip_equivalent_nodes(graph_def, output_node_names)

Strip nodes with the same input and attr.

get_graph_def(model[, outputs, auto_input_output])

Get the model's graph_def.

get_model_input_shape(model)

Get the input shape of the model.

generate_feed_dict(input_tensor, inputs)

Helper function to generate a feed dict.

apply_inlining(func)

Apply an inlining optimization to the function's graph definition.

construct_function_from_graph_def(func, graph_def[, ...])

Rebuild function from graph_def.

parse_saved_model(model[, freeze, input_tensor_names, ...])

Parse an input saved_model.

reconstruct_saved_model(graph_def, func, frozen_func, ...)

Reconstruct a saved_model.

Module Contents

neural_compressor.tensorflow.quantization.utils.utility.read_graph(in_graph, in_graph_is_binary=True)[source]

Reads input graph file as GraphDef.

Parameters:
  • in_graph – input graph file.

  • in_graph_is_binary – whether input graph is binary, default True.

Returns:

the input GraphDef.

neural_compressor.tensorflow.quantization.utils.utility.write_graph(out_graph_def, out_graph_file)[source]

Write an output GraphDef to a file.

Parameters:
  • out_graph_def – output graphDef.

  • out_graph_file – path to output graph file.

Returns:

None.

neural_compressor.tensorflow.quantization.utils.utility.is_ckpt_format(model_path)[source]

Check whether the model_path is in ckpt format.

Parameters:

model_path (string) – the model folder path

Returns:

the ckpt prefix if the model_path contains ckpt-format data, else None.

Return type:

string
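For illustration, the check can be pictured as a directory scan for TF1-style checkpoint files. The sketch below is a hypothetical pure-Python helper (`find_ckpt_prefix` is not the neural_compressor implementation; a real checkpoint folder may also contain index and data shards):

```python
import os

def find_ckpt_prefix(model_path):
    """Illustrative sketch: return the checkpoint prefix if the folder
    contains ckpt-format data (a '*.ckpt*.meta' file), else None.
    Hypothetical helper, not the library code."""
    for fname in os.listdir(model_path):
        # A TF1-style checkpoint stores the graph structure in '<prefix>.meta'
        if fname.endswith(".meta") and ".ckpt" in fname:
            return fname[: -len(".meta")]  # e.g. 'model.ckpt-1000'
    return None
```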

neural_compressor.tensorflow.quantization.utils.utility.is_saved_model_format(model_path)[source]

Check whether the model_path is in saved_model format.

Parameters:

model_path (string) – the model folder path

Returns:

True if the model_path contains saved_model-format data, else False.

Return type:

bool
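The essence of this check can be sketched as looking for the SavedModel protocol buffer in the folder. This is a hypothetical helper, not the library's implementation (which may also inspect the variables subdirectory):

```python
import os

def looks_like_saved_model(model_path):
    """Illustrative sketch (hypothetical helper): a SavedModel directory
    contains a 'saved_model.pb' or 'saved_model.pbtxt' protocol buffer."""
    return any(
        os.path.isfile(os.path.join(model_path, name))
        for name in ("saved_model.pb", "saved_model.pbtxt")
    )
```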

neural_compressor.tensorflow.quantization.utils.utility.get_tensor_by_name(graph, name, try_cnt=3)[source]

Get the tensor by name.

Considering the ‘import’ scope when a model may be imported more than once, handles both the name:0 and name naming formats.

Parameters:
  • graph (tf.compat.v1.Graph) – the graph to look the tensor up in

  • name (string) – the tensor name, as either tensor_name:0 or tensor_name without the suffix

  • try_cnt – the maximum number of times to prepend ‘import/’ while searching for the tensor

Returns:

tensor got by name.

Return type:

tensor
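The retry logic can be illustrated with a pure-Python sketch over a name-to-tensor mapping standing in for the graph (`resolve_tensor_name` is a hypothetical helper, not the library code): normalize the name to the name:0 form, then keep prepending ‘import/’ until the tensor is found or try_cnt attempts are exhausted.

```python
def resolve_tensor_name(lookup, name, try_cnt=3):
    """Illustrative sketch of the lookup-with-retry logic: 'lookup' is a
    dict standing in for graph.get_tensor_by_name. Hypothetical helper."""
    if ":" not in name:
        name += ":0"  # accept both 'tensor_name' and 'tensor_name:0'
    for _ in range(try_cnt):
        if name in lookup:
            return lookup[name]
        # A re-imported graph prefixes every tensor name with 'import/'
        name = "import/" + name
    raise KeyError(name)
```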

neural_compressor.tensorflow.quantization.utils.utility.iterator_sess_run(sess, iter_op, feed_dict, output_tensor, iteration=-1, measurer=None)[source]

Run a graph that has an iterator integrated into it.

Parameters:
  • sess (tf.compat.v1.Session) – the model sess to run the graph

  • iter_op (Operator) – the MakeIterator op

  • feed_dict (dict) – the feeds to initialize a new iterator

  • output_tensor (list) – the output tensors

  • iteration (int) – number of iterations to run; when set to -1, run until the end of the iterator

Returns:

the results of the predictions

Return type:

preds
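The control flow can be sketched in pure Python, with a plain iterator standing in for the TF iterator and StopIteration standing in for tf.errors.OutOfRangeError (`run_iterated`, `next_batch`, and `predict` are hypothetical stand-ins, not the library's API):

```python
def run_iterated(next_batch, predict, iteration=-1):
    """Illustrative sketch of the iterator_sess_run control flow:
    run 'predict' on each batch until 'iteration' batches are done,
    or until exhaustion when iteration == -1. Hypothetical helper."""
    preds = []
    idx = 0
    while idx != iteration:  # idx never equals -1, so -1 means "run to the end"
        try:
            batch = next(next_batch)  # sess.run raising OutOfRangeError in TF
        except StopIteration:
            break
        preds.append(predict(batch))
        idx += 1
    return preds
```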

neural_compressor.tensorflow.quantization.utils.utility.collate_tf_preds(results)[source]

Collate the prediction results.

neural_compressor.tensorflow.quantization.utils.utility.get_input_output_node_names(graph_def)[source]

Get the input node name and output node name of the graph_def.

neural_compressor.tensorflow.quantization.utils.utility.fix_ref_type_of_graph_def(graph_def)[source]

Fix ref type of the graph_def.

neural_compressor.tensorflow.quantization.utils.utility.strip_unused_nodes(graph_def, input_node_names, output_node_names)[source]

Strip unused nodes of the graph_def.

The strip_unused_nodes pass is taken from tensorflow/python/tools/strip_unused_lib.py of the official TensorFlow r1.15 branch.
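The pruning idea behind the pass can be sketched as backward reachability from the output nodes over a name-to-inputs mapping (`strip_unused` and `deps` are hypothetical stand-ins; the real pass additionally rewrites the declared inputs into Placeholder nodes):

```python
def strip_unused(deps, input_node_names, output_node_names):
    """Illustrative sketch of the pruning concept, not the TF pass itself:
    keep only nodes reachable backwards from the outputs, treating the
    declared inputs as leaves. 'deps' maps node name -> list of inputs."""
    keep, stack = set(), list(output_node_names)
    while stack:
        node = stack.pop()
        if node in keep:
            continue
        keep.add(node)
        if node in input_node_names:
            continue  # inputs become placeholders; don't walk past them
        stack.extend(deps.get(node, []))
    return keep
```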

neural_compressor.tensorflow.quantization.utils.utility.strip_equivalent_nodes(graph_def, output_node_names)[source]

Strip nodes with the same input and attr.

neural_compressor.tensorflow.quantization.utils.utility.get_graph_def(model, outputs=[], auto_input_output=False)[source]

Get the model’s graph_def.

neural_compressor.tensorflow.quantization.utils.utility.get_model_input_shape(model)[source]

Get the input shape of the model.

neural_compressor.tensorflow.quantization.utils.utility.generate_feed_dict(input_tensor, inputs)[source]

Helper function to generate a feed dict.

neural_compressor.tensorflow.quantization.utils.utility.apply_inlining(func)[source]

Apply an inlining optimization to the function’s graph definition.

Parameters:

func – A concrete function obtained from a saved_model.

Returns:

The optimized graph in graph_def format.

Return type:

new_graph_def

neural_compressor.tensorflow.quantization.utils.utility.construct_function_from_graph_def(func, graph_def, frozen_func=None)[source]

Rebuild function from graph_def.

Parameters:
  • func – The original concrete function obtained from the saved_model.

  • graph_def – The optimized graph after applying inlining optimization.

Returns:

The reconstructed function.

Return type:

new_func

neural_compressor.tensorflow.quantization.utils.utility.parse_saved_model(model, freeze=False, input_tensor_names=[], output_tensor_names=[])[source]

Parse an input saved_model.

Parameters:

model (string or AutoTrackable object) – The input saved_model.

Returns:
  • graph_def: The graph_def parsed from the saved_model.

  • _saved_model: The TF AutoTrackable object loaded from the saved_model.

  • func: The concrete function obtained from the saved_model.

  • frozen_func: The function reconstructed from the inlining-optimized graph.

Return type:

tuple of (graph_def, _saved_model, func, frozen_func)

neural_compressor.tensorflow.quantization.utils.utility.reconstruct_saved_model(graph_def, func, frozen_func, trackable, path)[source]

Reconstruct a saved_model.

Parameters:
  • graph_def – The input graph_def.

  • func – The concrete function obtained from the original saved_model.

  • frozen_func – The function reconstructed from the inlining-optimized graph.

  • trackable – TF AutoTrackable object loaded from the original saved_model.

  • path – The destination path to save the reconstructed saved_model.