tlt.models.image_classification.pytorch_image_classification_model.PyTorchImageClassificationModel

class tlt.models.image_classification.pytorch_image_classification_model.PyTorchImageClassificationModel(model_name: str, model=None, optimizer=None, loss=None, **kwargs)[source]

Class to represent a PyTorch model for image classification

__init__(model_name: str, model=None, optimizer=None, loss=None, **kwargs)[source]

Class constructor

Methods

__init__(model_name[, model, optimizer, loss])

Class constructor

benchmark(dataset[, saved_model_dir, ...])

Use Intel Neural Compressor to benchmark the model with the dataset argument.

cleanup_saved_objects_for_distributed()

Cleans up the objects that were saved to disk for the distributed training script.

evaluate(dataset[, use_test_set, ...])

Evaluate the accuracy of the model on a dataset.

export(output_dir)

Save a serialized version of the model to the output_dir path

export_for_distributed([export_dir, ...])

Exports the model, optimizer, loss, train data, and validation data to the export_dir so that the distributed training script can access them.

freeze_layer(layer_name)

Freezes a layer in the model, given its layer name.

Parameters: layer_name (str) – The name of the layer to freeze

list_layers([verbose])

Lists all of the named modules and layers in the model.

load_from_directory(model_dir)

Load a saved model from the model_dir path

optimize_graph(output_dir[, overwrite_model])

Performs FP32 graph optimization using Intel Neural Compressor on the model and writes the inference-optimized model to the output_dir. Graph optimization includes converting variables to constants, removing training-only operations such as checkpoint saving, stripping out parts of the graph that are never reached, removing debug operations such as CheckNumerics, folding batch normalization ops into the pre-calculated weights, and fusing common operations into unified versions.

Parameters: output_dir (str) – Writable output directory to save the optimized model; overwrite_model (bool) – Whether or not to overwrite the output_dir if it already exists (default: False)

predict(input_samples[, return_type, ...])

Perform feed-forward inference and predict the classes of the input_samples.

quantize(output_dir, dataset[, config, ...])

Performs post training quantization using the Intel Neural Compressor on the model using the dataset.

train(dataset, output_dir[, epochs, ...])

Trains the model using the specified image classification dataset.

unfreeze_layer(layer_name)

Unfreezes a layer in the model, given its layer name.

Parameters: layer_name (str) – The name of the layer to unfreeze
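
Freezing and unfreezing toggle whether the named module's parameters are updated during training (in PyTorch terms, their requires_grad flags). A minimal pure-Python stand-in for that semantics — not tlt's actual implementation, and the layer names here are placeholders:

```python
# Stand-in illustrating the freeze_layer/unfreeze_layer semantics.
# tlt operates on the underlying torch modules; here a dict of
# trainable flags plays the role of the model's named layers.
class TinyModel:
    def __init__(self, layer_names):
        # every layer starts out trainable
        self.trainable = {name: True for name in layer_names}

    def freeze_layer(self, layer_name):
        # a frozen layer keeps its pretrained weights during training
        self.trainable[layer_name] = False

    def unfreeze_layer(self, layer_name):
        self.trainable[layer_name] = True

model = TinyModel(["features", "classifier"])
model.freeze_layer("features")
print(model.trainable["features"])   # False
model.unfreeze_layer("features")
print(model.trainable["features"])   # True
```

A common transfer-learning pattern is to freeze the feature-extraction layers and train only the classifier head, then unfreeze for fine-tuning.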

Attributes

do_fine_tuning

When True, the weights in all of the model's layers will be trainable.

dropout_layer_rate

The probability of any one node being dropped when a dropout layer is used

framework

Framework with which the model is compatible

image_size

The fixed image size that the pretrained model expects as input, in pixels with equal width and height

learning_rate

Learning rate for the model

model_name

Name of the model

num_classes

The number of output neurons in the model; equal to the number of classes in the dataset

preprocessor

Preprocessor for the model

use_case

Use case (or category) to which the model belongs