tlt.models.image_classification.pytorch_image_classification_model.PyTorchImageClassificationModel
- class tlt.models.image_classification.pytorch_image_classification_model.PyTorchImageClassificationModel(model_name: str, model=None, optimizer=None, loss=None, **kwargs)
Class to represent a PyTorch model for image classification
- __init__(model_name: str, model=None, optimizer=None, loss=None, **kwargs)
Class constructor
Methods
- __init__(model_name[, model, optimizer, loss]): Class constructor
- benchmark(dataset[, saved_model_dir, ...]): Use Intel Neural Compressor to benchmark the model with the dataset argument.
- cleanup_saved_objects_for_distributed()
- evaluate(dataset[, use_test_set, ...]): Evaluate the accuracy of the model on a dataset.
- export(output_dir): Save a serialized version of the model to the output_dir path.
- export_for_distributed([export_dir, ...]): Exports the model, optimizer, loss, train data, and validation data to export_dir for the distributed script to access.
- freeze_layer(layer_name): Freezes the model layer with the given name (layer_name: str).
- list_layers([verbose]): Lists all of the model's named modules.
- load_from_directory(model_dir): Load a saved model from the model_dir path.
- optimize_graph(output_dir[, overwrite_model]): Performs FP32 graph optimization on the model using Intel Neural Compressor and writes the inference-optimized model to output_dir. Graph optimization includes converting variables to constants, removing training-only operations such as checkpoint saving, stripping out parts of the graph that are never reached, removing debug operations such as CheckNumerics, folding batch normalization ops into the pre-calculated weights, and fusing common operations into unified versions. Parameters: output_dir (str), a writable output directory to save the optimized model; overwrite_model (bool), whether to overwrite output_dir if it already exists (default: False).
- predict(input_samples[, return_type, ...]): Perform feed-forward inference and predict the classes of the input_samples.
- quantize(output_dir, dataset[, config, ...]): Performs post-training quantization on the model using Intel Neural Compressor with the dataset.
- train(dataset, output_dir[, epochs, ...]): Trains the model using the specified image classification dataset.
- unfreeze_layer(layer_name): Unfreezes the model layer with the given name (layer_name: str).
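The freeze_layer and unfreeze_layer methods work on the model's named modules (the names reported by list_layers). A minimal sketch of the underlying mechanics in plain PyTorch follows; the model and module names here ("features", "classifier") are illustrative stand-ins, not part of the tlt API:

```python
import torch.nn as nn

# A small model whose submodules have names, like those listed by list_layers()
model = nn.Sequential()
model.add_module("features", nn.Linear(8, 4))
model.add_module("classifier", nn.Linear(4, 2))

def freeze_layer(model: nn.Module, layer_name: str) -> None:
    """Stop gradient updates for every parameter under the named module."""
    for name, module in model.named_modules():
        if name == layer_name:
            for param in module.parameters():
                param.requires_grad = False

def unfreeze_layer(model: nn.Module, layer_name: str) -> None:
    """Re-enable gradient updates for the named module's parameters."""
    for name, module in model.named_modules():
        if name == layer_name:
            for param in module.parameters():
                param.requires_grad = True

freeze_layer(model, "features")
frozen = [not p.requires_grad for p in model.features.parameters()]
print(all(frozen))  # True: "features" weights are now excluded from training
```

Frozen parameters are skipped by the optimizer's gradient updates, which is the usual way to keep a pretrained backbone fixed while training a new classification head.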
Attributes
- do_fine_tuning: When True, the weights in all of the model's layers will be trainable.
- dropout_layer_rate: The probability of any one node being dropped when a dropout layer is used.
- framework: Framework with which the model is compatible.
- image_size: The fixed image size that the pretrained model expects as input, in pixels with equal width and height.
- learning_rate: Learning rate for the model.
- model_name: Name of the model.
- num_classes: The number of output neurons in the model; equal to the number of classes in the dataset.
- preprocessor: Preprocessor for the model.
- use_case: Use case (or category) to which the model belongs.
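The image_size and num_classes attributes together describe the model's input and output shape. The sketch below illustrates the relationship in plain PyTorch using a hypothetical stand-in backbone (tlt would instead load a real pretrained model by model_name); the head is sized so the output neuron count equals the number of dataset classes:

```python
import torch
import torch.nn as nn

num_classes = 3   # e.g. a dataset with 3 labels
image_size = 224  # square input size the pretrained model expects

# Stand-in backbone for illustration only; not a real pretrained network
backbone = nn.Sequential(
    nn.Flatten(),
    nn.Linear(image_size * image_size * 3, 10),
)
# Attach a head whose output neuron count equals num_classes
model = nn.Sequential(backbone, nn.Linear(10, num_classes))

# One RGB image of the expected size produces one logit per class
x = torch.randn(1, 3, image_size, image_size)
print(model(x).shape)  # torch.Size([1, 3])
```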