tlt.models.text_classification.pytorch_hf_text_classification_model.PyTorchHFTextClassificationModel
- class tlt.models.text_classification.pytorch_hf_text_classification_model.PyTorchHFTextClassificationModel(model_name: str, model=None, optimizer=None, loss=None, **kwargs)[source]
Class to represent a PyTorch Hugging Face pretrained model that can be used for multi-class text classification fine-tuning.
- __init__(model_name: str, model=None, optimizer=None, loss=None, **kwargs)[source]
Class constructor
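Example (a minimal sketch; the factory call, model name, and printed values are assumptions based on the Intel Transfer Learning Tool's general usage pattern and are not taken from this reference page):

from tlt.models import model_factory

# Text classification models are typically obtained through the model factory,
# which is assumed to return a PyTorchHFTextClassificationModel for Hugging Face
# text classification models when framework="pytorch".
model = model_factory.get_model(model_name="bert-base-uncased", framework="pytorch")
print(model.model_name)   # "bert-base-uncased"
print(model.framework)    # "pytorch"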
Methods
- __init__(model_name[, model, optimizer, loss]): Class constructor
- benchmark(dataset[, saved_model_dir, ...]): Uses Intel Neural Compressor to benchmark the model with the given dataset.
- cleanup_saved_objects_for_distributed(): Cleans up the objects that were saved by export_for_distributed.
- evaluate([dataset_or_dataloader, ...]): Evaluates the model on the given dataset or dataloader.
- export(output_dir): Saves the model to the given output_dir directory.
- export_for_distributed(export_dir[, ...]): Exports the model, optimizer, loss, train data, and validation data to the export_dir for the distributed script to access.
- freeze_layer(layer_name): Freezes the layer with the given name in the model.
- list_layers([verbose]): Lists all of the named modules and layers of the model.
- load_from_directory(model_dir): Loads a saved PyTorch model from the given model_dir directory.
- optimize_graph(output_dir[, overwrite_model]): Performs FP32 graph optimization using Intel Neural Compressor on the model and writes the inference-optimized model to the output_dir. Graph optimization includes converting variables to constants, removing training-only operations such as checkpoint saving, stripping out parts of the graph that are never reached, removing debug operations such as CheckNumerics, folding batch normalization ops into the pre-calculated weights, and fusing common operations into unified versions.
- predict(input_samples[, return_raw, ...]): Generates predictions for the specified input samples.
- quantize(output_dir, dataset[, config, ...]): Performs post-training quantization on the model with Intel Neural Compressor, using the given dataset.
- train(dataset, output_dir[, epochs, ...]): Trains the model using the specified text classification dataset.
- unfreeze_layer(layer_name): Unfreezes the layer with the given name in the model.
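A hedged end-to-end sketch of how these methods are typically chained together for fine-tuning; the dataset factory call, dataset name, preprocessing helpers, and paths are illustrative assumptions and are not taken from this reference page:

from tlt.datasets import dataset_factory
from tlt.models import model_factory

model = model_factory.get_model(model_name="bert-base-uncased", framework="pytorch")

# Load and prepare a text classification dataset (dataset name and arguments assumed)
dataset = dataset_factory.get_dataset(dataset_dir="/tmp/data",
                                      use_case="text_classification",
                                      framework="pytorch",
                                      dataset_name="imdb",
                                      dataset_catalog="huggingface")
dataset.preprocess(model.model_name, batch_size=32)   # tokenize for the chosen model (assumed helper)
dataset.shuffle_split(train_pct=0.75, val_pct=0.25)   # assumed split helper

history = model.train(dataset, output_dir="/tmp/output", epochs=1)   # fine-tune
metrics = model.evaluate(dataset)                                    # evaluate on the validation data
saved_model_dir = model.export(output_dir="/tmp/output")             # save the fine-tuned model
model.quantize("/tmp/quantized", dataset)                            # post-training quantization via Intel Neural Compressor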
Attributes
- dropout_layer_rate: The probability of any one node being dropped when a dropout layer is used
- framework: Framework with which the model is compatible
- learning_rate: Learning rate for the model
- model_name: Name of the model
- num_classes: The number of output neurons in the model; equal to the number of classes in the dataset
- preprocessor: Preprocessor for the model
- use_case: Use case (or category) to which the model belongs
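As a small illustrative sketch (the values shown are assumptions and depend on the chosen model and dataset), these attributes can be read directly from a model instance:

print(model.framework)       # e.g. "pytorch"
print(model.use_case)        # the text classification use case
print(model.learning_rate)   # learning rate used when train() is called
print(model.num_classes)     # number of output classes, determined by the dataset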