tlt.models.text_classification.pytorch_hf_text_classification_model.PyTorchHFTextClassificationModel.train
- PyTorchHFTextClassificationModel.train(dataset, output_dir: str, epochs: int = 1, initial_checkpoints=None, learning_rate: float = 1e-05, do_eval: bool = True, early_stopping: bool = False, lr_decay: bool = True, seed: Optional[int] = None, extra_layers: Optional[list] = None, device: str = 'cpu', ipex_optimize: bool = True, use_trainer: bool = False, force_download: bool = False, enable_auto_mixed_precision: Optional[bool] = None, distributed: bool = False, hostfile: Optional[str] = None, nnodes: int = 1, nproc_per_node: int = 1, **kwargs)
Trains the model using the specified text classification dataset. See the usage examples below.
- Parameters
dataset (TextClassificationDataset/datasets.arrow_dataset.Dataset) – The dataset to use for training. If a train subset has been defined, that subset will be used to fit the model. Otherwise, the entire non-partitioned dataset will be used.
output_dir (str) – A writable output directory where checkpoint files are written during training.
epochs (int) – The number of training epochs. Defaults to 1.
initial_checkpoints (str) – Path to checkpoint weights to load. If the path provided is a directory, the latest checkpoint will be used.
learning_rate (float) – Learning rate used to train the model. Defaults to 1e-5.
do_eval (bool) – If do_eval is True and the dataset has a validation subset, the model will be evaluated at the end of each epoch. If the dataset has no validation subset, the test subset will be used instead. Defaults to True.
early_stopping (bool) – If early_stopping is True, training stops early once convergence is detected at the end of an epoch. Defaults to False.
lr_decay (bool) – If lr_decay is True and do_eval is True, the learning rate is decayed based on the validation loss at the end of each epoch. Defaults to True.
seed (int) – Optionally set a seed for reproducibility.
extra_layers (list[int]) – Optionally insert additional dense layers between the base model and the output layer. This can help increase accuracy when fine-tuning a PyTorch model. The length of the list determines the number of layers and each integer sets a layer's size; for example, [1024, 512] inserts two dense layers, the first with 1024 neurons and the second with 512 neurons.
device (str) – Device on which to train the model. Defaults to “cpu”.
ipex_optimize (bool) – Optimize the model using Intel® Extension for PyTorch. Defaults to True.
use_trainer (bool) – If use_trainer is True, the model is trained using the Hugging Face Trainer; if use_trainer is False, a native PyTorch training loop is used. Defaults to False.
force_download (bool) – If True, force a fresh download of the pretrained model rather than using a locally cached copy. Defaults to False.
enable_auto_mixed_precision (bool or None) – Enable auto mixed precision for training. Mixed precision uses both 16-bit and 32-bit floating point types to make training run faster and use less memory. Enabling auto mixed precision is recommended on platforms that support bfloat16 (third or fourth generation Intel® Xeon® processors); enabling it on a platform that does not support bfloat16 can degrade training performance. If enable_auto_mixed_precision is set to None, auto mixed precision is automatically enabled when running on fourth generation Intel® Xeon® processors and disabled on other platforms. Defaults to None.
distributed (bool) – If True, use distributed training (see the distributed example below). Defaults to False.
hostfile (str) – Name of the hostfile for distributed training. Defaults to None.
nnodes (int) – Number of nodes to use for distributed training. Defaults to 1.
nproc_per_node (int) – Number of processes to spawn per node to use for distributed training. Defaults to 1.
- Returns
If use_trainer=True, a Hugging Face TrainOutput object is returned. If use_trainer=False, a dictionary containing the model training history is returned.
- Raises
TypeError – if the dataset specified is not a TextClassificationDataset/datasets.arrow_dataset.Dataset
ValueError – if the given dataset has not been preprocessed yet
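- Example
A minimal end-to-end sketch. The train() call itself follows the signature documented above; the surrounding setup (the model_factory/dataset_factory helpers, the preprocess and shuffle_split arguments, and the environment variables) reflects a typical tlt text classification workflow and should be treated as an assumption that may differ between tlt versions.

    import os

    from tlt.datasets import dataset_factory
    from tlt.models import model_factory

    # Get a pretrained Hugging Face text classification model (factory helper
    # names and arguments are assumptions based on the typical tlt workflow)
    model = model_factory.get_model(model_name="bert-base-uncased", framework="pytorch")

    # Load and preprocess a dataset; train() raises a ValueError if the
    # dataset has not been preprocessed yet
    dataset = dataset_factory.get_dataset(dataset_dir=os.environ["DATASET_DIR"],
                                          use_case="text_classification",
                                          framework="pytorch",
                                          dataset_name="imdb",
                                          dataset_catalog="huggingface")
    dataset.preprocess(model_name="bert-base-uncased", batch_size=32)
    dataset.shuffle_split(train_pct=0.75, val_pct=0.25)

    # Train with the native PyTorch loop (use_trainer=False), evaluating on
    # the validation subset at the end of each epoch; a dictionary with the
    # training history is returned
    history = model.train(dataset,
                          output_dir=os.environ["OUTPUT_DIR"],
                          epochs=3,
                          learning_rate=1e-5,
                          do_eval=True,
                          seed=10)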
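For distributed training, the documented distributed, hostfile, nnodes, and nproc_per_node parameters are passed to the same call. The hostfile format (assumed here to list one reachable node address per line) and any multi-node environment setup are assumptions; see the tlt distributed training documentation.

    # Multi-node CPU training across 2 nodes with 2 processes per node
    history = model.train(dataset,
                          output_dir=os.environ["OUTPUT_DIR"],
                          epochs=3,
                          distributed=True,
                          hostfile="hostfile",
                          nnodes=2,
                          nproc_per_node=2)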