tlt.models.image_anomaly_detection.pytorch_image_anomaly_detection_model.PyTorchImageAnomalyDetectionModel.train

PyTorchImageAnomalyDetectionModel.train(dataset: PyTorchCustomImageAnomalyDetectionDataset, output_dir, epochs=1, batch_size=64, feature_dim=1000, pred_dim=250, generate_checkpoints=False, initial_checkpoints=None, seed=None, pooling='avg', kernel_size=2, pca_threshold=0.99, simsiam=False, cutpaste=False, cutpaste_type='normal', freeze_resnet=20, head_layer=2, optim='sgd', layer_name='layer3', ipex_optimize=True, enable_auto_mixed_precision=None, device=None)[source]

Trains the model using the specified image anomaly detection dataset.

Parameters
  • dataset (PyTorchCustomImageAnomalyDetectionDataset) – Dataset to use when training the model

  • output_dir (str) – Path to a writeable directory for output files

  • batch_size (int) – Batch size for each forward pass, default is 64

  • layer_name (str) – Name of the layer whose output is used as the extracted features, default is ‘layer3’

  • feature_dim (int) – Feature dimension, default is 1000

  • pred_dim (int) – Hidden dimension of the predictor, default is 250

  • epochs (int) – Number of epochs to train the model, default is 1

  • generate_checkpoints (bool) – Whether to save/preserve the best weights during SimSiam or CutPaste training, default is False.

  • initial_checkpoints (str) – Path to checkpoint weights to load

  • seed (int) – Optional, set a seed for reproducibility

  • pooling (str) – Pooling to be applied on the extracted layer (‘avg’ or ‘max’), default is ‘avg’

  • kernel_size (int) – Kernel size in the pooling layer, default is 2

  • pca_threshold (float) – Cumulative explained-variance threshold used when fitting the PCA model, default is 0.99

  • simsiam (bool) – Whether to use SimSiam self-supervised training, default is False

  • cutpaste (bool) – Whether to use CutPaste self-supervised training, default is False

  • cutpaste_type (str) – CutPaste variant to use, default is ‘normal’

  • freeze_resnet (int) – Number of epochs for which the ResNet layers are frozen so that only the new head with FC layers is trained, default is 20

  • head_layer (int) – Number of layers in the projection head, default is 2

  • optim (str) – Optimizer to use for training, default is ‘sgd’

  • ipex_optimize (bool) – Use Intel Extension for PyTorch (IPEX). Defaults to True.

  • enable_auto_mixed_precision (bool or None) – Enable auto mixed precision for training. Mixed precision uses both 16-bit and 32-bit floating point types to make training run faster and use less memory. It is recommended to enable auto mixed precision training when running on platforms that support bfloat16 (Intel third or fourth generation Xeon processors). If it is enabled on a platform that does not support bfloat16, it can be detrimental to the training performance. If enable_auto_mixed_precision is set to None, auto mixed precision will be automatically enabled when running with Intel fourth generation Xeon processors, and disabled for other platforms. Defaults to None.

  • device (str) – Specify “cpu” or “hpu” as the hardware device to run training on. If device=”hpu” is specified but no HPU hardware or supporting software is detected, CPU will be used instead, default is “cpu”
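To illustrate what the pooling and kernel_size parameters do to the extracted layer output, here is a minimal numpy sketch of non-overlapping 2D pooling. This is a standalone illustration, not the library's internal implementation; the function name pool2d is hypothetical.

```python
import numpy as np

def pool2d(fmap, kernel_size=2, mode="avg"):
    """Non-overlapping 2D pooling over a single feature map (illustrative)."""
    h, w = fmap.shape
    k = kernel_size
    # Group the map into k x k tiles, then reduce each tile
    blocks = fmap[: h - h % k, : w - w % k].reshape(h // k, k, w // k, k)
    if mode == "avg":
        return blocks.mean(axis=(1, 3))
    return blocks.max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
avg = pool2d(fmap, kernel_size=2, mode="avg")  # 2x2 output of tile means
mx = pool2d(fmap, kernel_size=2, mode="max")   # 2x2 output of tile maxima
```

With kernel_size=2, a 4x4 feature map is reduced to 2x2; ‘avg’ keeps each tile's mean and ‘max’ keeps its largest activation.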

Returns

Fitted principal components and PyTorch feature extraction model
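The sketch below shows one common interpretation of pca_threshold: keep the fewest principal components whose cumulative explained variance reaches the threshold. It is a self-contained numpy illustration under that assumption, not the library's actual PCA fitting code; fit_pca and the synthetic data are hypothetical.

```python
import numpy as np

def fit_pca(features, pca_threshold=0.99):
    """Fit PCA and keep the fewest components whose cumulative
    explained-variance ratio reaches pca_threshold (illustrative)."""
    mean = features.mean(axis=0)
    centered = features - mean
    # SVD of the centered feature matrix; squared singular values
    # are proportional to the per-component variances
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s ** 2 / (len(features) - 1)
    ratio = np.cumsum(variances) / variances.sum()
    n_components = int(np.searchsorted(ratio, pca_threshold) + 1)
    return mean, vt[:n_components]

rng = np.random.default_rng(0)
# 200 synthetic feature vectors that vary mostly along 3 directions
base = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 16))
features = base + 0.01 * rng.normal(size=(200, 16))
mean, components = fit_pca(features, pca_threshold=0.99)
```

Because the added noise is tiny relative to the three dominant directions, a 0.99 threshold keeps exactly three components here; raising the threshold toward 1.0 retains more components and makes the anomaly score more sensitive to small residual variations.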