:py:mod:`neural_compressor.experimental.common.criterion`
=========================================================

.. py:module:: neural_compressor.experimental.common.criterion

.. autoapi-nested-parse::

   Initialize criterion classes.

   Classes include:
       TensorFlowCrossEntropyLoss, PyTorchCrossEntropyLoss,
       TensorflowKnowledgeDistillationLoss, PyTorchKnowledgeDistillationLoss,
       PyTorchIntermediateLayersKnowledgeDistillationLoss.



Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   neural_compressor.experimental.common.criterion.TensorflowCriterions
   neural_compressor.experimental.common.criterion.PyTorchCriterions
   neural_compressor.experimental.common.criterion.Criterions
   neural_compressor.experimental.common.criterion.TensorFlowCrossEntropyLoss
   neural_compressor.experimental.common.criterion.TensorFlowSparseCategoricalCrossentropy
   neural_compressor.experimental.common.criterion.PyTorchCrossEntropyLoss
   neural_compressor.experimental.common.criterion.KnowledgeDistillationFramework
   neural_compressor.experimental.common.criterion.KnowledgeDistillationLoss
   neural_compressor.experimental.common.criterion.PyTorchKnowledgeDistillationLoss
   neural_compressor.experimental.common.criterion.PyTorchKnowledgeDistillationLossWrapper
   neural_compressor.experimental.common.criterion.TensorflowKnowledgeDistillationLoss
   neural_compressor.experimental.common.criterion.TensorflowKnowledgeDistillationLossWrapper
   neural_compressor.experimental.common.criterion.TensorflowKnowledgeDistillationLossExternal
   neural_compressor.experimental.common.criterion.IntermediateLayersKnowledgeDistillationLoss
   neural_compressor.experimental.common.criterion.PyTorchIntermediateLayersKnowledgeDistillationLoss
   neural_compressor.experimental.common.criterion.PyTorchIntermediateLayersKnowledgeDistillationLossWrapper
   neural_compressor.experimental.common.criterion.SelfKnowledgeDistillationLoss
   neural_compressor.experimental.common.criterion.PyTorchSelfKnowledgeDistillationLoss
   neural_compressor.experimental.common.criterion.PyTorchSelfKnowledgeDistillationLossWrapper



Functions
~~~~~~~~~

.. autoapisummary::

   neural_compressor.experimental.common.criterion.criterion_registry



.. py:class:: TensorflowCriterions

   Bases: :py:obj:`object`

   Record criterions in TensorflowCriterions class.


.. py:class:: PyTorchCriterions

   Bases: :py:obj:`object`

   Record criterions in PyTorchCriterions class.


.. py:class:: Criterions(framework)

   Bases: :py:obj:`object`

   Integrate criterions of different frameworks.

   .. py:method:: register(name, criterion_cls)

      Register a criterion class under the given name in the existing criterions.

      :param name: criterion name/type.
      :type name: string
      :param criterion_cls: criterion class.
      :type criterion_cls: class
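
      A minimal usage sketch; ``MyCustomLoss`` and its registered name are
      hypothetical, and the framework string is assumed to be one of the
      supported backends:

      .. code-block:: python

         from neural_compressor.experimental.common.criterion import Criterions

         class MyCustomLoss(object):            # hypothetical criterion class
             def __init__(self, param_dict):
                 self._param_dict = param_dict

         criterions = Criterions('pytorch')
         # Register the class under a new name; re-registering an existing
         # name is expected to be rejected.
         criterions.register('MyCustomLoss', MyCustomLoss)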



.. py:function:: criterion_registry(criterion_type, framework)

   Register criterion classes in registry_criterions; intended for use as a class decorator.

   :param criterion_type: The string of supported criterion.
   :type criterion_type: str
   :param framework: The string of supported framework.
   :type framework: str

   :returns: The decorator that registers the criterion class.
   :rtype: cls
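
   For illustration, a sketch of the intended decorator usage; ``'MyLoss'``
   and ``PyTorchMyLoss`` are hypothetical:

   .. code-block:: python

      from neural_compressor.experimental.common.criterion import criterion_registry

      # Registers the class for the PyTorch backend under the name 'MyLoss'.
      @criterion_registry('MyLoss', 'pytorch')
      class PyTorchMyLoss(object):
          def __init__(self, param_dict):
              self._param_dict = param_dict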


.. py:class:: TensorFlowCrossEntropyLoss(param_dict)

   Bases: :py:obj:`object`

   TensorFlow CrossEntropyLoss criterion.


.. py:class:: TensorFlowSparseCategoricalCrossentropy(param_dict)

   Bases: :py:obj:`object`

   TensorFlow SparseCategoricalCrossentropyLoss criterion.


.. py:class:: PyTorchCrossEntropyLoss(param_dict)

   Bases: :py:obj:`object`

   PyTorch CrossEntropyLoss criterion.
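
   These criterion classes (the TensorFlow ones above included) are
   constructed from a ``param_dict`` of configuration options. A minimal
   sketch, assuming the keys mirror the wrapped loss's keyword arguments:

   .. code-block:: python

      from neural_compressor.experimental.common.criterion import PyTorchCrossEntropyLoss

      # Assumption: param_dict keys mirror torch.nn.CrossEntropyLoss kwargs.
      criterion = PyTorchCrossEntropyLoss({'reduction': 'mean'})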


.. py:class:: KnowledgeDistillationFramework(student_model=None, teacher_model=None)

   Bases: :py:obj:`object`

   Knowledge Distillation Framework.

   .. py:property:: student_model

      Return student model.

   .. py:property:: teacher_model

      Return teacher model.


.. py:class:: KnowledgeDistillationLoss(temperature=1.0, loss_types=['CE', 'CE'], loss_weights=[0.5, 0.5], student_model=None, teacher_model=None)

   Bases: :py:obj:`KnowledgeDistillationFramework`

   Base class for knowledge distillation losses.

   .. py:method:: teacher_model_forward(input, teacher_model=None)
      :abstractmethod:

      Define parameters for teacher_model_forward function.

      :param input: input data.
      :type input: tensor, tuple or dict
      :param teacher_model: teacher model. Defaults to None.
      :type teacher_model: model, optional

      :raises NotImplementedError: NotImplementedError


   .. py:method:: teacher_student_loss_cal(student_outputs, teacher_outputs)
      :abstractmethod:

      Define parameters for teacher_student_loss_cal function.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param teacher_outputs: teacher outputs
      :type teacher_outputs: tensor

      :raises NotImplementedError: NotImplementedError


   .. py:method:: student_targets_loss_cal(student_outputs, targets)
      :abstractmethod:

      Define parameters for student_targets_loss_cal function.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param targets: ground truth label
      :type targets: tensor

      :raises NotImplementedError: NotImplementedError


   .. py:method:: loss_cal(student_outputs, targets)

      Calculate loss of student model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param targets: ground truth label
      :type targets: tensor

      :returns: loss
      :rtype: tensor


   .. py:method:: loss_cal_sloss(student_outputs, teacher_outputs, student_loss)

      Calculate all losses between student model and teacher model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param teacher_outputs: teacher outputs
      :type teacher_outputs: tensor
      :param student_loss: student loss
      :type student_loss: tensor

      :returns: loss
      :rtype: tensor



.. py:class:: PyTorchKnowledgeDistillationLoss(temperature=1.0, loss_types=['CE', 'CE'], loss_weights=[0.5, 0.5], student_model=None, teacher_model=None)

   Bases: :py:obj:`KnowledgeDistillationLoss`

   The PyTorchKnowledgeDistillationLoss class inherits from KnowledgeDistillationLoss.

   .. py:method:: SoftCrossEntropy(logits, targets)

      Return SoftCrossEntropy.

      :param logits: output logits
      :type logits: tensor
      :param targets: ground truth label
      :type targets: tensor

      :returns: SoftCrossEntropy
      :rtype: tensor
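
      Assuming the standard soft cross entropy formulation (with any
      temperature scaling applied to the logits by the caller):

      .. math::

         \mathrm{SoftCE}(z, t) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c}
            \mathrm{softmax}(t_i)_c \, \log \mathrm{softmax}(z_i)_c

      where :math:`z` are the logits, :math:`t` the targets, and
      :math:`N` the batch size.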


   .. py:method:: KullbackLeiblerDivergence(logits, targets)

      Return KullbackLeiblerDivergence.

      :param logits: output logits
      :type logits: tensor
      :param targets: ground truth label
      :type targets: tensor

      :returns: KullbackLeiblerDivergence
      :rtype: tensor
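
      Assuming the conventional definition, with :math:`p = \mathrm{softmax}(t)`
      and :math:`q = \mathrm{softmax}(z)`:

      .. math::

         \mathrm{KL}(p \,\|\, q) = \sum_{c} p_c \log \frac{p_c}{q_c}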


   .. py:method:: teacher_model_forward(input, teacher_model=None, device=None)

      Teacher model forward.

      :param input: input data
      :type input: tensor
      :param teacher_model: teacher model. Defaults to None.
      :type teacher_model: torch.nn.model, optional
      :param device: device. Defaults to None.
      :type device: torch.device, optional

      :returns: output
      :rtype: tensor


   .. py:method:: teacher_student_loss_cal(student_outputs, teacher_outputs)

      Calculate loss between student model and teacher model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param teacher_outputs: teacher outputs
      :type teacher_outputs: tensor

      :returns: loss
      :rtype: tensor


   .. py:method:: student_targets_loss_cal(student_outputs, targets)

      Calculate loss of student model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param targets: ground truth label
      :type targets: tensor

      :returns: loss
      :rtype: tensor
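
   A hedged end-to-end sketch of wiring these hooks into a training step;
   the stand-in models, the toy batch, and the ``'CE'``/``'KL'`` loss type
   names are assumptions:

   .. code-block:: python

      import torch
      from neural_compressor.experimental.common.criterion import (
          PyTorchKnowledgeDistillationLoss,
      )

      student = torch.nn.Linear(8, 4)    # stand-in student model
      teacher = torch.nn.Linear(8, 4)    # stand-in teacher model

      criterion = PyTorchKnowledgeDistillationLoss(
          temperature=2.0,
          loss_types=['CE', 'KL'],       # hard-label CE plus student/teacher KL
          loss_weights=[0.5, 0.5],
          student_model=student,
          teacher_model=teacher,
      )

      inputs = torch.randn(16, 8)        # toy batch
      targets = torch.randint(0, 4, (16,))

      teacher_outputs = criterion.teacher_model_forward(inputs)
      student_outputs = student(inputs)
      loss = criterion.loss_cal(student_outputs, targets)    # student loss
      loss = criterion.loss_cal_sloss(student_outputs, teacher_outputs, loss)
      loss.backward()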



.. py:class:: PyTorchKnowledgeDistillationLossWrapper(param_dict)

   Bases: :py:obj:`object`

   PyTorchKnowledgeDistillationLossWrapper wraps PyTorchKnowledgeDistillationLoss.
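
   The wrapper is constructed from a ``param_dict``; a sketch of the
   assumed layout, mirroring PyTorchKnowledgeDistillationLoss's
   constructor arguments:

   .. code-block:: python

      # Assumed layout; keys mirror the wrapped loss's constructor arguments.
      param_dict = {
          'temperature': 2.0,
          'loss_types': ['CE', 'KL'],
          'loss_weights': [0.5, 0.5],
      }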


.. py:class:: TensorflowKnowledgeDistillationLoss(temperature=1.0, loss_types=['CE', 'CE'], loss_weights=[0.5, 0.5], student_model=None, teacher_model=None)

   Bases: :py:obj:`KnowledgeDistillationLoss`

   The TensorflowKnowledgeDistillationLoss class inherits from KnowledgeDistillationLoss.

   .. py:method:: SoftCrossEntropy(targets, logits)

      Return SoftCrossEntropy.

      :param targets: ground truth label
      :type targets: tensor
      :param logits: output logits
      :type logits: tensor

      :returns: SoftCrossEntropy
      :rtype: tensor


   .. py:method:: teacher_model_forward(input, teacher_model=None)

      Teacher model forward.

      :param input: input data
      :type input: tensor
      :param teacher_model: teacher model. Defaults to None.
      :type teacher_model: model, optional

      :returns: output
      :rtype: tensor


   .. py:method:: teacher_student_loss_cal(student_outputs, teacher_outputs)

      Calculate loss between student model and teacher model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param teacher_outputs: teacher outputs
      :type teacher_outputs: tensor

      :returns: loss
      :rtype: tensor


   .. py:method:: student_targets_loss_cal(student_outputs, targets)

      Calculate loss of student model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param targets: ground truth label
      :type targets: tensor

      :returns: loss
      :rtype: tensor



.. py:class:: TensorflowKnowledgeDistillationLossWrapper(param_dict)

   Bases: :py:obj:`object`

   TensorflowKnowledgeDistillationLossWrapper wraps TensorflowKnowledgeDistillationLoss.


.. py:class:: TensorflowKnowledgeDistillationLossExternal(temperature=1.0, loss_types=['CE', 'CE'], loss_weights=[0.5, 0.5], student_model=None, teacher_model=None)

   Bases: :py:obj:`KnowledgeDistillationLoss`

   TensorflowKnowledgeDistillationLossExternal inherits from KnowledgeDistillationLoss.

   .. py:method:: teacher_model_forward(input, teacher_model=None)

      Teacher model forward.

      :param input: input data
      :type input: tensor
      :param teacher_model: teacher model. Defaults to None.
      :type teacher_model: model, optional

      :returns: output
      :rtype: tensor


   .. py:method:: teacher_student_loss_cal(student_outputs, teacher_outputs)

      Calculate loss between student model and teacher model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param teacher_outputs: teacher outputs
      :type teacher_outputs: tensor

      :returns: loss
      :rtype: tensor


   .. py:method:: student_targets_loss_cal(student_outputs, targets)

      Calculate loss of student model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param targets: ground truth label
      :type targets: tensor

      :returns: loss
      :rtype: tensor



.. py:class:: IntermediateLayersKnowledgeDistillationLoss(layer_mappings=[], loss_types=None, loss_weights=None, add_origin_loss=False, student_model=None, teacher_model=None)

   Bases: :py:obj:`KnowledgeDistillationFramework`

   The IntermediateLayersKnowledgeDistillationLoss class inherits from KnowledgeDistillationFramework.

   .. py:method:: init_loss_funcs()
      :abstractmethod:

      Init loss funcs.

      :raises NotImplementedError: NotImplementedError


   .. py:method:: init_feature_matcher(student_feature, teacher_feature)
      :abstractmethod:

      Init feature matcher.

      :raises NotImplementedError: NotImplementedError


   .. py:method:: teacher_model_forward(input, teacher_model=None)
      :abstractmethod:

      Teacher model forward.

      :raises NotImplementedError: NotImplementedError


   .. py:method:: loss_cal()
      :abstractmethod:

      Calculate loss.

      :raises NotImplementedError: NotImplementedError


   .. py:method:: loss_cal_sloss(student_outputs, teacher_outputs, student_loss)

      Calculate all losses between student model and teacher model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param teacher_outputs: teacher outputs
      :type teacher_outputs: tensor
      :param student_loss: student loss
      :type student_loss: tensor

      :returns: loss
      :rtype: tensor


   .. py:method:: clear_features()

      Clear the recorded feature lists.



.. py:class:: PyTorchIntermediateLayersKnowledgeDistillationLoss(layer_mappings=[], loss_types=None, loss_weights=None, add_origin_loss=False, student_model=None, teacher_model=None)

   Bases: :py:obj:`IntermediateLayersKnowledgeDistillationLoss`

   PyTorch Intermediate Layers Knowledge Distillation Loss.

   .. py:method:: register_hooks_for_models()

      Register hooks for models to record module output.

      :raises AttributeError: AttributeError


   .. py:method:: remove_all_hooks()

      Remove all hooks.


   .. py:method:: init_loss_funcs()

      Init loss funcs.


   .. py:method:: init_feature_matcher(student_feature, teacher_feature)

      Init feature matcher.

      :param student_feature: student feature
      :type student_feature: tensor
      :param teacher_feature: teacher feature
      :type teacher_feature: tensor

      :returns: pytorch_linear_feature_matcher


   .. py:method:: teacher_model_forward(input, teacher_model=None, device=None)

      Teacher model forward.

      :param input: input data.
      :type input: tensor, tuple or dict
      :param teacher_model: teacher model. Defaults to None.
      :type teacher_model: model, optional
      :param device: device. Defaults to None.
      :type device: torch.device, optional


   .. py:method:: loss_cal_sloss(student_outputs, teacher_outputs, student_loss)

      Calculate all losses between student model and teacher model.

      :param student_outputs: student outputs
      :type student_outputs: tensor
      :param teacher_outputs: teacher outputs
      :type teacher_outputs: tensor
      :param student_loss: student loss
      :type student_loss: tensor

      :returns: loss
      :rtype: tensor


   .. py:method:: loss_cal()

      Calculate loss of student model.

      :returns: loss
      :rtype: tensor
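
   A hedged construction sketch; each ``layer_mappings`` entry is assumed
   to pair a student module name with a teacher module name whose recorded
   outputs are compared (the stand-in models and the ``'MSE'`` loss type
   name are assumptions):

   .. code-block:: python

      import torch
      from neural_compressor.experimental.common.criterion import (
          PyTorchIntermediateLayersKnowledgeDistillationLoss,
      )

      student = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 4))
      teacher = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 4))

      criterion = PyTorchIntermediateLayersKnowledgeDistillationLoss(
          layer_mappings=[['0', '0']],   # [student_layer_name, teacher_layer_name]
          loss_types=['MSE'],            # assumed: one loss type per mapping
          loss_weights=[1.0],
          student_model=student,
          teacher_model=teacher,
      )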



.. py:class:: PyTorchIntermediateLayersKnowledgeDistillationLossWrapper(param_dict)

   Bases: :py:obj:`object`

   PyTorch Intermediate Layers Knowledge Distillation Loss Wrapper.


.. py:class:: SelfKnowledgeDistillationLoss(layer_mappings=[], loss_types=None, loss_weights=None, temperature=1.0, add_origin_loss=False, student_model=None, teacher_model=None)

   Bases: :py:obj:`KnowledgeDistillationFramework`

   SelfKnowledge Distillation Loss.

   .. py:method:: init_loss_funcs()
      :abstractmethod:

      Init loss funcs.

      :raises NotImplementedError: NotImplementedError


   .. py:method:: teacher_model_forward(input, teacher_model=None)
      :abstractmethod:

      Teacher model forward.

      :raises NotImplementedError: NotImplementedError


   .. py:method:: loss_cal(student_outputs)
      :abstractmethod:

      Calculate loss.

      :raises NotImplementedError: NotImplementedError


   .. py:method:: loss_cal_sloss(student_outputs, teacher_outputs, student_loss)

      Calculate all losses between student model and teacher model.

      :param student_outputs: student outputs
      :type student_outputs: dict
      :param teacher_outputs: teacher outputs
      :type teacher_outputs: dict
      :param student_loss: student loss
      :type student_loss: tensor

      :returns: loss
      :rtype: tensor



.. py:class:: PyTorchSelfKnowledgeDistillationLoss(layer_mappings=[], loss_types=None, loss_weights=None, temperature=1.0, add_origin_loss=False, student_model=None, teacher_model=None)

   Bases: :py:obj:`SelfKnowledgeDistillationLoss`

   PyTorch SelfKnowledge Distillation Loss.

   .. py:method:: SoftCrossEntropy(logits, targets)

      Return SoftCrossEntropy.

      :param logits: output logits
      :type logits: tensor
      :param targets: ground truth label
      :type targets: tensor

      :returns: SoftCrossEntropy
      :rtype: tensor


   .. py:method:: KullbackLeiblerDivergence(logits, targets)

      Return KullbackLeiblerDivergence.

      :param logits: output logits
      :type logits: tensor
      :param targets: ground truth label
      :type targets: tensor

      :returns: KullbackLeiblerDivergence
      :rtype: tensor


   .. py:method:: L2Divergence(feature1, feature2)

      Return L2Divergence.

      :param feature1: feature1 value
      :type feature1: tensor
      :param feature2: feature2 value
      :type feature2: tensor

      :returns: L2Divergence between feature1 and feature2
      :rtype: tensor
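
      Assuming the conventional definition (the Euclidean distance between
      the two feature tensors):

      .. math::

         \mathrm{L2}(f_1, f_2) = \lVert f_1 - f_2 \rVert_2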


   .. py:method:: init_loss_funcs()

      Init loss funcs.


   .. py:method:: loss_cal(student_outputs)

      Calculate loss of student model.

      :param student_outputs: student outputs
      :type student_outputs: dict

      :returns: loss
      :rtype: tensor


   .. py:method:: teacher_model_forward(input, teacher_model=None, device=None)

      Teacher model forward.

      :param input: input data
      :type input: tensor
      :param teacher_model: teacher model. Defaults to None.
      :type teacher_model: torch.nn.model, optional
      :param device: device. Defaults to None.
      :type device: torch.device, optional

      :returns: output
      :rtype: tensor
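
   Self-distillation compares shallower against deeper outputs of the same
   student model. A heavily hedged sketch; the nested ``layer_mappings``
   shape, the layer names, and the ``'L2'`` loss type name are all
   assumptions:

   .. code-block:: python

      import torch
      from neural_compressor.experimental.common.criterion import (
          PyTorchSelfKnowledgeDistillationLoss,
      )

      student = torch.nn.Linear(8, 4)    # stand-in student model

      criterion = PyTorchSelfKnowledgeDistillationLoss(
          # Hypothetical names: each inner pair maps a shallower output to a
          # deeper output of the same model.
          layer_mappings=[[['block1_out', 'block4_out']]],
          loss_types=['L2'],
          loss_weights=[1.0],
          temperature=1.0,
          student_model=student,
      )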



.. py:class:: PyTorchSelfKnowledgeDistillationLossWrapper(param_dict)

   Bases: :py:obj:`object`

   PyTorch SelfKnowledge Distillation Loss Wrapper.