neural_compressor.metric.metric

Neural Compressor metrics.

Module Contents

Classes

Metric

A wrapper of the information needed to construct a Metric.

TensorflowMetrics

TensorFlow metrics collection.

PyTorchMetrics

PyTorch metrics collection.

MXNetMetrics

MXNet metrics collection.

ONNXRTQLMetrics

ONNXRT QLinear metrics collection.

ONNXRTITMetrics

ONNXRT Integer metrics collection.

METRICS

Intel Neural Compressor Metrics.

BaseMetric

The base class of Metric.

WrapPyTorchMetric

The wrapper of Metric class for PyTorch.

WrapMXNetMetric

The wrapper of Metric class for MXNet.

WrapONNXRTMetric

The wrapper of Metric class for ONNXRT.

F1

F1 score of a binary classification problem.

Accuracy

The accuracy metric for classification tasks.

PyTorchLoss

A dummy PyTorch Metric.

Loss

A dummy Metric.

MAE

Computes Mean Absolute Error (MAE) loss.

RMSE

Computes Root Mean Squared Error (RMSE) loss.

MSE

Computes Mean Squared Error (MSE) loss.

TensorflowTopK

Compute Top-k Accuracy classification score for a TensorFlow model.

GeneralTopK

Compute Top-k Accuracy classification score.

COCOmAPv2

Compute the mean average precision of the detection task.

TensorflowMAP

Computes mean average precision.

TensorflowCOCOMAP

Computes mean average precision using the COCO algorithm.

TensorflowVOCMAP

Computes mean average precision using the VOC algorithm.

SquadF1

Evaluate on v1.1 of the SQuAD dataset.

mIOU

Compute the mean IoU (Intersection over Union) score.

ONNXRTGLUE

Compute the GLUE score.

ROC

Computes ROC score.

Functions

metric_registry(metric_type, framework)

Decorator for registering all Metric subclasses.

class neural_compressor.metric.metric.Metric(metric_cls, name='user_metric', **kwargs)

Bases: object

A wrapper of the information needed to construct a Metric.

The metric class should take the outputs of the model as the metric's inputs. Neural Compressor's built-in metrics always take (predictions, labels) as inputs, so it is recommended to design metric_cls to take (predictions, labels) as inputs as well.
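
A minimal sketch of a user-defined metric wrapped with Metric, following the (predictions, labels) calling convention recommended above; MyAccuracy is a hypothetical example class:

    from neural_compressor.metric.metric import Metric

    class MyAccuracy:
        """Hypothetical user metric following the update/reset/result protocol."""

        def __init__(self):
            self.correct = 0
            self.total = 0

        def update(self, predictions, labels):
            # Accumulate exact matches between predictions and labels.
            for pred, label in zip(predictions, labels):
                self.correct += int(pred == label)
                self.total += 1

        def reset(self):
            self.correct = 0
            self.total = 0

        def result(self):
            # Fraction of correctly predicted samples.
            return self.correct / self.total if self.total else 0.0

    user_metric = Metric(metric_cls=MyAccuracy, name="my_accuracy")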

class neural_compressor.metric.metric.TensorflowMetrics

Bases: object

TensorFlow metrics collection.

metrics

A dict to maintain all metrics for TensorFlow models.

class neural_compressor.metric.metric.PyTorchMetrics

Bases: object

PyTorch metrics collection.

metrics

A dict to maintain all metrics for PyTorch models.

class neural_compressor.metric.metric.MXNetMetrics

Bases: object

MXNet metrics collection.

metrics

A dict to maintain all metrics for MXNet models.

class neural_compressor.metric.metric.ONNXRTQLMetrics

Bases: object

ONNXRT QLinear metrics collection.

metrics

A dict to maintain all metrics for ONNXRT QLinear models.

class neural_compressor.metric.metric.ONNXRTITMetrics

Bases: object

ONNXRT Integer metrics collection.

metrics

A dict to maintain all metrics for ONNXRT Integer models.

class neural_compressor.metric.metric.METRICS(framework: str)

Bases: object

Intel Neural Compressor Metrics.

metrics

The collection of registered metrics for the specified framework.

register(name, metric_cls) None

Register a metric.

Parameters:
  • name – The name of metric.

  • metric_cls – The metric class.
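
A hedged usage sketch: construct the collection for one framework, inspect the registered names, and register a custom metric class under a new name (MyAccuracy is the hypothetical class from the sketch above):

    from neural_compressor.metric.metric import METRICS

    metrics = METRICS("tensorflow")        # collection of TensorFlow metrics
    print(sorted(metrics.metrics.keys()))  # names of the registered metrics
    metrics.register("my_accuracy", MyAccuracy)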

neural_compressor.metric.metric.metric_registry(metric_type: str, framework: str)

Decorator for registering all Metric subclasses.

Cross-framework metrics are supported by specifying the framework parameter as one of tensorflow, pytorch, mxnet, or onnxrt.

Parameters:
  • metric_type – The metric type.

  • framework – The framework name.

Returns:

The function to register metric class.

Return type:

decorator_metric
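
A hedged sketch of the decorator in use, registering a hypothetical metric class for the PyTorch framework:

    from neural_compressor.metric.metric import BaseMetric, metric_registry

    @metric_registry(metric_type="my_accuracy", framework="pytorch")
    class MyRegisteredAccuracy(BaseMetric):
        # update/reset/result implemented as in the BaseMetric sketch below
        ...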

class neural_compressor.metric.metric.BaseMetric(metric, single_output=False, hvd=None)

Bases: object

The base class of Metric.

property metric

Return its metric class.

Returns:

The metric class.

property hvd

Return its hvd class.

Returns:

The hvd class.

abstract update(preds, labels=None, sample_weight=None)

Update the state that needs to be evaluated.

Parameters:
  • preds – The prediction result.

  • labels – The reference. Defaults to None.

  • sample_weight – The sampling weight. Defaults to None.

Raises:

NotImplementedError – The method should be implemented by a subclass.

abstract reset()

Clear the predictions and labels.

Raises:

NotImplementedError – The method should be implemented by a subclass.

abstract result()

Evaluate the difference between predictions and labels.

Raises:

NotImplementedError – The method should be implemented by a subclass.
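
A minimal sketch of a BaseMetric subclass implementing the three abstract methods; the class and its logic are hypothetical:

    from neural_compressor.metric.metric import BaseMetric

    class MatchRate(BaseMetric):
        """Hypothetical metric: fraction of exact prediction/label matches."""

        def __init__(self):
            self.matches = 0
            self.total = 0

        def update(self, preds, labels=None, sample_weight=None):
            # Accumulate exact matches; sample_weight is ignored in this sketch.
            for pred, label in zip(preds, labels):
                self.matches += int(pred == label)
                self.total += 1

        def reset(self):
            self.matches = 0
            self.total = 0

        def result(self):
            return self.matches / self.total if self.total else 0.0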

class neural_compressor.metric.metric.WrapPyTorchMetric(metric, single_output=False, hvd=None)

Bases: BaseMetric

The wrapper of Metric class for PyTorch.

update(preds, labels=None, sample_weight=None)

Convert the predictions to torch tensors.

Parameters:
  • preds – The prediction result.

  • labels – The reference. Defaults to None.

  • sample_weight – The sampling weight. Defaults to None.

reset()

Clear the predictions and labels.

result()

Evaluate the difference between predictions and labels.

class neural_compressor.metric.metric.WrapMXNetMetric(metric, single_output=False, hvd=None)

Bases: BaseMetric

The wrapper of Metric class for MXNet.

update(preds, labels=None, sample_weight=None)

Convert the predictions to MXNet arrays.

Parameters:
  • preds – The prediction result.

  • labels – The reference. Defaults to None.

  • sample_weight – The sampling weight. Defaults to None.

reset()

Clear the predictions and labels.

result()

Evaluate the difference between predictions and labels.

Returns:

The evaluated result.

Return type:

acc

class neural_compressor.metric.metric.WrapONNXRTMetric(metric, single_output=False, hvd=None)

Bases: BaseMetric

The wrapper of Metric class for ONNXRT.

update(preds, labels=None, sample_weight=None)

Convert the predictions to NumPy arrays.

Parameters:
  • preds – The prediction result.

  • labels – The reference. Defaults to None.

  • sample_weight – The sampling weight. Defaults to None.

reset()

Clear the predictions and labels.

result()

Evaluate the difference between predictions and labels.

Returns:

The evaluated result.

Return type:

acc

class neural_compressor.metric.metric.F1

Bases: BaseMetric

F1 score of a binary classification problem.

The F1 score is the harmonic mean of the precision and recall. It can be computed with the equation: F1 = 2 * (precision * recall) / (precision + recall)

update(preds, labels)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

reset()

Clear the predictions and labels.

result()

Compute the F1 score.
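
A hedged usage sketch with illustrative binary predictions and labels:

    from neural_compressor.metric.metric import F1

    f1 = F1()
    f1.update(preds=[1, 0, 1, 1], labels=[1, 0, 0, 1])
    # TP = 2, FP = 1, FN = 0 -> precision = 2/3, recall = 1.0
    # F1 = 2 * (2/3 * 1.0) / (2/3 + 1.0) = 0.8
    print(f1.result())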

class neural_compressor.metric.metric.Accuracy

Bases: BaseMetric

The accuracy metric for classification tasks.

The accuracy score is the proportion of the total number of predictions that were correctly classified.

pred_list

List of predictions to score.

label_list

List of labels to score.

sample

The total number of samples.

update(preds, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight.

reset()

Clear the predictions and labels.

result()

Compute the accuracy.
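
A hedged usage sketch with illustrative class-index predictions and labels:

    from neural_compressor.metric.metric import Accuracy

    acc = Accuracy()
    acc.update(preds=[0, 1, 1, 0], labels=[0, 1, 0, 0])
    print(acc.result())  # 3 of 4 predictions correct -> 0.75
    acc.reset()          # clear state before the next evaluation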

class neural_compressor.metric.metric.PyTorchLoss

A dummy PyTorch Metric.

A dummy metric that computes the average of predictions and prints it directly.

reset()

Reset the number of samples and total cases to zero.

update(output)

Add the predictions.

Parameters:

output – The predictions.

compute()

Compute the average of predictions.

Raises:

ValueError – There must be at least one example.

Returns:

The dummy loss.

class neural_compressor.metric.metric.Loss

Bases: BaseMetric

A dummy Metric.

A dummy metric that computes the average of predictions and prints it directly.

sample

The number of samples.

sum

The sum of predictions.

update(preds, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight.

reset()

Reset the number of samples and total cases to zero.

result()

Compute the average of predictions.

Returns:

The dummy loss.

class neural_compressor.metric.metric.MAE(compare_label=True)

Bases: BaseMetric

Computes Mean Absolute Error (MAE) loss.

Mean Absolute Error (MAE) is the mean of the absolute differences between the predicted and actual numeric values.

pred_list

List of predictions to score.

label_list

List of references corresponding to the predictions.

compare_label

Whether to compare with labels. Set to False when there are no labels; the FP32 predictions are then used as labels.

Type:

bool

update(preds, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight.

reset()

Clear the predictions and labels.

result()

Compute the MAE score.

Returns:

The MAE score.

class neural_compressor.metric.metric.RMSE(compare_label=True)

Bases: BaseMetric

Computes Root Mean Squared Error (RMSE) loss.

mse

The instance of MSE Metric.

update(preds, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight.

reset()

Clear the predictions and labels.

result()

Compute the RMSE score.

Returns:

The RMSE score.

class neural_compressor.metric.metric.MSE(compare_label=True)

Bases: BaseMetric

Computes Mean Squared Error (MSE) loss.

Mean Squared Error (MSE) is the average of the squared errors, i.e., the average squared difference between the estimated values and the actual values.

pred_list

List of predictions to score.

label_list

List of references corresponding to the predictions.

compare_label

Whether to compare with labels. Set to False when there are no labels; the FP32 predictions are then used as labels.

Type:

bool

update(preds, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight.

reset()

Clear the predictions and labels.

result()

Compute the MSE score.

Returns:

The MSE score.
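
The three regression metrics above are closely related; a small NumPy illustration of the quantities their result() methods return, using illustrative values:

    import numpy as np

    preds = np.array([2.0, 4.0, 6.0])
    labels = np.array([1.0, 4.0, 8.0])
    errors = preds - labels              # [1.0, 0.0, -2.0]
    mae = np.abs(errors).mean()          # (1 + 0 + 2) / 3 = 1.0
    mse = (errors ** 2).mean()           # (1 + 0 + 4) / 3 ~= 1.667
    rmse = np.sqrt(mse)                  # sqrt(MSE) ~= 1.291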

class neural_compressor.metric.metric.TensorflowTopK(k=1)

Bases: BaseMetric

Compute Top-k Accuracy classification score for a TensorFlow model.

This metric computes the number of times the correct label is among the top k predicted labels.

k

The number of most likely outcomes considered to find the correct label.

Type:

int

num_correct

The number of predictions that were correctly classified.

num_sample

The total number of predictions.

update(preds, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight.

reset()

Reset the number of samples and correct predictions.

result()

Compute the top-k score.

Returns:

The top-k score.

class neural_compressor.metric.metric.GeneralTopK(k=1)

Bases: BaseMetric

Compute Top-k Accuracy classification score.

This metric computes the number of times the correct label is among the top k predicted labels.

k

The number of most likely outcomes considered to find the correct label.

Type:

int

num_correct

The number of predictions that were correctly classified.

num_sample

The total number of predictions.

update(preds, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight.

reset()

Reset the number of samples and correct predictions.

result()

Compute the top-k score.

Returns:

The top-k score.
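
A hedged usage sketch, assuming per-class scores as predictions and class-index labels:

    import numpy as np
    from neural_compressor.metric.metric import GeneralTopK

    topk = GeneralTopK(k=2)
    preds = np.array([[0.10, 0.70, 0.15, 0.05],   # top-2: classes 1, 2
                      [0.50, 0.30, 0.15, 0.05],   # top-2: classes 0, 1
                      [0.05, 0.10, 0.25, 0.60]])  # top-2: classes 3, 2
    labels = np.array([1, 2, 3])
    topk.update(preds, labels)
    print(topk.result())  # labels 1 and 3 fall in the top-2 -> 2/3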

class neural_compressor.metric.metric.COCOmAPv2(anno_path=None, iou_thrs='0.5:0.05:0.95', map_points=101, map_key='DetectionBoxes_Precision/mAP', output_index_mapping={'num_detections': -1, 'boxes': 0, 'scores': 1, 'classes': 2})

Bases: BaseMetric

Compute the mean average precision of the detection task.

update(predicts, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • predicts – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight. Defaults to None.

reset()

Reset the predictions and labels.

result()

Compute mean average precision.

Returns:

The mean average precision score.
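
A hedged construction sketch showing the default arguments; output_index_mapping tells the metric where each detection field sits in the model's output tuple:

    from neural_compressor.metric.metric import COCOmAPv2

    map_metric = COCOmAPv2(
        anno_path=None,                  # optional path to an annotation file
        iou_thrs="0.5:0.05:0.95",        # COCO-style sweep of IoU thresholds
        map_points=101,                  # 101-point interpolated AP
        map_key="DetectionBoxes_Precision/mAP",
        output_index_mapping={"num_detections": -1, "boxes": 0,
                              "scores": 1, "classes": 2},
    )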

class neural_compressor.metric.metric.TensorflowMAP(anno_path=None, iou_thrs=0.5, map_points=0, map_key='DetectionBoxes_Precision/mAP')

Bases: BaseMetric

Computes mean average precision.

update(predicts, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • predicts – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight.

reset()

Reset the predictions and labels.

result()

Compute mean average precision.

Returns:

The mean average precision score.

class neural_compressor.metric.metric.TensorflowCOCOMAP(anno_path=None, iou_thrs=None, map_points=None, map_key='DetectionBoxes_Precision/mAP')

Bases: TensorflowMAP

Computes mean average precision using the COCO algorithm.

class neural_compressor.metric.metric.TensorflowVOCMAP(anno_path=None, iou_thrs=None, map_points=None, map_key='DetectionBoxes_Precision/mAP')

Bases: TensorflowMAP

Computes mean average precision using the VOC algorithm.

class neural_compressor.metric.metric.SquadF1

Bases: BaseMetric

Evaluate on v1.1 of the SQuAD dataset.

update(preds, labels, sample_weight=None)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

  • sample_weight – The sample weight.

reset()

Reset the score list.

result()

Compute F1 score.

class neural_compressor.metric.metric.mIOU(num_classes=21)

Bases: BaseMetric

Compute the mean IoU (Intersection over Union) score.

update(preds, labels)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

reset()

Reset the histogram.

result()

Compute the mean IoU.

Returns:

The mean IoU score.
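
A hedged usage sketch with illustrative segmentation maps of class indices:

    import numpy as np
    from neural_compressor.metric.metric import mIOU

    miou = mIOU(num_classes=2)
    preds = np.array([0, 0, 1, 1])   # predicted class per pixel
    labels = np.array([0, 1, 1, 1])  # ground-truth class per pixel
    miou.update(preds, labels)
    # class 0: IoU = 1/2; class 1: IoU = 2/3 -> mean ~= 0.583
    print(miou.result())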

class neural_compressor.metric.metric.ONNXRTGLUE(task='mrpc')

Bases: BaseMetric

Compute the GLUE score.

update(preds, labels)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

reset()

Reset the predictions and labels.

result()

Compute the GLUE score.

class neural_compressor.metric.metric.ROC(task='dlrm')

Bases: BaseMetric

Computes ROC score.

update(preds, labels)

Add the predictions and labels.

Parameters:
  • preds – The predictions.

  • labels – The labels corresponding to the predictions.

reset()

Reset the predictions and labels.

result()

Compute the ROC score.