neural_compressor.experimental.metric.metric

Neural Compressor metrics.

Module Contents

Classes

TensorflowMetrics

TensorFlow metrics collection.

PyTorchMetrics

PyTorch metrics collection.

MXNetMetrics

MXNet metrics collection.

ONNXRTQLMetrics

ONNXRT QLinear metrics collection.

ONNXRTITMetrics

ONNXRT Integer metrics collection.

METRICS

Intel Neural Compressor Metrics.

BaseMetric

The base class of Metric.

WrapPyTorchMetric

The wrapper of Metric class for PyTorch.

WrapMXNetMetric

The wrapper of Metric class for MXNet.

WrapONNXRTMetric

The wrapper of Metric class for ONNXRT.

F1

F1 score of a binary classification problem.

Accuracy

The Accuracy metric for classification tasks.

PyTorchLoss

A dummy PyTorch Metric.

Loss

A dummy Metric.

MAE

Computes Mean Absolute Error (MAE) loss.

RMSE

Computes Root Mean Squared Error (RMSE) loss.

MSE

Computes Mean Squared Error (MSE) loss.

TensorflowTopK

Compute Top-k Accuracy classification score for TensorFlow models.

GeneralTopK

Compute Top-k Accuracy classification score.

COCOmAPv2

Compute mean average precision of the detection task.

TensorflowMAP

Computes mean average precision.

TensorflowCOCOMAP

Computes mean average precision using the COCO algorithm.

TensorflowVOCMAP

Computes mean average precision using the VOC algorithm.

SquadF1

Evaluation metric for v1.1 of the SQuAD dataset.

mIOU

Compute the mean IoU (Intersection over Union) score.

ONNXRTGLUE

Compute the GLUE score.

ROC

Computes ROC score.

Functions

metric_registry(metric_type, framework)

Decorator for registering all Metric subclasses.

class neural_compressor.experimental.metric.metric.TensorflowMetrics[source]

TensorFlow metrics collection.

metrics[source]

A dict to maintain all metrics for TensorFlow models.

class neural_compressor.experimental.metric.metric.PyTorchMetrics[source]

PyTorch metrics collection.

metrics[source]

A dict to maintain all metrics for PyTorch models.

class neural_compressor.experimental.metric.metric.MXNetMetrics[source]

MXNet metrics collection.

metrics[source]

A dict to maintain all metrics for MXNet models.

class neural_compressor.experimental.metric.metric.ONNXRTQLMetrics[source]

ONNXRT QLinear metrics collection.

metrics[source]

A dict to maintain all metrics for ONNXRT QLinear models.

class neural_compressor.experimental.metric.metric.ONNXRTITMetrics[source]

ONNXRT Integer metrics collection.

metrics[source]

A dict to maintain all metrics for ONNXRT Integer models.

class neural_compressor.experimental.metric.metric.METRICS(framework: str)[source]

Intel Neural Compressor Metrics.

metrics[source]

The collection of registered metrics for the specified framework.
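
As a usage sketch, a METRICS collection can be built per framework and individual metric classes looked up by name. This is a minimal sketch: the "topk" registry key and the keyword-argument constructor below are assumptions, not confirmed by this page.

```python
from neural_compressor.experimental.metric.metric import METRICS

metrics = METRICS("tensorflow")      # registered metrics for one framework
topk_cls = metrics.metrics["topk"]   # hypothetical registry key
topk = topk_cls(k=1)                 # e.g. TensorflowTopK with default k
```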

neural_compressor.experimental.metric.metric.metric_registry(metric_type: str, framework: str)[source]

Decorator for registering all Metric subclasses.

Cross-framework metrics are supported by specifying the framework param as one of tensorflow, pytorch, mxnet, or onnxrt.

Parameters:
  • metric_type – The metric type.

  • framework – The framework name.

Returns:

The function that registers the metric class.

Return type:

decorator_metric
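
A minimal registration sketch, assuming a custom metric exposes the update/reset/result protocol used throughout this module; the registered name "my_accuracy" is hypothetical.

```python
from neural_compressor.experimental.metric.metric import metric_registry

@metric_registry("my_accuracy", "tensorflow")   # hypothetical name/framework
class MyAccuracy:
    """Toy metric following the assumed update/reset/result protocol."""

    def __init__(self):
        self.correct, self.total = 0, 0

    def update(self, preds, labels):
        # Accumulate per-batch statistics.
        self.correct += sum(int(p == l) for p, l in zip(preds, labels))
        self.total += len(labels)

    def reset(self):
        self.correct, self.total = 0, 0

    def result(self):
        return self.correct / self.total if self.total else 0.0
```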

class neural_compressor.experimental.metric.metric.BaseMetric(metric, single_output=False, hvd=None)[source]

The base class of Metric.

class neural_compressor.experimental.metric.metric.WrapPyTorchMetric(metric, single_output=False, hvd=None)[source]

The wrapper of Metric class for PyTorch.

class neural_compressor.experimental.metric.metric.WrapMXNetMetric(metric, single_output=False, hvd=None)[source]

The wrapper of Metric class for MXNet.

class neural_compressor.experimental.metric.metric.WrapONNXRTMetric(metric, single_output=False, hvd=None)[source]

The wrapper of Metric class for ONNXRT.
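
All three wrappers share the BaseMetric constructor shown above, taking the framework-native metric plus single_output and hvd (Horovod) options. A minimal sketch follows, assuming the wrapper accepts the metric class; whether a class or an instance is expected is not stated on this page, and MyTorchAccuracy is hypothetical.

```python
from neural_compressor.experimental.metric.metric import WrapPyTorchMetric

class MyTorchAccuracy:          # hypothetical user-defined PyTorch metric
    def __init__(self):
        self.correct, self.total = 0, 0

    def update(self, preds, labels):
        self.correct += sum(int(p == l) for p, l in zip(preds, labels))
        self.total += len(labels)

    def result(self):
        return self.correct / self.total

wrapped = WrapPyTorchMetric(MyTorchAccuracy, single_output=False)
```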

class neural_compressor.experimental.metric.metric.F1[source]

F1 score of a binary classification problem.

The F1 score is the harmonic mean of the precision and recall. It can be computed with the equation: F1 = 2 * (precision * recall) / (precision + recall)
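
A worked example of the formula (values are illustrative):

```python
# F1 is the harmonic mean of precision and recall, so it is pulled
# toward the smaller of the two values.
precision, recall = 0.8, 0.6
f1 = 2 * (precision * recall) / (precision + recall)
print(round(f1, 4))  # 0.6857
```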

class neural_compressor.experimental.metric.metric.Accuracy[source]

The Accuracy metric for classification tasks.

The accuracy score is the proportion of predictions that were correctly classified.

pred_list[source]

List of predictions to score.

label_list[source]

List of labels to score.

sample[source]

The total number of samples.
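
A minimal usage sketch; the update()/result() method names follow the metric protocol assumed throughout this module rather than anything stated on this page.

```python
from neural_compressor.experimental.metric.metric import Accuracy

acc = Accuracy()
acc.update([0, 1, 1, 0], [0, 1, 0, 0])  # predictions, then labels
print(acc.result())                     # 3 of 4 correct -> 0.75
```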

class neural_compressor.experimental.metric.metric.PyTorchLoss[source]

A dummy PyTorch Metric.

A dummy metric that computes the average of predictions and prints it directly.

class neural_compressor.experimental.metric.metric.Loss[source]

A dummy Metric.

A dummy metric that computes the average of predictions and prints it directly.

sample[source]

The number of samples.

sum[source]

The sum of predictions.
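
Based on the sample and sum attributes above, the dummy metric reduces to a running average of predictions; a standalone reference sketch:

```python
# Reference sketch only; mirrors the documented `sum` and `sample`
# attributes, not Neural Compressor's actual implementation.
class LossSketch:
    def __init__(self):
        self.sample, self.sum = 0, 0.0

    def update(self, preds):
        self.sum += sum(preds)
        self.sample += len(preds)

    def result(self):
        return self.sum / self.sample

m = LossSketch()
m.update([0.5, 1.5])
print(m.result())  # (0.5 + 1.5) / 2 == 1.0
```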

class neural_compressor.experimental.metric.metric.MAE(compare_label=True)[source]

Computes Mean Absolute Error (MAE) loss.

Mean Absolute Error (MAE) is the mean of the absolute differences between the predicted and actual numeric values.

pred_list[source]

List of predictions to score.

label_list[source]

List of references corresponding to the prediction result.

compare_label[source]

Whether to compare with labels. Set to False when there are no labels; the FP32 model's predictions are then used as labels.

Type:

bool
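
A minimal usage sketch; update()/result() are assumed from the common metric protocol.

```python
from neural_compressor.experimental.metric.metric import MAE

mae = MAE()
mae.update([2.0, 4.0], [3.0, 1.0])  # |2-3| = 1 and |4-1| = 3
print(mae.result())                 # (1 + 3) / 2 == 2.0
```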

class neural_compressor.experimental.metric.metric.RMSE(compare_label=True)[source]

Computes Root Mean Squared Error (RMSE) loss.

mse[source]

The instance of MSE Metric.

class neural_compressor.experimental.metric.metric.MSE(compare_label=True)[source]

Computes Mean Squared Error (MSE) loss.

Mean Squared Error (MSE) represents the average of the squared errors, that is, the average squared difference between the estimated values and the actual values.

pred_list[source]

List of predictions to score.

label_list[source]

List of references corresponding to the prediction result.

compare_label[source]

Whether to compare with labels. Set to False when there are no labels; the FP32 model's predictions are then used as labels.

Type:

bool
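
Numerically, RMSE is the square root of MSE, which matches the mse attribute documented on the RMSE class above; a small worked example:

```python
import math

preds, labels = [2.0, 4.0], [3.0, 1.0]
mse = sum((p - l) ** 2 for p, l in zip(preds, labels)) / len(preds)
rmse = math.sqrt(mse)
print(mse, round(rmse, 4))  # 5.0 2.2361
```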

class neural_compressor.experimental.metric.metric.TensorflowTopK(k=1)[source]

Compute Top-k Accuracy classification score for TensorFlow models.

This metric counts the number of times the correct label appears among the top k predicted labels.

k[source]

The number of most likely outcomes considered to find the correct label.

Type:

int

num_correct[source]

The number of predictions that were correctly classified.

num_sample[source]

The total number of predictions.

class neural_compressor.experimental.metric.metric.GeneralTopK(k=1)[source]

Compute Top-k Accuracy classification score.

This metric counts the number of times the correct label appears among the top k predicted labels.

k[source]

The number of most likely outcomes considered to find the correct label.

Type:

int

num_correct[source]

The number of predictions that were correctly classified.

num_sample[source]

The total number of predictions.
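
A reference computation of top-k accuracy as described above (a sketch, not Neural Compressor's implementation):

```python
import numpy as np

def topk_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    # A sample counts as correct if its true label is among the k
    # highest-scored classes.
    topk = np.argsort(-scores, axis=1)[:, :k]
    correct = (topk == labels[:, None]).any(axis=1)
    return float(correct.mean())

scores = np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])
labels = np.array([1, 2])
print(topk_accuracy(scores, labels, k=2))  # 0.5
```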

class neural_compressor.experimental.metric.metric.COCOmAPv2(anno_path=None, iou_thrs='0.5:0.05:0.95', map_points=101, map_key='DetectionBoxes_Precision/mAP', output_index_mapping={'num_detections': -1, 'boxes': 0, 'scores': 1, 'classes': 2})[source]

Compute mean average precision of the detection task.
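
An instantiation sketch that simply repeats the defaults from the signature above; output_index_mapping tells the metric which slots of the model's output hold the detection fields.

```python
from neural_compressor.experimental.metric.metric import COCOmAPv2

map_metric = COCOmAPv2(
    anno_path=None,               # optional annotation file path
    iou_thrs="0.5:0.05:0.95",     # COCO-style IoU threshold sweep
    map_points=101,               # 101-point interpolated AP
    output_index_mapping={"num_detections": -1, "boxes": 0,
                          "scores": 1, "classes": 2},
)
```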

class neural_compressor.experimental.metric.metric.TensorflowMAP(anno_path=None, iou_thrs=0.5, map_points=0, map_key='DetectionBoxes_Precision/mAP')[source]

Computes mean average precision.

class neural_compressor.experimental.metric.metric.TensorflowCOCOMAP(anno_path=None, iou_thrs=None, map_points=None, map_key='DetectionBoxes_Precision/mAP')[source]

Computes mean average precision using the COCO algorithm.

class neural_compressor.experimental.metric.metric.TensorflowVOCMAP(anno_path=None, iou_thrs=None, map_points=None, map_key='DetectionBoxes_Precision/mAP')[source]

Computes mean average precision using the VOC algorithm.
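
Judging only from the constructor defaults shown above, the COCO and VOC variants appear to be TensorflowMAP preconfigured with different IoU thresholds and interpolation points; the sketch below is a hedged assumption about how the None defaults are filled in, not a documented behavior.

```python
from neural_compressor.experimental.metric.metric import (
    TensorflowCOCOMAP, TensorflowVOCMAP)

# Passing None lets each subclass pick its own evaluation settings
# (assumed: a COCO-style IoU sweep vs. a single 0.5 IoU threshold).
coco_map = TensorflowCOCOMAP(anno_path=None)
voc_map = TensorflowVOCMAP(anno_path=None)
```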

class neural_compressor.experimental.metric.metric.SquadF1[source]

Evaluation metric for v1.1 of the SQuAD dataset.
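
SQuAD v1.1 F1 is a token-overlap F1 between the predicted and gold answer strings; the following is a reference sketch only (the official script additionally normalizes text, e.g. lowercasing and stripping articles and punctuation).

```python
from collections import Counter

def squad_f1(prediction: str, ground_truth: str) -> float:
    # Token-overlap F1 between a predicted and a gold answer string.
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(squad_f1("in the park", "the park"))  # 0.8
```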

class neural_compressor.experimental.metric.metric.mIOU(num_classes=21)[source]

Compute the mean IoU (Intersection over Union) score.
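
A reference sketch of the computation (not Neural Compressor's implementation): per-class IoU is the intersection over union of the predicted and ground-truth masks, averaged over the classes that appear.

```python
import numpy as np

def mean_iou(pred: np.ndarray, label: np.ndarray, num_classes: int = 21) -> float:
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:                 # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1])
label = np.array([0, 1, 1, 1])
print(mean_iou(pred, label, num_classes=2))  # (1/2 + 2/3) / 2 ~= 0.5833
```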

class neural_compressor.experimental.metric.metric.ONNXRTGLUE(task='mrpc')[source]

Compute the GLUE score.

class neural_compressor.experimental.metric.metric.ROC(task='dlrm')[source]

Computes ROC score.
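
The usual reference computation for an ROC score is the area under the ROC curve; whether this class computes exactly scikit-learn's AUC value is an assumption.

```python
from sklearn.metrics import roc_auc_score

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]        # predicted probabilities
print(roc_auc_score(labels, scores))  # 0.75
```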