neural_compressor.metric.bleu

Script for BLEU metric.

Classes

UnicodeRegex

Ad-hoc hack to recognize all punctuation and symbols.

BLEU

Computes the BLEU (Bilingual Evaluation Understudy) score.

Functions

bleu_tokenize(→ List[str])

Tokenize a string following the official BLEU implementation.

Module Contents

class neural_compressor.metric.bleu.UnicodeRegex[source]

Ad-hoc hack to recognize all punctuation and symbols.

nondigit_punct_re[source]

The compiled regular expression that recognizes punctuation preceded by a non-digit character.

punct_nondigit_re[source]

The compiled regular expression that recognizes punctuation followed by a non-digit character.

symbol_re[source]

The compiled regular expression that recognizes symbols.
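The snippet below is an illustrative reconstruction of how regexes like these can be built from Unicode character categories ("P" for punctuation, "S" for symbols). The helper name _property_chars is hypothetical; the actual class may construct its patterns differently.

```python
import re
import sys
import unicodedata

def _property_chars(prefix: str) -> str:
    """Collect all Unicode characters whose category starts with `prefix`
    (e.g. "P" for punctuation, "S" for symbols). Hypothetical helper."""
    return "".join(
        chr(c) for c in range(sys.maxunicode)
        if unicodedata.category(chr(c)).startswith(prefix)
    )

punctuation = re.escape(_property_chars("P"))

# Punctuation preceded by a non-digit character.
nondigit_punct_re = re.compile(r"([^\d])([" + punctuation + r"])")
# Punctuation followed by a non-digit character.
punct_nondigit_re = re.compile(r"([" + punctuation + r"])([^\d])")
# Any Unicode symbol character.
symbol_re = re.compile("([" + re.escape(_property_chars("S")) + "])")
```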

neural_compressor.metric.bleu.bleu_tokenize(string: str) → List[str][source]

Tokenize a string following the official BLEU implementation.

See https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/mteval-v14.pl#L954-L983

Parameters:

string – The string to be tokenized.

Returns:

A list of tokens.

Return type:

List[str]
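A small usage sketch; the output shown is illustrative of the mteval-v14 tokenization rules (punctuation and symbols are split off, while punctuation sitting between digits, such as decimal points or thousands separators, is kept attached), not a captured run.

```python
from neural_compressor.metric.bleu import bleu_tokenize

# Punctuation and symbols become separate tokens; punctuation between
# digits (e.g. "1,000.50") stays attached to the number.
tokens = bleu_tokenize("Hello, world! It costs 1,000.50 dollars.")
print(tokens)
# Expected (illustrative):
# ['Hello', ',', 'world', '!', 'It', 'costs', '1,000.50', 'dollars', '.']
```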

class neural_compressor.metric.bleu.BLEU[source]

Computes the BLEU (Bilingual Evaluation Understudy) score.

BLEU is an algorithm for evaluating the quality of text that has been machine-translated from one natural language to another. This implementation approximates the BLEU score, since it does not glue word pieces or decode the ids and tokenize the output. By default, it uses an n-gram order of 4 and applies the brevity penalty. It also does not perform beam search.

predictions[source]

List of translations to score.

labels[source]

List of references corresponding to the prediction results.
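A minimal usage sketch, assuming the metric exposes the update()/result() interface common to neural_compressor metrics; the sentences and the exact argument shapes are illustrative.

```python
from neural_compressor.metric.bleu import BLEU

# Minimal sketch, assuming the common update()/result() metric interface.
metric = BLEU()

# Each call adds a batch of translations and their references.
metric.update(
    ["the cat sat on the mat", "hello world"],        # predictions (hypotheses)
    ["the cat sat on the mat", "hello there world"],  # labels (references)
)

# Corpus-level BLEU score accumulated over all update() calls.
score = metric.result()
print(f"BLEU: {score:.2f}")
```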