chakki-works/seqeval

F1 score for missing class prediction


I'm using the Hugging Face implementation of seqeval; example code follows:

from datasets import load_metric

# One predicted tag sequence and its gold reference, in IOB2 format
predictions = [['B-E', 'I-E', 'O', 'O', 'O', 'O', 'O', 'O']]
references = [['B-E', 'I-E', 'O', 'O', 'B-C', 'I-C', 'I-C', 'O']]

metric = load_metric('seqeval')
for i in range(len(predictions)):
    # Feed one (prediction, reference) pair at a time
    metric.add(
        prediction=predictions[i],
        reference=references[i],
    )
results = metric.compute()
print(results)
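Printing results shows the per-type scores for 'E' and 'C' alongside the aggregate entries; to look at just the overall numbers (assuming the standard overall_* key names returned by the HF seqeval metric):

# Show only the aggregate metrics; key names assume the HF seqeval metric's output dict
for key in ('overall_precision', 'overall_recall', 'overall_f1', 'overall_accuracy'):
    print(key, results[key])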

My question is about the reported overall F1 score. Given references = [['B-E', 'I-E', 'O', 'O', 'B-C', 'I-C', 'I-C', 'O']]:

  • predictions = [['B-E', 'I-E', 'O', 'O', 'O', 'B-C', 'I-C', 'O']] returns 'overall_f1': 0.5, 'overall_accuracy': 0.75
  • predictions = [['B-E', 'I-E', 'O', 'O', 'O', 'O', 'O', 'O']] returns 'overall_f1': 0.6666666666666666, 'overall_accuracy': 0.625

Why is the F1 score higher in the second case, where the "C" class is never predicted? Shouldn't both cases return the same overall F1 score? By the way, in both cases the F1 score for "C" is 0.
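For context, here is my understanding of where those numbers come from, as a minimal sketch assuming seqeval's exact-span, micro-averaged matching (the span tuples below are the entities I decoded by hand from the IOB2 tags):

# Micro-averaged entity-level F1 with exact-span matching (my reading of seqeval's behavior)
gold = {('E', 0, 1), ('C', 4, 6)}      # entities in the reference
case1 = {('E', 0, 1), ('C', 5, 6)}     # first prediction: C span shifted by one token
case2 = {('E', 0, 1)}                  # second prediction: C never predicted

def micro_f1(pred, gold):
    tp = len(pred & gold)                         # exact span + type matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

print(micro_f1(case1, gold))  # 0.5                (tp=1, fp=1, fn=1)
print(micro_f1(case2, gold))  # 0.6666666666666666 (tp=1, fp=0, fn=1)

If I'm reading this right, the shifted "C" span in the first case counts as a false positive and drags precision down to 0.5, while the second case predicts nothing wrong and keeps precision at 1.0.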

Thanks!