kentonl/e2e-coref

error in ceafe metric


Hi, thanks for releasing the code.
I recently found a potential error in the ceafe metric.
As shown below, when the predicted and gold clusters are identical, the ceafe metric does not output 1.0 as the other metrics do.

Here is my test code:

    # Evaluator and the metric functions (muc, b_cubed, ceafe, lea) are assumed
    # to be importable from the repo's metrics.py.
    from metrics import Evaluator, muc, b_cubed, ceafe, lea

    def get_event2cluster(clusters):
        # Map each mention id to the (tuple-ized) cluster that contains it.
        event2cluster = {}
        for cluster in clusters:
            for eid in cluster:
                event2cluster[eid] = tuple(cluster)
        return event2cluster
    
    def evaluate_documents(documents, metric, beta=1):
        # Aggregate one metric over all documents and return (precision, recall, f1).
        evaluator = Evaluator(metric, beta=beta)
        for document in documents:
            evaluator.update(document.clusters, document.gold, document.mention_to_cluster, document.mention_to_gold)
        return evaluator.get_precision(), evaluator.get_recall(), evaluator.get_f1()

    class Doc:
        # Minimal stand-in for a document holding predicted and gold clusters.
        def __init__(self, mention2cluster, mention2gold, clusters, gold):
            self.mention_to_cluster = mention2cluster
            self.mention_to_gold = mention2gold
            self.clusters = clusters
            self.gold = gold

    # Predicted and gold clusters are identical, so every metric should report 1.0.
    gold = [[1, 2, 3, 4, 5], [6, 7], [8, 9, 10, 11, 12], [13]]
    pred = [[1, 2, 3, 4, 5], [6, 7], [8, 9, 10, 11, 12], [13]]
    mention2cluster = get_event2cluster(pred)
    mention2gold = get_event2cluster(gold)
    doc = Doc(mention2cluster, mention2gold, pred, gold)
    p, r, f = evaluate_documents([doc], muc)
    print(p, r, f)
    p, r, f = evaluate_documents([doc], b_cubed)
    print(p, r, f)
    p, r, f = evaluate_documents([doc], ceafe)
    print(p, r, f)
    p, r, f = evaluate_documents([doc], lea)
    print(p, r, f)

The output is:

    1.0 1.0 1.0                  # muc
    1.0 1.0 1.0                  # b_cubed
    1.0 0.75 0.8571428571428571  # ceafe
    1.0 1.0 1.0                  # lea

The ceafe metric drops singleton predicted clusters before aligning them with the gold clusters, so the predicted singleton [13] can never be matched and only 3 of the 4 gold clusters are aligned, giving recall 3/4 = 0.75. One way to fix this is to remove this line from ceafe:

    clusters = [c for c in clusters if len(c) != 1]
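
For illustration, here is a minimal standalone sketch of the CEAF-e computation that reproduces the numbers above with and without the singleton filter. This is my own re-implementation using scipy, not the repo's metrics.py; phi4, ceafe_sketch, and drop_singletons are just illustrative names.

    # Minimal CEAF-e sketch for illustration only (not the repo's metrics.py).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def phi4(c1, c2):
        # Entity similarity used by CEAF-e: 2 * |intersection| / (|c1| + |c2|).
        return 2 * len([m for m in c1 if m in c2]) / float(len(c1) + len(c2))

    def ceafe_sketch(clusters, gold_clusters, drop_singletons=True):
        if drop_singletons:
            # The filter in question: predicted singleton clusters are removed.
            clusters = [c for c in clusters if len(c) != 1]
        scores = np.zeros((len(gold_clusters), len(clusters)))
        for i, g in enumerate(gold_clusters):
            for j, c in enumerate(clusters):
                scores[i, j] = phi4(g, c)
        # Optimal one-to-one alignment between gold and predicted clusters.
        rows, cols = linear_sum_assignment(-scores)
        similarity = float(scores[rows, cols].sum())
        # Return (precision, recall).
        return similarity / len(clusters), similarity / len(gold_clusters)

    gold = [[1, 2, 3, 4, 5], [6, 7], [8, 9, 10, 11, 12], [13]]
    pred = [[1, 2, 3, 4, 5], [6, 7], [8, 9, 10, 11, 12], [13]]
    print(ceafe_sketch(pred, gold, drop_singletons=True))   # (1.0, 0.75) -> recall penalized
    print(ceafe_sketch(pred, gold, drop_singletons=False))  # (1.0, 1.0)

Note that in the OntoNotes/CoNLL-2012 setting gold clusters never contain singletons, so there the filter only discards spurious predicted singletons; with data that does contain gold singletons, as in the example above, it costs recall even for a perfect prediction.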