craffel/mir_eval

New fingerprinting module

nkundiushuti opened this issue · 2 comments

Hi everyone,

We want to add a new evaluation module for audio fingerprinting. We want to report IR metrics related to the correct identification/retrieval of songs, but we are also interested in segment-based metrics. The latter should measure the overlap between the annotated boundaries for a piece of music in a real-life recording/TV/radio broadcast and the matched music track. Thus, we may have both song-based and segment-based metrics derived from true positives, false positives, true negatives, and false negatives.
In addition, we want to account for multiple annotators, so the same metric may be computed for unanimity (all annotators agree) or majority (the majority of the annotators agree).
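A minimal sketch of what the segment-based side could look like: TP/FP/FN durations computed from interval overlap, plus unanimity/majority aggregation across annotators. All function names here are illustrative, not existing mir_eval API.

```python
# Hypothetical sketch -- names do not correspond to mir_eval functions.

def overlap_duration(intervals_a, intervals_b):
    """Total duration (in seconds) where two interval lists overlap.

    Each interval is a (start, end) pair; intervals within one list
    are assumed non-overlapping."""
    total = 0.0
    for a_start, a_end in intervals_a:
        for b_start, b_end in intervals_b:
            total += max(0.0, min(a_end, b_end) - max(a_start, b_start))
    return total

def segment_scores(reference, estimated):
    """TP/FP/FN durations for one reference annotation vs. one estimate."""
    ref_dur = sum(end - start for start, end in reference)
    est_dur = sum(end - start for start, end in estimated)
    tp = overlap_duration(reference, estimated)
    return {'tp': tp, 'fp': est_dur - tp, 'fn': ref_dur - tp}

def consensus(per_annotator_hits, mode='majority'):
    """Aggregate one boolean decision per annotator into a single label."""
    agree = sum(per_annotator_hits)
    if mode == 'unanimity':
        return agree == len(per_annotator_hits)
    return agree > len(per_annotator_hits) / 2
```

For example, a reference segment (0, 10) against an estimate (5, 15) yields 5 seconds each of TP, FP, and FN, and `consensus` can then be applied per segment to decide whether a detection counts under the chosen agreement policy.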

The song-based metrics are traditional IR metrics related to precision and recall.
The closest example to the segment-based metrics are here: https://github.com/craffel/mir_eval/blob/master/mir_eval/segment.py
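For the song-based side, the metrics reduce to standard precision/recall over retrieved track identities; a quick sketch (again, names are illustrative, not mir_eval API):

```python
# Hypothetical sketch of the song-based IR metrics: precision/recall
# over sets of track IDs. Not an existing mir_eval function.

def song_precision_recall(reference_ids, retrieved_ids):
    """Precision and recall of retrieved track IDs against a reference set."""
    reference_ids = set(reference_ids)
    retrieved_ids = set(retrieved_ids)
    tp = len(reference_ids & retrieved_ids)  # correctly identified songs
    precision = tp / len(retrieved_ids) if retrieved_ids else 0.0
    recall = tp / len(reference_ids) if reference_ids else 0.0
    return precision, recall
```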
So where should we start?

Best,

Marius Miron

Hi Marius, are these metrics already published/widely used? Are there any existing implementations?

I don't have a full grasp of the spec from your message, but the IR metrics sound like they would not be different from any standard IR metrics, so I'm not sure there would be a compelling reason to add them to mir_eval. If the segment/song-based metrics are different from the current segment-based metrics, I could see a reason to add them.

Hi everyone, @protyposis's Aurio library implements most of the common/widely used fingerprinting algorithms (and more).

If you need more/others, we've collected some for our HyMPS project (under the AUDIO category \ Visualization/analysis page \ Fingerprinting subsection).

Hope that inspires!