
stable-audio-metrics

Collection of metrics for evaluating music and audio generative models:

  • Fréchet Distance at 48kHz, based on Openl3.
  • Kullback–Leibler divergence at 32kHz, based on PaSST.
  • CLAP score at 48kHz, based on CLAP-LAION.

stable-audio-metrics adapts these established metrics to the more realistic use case of long-form, full-band, stereo generations. All metrics can deal with variable-length inputs.
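
For reference, the Fréchet distance used in this kind of evaluation is the distance between two Gaussians fitted to the reference and generated embedding sets. Below is a minimal sketch of that standard formula, not necessarily the exact implementation used in this repository:

```python
import numpy as np
from scipy import linalg

def frechet_distance(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two embedding sets of shape (n_samples, dim)."""
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    sigma_a = np.cov(emb_a, rowvar=False)
    sigma_b = np.cov(emb_b, rowvar=False)
    diff = mu_a - mu_b
    # Matrix square root of the covariance product; drop tiny imaginary parts from numerical error.
    covmean, _ = linalg.sqrtm(sigma_a @ sigma_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_a + sigma_b - 2.0 * covmean))
```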

Installation

Clone this repository, create a Python virtual environment (python3 -m venv env), activate it (source env/bin/activate), and install the dependencies (pip install -r requirements.txt).

  • GPU SUPPORT – Only GPU usage is supported, because running these metrics on CPU can be prohibitively slow.
  • TROUBLESHOOTING – The Openl3 dependencies may require an older CUDA version. Try CUDA 11.8 if the code does not run on the GPU as expected.
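
If you are unsure whether the GPU is actually being picked up, a quick sanity check with PyTorch (assuming torch is among the installed dependencies) is:

```python
import torch

# Check that PyTorch sees a GPU and report which CUDA version it was built against.
print("CUDA available:", torch.cuda.is_available())
print("PyTorch CUDA build:", torch.version.cuda)
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```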

Documentation

Main documentation is available in:

Each example script (with musiccaps) further details how to use it:

Our documentation includes examples on how to evaluate with:

  • MusicCaps dataset
  • AudioCaps dataset
  • Song Describer dataset

Usage

Modify our example scripts so that they point to the folder you want to evaluate, and run them. For example, modify and run CUDA_VISIBLE_DEVICES=6 python examples/audiocaps_no-audio.py to evaluate with AudioCaps. Check more examples in our documentation.

  • METRICS WITHOUT DATASETS – The no-audio examples allow running the evaluations without downloading the datasets, because the reference statistics and embeddings are already pre-computed in load. We do not provide pre-computed embeddings for the CLAP score, because it is fast to compute.
  • COMPARING WITH STABLE AUDIO – To compare against Stable Audio, set all parameters as in the no-audio examples, even if your model outputs mono audio at a different sampling rate: stable-audio-metrics will do the resampling and mono/stereo handling to deliver a fair comparison (illustrated in the sketch below).
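
As an illustration of what that resampling and mono/stereo handling involves, here is a minimal sketch with torchaudio. It is not the library's own code; the function name and target rate are only illustrative.

```python
import torch
import torchaudio

def to_stereo_at(path: str, target_sr: int = 48000) -> torch.Tensor:
    """Load an audio file, resample it to target_sr, and return a stereo (2, n) tensor."""
    wav, sr = torchaudio.load(path)  # (channels, samples)
    if sr != target_sr:
        wav = torchaudio.functional.resample(wav, orig_freq=sr, new_freq=target_sr)
    if wav.shape[0] == 1:  # mono -> stereo by duplicating the single channel
        wav = wav.repeat(2, 1)
    return wav
```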

Data structure

Generate one audio clip for every prompt in the dataset, and name each generated file after its corresponding id.

Our musiccaps examples assume the following structure, where 5,521 generations are named after the ytid from the prompts file load/musiccaps-public.csv: your_model_outputs_folder/-kssA-FOzU.wav, your_model_outputs_folder/_0-2meOf9qY.wav, ... your_model_outputs_folder/ZzyWbehtt0M.wav.

Our audiocaps examples assume the following structure, where 4,875 generations are named after the audiocap_id from the prompts file load/audiocaps-test.csv: your_model_outputs_folder/3.wav, your_model_outputs_folder/481.wav, ... your_model_outputs_folder/107432.wav.
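
A generation loop that follows this naming convention might look like the sketch below, where generate_audio stands in for your model's inference call (hypothetical) and the prompt column is assumed to be named caption:

```python
import os
import pandas as pd
import soundfile as sf

prompts = pd.read_csv("load/musiccaps-public.csv")  # assumed columns: ytid, caption, ...
out_dir = "your_model_outputs_folder"
os.makedirs(out_dir, exist_ok=True)

for _, row in prompts.iterrows():
    audio = generate_audio(row["caption"])  # hypothetical: your model's inference call
    # Name the file after the dataset id so the metrics can match it to its reference prompt.
    sf.write(os.path.join(out_dir, f"{row['ytid']}.wav"), audio, samplerate=48000)  # use your model's native rate
```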

Extend this data structure to your own dataset, as we did with the Song Describer dataset as an additional example. Check the examples' documentation.