whisper-prep

Data preparation utility for fine-tuning OpenAI's Whisper model.

Table of Contents
  1. About The Project
  2. Data Preparation Guide
  3. Contact
  4. License

About The Project

This package helps generate training data for fine-tuning Whisper by synthesizing .srt files from individual sentences, concatenating them to mimic real-world recordings.

(back to top)

Data Preparation Guide

  1. Data File (.tsv):

    • Create a .tsv file with two required columns:
      • path: The relative path to the .mp3 file.
      • sentence: The text corresponding to the audio file.
    • Optional: If a client_id column is included, it can be used to increase the probability that consecutive sentences come from the same speaker. Refer to generate_fold in src/whisper_prep/generation/generate.py for additional features. A small illustrative .tsv is shown after this list.
  2. Configuration File (.yaml):

    • Set up a .yaml configuration file. An example can be found at example.yaml.
  3. Running the Generation Script:

    • Run whisper_prep -c <path_to_your_yaml_file> (see the usage example after this list).
  4. Upload to Hugging Face:

    • Upload the generated dataset to the Hugging Face Hub (a sketch using huggingface_hub follows this list).
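
Example data file (.tsv)

For illustration, a minimal tab-separated data file could look like the snippet below. The file names and client_id values are placeholders, not files shipped with this project; the client_id column is optional.

```tsv
path	sentence	client_id
clips/sample_0001.mp3	This is the first example sentence.	speaker_a
clips/sample_0002.mp3	Another sentence read by the same speaker.	speaker_a
clips/sample_0003.mp3	A sentence from a different speaker.	speaker_b
```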
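
Running the generation (usage example)

Once the .tsv and .yaml files are in place, generation is a single command. The config path below is only an example; point -c at your own configuration file.

```sh
whisper_prep -c configs/my_dataset.yaml
```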
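
Uploading to the Hugging Face Hub (sketch)

The project does not prescribe a specific upload method; one straightforward option is the huggingface_hub Python library. The sketch below is an assumption-laden example, not part of whisper-prep: it assumes the generated dataset sits in a local folder (out/my_dataset here) and that you have authenticated via huggingface-cli login. The repository and folder names are placeholders.

```python
from huggingface_hub import HfApi

# Hypothetical names: adjust to your own output folder and Hub namespace.
OUTPUT_FOLDER = "out/my_dataset"
REPO_ID = "your-username/my-whisper-dataset"

api = HfApi()

# Create the dataset repository on the Hub (no-op if it already exists).
api.create_repo(repo_id=REPO_ID, repo_type="dataset", exist_ok=True)

# Upload the entire generated folder to the dataset repository.
api.upload_folder(folder_path=OUTPUT_FOLDER, repo_id=REPO_ID, repo_type="dataset")
```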

(back to top)

Contact

Vincenzo Timmel - vincenzo.timmel@fhnw.ch

(back to top)

License

Distributed under the MIT License. See LICENSE for more information.

(back to top)