snakers4/open_stt

Files with poor annotation

snakers4 opened this issue · 18 comments

I will be posting here some lists of files to be excluded from the dataset from time to time.
Such lists are obtained by training models and sweeping through files with higher-than-expected CER.

So far we believe that 15-20% of our files may be of poor annotation quality
We will not be excluding them from the dataset for now, but we will be posting such lists here
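
For reference, such a sweep boils down to comparing each reference transcript with the model's hypothesis and keeping the files whose character error rate exceeds a cutoff. A minimal sketch (the manifest layout and the 0.5 cutoff here are illustrative assumptions, not the exact pipeline used):

import csv

def cer(ref, hyp):
    # character-level Levenshtein distance, normalized by reference length
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# hypothetical manifest rows: wav_path, reference_text, model_hypothesis
with open('manifest.csv') as f:
    bad = [row[0] for row in csv.reader(f) if cer(row[1], row[2]) > 0.5]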

@buriy
These are files in the file db that most likely have poor annotation, according to my model:

bad_trainval_v03.zip
bad_public_train_v03.zip

Stats by source from bad_trainval_v03.zip and bad_public_train_v03.zip:

bad_public_train_v03.csv:

Counter({
'asr_public_phone_calls_2': 170911, 
'private_buriy_audiobooks_2': 128318, 
'public_youtube700': 115683, 
'asr_public_phone_calls_1': 83432, 
'public_series_1': 4823, 
'asr_public_stories_2': 2902, 
'asr_public_stories_1': 2225, 
'tts_russian_addresses_rhvoice_4voices': 235, 
'public_lecture_1': 201, 
'voxforge_ru': 190, 
'ru_ru': 99, 
'russian_single': 55
})

bad_trainval_v03.csv:

Counter({
'private_buriy_audiobooks_2': 4895, 
'public_youtube700': 4217, 
'public_series_1': 185, 
'private_buriy_audiobooks_1': 166, 
'public_lecture_1': 15, 
'voxforge_ru': 7, 
'ru_ru': 6
})
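
For anyone reproducing the per-source counts above: the source name is just a path component of each entry, so a Counter over the exclude CSVs does the job. A sketch, assuming one utterance path per row with the source directory as the first component (adjust the index if the paths carry a common prefix like ru_open_stt/):

import csv
from collections import Counter

def stats_by_source(csv_path):
    with open(csv_path) as f:
        # take the top-level directory of each wav path as the source name
        return Counter(row[0].split('/')[0] for row in csv.reader(f))

train = stats_by_source('bad_public_train_v03.csv')
trainval = stats_by_source('bad_trainval_v03.csv')
print(train)
print(trainval)
print('total:', sum(train.values()) + sum(trainval.values()))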

From both files: 518560 utterances

For the public: all of this is already outdated and should / will be updated.

New round of data distillation

A slightly more detailed file pointing out files with poor annotation, along with some metadata:

  • CER threshold;
  • one of the best CERs so far;

It looks like ~2M utterances out of 7M are to be discarded this way.
Pretty good yield for annotation without spending money.
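
Applying such an exclude file is a plain set-difference against the full manifest. A sketch using pandas (the wav_path column name and the manifest filename are my assumptions about the layout):

import pandas as pd

exclude = pd.read_csv('public_exclude_file_v5.csv')      # the extracted exclude file
manifest = pd.read_csv('public_manifest.csv')            # hypothetical full-dataset manifest
bad_paths = set(exclude['wav_path'])                     # assumed column name
clean = manifest[~manifest['wav_path'].isin(bad_paths)]
clean.to_csv('public_manifest_clean.csv', index=False)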

To use these files, note that this is a multi-part zip archive.
You have to rename the part files from *.z0?.zip to *.z0?

public_exclude_file_v5.zip
public_exclude_file_v5.z01.zip
public_exclude_file_v5.z02.zip
public_exclude_file_v5.z03.zip

For those looking to download and unzip these exclude files correctly:

# download all four parts
wget https://github.com/snakers4/open_stt/files/3348311/public_exclude_file_v5.zip
wget https://github.com/snakers4/open_stt/files/3348314/public_exclude_file_v5.z01.zip
wget https://github.com/snakers4/open_stt/files/3348312/public_exclude_file_v5.z02.zip
wget https://github.com/snakers4/open_stt/files/3348313/public_exclude_file_v5.z03.zip

# strip the extra .zip suffix (GitHub only accepts certain attachment extensions)
mv public_exclude_file_v5.z01.zip public_exclude_file_v5.z01
mv public_exclude_file_v5.z02.zip public_exclude_file_v5.z02
mv public_exclude_file_v5.z03.zip public_exclude_file_v5.z03

# concatenate the parts in order, with the .zip part last
cat public_exclude_file_v5.z01 public_exclude_file_v5.z02 public_exclude_file_v5.z03 public_exclude_file_v5.zip > public_exclude_files_v5_.zip

# extract the reassembled archive
unzip public_exclude_files_v5_.zip

Yeah, I guess the README.md requires some refining

@buriy There are a few files smaller than 20 KB, among which ru_open_stt/public_youtube700/d/a3/9a3ee5e6b4b0.wav fails to load with scipy.io.wavfile. It would be nice if you could exclude them in the next update of the exclude file.

buriy commented

@vadimkantorov that's a known issue: this file's length is 44 bytes, which is exactly the size of a .wav header.
scipy.io.wavfile refuses to load such empty files.
We'll look into it at some point.

Actually this is due to genuinely empty files, but whatever)
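
Until the bad files are pruned upstream, it is easy to catch these header-only files locally by size, since a bare PCM .wav header is 44 bytes. A sketch using the 20 KB cutoff mentioned above:

import os

def find_tiny_wavs(root, max_bytes=20 * 1024):
    # collect .wav files at or below the size cutoff; 44-byte files are header-only
    tiny = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith('.wav'):
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) <= max_bytes:
                    tiny.append(path)
    return tiny

print(len(find_tiny_wavs('ru_open_stt/public_youtube700')))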

Yeah, I forgot to exclude the bad files for youtube_1120


hope it comes online soon! :)

Exclude file for YouTube1120

Compared with the previous YouTube dataset, this one is much more challenging.

To be on the safe side, I would exclude all files with current CER > 0.4 (for this dataset ~40% of files are above that, versus 25-30% before); see the sketch after the list below.

Such files usually fall into three categories:

  • 1/3 - just plain wrong annotation
  • 1/3 - correct annotation, but very noisy domain
  • 1/3 - under-performing network
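
A sketch of that CER cutoff, assuming exclude_df_youtube_1120.csv carries per-utterance cer and wav_path columns (the column names are my guess at the layout):

import pandas as pd

df = pd.read_csv('exclude_df_youtube_1120.csv')
to_exclude = df[df['cer'] > 0.4]                  # assumed column name
print('share above cutoff: {:.0%}'.format(len(to_exclude) / len(df)))
to_exclude['wav_path'].to_csv('youtube_1120_exclude_paths.csv', index=False)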

exclude_df_youtube_1120.zip

For discussion, I would refer to this issue: #5 (comment)

@buriy @snakers4

Here's the complete exclude file for v5:
https://github.com/snakers4/open_stt/releases/download/v0.5-beta/public_exclude_file_v5.tar.gz

exclude_df_youtube_1120.zip

Both .csv files have paths from the youtube_1120 dataset. What is the difference between them?

Does exclude_df_youtube_1120.zip have more files than public_exclude_file_v5.tar.gz, or is it the same?

UPD: grep public_youtube1120/ public_exclude_file_v5.csv | wc -l gives 191020 lines, but exclude_df_youtube_1120.csv has 541872 lines
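
To pin down the difference, comparing the path sets directly should settle it. A sketch, assuming the wav path is the first column in both CSVs:

import csv

def path_set(fname, prefix=''):
    with open(fname) as f:
        return {row[0] for row in csv.reader(f) if row[0].startswith(prefix)}

v5 = path_set('public_exclude_file_v5.csv', prefix='public_youtube1120/')
df_1120 = path_set('exclude_df_youtube_1120.csv')
print('only in exclude_df_youtube_1120:', len(df_1120 - v5))
print('only in v5:', len(v5 - df_1120))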