This is basically unusable — many of the required data files are missing, and there's no logic in the code to generate them.
Closed this issue · 8 comments
IEMOCAP_PATH = "data/iemocap.pickle"
IEMOCAP_BALANCED_PATH = "data/iemocap_balanced.pickle"
IEMOCAP_BALANCED_ASR_PATH = "data/iemocap_balanced_asr.pickle"
IEMOCAP_FULL_PATH = "data/IEMOCAP_full_release"
LINGUISTIC_DATASET_PATH = "data/linguistic_features.npy"
LINGUISTIC_LABELS_PATH = "data/linguistic_labels.npy"
ACOUSTIC_FEATURES_PATH = "data/acoustic_features.npy"
ACOUSTIC_LABELS_PATH = "data/acoustic_labels.npy"
LINGUISTIC_DATASET_ASR_PATH = "data/linguistic_features_asr.npy"
LINGUISTIC_LABELS_ASR_PATH = "data/linguistic_labels_asr.npy"
SPECTROGRAMS_FEATURES_PATH = "data/spectrograms_features.npy"
SPECTROGRAMS_LABELS_PATH = "data/spectrograms_labels.npy"
MAPPING_ID_TO_SAMPLE_PATH = "data/id_to_sample.json"
How are these files generated??
What's this?
How is iemocap_balanced_asr.pickle generated? I can't find the logic that generates this file anywhere in the code.
Hi, I would appreciate English :)
I will add the info in docstrings to the module and let you know.
I added the comments to github.com/PiotrSobczak/speech-emotion-recognition/blob/master/speech_emotion_recognition/data_loader.py.
"""
1.Download IEMOCAP dataset from https://sail.usc.edu/iemocap/
2.Use github.com/didi/delta/blob/master/egs/iemocap/emo/v1/local/python/mocap_data_collect.py to get dataset pickle
3.Use create_balanced_iemocap() to get balanced version of iemocap dataset containing 4 classes
4.Use load_<DATASET_TYPE>_dataset to load a specific dataset.
*The first time you use load functions it will be created from pickle. This might take a while...
*The next time you use load functions you will load cached .npy files for faster loading
"""
I'm closing the issue; reopen if you need more info. :)
thanks ~
@hanling6889580 I moved the info to the README — it turned out it's quite important :D
But there's a new problem: how do I get the files embeddings_array.numpy and word_to_index.pickle? 😭 I can't find any way to get them...
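Files with names like these usually hold a vocabulary-to-index mapping and the matching embedding matrix built from pretrained word vectors. A hedged sketch, assuming the common GloVe plain-text format ("word v1 v2 ..."); the source file name, dimensions, and the function name `build_embedding_files` are all assumptions, not anything from this repo:

```python
import pickle
import numpy as np

def build_embedding_files(vectors_txt_path, word_to_index_path, embeddings_path):
    """Build a word->index pickle and an embeddings .npy from a
    plain-text vectors file where each line is: word v1 v2 ... vN."""
    word_to_index = {}
    vectors = []
    with open(vectors_txt_path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            parts = line.rstrip().split(" ")
            word_to_index[parts[0]] = i          # row i of the matrix
            vectors.append(np.asarray(parts[1:], dtype=np.float32))
    embeddings = np.stack(vectors)               # shape: (vocab, dim)
    with open(word_to_index_path, "wb") as f:
        pickle.dump(word_to_index, f)
    np.save(embeddings_path, embeddings)
    return word_to_index, embeddings
```

Whether this matches the exact layout the repo expects would need to be confirmed against the code that consumes these files.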