How to train/decode on reverberant speech?
kevinmchu opened this issue · 1 comment
kevinmchu commented
I'd like to train a model on reverberant speech using alignments generated from the corresponding anechoic data. Currently, I'm doing something similar to TIMIT_joint_training_liGRU_fbank.cfg: I use the reverberant TIMIT recipe to extract the features and the anechoic recipe for lab_folder and lab_graph. I noticed that decode_dnn.sh uses lab_graph to generate the lattices rather than the graph built from the reverberant acoustic model.
What is the easiest way to specify using the anechoic alignments and reverberant graph?
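For reference, here is a sketch of the lab block I currently have in mind for the training section of the cfg (paths and the data_name are placeholders, not my actual directories; the field layout follows the usual PyTorch-Kaldi cfg files):

```ini
[dataset1]
data_name = TIMIT_rev_tr
lab = lab_name=lab_cd
	lab_folder=/path/to/anechoic/exp/tri3_ali
	lab_opts=ali-to-pdf
	lab_count_file=auto
	lab_data_folder=/path/to/reverberant/data/train
	lab_graph=/path/to/anechoic/exp/tri3/graph
```

That is, lab_folder points to the anechoic alignments, but since lab_graph also comes from the anechoic recipe, decode_dnn.sh ends up decoding with the anechoic graph, whereas I'd like it to use the graph built from the reverberant acoustic model.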
kevinmchu commented
I just wanted to follow up and ask if anyone has suggestions.