How to get alignment?
LLianJJun opened this issue · 8 comments
Hi~
Alignment information was obtained using a Tacotron2 or Transformer model, but it has been removed from the git repository. Could you please tell me why?
As you know, alignment information is required to use a database other than LJSpeech.
I didn't have time to write a note about how to extract alignment from Tacotron2, so I removed it in the new commit.
@xcmyz I have already done the alignment on my side using Kaldi, but how are the npy files generated? It looks like they were generated with the TensorFlow version of Tacotron2? Are they the ones in training_data?
I used the NVIDIA version of Tacotron2 to obtain the mel-spectrogram frames corresponding to each character.
OK, thanks. Is this npy in the training_data of the TensorFlow version of Tacotron2? Is it under audio, linear, or mels? This has confused me for a long time, thanks!!
I extracted it from the Tacotron2 model. For example:
- The NVIDIA Tacotron2 model takes characters as input and outputs a mel spectrogram;
- The location-sensitive attention in Tacotron2 outputs a matrix (shape: [length_mel, length_character]);
- This matrix contains the alignment between each character and the mel spectrogram (i.e., which mel-spectrogram frames each character attends to).
The duration here is just the number of frames corresponding to each character; for example, the duration of [h, a, t] would be [2, 3, 4]. (I used the PyTorch version of NVIDIA's Tacotron2.) See the sketch below.
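A minimal sketch of that frame-counting step, assuming the attention matrix has already been extracted from a trained Tacotron2 model; the function name and the hard toy alignment are illustrative, not from this repository:

```python
import numpy as np

def attention_to_duration(attention: np.ndarray) -> np.ndarray:
    """attention: [length_mel, length_character] matrix from Tacotron2's
    location-sensitive attention."""
    length_mel, length_character = attention.shape
    # Assign each mel frame to the character it attends to most strongly.
    winners = np.argmax(attention, axis=1)          # shape: [length_mel]
    # duration[i] = number of mel frames assigned to character i.
    duration = np.bincount(winners, minlength=length_character)
    return duration                                  # sums to length_mel

# Toy check: 9 frames over [h, a, t] with a hard [2, 3, 4] split.
toy = np.zeros((9, 3))
toy[0:2, 0] = 1.0  # 'h' attends to frames 0-1
toy[2:5, 1] = 1.0  # 'a' attends to frames 2-4
toy[5:9, 2] = 1.0  # 't' attends to frames 5-8
print(attention_to_duration(toy))  # -> [2 3 4]
```

The argmax makes the soft attention weights into a hard alignment, so the resulting durations always sum to the number of mel frames.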
OK, got it, thank you, much appreciated!