primepake/wav2lip_288x288

Why can’t training start?

piwawa opened this issue · 4 comments

[screenshots attached]

Epoch 0 has been running for 2 days. I have filled in the paths correctly in train.txt and text.txt; the dataset has 330k videos and has been preprocessed. If I select a subset of the data from train.txt, training starts immediately. Why can't I train with the full dataset?

Set a breakpoint on the `continue` in the DataSet. The way the original project loads data, it always skips past exceptions, so it turns into a silent infinite loop.
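To illustrate the failure mode described above: a minimal sketch of a `__getitem__` that retries forever on bad samples (class and method names here are assumptions, not the project's actual code). If most of a large dataset fails to load, the bare `continue` loops silently and training never leaves epoch 0. Bounding the retries and logging makes the failure visible:

```python
import random

class ExampleDataset:  # hypothetical stand-in for the project's Dataset
    def __init__(self, samples):
        self.samples = samples  # e.g. preprocessed video/audio paths

    def load(self, sample):
        # Placeholder for frame/mel extraction; raises on corrupt data.
        if sample is None:
            raise ValueError("corrupt sample")
        return sample

    def __getitem__(self, idx):
        attempts = 0
        while True:
            try:
                return self.load(self.samples[idx])
            except Exception as exc:
                # The original pattern just `continue`s here forever.
                # Bounded retries surface the problem instead of hanging:
                attempts += 1
                if attempts >= 10:
                    raise RuntimeError(
                        f"sample keeps failing after {attempts} tries: {exc}"
                    )
                idx = random.randrange(len(self.samples))
```

With a cap like this, a dataset where preprocessing silently produced unreadable files raises immediately instead of spinning, which is exactly the symptom to check for at the suggested breakpoint.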


Syncnet trains now, but the loss stays stuck at 0.69.

Now `python hq_wav2lip_sam_train.py` has a problem training wav2lip again. With exactly the same dataset, it just hangs here: 4090 GPU, 200+ GB of RAM, stuck for several days with no response.

[screenshot attached]

Set a breakpoint on the `continue` in the DataSet. The way the original project loads data, it always skips past exceptions, so it turns into a silent infinite loop.

By "set a breakpoint", do you mean stepping through with a debugger, or just replacing it with `break`? Mine does the same thing: I trained syncnet with your stream-wav2lip, and after fixing the index error it stays stuck at epoch 0.