Mael-zys/T2M-GPT

training from scratch

deeptimhe opened this issue · 2 comments

For VQVAE, I have reproduced the test results from the released checkpoints, but I cannot train one with similar performance by myself.

I use the following command:
```shell
python3 train_vq.py \
    --batch-size 256 \
    --lr 2e-4 \
    --total-iter 300000 \
    --lr-scheduler 200000 \
    --nb-code 512 \
    --down-t 2 \
    --depth 3 \
    --dilation-growth-rate 3 \
    --out-dir output \
    --dataname t2m \
    --vq-act relu \
    --quantizer ema_reset \
    --loss-vel 0.5 \
    --recons-loss l1_smooth \
    --exp-name VQVAE
```

After training, I only got FID ≈ 0.11 for both the `net_last` and `net_best_fid` checkpoints.
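For context, FID here is the Fréchet distance between the feature distributions of real and reconstructed motions, so the number depends on which evaluation split the features come from. A minimal sketch of the standard FID formula with NumPy/SciPy (the function name and feature shapes are illustrative, not the repo's evaluation code):

```python
import numpy as np
from scipy import linalg

def compute_fid(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between two feature sets of shape (n_samples, dim).

    FID = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 * sqrt(C_a @ C_b))
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)

    # Matrix square root of the covariance product; drop tiny imaginary
    # components introduced by numerical error.
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```

Because the validation and test splits have different feature statistics (different means and covariances), the same checkpoint will report different FID values on each split.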

Did you evaluate on the test set? I got the same result as the paper.

Thank you. It seems my number came from the validation set.