JCruan519/VM-UNet

loss.backward is too slow

Opened this issue · 2 comments

Hello, something seems to be wrong that makes `loss.backward` extremely slow: with a batch size of 5, one iteration takes 725 s.
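For reference, a minimal sketch of how to time the forward and backward passes separately (the tiny `nn.Conv2d` model, loss, and random tensors below are placeholders standing in for the actual VM-UNet model and data; `torch.cuda.synchronize()` is needed because CUDA kernels run asynchronously):

```python
import time
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Dummy stand-ins; replace with the real VM-UNet model, loss, and a batch from the loader.
model = nn.Conv2d(3, 1, kernel_size=3, padding=1).to(device)
criterion = nn.BCEWithLogitsLoss()
inputs = torch.randn(5, 3, 256, 256, device=device)   # batch size 5, as in the report
targets = torch.rand(5, 1, 256, 256, device=device)

if device == "cuda":
    torch.cuda.synchronize()          # make sure prior GPU work has finished
t0 = time.time()

outputs = model(inputs)
loss = criterion(outputs, targets)

if device == "cuda":
    torch.cuda.synchronize()
t_fwd = time.time()

loss.backward()

if device == "cuda":
    torch.cuda.synchronize()          # backward also launches asynchronous CUDA kernels
t_bwd = time.time()

print(f"forward: {t_fwd - t0:.3f}s  backward: {t_bwd - t_fwd:.3f}s")
```

If the backward time only looks huge when measured without synchronization, the slowdown may be elsewhere in the iteration rather than in `loss.backward` itself.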

Same issue here, any updates?

@tmax-cn @fengchuanpeng
Hello, with the batch size set to 32, training one epoch on an A6000 GPU takes about 100 s. I suggest first checking for environment issues (make sure the Mamba-related libraries are correctly installed), and then verifying whether the GPU is being fully utilized during training.