Issues
Will this project continue to be updated?
#23 opened by NEUdeep - 1
How can you get the meta files (.tex)?
> Thanks for your feedback. You can set `trainer.no_partial_bn = True` if the batch size is >= 6 on each GPU and retry; this will not affect accuracy. That module has a bug with distributed training, and we will fix it soon.
#17 opened by SceneRec - 1
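The reply quoted under #17 describes a configuration workaround. Below is a minimal, hypothetical sketch of how that flag might be applied; only the `no_partial_bn` attribute name and the >= 6 per-GPU batch-size threshold come from the comment, while the `Trainer` stand-in class and the batch-size variable are illustrative assumptions, not this repo's actual API.

```python
# Hypothetical sketch of the workaround quoted in #17.
# The real trainer comes from this repo's training script; a tiny
# stand-in class is used here purely for illustration.
class Trainer:
    def __init__(self):
        # Assume partial BN is enabled by default.
        self.no_partial_bn = False

per_gpu_batch_size = 8  # samples seen by each GPU (illustrative value)

trainer = Trainer()

# Per the maintainer's comment: if each GPU sees a batch of >= 6,
# disabling the partial-BN module side-steps the distributed-training
# bug without affecting accuracy.
if per_gpu_batch_size >= 6:
    trainer.no_partial_bn = True
```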
`AverageMeter` is not correct
#19 opened by CarolineCheng233 - 1
When training, the log file stops at 'save_dir: checkpoint/' and is never updated again
#20 opened by SceneRec - 1
Found Two fresh BUGs and a Solution
#21 opened by SceneRec - 1
Issues with PyTorch versions higher than 1.4.0
#18 opened by lininglouis - 2
About the model result
#15 opened by wwnbbd - 5
Pretrained model config file
#14 opened by wwnbbd - 4
About the multi-label
#13 opened by simobupt - 2
RuntimeError: DataLoader worker (pid(s) 1995171, 1996371) exited unexpectedly
#12 opened by zeng-lingyun - 0
Loss is zero when using single GPU
#11 opened by JaywongWang - 1
Feature extraction
#9 opened by Finspire13 - 1
forkserver
#7 opened by haooooooqi - 1
Evaluate on the test dataset during training?
#4 opened by HC-2016 - 1
meta_file when training?
#5 opened by simobupt - 2
Could you release trained models?
#3 opened by HC-2016 - 1
Can I run without GPU?
#2 opened by Li-Shu14 - 6
./train.sh for TSM stops at the first log line: Freezing BatchNorm2D except...
#1 opened by Amazingren