gulvarol/bsl1k

WLASL experiments, training log

chevalierNoir opened this issue · 2 comments

Hi,

Regarding the transfer-learning experiments on WLASL: training from the baseline Kinetics pre-trained model seems to converge very slowly. After 10 epochs, the training loss is still > 7, whereas it should be around 3.6 according to the provided training log. I downloaded the WLASL data from https://www.robots.ox.ac.uk/~vgg/research/bsl1k/data/info/wlasl.tar, fetched the pre-trained model with misc/pretrained_models/download.sh, and used the same running script:
python main.py \
  --checkpoint checkpoint/wlasl_i3d_pkinetics \
  --datasetname wlasl \
  --num-classes 2000 \
  --num_gpus 2 \
  --num_in_frames 64 \
  --pretrained misc/pretrained_models/kinetics.pth \
  --ram_data 1 \
  --workers 16

Thanks.

Hi,

Have you checked whether the videos in the figs folder look okay, i.e., that there was no problem with the preprocessing of the frames? You could also try debugging with --ram_data 0 to rule out preprocessing issues.
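
Something like the following can flag files that fail to decode. This is a minimal sketch, not part of the repo: the figs path, the .mp4 extension, and the OpenCV dependency are all assumptions about the preprocessing output.

import glob
import cv2  # requires opencv-python

# Hypothetical location/extension of the preprocessed videos; adjust to your setup.
for path in sorted(glob.glob("figs/**/*.mp4", recursive=True))[:10]:
    cap = cv2.VideoCapture(path)
    n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    ok, frame = cap.read()  # try decoding the first frame
    cap.release()
    if ok and n_frames > 0:
        h, w = frame.shape[:2]
        print(f"OK: {path} ({n_frames} frames, {w}x{h})")
    else:
        print(f"BROKEN: {path}")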

Thanks. It was an issue with the data. It works after redownloading.
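
For anyone hitting the same symptom: a quick way to catch a corrupt download before training is to verify that the tar archive is fully readable. A minimal sketch; the file name matches the download URL above, but the local path is an assumption.

import tarfile

# Assumes the archive was saved locally under the name from the download URL.
try:
    with tarfile.open("wlasl.tar") as tar:
        members = tar.getmembers()  # walks the whole archive, so truncation surfaces here
    print(f"Archive looks intact: {len(members)} members")
except (tarfile.ReadError, EOFError) as e:
    print(f"Corrupt archive, redownload it: {e}")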