tanakataiki/ssd_kerasV2

training process is slow

E12dward opened this issue · 12 comments

I think the Generator class runs on the CPU; maybe this is why the training process is so slow.
Is there any solution?
How much time does one epoch take on your computer?
Thanks

Is there any way to run the generator on GPUs?
I also think the generator may need more workers to speed up generation.

https://keras.io/ja/models/sequential/#fit_generator
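
The generator itself still runs on the CPU, but fit_generator can overlap batch preparation with the GPU training step. Below is a minimal, self-contained sketch (not from this repo; the Sequence and model are dummies) showing the relevant knobs, `workers`, `use_multiprocessing`, and `max_queue_size`:

```python
import numpy as np
import keras
from keras.utils import Sequence

# Hypothetical stand-in for the repo's Generator class: a keras.utils.Sequence
# can be duplicated across worker processes safely, so batch preparation
# (augmentation, box encoding) no longer blocks the GPU between steps.
class DummyBatches(Sequence):
    def __init__(self, n_batches=50, batch_size=16):
        self.n_batches, self.batch_size = n_batches, batch_size

    def __len__(self):
        return self.n_batches

    def __getitem__(self, idx):
        x = np.random.rand(self.batch_size, 32, 32, 3).astype("float32")
        y = np.random.rand(self.batch_size, 10).astype("float32")
        return x, y

model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=(32, 32, 3)),
    keras.layers.Dense(10)])
model.compile(optimizer="adam", loss="mse")

# workers / use_multiprocessing / max_queue_size are the knobs that let the
# CPU-side generator keep up with the GPU.
model.fit_generator(DummyBatches(),
                    epochs=1,
                    workers=4,
                    use_multiprocessing=True,
                    max_queue_size=10)
```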

similar issue

#16

Hi, when using the pretrained weights file (ssdmobilenetv2) shared on Google Drive, the training loss decreases while the validation loss stays almost constant. After testing on VOC2007, I found the mAP is only 53.
Have you encountered this problem? What should be done about the training strategy?

Is it the same as my repo?
I haven't tested mAP, but can you share or PR an mAP tester and weights?
I might find something if possible.
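
Not from this repo, but for reference, here is a minimal sketch of the VOC-style per-class average precision behind an mAP number, assuming detections have already been matched to ground truth (IoU >= 0.5) so that each detection has a confidence score and a true/false-positive flag:

```python
import numpy as np

def voc_ap(scores, is_true_positive, num_ground_truth):
    """Average precision for one class from matched detections.

    scores:            confidence of each detection
    is_true_positive:  1 if the detection matched an unmatched GT box, else 0
    num_ground_truth:  number of GT boxes of this class in the test set
    """
    order = np.argsort(-np.asarray(scores))          # sort by descending confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / max(num_ground_truth, 1)
    precision = tp_cum / np.maximum(tp_cum + fp_cum, 1e-12)

    # VOC2007 11-point interpolation
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):
        p = precision[recall >= t].max() if np.any(recall >= t) else 0.0
        ap += p / 11.0
    return ap

# mAP is then the mean of voc_ap over all classes (20 for VOC).
```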

I would be glad to receive your answer, thanks!

The code is almost the same as your repo; maybe there is something different in SSD_training.ipynb. Perhaps you can find something wrong by running the training code.

Thanks, I can't immediately find anything, but I will someday.
I assume the difference would be the weight multiplier for each layer, or initialization details that Keras does not support, plus implementation details that go very deep. By the way, a pull request with details is always welcome!
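
As an illustration only (the values here are placeholders, not the repo's actual settings), this is the kind of per-layer detail in Keras that can differ between implementations and shift the final mAP:

```python
from keras.layers import Conv2D
from keras.initializers import TruncatedNormal
from keras.regularizers import l2

# Hypothetical layer: the stddev and weight-decay values are guesses, chosen
# only to show where per-layer initialization/regularization choices live.
conv = Conv2D(256, (3, 3), padding='same',
              kernel_initializer=TruncatedNormal(stddev=0.03),
              kernel_regularizer=l2(5e-4))
```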

@E12dward
Oh... is your Drive file broken?
I couldn't unzip the rar file.
Maybe the ipython notebook and weights are enough to share.

@tanakataiki
I am sorry; the link above should work.

Any way to speed up the training?