sfzhang15/SFD

Can you share the caffemodel with the corrected prototxt?

Hi @sfzhang15, in this issue you mentioned that the fc6 padding should be 1, not 3, but the caffemodel you uploaded to Baidu Yun is still based on padding 3. I would like to know what the performance is when the network architecture matches the one published in the paper. Could you upload a trained caffemodel that uses padding 1?
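
For reference, this is roughly the layer definition I mean. It is only a sketch: I am assuming fc6 is a plain 3x3 convolution on pool5, and the num_output value is just a placeholder, so the actual fields in the released prototxt may differ.

```
# Hypothetical excerpt of the fc6 layer (illustrative, not the released prototxt).
# With kernel_size: 3 and no dilation, pad: 1 preserves the spatial size,
# which is why pad: 3 looks inconsistent with the paper.
layer {
  name: "fc6"
  type: "Convolution"
  bottom: "pool5"   # assumed input, as in VGG-based detectors
  top: "fc6"
  convolution_param {
    num_output: 1024   # placeholder value
    kernel_size: 3
    pad: 1             # the uploaded model appears to use pad: 3
  }
}
```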

On another note, I saw that the solver.prototxt you uploaded has iter_size: 1. Based on the paper, if training is done with 2 GPUs, shouldn't it be corrected to iter_size: 2, since the batch size in train.prototxt is 8?

Thank you~

@cassie101
OK, I will find a pretrained model and share it with you as soon as possible.
If you train with 4 GPUs, iter_size=1 and batch_size=8, so the total batch size is 32; if you train with 2 GPUs, iter_size=2 and batch_size=8, so the total batch size is also 32.
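
In other words, the relevant solver.prototxt line would look roughly like this (just a sketch with the other solver fields omitted; the effective batch size is batch_size from train.prototxt x iter_size x number of GPUs):

```
# solver.prototxt fragment (other fields unchanged).
# Effective batch size = batch_size (train.prototxt) x iter_size x num_gpus.

# 4 GPUs: 8 x 1 x 4 = 32
iter_size: 1

# 2 GPUs: use this line instead, 8 x 2 x 2 = 32
# iter_size: 2
```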

@cassie101
Here is the model you requested.