lxg2015/faceboxes

CReLU and BatchNorm

yulizhou opened this issue · 3 comments

Hi, I'm reading the paper and am curious about your implementation.

The CReLU layer seems to be defined but never used; instead, the code reimplements it inline during layer construction.
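For reference, CReLU concatenates the input with its negation along the channel dimension before applying ReLU, so the output has twice the input's channels. A minimal PyTorch sketch of such a standalone module (not necessarily identical to the class defined in this repo):

```python
import torch
import torch.nn as nn

class CReLU(nn.Module):
    """Concatenated ReLU: relu(cat(x, -x)) along the channel dim."""
    def forward(self, x):
        # Doubles the channel count, so the following conv layer
        # must expect 2x the input channels.
        return torch.relu(torch.cat([x, -x], dim=1))
```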

Also, the paper includes batch norm layers, but they aren't implemented here.

What was the reasoning behind this design? Better performance?

Thanks

Either way is fine for CReLU. With the BN layers, FaceBoxes should get better results; I simply forgot to add them. Thanks.

@yulizhou @lxg2015
Hi, I fixed the network design in this repo; small changes, such as swapping conv for conv_bn_relu, gave better performance.
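For concreteness, a hedged sketch of what such a conv_bn_relu block typically looks like in PyTorch (the exact kernel sizes, strides, and padding in that repo may differ):

```python
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, kernel_size=3, stride=1, padding=1):
    # Conv followed by the batch norm the paper calls for, then ReLU.
    # bias=False because BatchNorm2d supplies its own affine shift.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```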

@xiongzihua
Hi, I think the predict code is wrong: we shouldn't resize the original image.
See this repo; its output is closer to the paper's.
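To illustrate the point about resizing, a minimal sketch of running inference at the image's original resolution; `net` is a placeholder for a loaded FaceBoxes model, and the preprocessing here is an assumption, not the repo's actual pipeline:

```python
import cv2
import torch

def predict_original_scale(net, image_path):
    img = cv2.imread(image_path)  # keep the original (h, w); no cv2.resize
    x = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        loc, conf = net(x)  # anchors should be generated for (h, w) directly
    return loc, conf
```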