Questions on inference latency
lvniqi opened this issue · 3 comments
Hello~
I'm trying to reproduce the PingAn GammaLab & PingAn Cloud team's work, which ranks No. 1 on the inference latency benchmark. This work uses this model to evaluate inference time.
I notice that this model is not the original ResNet50, and its network architecture is quite different from ResNet50.
So could you help me confirm their actual network architecture?
And I'm wondering whether it is allowed to use lightweight networks like MobileNet here?
Thanks for raising this! Just to clarify and make sure we are on the same page, DAWNBench v1 doesn't require a specific architecture for any of its categories or tasks. The name of the model is purely meant as a human-readable description of the model. All that matters is that the submitted model satisfies the specified accuracy threshold. For example, if MobileNet reached the 93% top-5 validation accuracy, we would welcome a submission.
Going back to the PingAn GammaLab & PingAn Cloud team's work, can you be more specific about the discrepancies you're concerned about? ResNet has become a bit of a suitcase word for the original ResNet models as well as minor variants. Briefly looking at the prototxt, it looks like there are 4 sections, each with the appropriate number of convolutional layers.
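For reference, here is a minimal sketch of where the original ResNet-50's 50-layer count comes from, using the standard [3, 4, 6, 3] bottleneck layout; this is generic arithmetic, not something read from the submitted prototxt:

```python
# Canonical ResNet-50 layer count (standard architecture, not the submission).

# Number of bottleneck blocks in each of the 4 stages of the original ResNet-50.
blocks_per_stage = [3, 4, 6, 3]

# Each bottleneck block contains three convolutions (1x1, 3x3, 1x1).
conv_layers_in_stages = sum(blocks_per_stage) * 3  # 48

# Add the 7x7 stem convolution and the final fully connected layer.
total_layers = conv_layers_in_stages + 1 + 1

print(total_layers)  # -> 50
```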
@codyaustun Thanks for your explanation!
I understand now that we can use any architecture we want in this benchmark.
Still, I think it would be better if the author changed the model name from ResNet50 to something else, because this model has more than 50 hidden layers.
Because of the name, researchers may assume that the computational complexity of this model is the same as the original ResNet50's. But in this model, the first 7x7 convolution layer is replaced with three 3x3 convolution layers, and compared to the original ResNet, the number of channels in every layer is reduced. This makes the computational complexity much smaller, as the sketch below illustrates.
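To make the comparison concrete, here is a minimal sketch of how the per-layer cost can be compared by counting multiply-accumulate (MAC) operations. The channel counts in the replacement stem below are placeholders for illustration only; the real values would need to be read from the submission's prototxt:

```python
def conv_macs(kernel, c_in, c_out, h_out, w_out):
    """MACs for a standard convolution layer (bias ignored)."""
    return kernel * kernel * c_in * c_out * h_out * w_out

# Original ResNet-50 stem: 7x7, 3 -> 64 channels, stride 2 on a 224x224 input.
original_stem = conv_macs(7, 3, 64, 112, 112)

# Hypothetical replacement stem with three 3x3 convolutions and reduced
# channels (placeholder values, NOT taken from the actual model).
replacement_stem = (
    conv_macs(3, 3, 16, 112, 112)
    + conv_macs(3, 16, 16, 112, 112)
    + conv_macs(3, 16, 32, 112, 112)
)

print(f"original stem:    {original_stem / 1e6:.1f} M MACs")     # ~118.0 M
print(f"replacement stem: {replacement_stem / 1e6:.1f} M MACs")  # ~92.1 M
```

Plugging in the actual channel counts from the prototxt, layer by layer, would quantify how far the model's cost is from the original ResNet50.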
@lvniqi I believe @GammaLab-HPC or @Kay-Tian was the author of PR #104, so I'll leave it to them to consider updating the model name based on your feedback and close this issue for now.