We found that in 2019 it took 44x less compute to train a neural network to AlexNet-level performance than it did in 2012 (Moore's Law would have yielded only an 11x reduction in cost over this period).

Going forward, we'll use this repository to publicly track state-of-the-art (SOTA) algorithmic efficiency. We're beginning by tracking training-efficiency SOTAs for image recognition and translation, each at two performance levels.
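As a rough check on those headline numbers (a back-of-the-envelope sketch, not the paper's exact methodology): the efficiency gain is the ratio of training compute at equal performance, using the compute figures from the first table below, while the Moore's Law baseline assumes price-performance doubles every two years over the roughly seven-year span.

```python
# Compute figures (tf-s/days) are taken from the 79.1% top-5 table below.
alexnet_compute = 3.1          # AlexNet, 2012
efficientnet_compute = 0.069   # EfficientNet, 2019

reduction_factor = alexnet_compute / efficientnet_compute
print(f"Algorithmic efficiency gain: {reduction_factor:.0f}x")  # ~45x, reported as 44x

# Moore's Law baseline: assume price-performance doubles every 2 years.
years = 7
moores_law_gain = 2 ** (years / 2)
print(f"Moore's Law baseline: {moores_law_gain:.0f}x")          # ~11x
```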
### 79.1% top 5 accuracy on ImageNet

Publication | Compute (tf-s/days) | Reduction Factor | Analysis | Date |
---|---|---|---|---|
AlexNet | 3.1 | 1 | AI and Efficiency | 6/1/2012 |
GoogLeNet | 0.71 | 4.3 | AI and Efficiency | 9/17/2014 |
MobileNet | 0.28 | 11 | AI and Efficiency | 4/17/2017 |
ShuffleNet | 0.15 | 21 | AI and Efficiency | 7/3/2017 |
ShuffleNet_v2 | 0.12 | 25 | AI and Efficiency | 6/30/2018 |
EfficientNet | 0.069 | 44 | EfficientNet | 5/28/2019 |
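For reference, the Reduction Factor column is simply the baseline entry's compute divided by each entry's compute. A minimal sketch reproducing the column for the table above (the last digit can differ slightly from the published factors, which were computed from unrounded compute values):

```python
# Reduction factor = compute of the baseline (AlexNet) / compute of the entry,
# using the rounded figures from the 79.1% top-5 table above.
entries = {
    "AlexNet": 3.1,
    "GoogLeNet": 0.71,
    "MobileNet": 0.28,
    "ShuffleNet": 0.15,
    "ShuffleNet_v2": 0.12,
    "EfficientNet": 0.069,
}
baseline = entries["AlexNet"]
for name, compute in entries.items():
    print(f"{name:15s} {baseline / compute:5.1f}x")
```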
### 92.9% top 5 accuracy on ImageNet

Publication | Compute (tf-s/days) | Reduction Factor | Analysis | Date |
---|---|---|---|---|
ResNet-50 | 17 | 1 | AI and Efficiency | 1/10/2015 |
EfficientNet | 0.75 | 10 | EfficientNet | 5/28/2019 |
### 34.8 BLEU on WMT-14 EN-FR

Publication | Compute (tf-s/days) | Reduction Factor | Analysis | Date |
---|---|---|---|---|
Seq2Seq (Ensemble) | 465 | 1 | AI and Compute | 1/10/2014 |
Transformer (Base) | 8 | 61 | Attention is all you need | 1/12/2017 |
### 39.92 BLEU on WMT-14 EN-FR

Publication | Compute (tf-s/days) | Reduction Factor | Analysis | Date |
---|---|---|---|---|
GNMT | 1620 | 1 | Attention is all you need | 1/26/2016 |
Transformer (Big) | 181 | 9 | Attention is all you need | 1/12/2017 |
## In order to make an entry, please submit a pull request in which you:
- Make the appropriate update to efficiency_sota.csv (a hypothetical example entry is sketched after this list)
- Make the appropriate update to the tables in this file, README.MD
- Add the relevant calculations/supporting information to the analysis folder. For examples of such calculations, see AI and Compute and Appendices A and B of Measuring the Algorithmic Efficiency of Neural Networks.
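As an illustration only, a new entry might look something like the sketch below. The column names and layout here are assumptions that mirror the tables above; check the actual header row of efficiency_sota.csv before submitting.

```python
# Illustrative sketch only: the column names below are assumptions that mirror
# the README tables; they are NOT guaranteed to match efficiency_sota.csv.
# Inspect the file's real header row before appending anything.
import csv

new_entry = {
    "benchmark": "79.1% top 5 accuracy on ImageNet",
    "publication": "YourModel",                 # hypothetical placeholder
    "compute_tf_s_days": 0.05,                  # hypothetical compute estimate
    "reduction_factor": round(3.1 / 0.05, 1),   # relative to the AlexNet baseline
    "analysis": "analysis/your_model.md",       # hypothetical supporting analysis
    "date": "1/1/2021",
}

with open("efficiency_sota.csv", "a", newline="") as f:
    csv.DictWriter(f, fieldnames=list(new_entry)).writerow(new_entry)
```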
## FAQ
- We're interested in tracking progress on additional benchmarks that have been of interest for many years and continue to be so. Please send thoughts or analysis on such benchmarks to danny@openai.com.
- ImageNet is the only training data source allowed for the vision benchmark. No human captioning, other images, or other data is allowed. Automated augmentation is ok.
- We currently place no restrictions on training data used for translation, but may split results by appropriate categories in the future.
- A tf-s/day is one teraflop/s of compute sustained for a day (a worked conversion is shown below).
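In raw operation counts, the conversion works out as follows (a quick worked example, not taken from the paper):

```python
# 1 tf-s/day = 1e12 FLOP/s sustained for 86,400 seconds = 8.64e16 FLOPs.
FLOPS_PER_TF_S_DAY = 1e12 * 86_400  # 8.64e16

def to_tf_s_days(total_flops: float) -> float:
    """Express a total training FLOP count in teraflop/s-days."""
    return total_flops / FLOPS_PER_TF_S_DAY

print(to_tf_s_days(8.64e16))        # 1.0
print(0.069 * FLOPS_PER_TF_S_DAY)   # ~5.96e15 total FLOPs for the EfficientNet entry above
```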
To cite this work, please use the following BibTeX entry:
```
@misc{hernandez2020efficiency,
    title = {Measuring the Algorithmic Efficiency of Neural Networks},
    author = {Danny Hernandez and Tom B. Brown},
    year = {2020},
    eprint = {2005.04305},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
```