This code is derived from the repository for Autocoding Injury Narratives with BERT.
All code and models are released under the Apache 2.0 license. See the LICENSE file for more information.
For now, cite the arXiv paper:
@article{devlin2018bert,
  title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
  author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1810.04805},
  year={2018}
}
If we submit the paper to a conference or journal, we will update the BibTeX.
This is not an official Google product.
For help or issues using BERT, please submit a GitHub issue.
For personal communication related to BERT, please contact Jacob Devlin (jacobdevlin@google.com), Ming-Wei Chang (mingweichang@google.com), or Kenton Lee (kentonl@google.com).