ProtTrans



ProtTrans provides state-of-the-art pre-trained models for proteins. ProtTrans was trained on thousands of GPUs from Summit and hundreds of Google TPUs using various Transformer models.

Have a look at our paper, ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing, for more information about our work.


ProtTrans Attention Visualization


This repository will be updated regularly with new pre-trained models for proteins, as part of supporting the bioinformatics community in general, and COVID-19 research specifically, through our Accelerate SARS-CoV-2 research with transfer learning using pre-trained language modeling models project.

Table of Contents

  • Models Availability
  • Usage
  • Expected Results
  • Community and Contributions
  • Have a question?
  • Found a bug?
  • Requirements
  • Team
  • Sponsors
  • License
  • Citation

⌛️  Models Availability

| Model | PyTorch |
| --- | --- |
| ProtT5-XL-BFD | Download |
| ProtBert-BFD | Config - Model - Vocab |
| ProtBert | Config - Model - Vocab |
| ProtAlbert | Config - Model - SPM |
| ProtXLNet | Config - Model - SPM |
| ProtElectra-Generator | coming soon |
| ProtElectra-Discriminator | coming soon |
| ProtTXL | coming soon |
| ProtTXL-BFD | coming soon |

🚀  Usage

How to use ProtTrans:

  • 🧬  Feature Extraction (FE):
    Please check: Embedding Section. More information coming soon. A minimal embedding sketch follows this list.

  • 💥  Fine Tuning (FT):
    Please check: Fine Tuning Section. More information coming soon.

  • ⚗️  Protein Sequence Generation:
    Please check: Generate Section. More information coming soon.

  • 📈  Benchmark:
    Please check: Benchmark Section. More information coming soon.
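
As a quick illustration of feature extraction (FE), the sketch below embeds a single protein sequence with ProtBert-BFD through the Hugging Face Transformers API. It is a minimal sketch, not the notebooks from the Embedding section: the Rostlab/prot_bert_bfd checkpoint name, the toy sequence, and the mean-pooling step are assumptions made here for illustration.

```python
# Minimal feature-extraction sketch (assumes: torch and transformers are installed,
# and that the Rostlab/prot_bert_bfd checkpoint is the model you want to use).
import re
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert_bfd", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert_bfd")
model.eval()

# ProtBert expects space-separated amino acids; rare residues (U, Z, O, B) map to X.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy example sequence
spaced = " ".join(re.sub(r"[UZOB]", "X", sequence))

inputs = tokenizer(spaced, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Per-residue embeddings: (batch, residues + special tokens, hidden size 1024).
residue_embeddings = outputs.last_hidden_state
# One simple per-protein embedding: mean over residue positions, excluding special tokens.
protein_embedding = residue_embeddings[0, 1:-1].mean(dim=0)
print(protein_embedding.shape)  # torch.Size([1024])
```

Fine-tuning (FT) follows the usual Transformers workflow on top of the same checkpoints; the Fine Tuning section holds the notebooks with the exact setups.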

📊  Expected Results

  • 🧬  Secondary Structure Prediction (Q3):

| Model | CASP12 | TS115 | CB513 |
| --- | --- | --- | --- |
| ProtT5-XL-BFD | 77 | 85 | 84 |
| ProtBert-BFD | 76 | 84 | 83 |
| ProtBert | 75 | 83 | 81 |
| ProtAlbert | 74 | 82 | 79 |
| ProtXLNet | 73 | 81 | 78 |
| ProtElectra-Generator | 73 | 78 | 76 |
| ProtElectra-Discriminator | 74 | 81 | 79 |
| ProtTXL | 71 | 76 | 74 |
| ProtTXL-BFD | 72 | 75 | 77 |

  • 🧬  Secondary Structure Prediction (Q8):

| Model | CASP12 | TS115 | CB513 |
| --- | --- | --- | --- |
| ProtT5-XL-BFD | 66 | 74 | 71 |
| ProtBert-BFD | 65 | 73 | 70 |
| ProtBert | 63 | 72 | 66 |
| ProtAlbert | 62 | 70 | 65 |
| ProtXLNet | 62 | 69 | 63 |
| ProtElectra-Generator | 60 | 66 | 61 |
| ProtElectra-Discriminator | 62 | 69 | 65 |
| ProtTXL | 59 | 64 | 59 |
| ProtTXL-BFD | 60 | 65 | 60 |

  • 🧬  Membrane-bound vs Water-soluble (Q2):

| Model | DeepLoc (FE) | DeepLoc (FT) | Prediction |
| --- | --- | --- | --- |
| ProtT5-XL-BFD | 91 | coming soon | coming soon |
| ProtBert-BFD | 89 | 91 | Online Prediction |
| ProtBert | 89 | 91 | coming soon |
| ProtAlbert | 88 | coming soon | coming soon |
| ProtXLNet | 87 | coming soon | coming soon |
| ProtElectra-Generator | 85 | coming soon | coming soon |
| ProtElectra-Discriminator | 86 | coming soon | coming soon |
| ProtTXL | 85 | coming soon | coming soon |
| ProtTXL-BFD | 86 | coming soon | coming soon |

  • 🧬  Subcellular Localization (Q10):

| Model | DeepLoc (FE) | DeepLoc (FT) |
| --- | --- | --- |
| ProtT5-XL-BFD | 77 | coming soon |
| ProtBert-BFD | 74 | 78 |
| ProtBert | 74 | 79 |
| ProtAlbert | 74 | coming soon |
| ProtXLNet | 68 | coming soon |
| ProtElectra-Generator | 59 | coming soon |
| ProtElectra-Discriminator | 70 | coming soon |
| ProtTXL | 66 | coming soon |
| ProtTXL-BFD | 65 | coming soon |
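
For context on the two DeepLoc columns above: the (FE) numbers keep the language model frozen and train only a small supervised predictor on the extracted embeddings, while the (FT) numbers fine-tune the language model itself on the task. The sketch below is only a hedged illustration of the FE setting; the logistic-regression classifier and the random toy data are assumptions for illustration, not the predictor behind the reported numbers.

```python
# Hedged illustration of the feature-extraction (FE) setting: a frozen language model
# produces per-protein embeddings, and only a small classifier is trained on top.
# The classifier choice and the random toy data below are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 1024))    # stand-in for per-protein embeddings (frozen LM)
y_train = rng.integers(0, 2, size=100)    # e.g. 0 = water-soluble, 1 = membrane-bound
X_test = rng.normal(size=(20, 1024))

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = clf.predict(X_test)
print(predictions[:5])
```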

❤️  Community and Contributions

The ProtTrans project is an open-source project supported by various partner companies and research institutions. We are committed to sharing all our pre-trained models and knowledge. We would be more than happy if you could help us by sharing new pre-trained models, fixing bugs, proposing new features, improving our documentation, spreading the word, or supporting our project.

📫  Have a question?

We are happy to hear your questions on our ProtTrans issues page! If you have a private question or want to cooperate with us, you can always reach out to us directly via our RostLab email.

🤝  Found a bug?

Feel free to file a new issue with a respective title and description in the ProtTrans repository. If you have already found a solution to your problem, we would love to review your pull request!

✅  Requirements

For protein feature extraction or fine-tuning our pre-trained models, PyTorch and the Transformers library from Hugging Face are required. For model visualization, you need to install the BertViz library; a minimal visualization sketch follows below.
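
For the visualization part, the sketch below renders ProtBert attention heads with BertViz inside a Jupyter notebook. It is a minimal sketch under assumptions: the Rostlab/prot_bert checkpoint name and the toy sequence are choices made here, and BertViz's head_view only renders in a notebook environment.

```python
# Minimal attention-visualization sketch (assumes: torch, transformers and bertviz are
# installed, and the code runs inside a Jupyter notebook so the view can render).
import re
import torch
from transformers import BertModel, BertTokenizer
from bertviz import head_view

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert", output_attentions=True)
model.eval()

spaced = " ".join(re.sub(r"[UZOB]", "X", "MKTAYIAKQR"))  # toy example sequence
inputs = tokenizer(spaced, return_tensors="pt")
with torch.no_grad():
    attention = model(**inputs).attentions  # one attention tensor per layer

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
head_view(attention, tokens)  # interactive attention view in the notebook
```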

🤵  Team

  • Technical University of Munich:
    Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rehawi, Burkhard Rost
  • Med AI Technology:
    Yu Wang
  • Google:
    Llion Jones
  • Nvidia:
    Tom Gibbs, Tamas Feher, Christoph Angerer
  • Seoul National University:
    Martin Steinegger
  • ORNL:
    Debsindhu Bhowmik

💰  Sponsors

Nvidia, Google, ORNL, Software Campus

📘  License

The ProtTrans pretrained models are released under the terms of the MIT License.

✏️  Citation

If you use this code or our pretrained models for your publication, please cite the original paper:

@article {Elnaggar2020.07.12.199554,
	author = {Elnaggar, Ahmed and Heinzinger, Michael and Dallago, Christian and Rehawi, Ghalia and Wang, Yu and Jones, Llion and Gibbs, Tom and Feher, Tamas and Angerer, Christoph and Steinegger, Martin and Bhowmik, Debsindhu and Rost, Burkhard},
	title = {ProtTrans: Towards Cracking the Language of Life{\textquoteright}s Code Through Self-Supervised Deep Learning and High Performance Computing},
	elocation-id = {2020.07.12.199554},
	year = {2020},
	doi = {10.1101/2020.07.12.199554},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Computational biology and bioinformatics provide vast data gold-mines from protein sequences, ideal for Language Models (LMs) taken from Natural Language Processing (NLP). These LMs reach for new prediction frontiers at low inference costs. Here, we trained two auto-regressive language models (Transformer-XL, XLNet) and two auto-encoder models (Bert, Albert) on data from UniRef and BFD containing up to 393 billion amino acids (words) from 2.1 billion protein sequences (22- and 112 times the entire English Wikipedia). The LMs were trained on the Summit supercomputer at Oak Ridge National Laboratory (ORNL), using 936 nodes (total 5616 GPUs) and one TPU Pod (V3-512 or V3-1024). We validated the advantage of up-scaling LMs to larger models supported by bigger data by predicting secondary structure (3-states: Q3=76-84, 8 states: Q8=65-73), sub-cellular localization for 10 cellular compartments (Q10=74) and whether a protein is membrane-bound or water-soluble (Q2=89). Dimensionality reduction revealed that the LM-embeddings from unlabeled data (only protein sequences) captured important biophysical properties governing protein shape. This implied learning some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC to larger data sets slightly reduced the gap between models trained on evolutionary information and LMs. Availability: https://github.com/agemagician/ProtTrans},
	URL = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554},
	eprint = {https://www.biorxiv.org/content/early/2020/07/21/2020.07.12.199554.full.pdf},
	journal = {bioRxiv}
}