Pinned Repositories
advanced_algorithms
Antibody
assignment_1
assignment_1_solutions
Solutions for programming assignment 1
assignment_2
assignment_3
assignment_4
AttnPacker
Code and Pre-Trained Models for "AttnPacker: An end-to-end deep learning method for protein side-chain packing"
EDGEPack
protein-docking
Code for "A Deep Learning Framework for Flexible Docking and Design"
MattMcPartlon's Repositories
MattMcPartlon/AttnPacker
Code and Pre-Trained Models for "AttnPacker: An end-to-end deep learning method for protein side-chain packing"
MattMcPartlon/protein-docking
Code for "A Deep Learning Framework for Flexible Docking and Design"
MattMcPartlon/EDGEPack
MattMcPartlon/Antibody
MattMcPartlon/advanced_algorithms
MattMcPartlon/assignment_4
MattMcPartlon/assignment_5
MattMcPartlon/assignment_7
MattMcPartlon/assignment_8
MattMcPartlon/assignment_9
MattMcPartlon/En-transformer
Implementation of E(n)-Transformer, which extends the ideas of Welling's E(n)-Equivariant Graph Neural Network to attention (a toy sketch of this idea appears after the repository list below)
MattMcPartlon/graph-transformer
MattMcPartlon/hw4
MattMcPartlon/hw5
MattMcPartlon/hw6
MattMcPartlon/InteractionTVGL
MattMcPartlon/midterm
MattMcPartlon/Networks
MattMcPartlon/OmegaFold
OmegaFold Release Code
MattMcPartlon/pdb-utils
MattMcPartlon/perceiver-pytorch
Implementation of Perceiver, General Perception with Iterative Attention, in Pytorch
MattMcPartlon/programming_assignment_4
MattMcPartlon/programming_assignment_5
MattMcPartlon/programming_assignment_6
MattMcPartlon/programming_assignment_7
MattMcPartlon/programming_assignment_8
MattMcPartlon/protein_learningv2
MattMcPartlon/RaptorX-3DModeling
Note that the current version does not include search of very large metagenome data. For some proteins, metagenome data is important. We will update this as soon as possible.
MattMcPartlon/se3-transformer-pytorch
Implementation of SE3-Transformers for Equivariant Self-Attention, in Pytorch. This specific repository is geared towards integration with eventual Alphafold2 replication.
MattMcPartlon/x-transformers
A simple but complete full-attention transformer with a set of promising experimental features from various papers
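As referenced in the En-transformer entry above, several of these repositories concern equivariant attention. Below is a minimal, self-contained PyTorch sketch of the idea that description names: combining dot-product attention over node features (biased by invariant pairwise distances) with a coordinate update built from attention-weighted relative position vectors, so the layer is equivariant to rotations and translations. This is a toy illustration written for this listing, not code from En-transformer, se3-transformer-pytorch, or any other repository here; the class name ToyEnAttention and all layer sizes are made up for the example.

```python
import torch
import torch.nn as nn

class ToyEnAttention(nn.Module):
    """Toy single-head E(n)-equivariant attention layer.

    Node features are updated with ordinary dot-product attention whose logits
    are biased by squared pairwise distances (rotation/translation invariant).
    Coordinates are updated with attention-weighted *relative* position vectors,
    so the coordinate update rotates/translates together with the input.
    """

    def __init__(self, dim: int, hidden: int = 16):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # squared pairwise distance -> scalar attention bias (invariant)
        self.dist_bias = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, 1))
        # squared pairwise distance -> scalar gate for the coordinate update
        self.coor_gate = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, 1))

    def forward(self, feats: torch.Tensor, coors: torch.Tensor):
        # feats: (batch, n, dim), coors: (batch, n, 3)
        q, k, v = self.to_q(feats), self.to_k(feats), self.to_v(feats)
        rel = coors.unsqueeze(2) - coors.unsqueeze(1)          # (b, n, n, 3), equivariant
        dist2 = (rel ** 2).sum(dim=-1, keepdim=True)           # (b, n, n, 1), invariant
        logits = torch.einsum('bid,bjd->bij', q, k) / q.shape[-1] ** 0.5
        logits = logits + self.dist_bias(dist2).squeeze(-1)    # distance-aware attention
        attn = logits.softmax(dim=-1)
        feats_out = torch.einsum('bij,bjd->bid', attn, v)      # invariant feature update
        gate = self.coor_gate(dist2)                           # (b, n, n, 1)
        coors_out = coors + (attn.unsqueeze(-1) * gate * rel).sum(dim=2)
        return feats_out, coors_out


if __name__ == "__main__":
    layer = ToyEnAttention(dim=32)
    feats, coors = torch.randn(2, 10, 32), torch.randn(2, 10, 3)
    new_feats, new_coors = layer(feats, coors)
    print(new_feats.shape, new_coors.shape)  # (2, 10, 32) and (2, 10, 3)
```

The full repositories above add much more (multi-head attention, neighbor sparsity, edge features, deeper distance conditioning), but the invariant-feature / equivariant-coordinate split shown here is the core idea their descriptions refer to.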