self-attention
There are 334 repositories under the self-attention topic.
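Most repositories below build on scaled dot-product self-attention (Vaswani et al., 2017), where every token attends to every other token via softmax(QK^T / sqrt(d_k)) V. A minimal single-head PyTorch sketch for orientation (no masking or multi-head logic; names and sizes are illustrative):

import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # x: (batch, seq_len, d_model); w_*: (d_model, d_k) projection matrices
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)  # (batch, seq, seq)
    return F.softmax(scores, dim=-1) @ v                    # (batch, seq, d_k)

x = torch.randn(2, 10, 64)
w = lambda: torch.randn(64, 64) / 8
out = self_attention(x, w(), w(), w())  # -> (2, 10, 64)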
datawhalechina/leedl-tutorial
Hung-yi Lee's Deep Learning Tutorial (《李宏毅深度学习教程》, recommended by Prof. Hung-yi Lee 👍, known as the "Apple Book" 🍎). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases
zhouhaoyi/Informer2020
The official repository for "Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting" (AAAI 2021).
cmhungsteve/Awesome-Transformer-Attention
A comprehensive paper list on Vision Transformers and attention, including papers, code, and related websites
PetarV-/GAT
Graph Attention Networks (https://arxiv.org/abs/1710.10903)
Diego999/pyGAT
PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
gordicaleksa/pytorch-GAT
My implementation of the original GAT paper (Veličković et al.). I've additionally included the playground.py file for visualizing the Cora dataset, GAT embeddings, an attention mechanism, and entropy histograms. I've supported both Cora (transductive) and PPI (inductive) examples!
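For orientation on the three GAT repositories above: a Graph Attention layer scores each edge with a shared attention mechanism, e_ij = LeakyReLU(a^T [W h_i || W h_j]), normalizes the scores over each node's neighborhood with a softmax, and aggregates the transformed neighbor features. A minimal single-head, dense-adjacency sketch (simplified relative to the official and community implementations; sizes are illustrative):

import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    # Minimal single-head GAT layer (dense adjacency), after Veličković et al. 2017.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Parameter(torch.empty(2 * out_dim, 1))
        nn.init.xavier_uniform_(self.a)

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) 0/1 adjacency with self-loops
        z = self.W(h)                                          # (N, out_dim)
        e = torch.cat([z.unsqueeze(1).expand(-1, z.size(0), -1),
                       z.unsqueeze(0).expand(z.size(0), -1, -1)], dim=-1) @ self.a
        e = F.leaky_relu(e.squeeze(-1), 0.2)                   # (N, N) attention logits
        e = e.masked_fill(adj == 0, float('-inf'))             # attend to neighbors only
        return F.elu(F.softmax(e, dim=-1) @ z)                 # aggregate neighbor features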
github/CodeSearchNet
Datasets, tools, and benchmarks for representation learning of code.
microsoft/DeBERTa
The implementation of DeBERTa
NVlabs/MambaVision
[CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone
speedinghzl/CCNet
CCNet: Criss-Cross Attention for Semantic Segmentation (TPAMI 2020 & ICCV 2019).
DirtyHarryLYL/Transformer-in-Vision
Recent Transformer-based CV and related works.
The-AI-Summer/self-attention-cv
Implementations of various self-attention mechanisms for computer vision; an ongoing repository.
Separius/awesome-fast-attention
A list of efficient attention modules
brightmart/bert_language_understanding
Pre-training of deep bidirectional Transformers for language understanding (BERT), with pre-training of TextCNN
NVlabs/FasterViT
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention
xxxnell/how-do-vits-work
(ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
prakashpandey9/Text-Classification-Pytorch
Text classification using deep learning models in PyTorch
kaituoxu/Speech-Transformer
A PyTorch implementation of Speech Transformer, an End-to-End ASR with Transformer network on Mandarin Chinese.
daiquocnguyen/Graph-Transformer
Universal Graph Transformer Self-Attention Networks (TheWebConf / WWW 2022; PyTorch and TensorFlow)
jayparks/transformer
A PyTorch implementation of "Attention Is All You Need" and "Weighted Transformer Network for Machine Translation"
kaushalshetty/Structured-Self-Attention
A Structured Self-attentive Sentence Embedding
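The model behind this repository (Lin et al., 2017) computes a multi-hop attention matrix A = softmax(W_s2 tanh(W_s1 H^T)) over RNN hidden states H and returns M = A H as the sentence embedding. A minimal sketch, with d_a and the number of hops as illustrative defaults (the paper also adds a penalty ||A A^T - I||_F^2 to diversify the hops, omitted here):

import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuredSelfAttention(nn.Module):
    # A = softmax(W_s2 . tanh(W_s1 . H^T)) from Lin et al. 2017; sizes are illustrative.
    def __init__(self, hidden_dim, d_a=64, n_hops=4):
        super().__init__()
        self.W_s1 = nn.Linear(hidden_dim, d_a, bias=False)
        self.W_s2 = nn.Linear(d_a, n_hops, bias=False)

    def forward(self, H):
        # H: (batch, seq_len, hidden_dim) RNN outputs
        A = F.softmax(self.W_s2(torch.tanh(self.W_s1(H))), dim=1)  # (batch, seq, hops)
        return A.transpose(1, 2) @ H  # (batch, hops, hidden_dim) sentence embedding M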
NVlabs/FAN
Official PyTorch implementation of Fully Attentional Networks
WenjieDu/SAITS
The official PyTorch implementation of the paper "SAITS: Self-Attention-based Imputation for Time Series" (https://arxiv.org/abs/2202.08516): a fast, state-of-the-art (SOTA) deep-learning model for imputing multivariate time series containing NaN missing values.
leaderj1001/Stand-Alone-Self-Attention
Implementing Stand-Alone Self-Attention in Vision Models using PyTorch
Tixierae/deep_learning_NLP
Keras, PyTorch, and NumPy Implementations of Deep Learning Architectures for NLP
binli123/dsmil-wsi
DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Image
NVlabs/GCVit
[ICML 2023] Official PyTorch implementation of Global Context Vision Transformers
naver-ai/rope-vit
[ECCV 2024] Official PyTorch implementation of RoPE-ViT "Rotary Position Embedding for Vision Transformer"
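RoPE (Su et al., 2021) encodes position by rotating consecutive channel pairs of the queries and keys by position-dependent angles before the attention dot product; rope-vit adapts this to vision inputs. A minimal 1D sketch of the basic mechanism (the repository's 2D/axial variants differ):

import torch

def rope(x):
    # Apply rotary position embedding to x: (batch, seq_len, dim), dim even.
    # Each consecutive channel pair is rotated by pos * theta_i,
    # with theta_i = 10000^(-2i/dim) as in Su et al. 2021 (RoFormer).
    b, s, d = x.shape
    pos = torch.arange(s, dtype=torch.float32).unsqueeze(-1)            # (seq, 1)
    theta = 10000 ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)  # (d/2,)
    angles = pos * theta                                                # (seq, d/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out  # applied to queries and keys before the attention dot product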
jw9730/tokengt
[NeurIPS'22] Tokenized Graph Transformer (TokenGT), in PyTorch
fudan-zvg/SOFT
[NeurIPS 2021 Spotlight] & [IJCV 2024] SOFT: Softmax-free Transformer with Linear Complexity
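SOFT replaces the softmax with a Gaussian kernel plus a low-rank decomposition to reach linear complexity. As a simpler stand-in for the general idea of softmax-free, kernelized attention, here is a generic linear-attention sketch in the style of Katharopoulos et al. (2020); this is not SOFT's actual method:

import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # Kernelized linear attention (Katharopoulos et al. 2020 style):
    # approximates softmax(QK^T)V by phi(Q)(phi(K)^T V), linear in sequence length.
    q, k = F.elu(q) + 1, F.elu(k) + 1       # positive feature map phi
    kv = k.transpose(-2, -1) @ v            # (batch, d, d_v), computed once
    z = q @ k.sum(dim=-2).unsqueeze(-1)     # (batch, seq, 1) normalizer
    return (q @ kv) / (z + eps)             # (batch, seq, d_v)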
aravindsankar28/DySAT
Representation learning on dynamic graphs using self-attention networks
wangxiao5791509/MultiModal_BigModels_Survey
[MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models
wenwenyu/MASTER-pytorch
Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021)
alohays/awesome-visual-representation-learning-with-transformers
Awesome Transformers (self-attention) in Computer Vision
kushalj001/pytorch-question-answering
Important paper implementations for Question Answering using PyTorch
cbaziotis/neat-vision
Neat (Neural Attention) Vision is a framework-agnostic visualization tool for the attention mechanisms of deep-learning models on Natural Language Processing (NLP) tasks.