vit
There are 294 repositories under the vit topic.
lukas-blecher/LaTeX-OCR
pix2tex: Using a ViT to convert images of equations into LaTeX code.
cmhungsteve/Awesome-Transformer-Attention
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
towhee-io/towhee
Towhee is a framework dedicated to making neural data processing pipelines simple and fast.
hila-chefer/Transformer-Explainability
[CVPR 2021] Official PyTorch implementation of "Transformer Interpretability Beyond Attention Visualization", a novel method to visualize classifications made by Transformer-based networks.
BR-IDL/PaddleViT
PaddleViT: State-of-the-art Visual Transformer and MLP models for PaddlePaddle 2.0+
yitu-opensource/T2T-ViT
[ICCV 2021] Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet
roboflow/inference
A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
Yangzhangcst/Transformer-in-Computer-Vision
A paper list of some recent Transformer-based CV works.
sail-sg/Adan
Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models
open-compass/VLMEvalKit
Open-source evaluation toolkit for large vision-language models (LVLMs), supporting GPT-4V, Gemini, QwenVLPlus, 50+ HF models, and 20+ benchmarks
chinhsuanwu/mobilevit-pytorch
A PyTorch implementation of "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer"
v-iashin/video_features
Extract video features from raw videos using multiple GPUs. We support RAFT flow frames as well as S3D, I3D, R(2+1)D, VGGish, CLIP, and TIMM models.
zgcr/SimpleAICV_pytorch_training_examples
SimpleAICV: PyTorch training and testing examples.
vatz88/FFCSonTheGo
FFCS course registration made hassle-free for VITians. Search courses and visualize the timetable on the go!
gupta-abhay/pytorch-vit
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
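This entry names the original ViT paper. Its core idea, splitting an image into fixed 16×16 patches and linearly projecting each one into a token embedding, can be sketched in a few lines of NumPy (the dimensions and the random projection `W_e` here are illustrative stand-ins for learned weights, not any repository's actual code):

```python
import numpy as np

def patchify(img, patch=16):
    """Split an image of shape (H, W, C) into flattened non-overlapping patches."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0, "image dims must be divisible by patch size"
    gh, gw = H // patch, W // patch
    # Rearrange into a (gh*gw) grid of (patch, patch, C) blocks, then flatten each block.
    blocks = img.reshape(gh, patch, gw, patch, C).transpose(0, 2, 1, 3, 4)
    return blocks.reshape(gh * gw, patch * patch * C)

rng = np.random.default_rng(0)
img = rng.standard_normal((224, 224, 3))        # a dummy 224x224 RGB image
tokens = patchify(img)                          # (196, 768): 14x14 patches of 16*16*3 values
W_e = rng.standard_normal((768, 192)) * 0.02    # stand-in for the learned patch projection
embeddings = tokens @ W_e                       # (196, 192) patch embeddings fed to the encoder
print(tokens.shape, embeddings.shape)
```

In the full model these embeddings, plus a class token and position embeddings, are passed through a standard Transformer encoder.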
PaddlePaddle/PASSL
PASSL includes self-supervised image algorithms such as SimCLR, MoCo v1/v2, BYOL, CLIP, PixPro, SimSiam, SwAV, BEiT, and MAE, as well as foundational vision models such as Vision Transformer, DeiT, Swin Transformer, CvT, T2T-ViT, MLP-Mixer, XCiT, ConvNeXt, and PVTv2
megvii-research/RevCol
Official code for the papers "Reversible Column Networks" and "RevColv2"
eeyhsong/EEG-Transformer
A practical application of the Transformer (ViT) to 2-D physiological signal (EEG) classification tasks; it can also be tried on EMG, EOG, ECG, etc. Includes attention over both the spatial dimension (channel attention) and the temporal dimension. Common spatial pattern (CSP), an efficient feature-enhancement method, is implemented in Python.
qanastek/HugsVision
HugsVision is an easy-to-use Hugging Face wrapper for state-of-the-art computer vision
yaoxiaoyuan/mimix
Mimix: A Text Generation Tool and Pretrained Chinese Models
implus/mae_segmentation
Reproduction of semantic segmentation using a masked autoencoder (MAE)
PaddlePaddle/PLSC
Paddle large-scale classification tools, supporting ArcFace, CosFace, PartialFC, and data parallel + model parallel training. Models include ResNet, ViT, Swin, DeiT, CaiT, FaceViT, MoCo, MAE, ConvMAE, and CAE.
xmindflow/Awesome-Transformer-in-Medical-Imaging
[MedIA Journal] A comprehensive paper list on Vision Transformers/Attention in medical imaging, including papers, code, and related websites
hunto/LightViT
Official implementation for paper "LightViT: Towards Light-Weight Convolution-Free Vision Transformers"
zwcolin/EEG-Transformer
A ViT-based Transformer applied to multi-channel time-series EEG data for motor-imagery classification
kyegomez/NaViT
My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution"
vitjs/vit
🚀 React application framework inspired by UmiJS
kyegomez/Vit-RGTS
Open source implementation of "Vision Transformers Need Registers"
kamalkraj/Vision-Transformer
Vision Transformer using TensorFlow 2.0
jaehyunnn/ViTPose_pytorch
An unofficial implementation of ViTPose [Y. Xu et al., 2022]
rasbt/pytorch-memory-optim
This code repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post.
ssitvit/Code-Canvas
A hub for innovation through web development projects
s-chh/PyTorch-Vision-Transformer-ViT-MNIST-CIFAR10
Simplified PyTorch implementation of the Vision Transformer (ViT) for small datasets such as MNIST, FashionMNIST, SVHN, and CIFAR10.
daniel-code/TubeViT
An unofficial implementation of TubeViT in "Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning"
hunto/image_classification_sota
Training ImageNet / CIFAR models with SOTA strategies and techniques such as ViT, KD, Rep, etc.
uta-smile/TVT
Code of TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation, WACV 2023