ISL-CV's Pinned Repositories
ChiTransformer
The official implementation of "ChiTransformer: Towards Reliable Stereo from Cues"
dino
PyTorch code for training Vision Transformers with the self-supervised learning method DINO
esvit
EsViT: Efficient self-supervised Vision Transformers
FLSL
PyTorch code for the self-supervised learning (SSL) method FLSL: Feature-Level Self-supervised Learning (NeurIPS 2024)
llm-compressor
Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM
moco-v3
PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057)
SADAJEM
Official code for the paper "Towards Bridging the Performance Gaps of Joint Energy-based Models"
sam
SAM: Sharpness-Aware Minimization (PyTorch)
udi
Official PyTorch implementation of "Unsqueeze [CLS] Bottleneck to Learn Rich Representations" (ECCV 2024)
ViTDet
Unofficial implementation of "Exploring Plain Vision Transformer Backbones for Object Detection"