MaxZanella's Stars
mdausort/Cytology-fine-tuning
Fine-tuning of foundation models with LoRA for cytology classification
batmanlab/Mammo-CLIP
Official PyTorch implementation of the MICCAI 2024 paper (early accept, top 11%) "Mammo-CLIP: A Vision Language Foundation Model to Enhance Data Efficiency and Robustness in Mammography"
SegoleneMartin/transductive-CLIP
vladan-stojnic/ZLaP
Code for "Label Propagation for Zero-shot Classification with Vision-Language Models" (CVPR 2024)
FereshteShakeri/Histo-TransCLIP
DelinteNicolas/UNRAVEL
This repository contains the code for UtiliziNg tRActography to uncoVEr muLti-fixel microstructure (UNRAVEL).
DelinteNicolas/UTracto
Package facilitating the tracking of sub-cortical U-Fibers
MaxZanella/transduction-for-vlms
[NeurIPS 2024 Spotlight] Transduction for Vision-Language Models (TransCLIP): code for the paper "Boosting Vision-Language Models with Transduction".
elkhouryk/RS-TransCLIP
Open-source code for the paper "Enhancing Remote Sensing Vision-Language Models for Zero-Shot Scene Classification"
Mehrdad-Noori/WATT
[NeurIPS 2024] WATT: Weight Average Test-Time Adaptation of CLIP
TrackingLaboratory/tracklab
A modular end-to-end tracking framework for research and development
VlSomers/bpbreid
[WACV 2023] A strong baseline for body part-based person re-identification
PierreLambert3/SQuaD-MDS-and-FItSNE-hybrid
sinahmr/NACLIP
PyTorch implementation of NACLIP from the paper "Pay Attention to Your Neighbours: Training-Free Open-Vocabulary Semantic Segmentation"
bethgelab/frequency_determines_performance
Code for the paper: "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance" [NeurIPS'24]
jusiro/CLAP
[CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP).
Mehrdad-Noori/TFS-ViT_Token-level_Feature_Stylization
[PR 2024] TFS-ViT: Token-Level Feature Stylization for Domain Generalization
GustavoVargasHakim/NCTTT
Noise Contrastive Test-Time Training
FereshteShakeri/FewShot-CLIP-Strong-Baseline
ULiege-driving/MSC-TTA
MaxZanella/CLIP-LoRA
An easy way to apply LoRA to CLIP. Implementation of the paper "Low-Rank Few-Shot Adaptation of Vision-Language Models" (CLIP-LoRA) [CVPRW 2024].
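As a rough illustration of the technique behind CLIP-LoRA, here is a minimal LoRA-over-linear sketch in PyTorch: a frozen pretrained layer is augmented with a trainable low-rank residual. The `LoRALinear` class, its hyperparameters, and the 512-dim projection are illustrative assumptions, not the repository's actual API.

```python
# Minimal LoRA sketch (illustrative only, not the CLIP-LoRA repository's code).
# A frozen linear layer W is augmented with a trainable low-rank update B @ A,
# so the adapted output is W(x) + (alpha / r) * B(A(x)).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank residual."""

    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.lora_A.weight, std=0.01)
        nn.init.zeros_(self.lora_B.weight)  # zero update: adapter starts as a no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))


# Usage: wrap, e.g., an attention projection inside a CLIP encoder.
proj = nn.Linear(512, 512)          # stands in for a pretrained projection
adapted = LoRALinear(proj, r=4)
out = adapted(torch.randn(2, 512))  # only lora_A / lora_B receive gradients
```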
MaxZanella/WSBIM2243---Mammography-processing
We present methods to preprocess mammograms, detect tumours, and segment malignant masses in the INbreast dataset.
MaxZanella/MTA
[CVPR 2024] Zero-shot method for Vision-Language Models based on a robust formulation of the MeanShift algorithm for Test-time Augmentation (MTA).
maxencewynen/PRLSegmentation
Repository with code for a 2023 SPIE abstract
maxencewynen/compressed_WMLS
maxencewynen/ConfLUNet
KyanChen/MakeMultiHeadNaive
A naive MultiheadAttention implementation to replace nn.MultiheadAttention in PyTorch
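For context, a naive multi-head attention module roughly equivalent to nn.MultiheadAttention (batch_first=True, no masking or dropout, separate q/k/v projections rather than a fused in-projection) can be sketched as follows; the class name and simplifications are assumptions for illustration, not necessarily the repository's code.

```python
# Naive multi-head attention sketch (illustrative, simplified).
# Keeping every intermediate tensor explicit makes per-head attention
# weights easy to inspect or modify, unlike the fused nn.MultiheadAttention.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NaiveMultiheadAttention(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, query, key, value):
        B, L, D = query.shape

        # Project and split into heads: (B, num_heads, seq_len, head_dim)
        def split(x, proj):
            return proj(x).view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(query, self.q_proj), split(key, self.k_proj), split(value, self.v_proj)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, L, D)
        return self.out_proj(out), attn  # attn holds per-head weights (B, H, L, L)


x = torch.randn(2, 10, 64)
out, weights = NaiveMultiheadAttention(64, 8)(x, x, x)
```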
chunmeifeng/DiffTPT
[ICCV 2023] Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning
OpenGVLab/CaFo
[CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners
gyhandy/Hierarchy-CLIP
[CVPR 2023] Improving Zero-shot Generalization and Robustness of Multi-modal Models