SolidShen's Stars
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image.
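CLIP's zero-shot prediction boils down to a cosine-similarity softmax between one image embedding and a set of caption embeddings. A minimal NumPy sketch of that scoring step, with made-up 4-d vectors standing in for the embeddings a real CLIP model would produce:

```python
import numpy as np

def clip_scores(image_emb, text_embs, logit_scale=100.0):
    """Score candidate captions against one image, CLIP-style:
    L2-normalize embeddings, take cosine similarities, softmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * txt @ img          # one logit per caption
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return exp / exp.sum()

# Stand-in embeddings; the real model outputs e.g. 512-d vectors.
image = np.array([0.9, 0.1, 0.0, 0.1])
captions = np.array([
    [0.8, 0.2, 0.1, 0.0],   # close to the image embedding
    [0.0, 0.9, 0.3, 0.1],   # far from it
])
probs = clip_scores(image, captions)
```

In the actual repo the embeddings come from `model.encode_image` and `model.encode_text`; only the normalization-and-softmax scoring is shown here.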
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
jacobgil/pytorch-grad-cam
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
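The Grad-CAM family the library implements reduces, at its core, to weighting a conv layer's feature maps by the global-average-pooled gradients of the target score, summing, and applying ReLU. A self-contained sketch of that core step on toy arrays (the library itself obtains activations and gradients via hooks on a real model):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM step: weight each (H, W) feature map by its
    channel's average gradient, sum the maps, keep positive evidence."""
    weights = gradients.mean(axis=(1, 2))             # (C,) channel weights
    cam = np.tensordot(weights, activations, axes=1)  # (H, W) weighted sum
    return np.maximum(cam, 0.0)                       # ReLU

# Toy (C=3, H=4, W=4) activations and gradients.
rng = np.random.default_rng(0)
acts = rng.standard_normal((3, 4, 4))
grads = rng.standard_normal((3, 4, 4))
cam = grad_cam(acts, grads)
```

The resulting map is typically upsampled to the input resolution and overlaid on the image; the repo handles that plus the many CAM variants.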
mlfoundations/open_clip
An open source implementation of CLIP.
voxel51/fiftyone
Refine high-quality datasets and visual AI models
IDEA-Research/GroundingDINO
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
ondyari/FaceForensics
Github of the FaceForensics dataset
microsoft/GLIP
Grounded Language-Image Pre-training
hendrycks/test
Measuring Massive Multitask Language Understanding | ICLR 2021
tylin/coco-caption
SunzeY/AlphaCLIP
[CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
mlfoundations/wise-ft
Robust fine-tuning of zero-shot models
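WiSE-FT's robust fine-tuning is weight-space ensembling: linearly interpolate every parameter between the zero-shot and fine-tuned checkpoints. A sketch on toy two-parameter "checkpoints" (real ones are model state dicts; `wise_ft` is a hypothetical helper name, not the repo's API):

```python
def wise_ft(zero_shot, fine_tuned, alpha=0.5):
    """Interpolate each parameter: (1 - alpha) * zero-shot + alpha * fine-tuned.
    alpha=0 recovers the zero-shot model, alpha=1 the fine-tuned one."""
    return {name: [(1 - alpha) * z + alpha * f
                   for z, f in zip(zero_shot[name], fine_tuned[name])]
            for name in zero_shot}

# Toy checkpoints with two named parameters each.
zs = {"w": [1.0, 0.0], "b": [0.0]}
ft = {"w": [3.0, 2.0], "b": [1.0]}
merged = wise_ft(zs, ft, alpha=0.5)
```

The paper's observation is that intermediate alphas often beat both endpoints on distribution shift while keeping in-distribution accuracy.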
SCLBD/DeepfakeBench
A comprehensive benchmark for deepfake detection.
mapooon/SelfBlendedImages
[CVPR 2022 Oral] Detecting Deepfakes with Self-Blended Images https://arxiv.org/abs/2204.08376
akoksal/LongForm
Reverse Instructions: generating instruction-tuning data from corpus examples.
ethz-spylab/rlhf_trojan_competition
Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024.
reds-lab/Narcissus
Official implementation of the CCS'23 Narcissus clean-label backdoor attack: with only three images, it poisons a face-recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
anthropics/sleeper-agents-paper
Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training".
AllanYangZhou/nfn
NF-Layers for constructing neural functionals.
Unispac/Circumventing-Backdoor-Defenses
Code repository for the paper "Revisiting the Assumption of Latent Separability for Backdoor Defenses" (ICLR 2023).
nctu-eva-lab/AntifakePrompt
Official implementation of AntifakePrompt.
SolidShen/RIPPLE_official
KaiyuanZh/OrthogLinearBackdoor
[IEEE S&P 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks
Lyz1213/BadEdit
RU-System-Software-and-Security/BppAttack
Reality-Defender/Research-DD-VQA
DPamK/BadAgent
ZhangZhuoSJTU/LINT
Megum1/ODSCAN
[IEEE S&P'24] ODSCAN: Backdoor Scanning for Object Detection Models
reds-lab/BEEAR
Official GitHub repository for our paper "BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models".