OliverShang's Stars
SparksJoe/Prism
A Framework for Decoupling and Assessing the Capabilities of VLMs
yyf20001230/sashimi_photos
JFan1997/Awesome_PhD_Opportunities
This repository is used for advertising PhD recruitment opportunities. Contributions are welcome!
YueYANG1996/KnoBo
A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis
zju-vipa/MosaicKD
[NeurIPS-2021] Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
Hao840/manifold-distillation
Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation. NeurIPS 2022.
zou-group/textgrad
Automatic "Differentiation" via Text: using large language models to backpropagate textual gradients.
yzd-v/cls_KD
'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024)
xuanlinli17/large_vlm_distillation_ood
Distilling Large Vision-Language Model with Out-of-Distribution Generalizability (ICCV 2023)
vorobeevich/distillation-in-dg
Implementation of "Weight Averaging Improves Knowledge Distillation under Domain Shift" (ICCV 2023 OOD-CV Workshop)
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
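The retrieval idea in that description can be sketched without the library: embed the image and each candidate caption, then pick the caption whose embedding has the highest cosine similarity to the image's. The vectors below are hand-made placeholders, not outputs of the openai/CLIP encoders.

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings standing in for CLIP encoder outputs.
image_emb = [0.9, 0.1, 0.2]
text_embs = {
    "a photo of a cat": [0.8, 0.2, 0.1],
    "a photo of a dog": [0.1, 0.9, 0.3],
}

# CLIP-style zero-shot retrieval: argmax over similarity scores.
best = max(text_embs, key=lambda t: cosine(image_emb, text_embs[t]))
print(best)  # the caption whose embedding best aligns with the image's
```

The real model learns these embeddings so that matching image–text pairs score high; the selection step is exactly this argmax.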
vinid/safety-tuned-llamas
ICLR 2024 paper showing properties of safety tuning and exaggerated safety.
sthalles/SimCLR
PyTorch implementation of SimCLR: A Simple Framework for Contrastive Learning of Visual Representations
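SimCLR's objective can be illustrated in miniature: two augmented views of the same image should be more similar than views of different images, enforced by a softmax cross-entropy over similarities (NT-Xent). A toy version for one anchor, with made-up vectors in place of learned representations:

```python
import math

def nt_xent(anchor, positive, negatives, tau=0.5):
    # NT-Xent for a single anchor: -log of the softmax weight of its positive,
    # where logits are cosine similarities scaled by temperature tau.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))

anchor    = [1.0, 0.0]              # representation of view 1 of an image
positive  = [0.9, 0.1]              # view 2 of the same image
negatives = [[0.0, 1.0], [-1.0, 0.0]]  # views of other images in the batch
loss = nt_xent(anchor, positive, negatives)
```

Training pulls the positive pair together and pushes negatives apart, so a well-aligned positive yields a lower loss than a misaligned one.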
facebookresearch/dinov2
PyTorch code and models for the DINOv2 self-supervised learning method.
opencsapp/opencsapp.github.io
Open CS Application | Open-source CS application resources (开源CS申请)
wzhouad/Contra-OOD
Source code for paper "Contrastive Out-of-Distribution Detection for Pretrained Transformers", EMNLP 2021
renyi-ai/drfrankenstein
LukasRinder/normalizing-flows
Implementation of normalizing flows in TensorFlow 2 including a small tutorial.
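The core of any normalizing flow is the change-of-variables formula: a bijection maps data to a simple base distribution, and the data log-density is the base log-density plus the log absolute determinant of the Jacobian. A one-dimensional affine flow (my own minimal example, not code from this repository) makes that concrete:

```python
import math

def affine_flow_logprob(x, scale=2.0, shift=1.0):
    # Flow: z = (x - shift) / scale maps data to a standard-normal base.
    # Change of variables: log p(x) = log N(z; 0, 1) + log |dz/dx|
    z = (x - shift) / scale
    log_base = -0.5 * (z * z + math.log(2 * math.pi))
    log_det = -math.log(abs(scale))  # dz/dx = 1/scale
    return log_base + log_det

# By construction this equals the log-density of N(shift, scale^2) at x.
```

The same formula, with the Jacobian determinant computed layer by layer, is what stacked flow layers (planar, RealNVP, Glow, etc.) evaluate.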
kuangliu/pytorch-cifar
95.47% on CIFAR10 with PyTorch
VincentStimper/resampled-base-flows
Normalizing Flows with a resampled base distribution
MoeinSorkhei/glow2
Full-Glow: Fully conditional Glow for more realistic image generation
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
hpcaitech/Open-Sora
Open-Sora: Democratizing Efficient Video Production for All
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
lucidrains/vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
tamlhp/awesome-machine-unlearning
Awesome Machine Unlearning (A Survey of Machine Unlearning)
ydyjya/Awesome-LLM-Safety
A curated list of safety-related papers, articles, and resources on Large Language Models (LLMs), giving researchers, practitioners, and enthusiasts insight into the safety implications, challenges, and advancements surrounding these models.
5yearsKim/Conditional-Normalizing-Flow
A conditional generative model (normalizing flow) and style-transfer experiments using it
OpenDevin/OpenDevin
🐚 OpenDevin: Code Less, Make More
AI-secure/DBA
DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020)