Pinned Repositories
CLIP-FlanT5
Training code for CLIP-FlanT5
continual-learning
cross_modal_adaptation
Cross-modal few-shot adaptation with CLIP
digital_chirality
Testing the chirality of digital imaging operations.
leco
Learning with Ever-Changing Ontology
linzhiqiu-old.github.io
open_active
Open World Active Learning
t2v_metrics
Evaluating text-to-image/video/3D models with VQAScore
visual_gpt_score
VisualGPTScore for visio-linguistic reasoning
QPyTorch
Low Precision Arithmetic Simulation in PyTorch
linzhiqiu's Repositories
linzhiqiu/cross_modal_adaptation
Cross-modal few-shot adaptation with CLIP
linzhiqiu/t2v_metrics
Evaluating text-to-image/video/3D models with VQAScore
linzhiqiu/digital_chirality
Testing the chirality of digital imaging operations.
linzhiqiu/visual_gpt_score
VisualGPTScore for visio-linguistic reasoning
linzhiqiu/CLIP-FlanT5
Training code for CLIP-FlanT5
linzhiqiu/continual-learning
linzhiqiu/open_active
Open World Active Learning
linzhiqiu/leco
Learning with Ever-Changing Ontology
linzhiqiu/linzhiqiu-old.github.io
linzhiqiu/modern-resume-theme
A modern static resume template and theme. Powered by Jekyll and GitHub pages.
linzhiqiu/16-811
Math Fundamentals for Robotics - CMU
linzhiqiu/avalanche
Avalanche: an End-to-End Library for Continual Learning.
linzhiqiu/clear-benchmark-new.github.io
linzhiqiu/clear-benchmark.github.io
linzhiqiu/CLEAR-Challenge
linzhiqiu/cmu-vision.github.io
linzhiqiu/debiased-pseudo-labeling
[CVPR 2022] Debiased Learning from Naturally Imbalanced Pseudo-Labels
linzhiqiu/dino
PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO
linzhiqiu/examples
A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
linzhiqiu/HRNet-Semantic-Segmentation
The OCR approach is rephrased as Segmentation Transformer (https://arxiv.org/abs/1909.11065). An official implementation of semantic segmentation with HRNet (https://arxiv.org/abs/1908.07919).
linzhiqiu/HTML4Vision
A simple HTML visualization tool for computer vision research
linzhiqiu/linzhiqiu.github.io
Zhiqiu Lin's site
linzhiqiu/LLaVA
[NeurIPS 2023 Oral] Visual Instruction Tuning: LLaVA (Large Language-and-Vision Assistant) built towards GPT-4V level capabilities.
linzhiqiu/llm-can-optimize-vlm.github.io
linzhiqiu/mmselfsup
OpenMMLab Self-Supervised Learning Toolbox and Benchmark
linzhiqiu/nips_policy_learning
NeurIPS Policy Learning Scripts
linzhiqiu/PerceptualSimilarity
LPIPS metric. pip install lpips
linzhiqiu/vision-language-models-are-bows
Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR 2023
linzhiqiu/vl_finetuning
Few-shot Finetuning of CLIP
linzhiqiu/why-winoground-hard
Code for 'Why is Winoground Hard? Investigating Failures in Visuolinguistic Compositionality', EMNLP 2022