HenryHZY
Interested in multimodal learning (vision-and-language) and parameter-efficient learning.
LaVi Lab led by Prof. Liwei Wang @ CSE, CUHK, Hong Kong
Pinned Repositories
annotated_deep_learning_paper_implementations
🧑🏫 59 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
Ask-Anything
[CVPR2024][VideoChatGPT] ChatGPT with video understanding! Also supports many more LMs, such as miniGPT4, StableLM, and MOSS.
Awesome-Multimodal-LLM
Research Trends in LLM-guided Multimodal Learning.
bolei_awesome_posters
CVPR and NeurIPS poster examples and templates. May we have in-person poster sessions soon!
cheatsheets-ai
Essential Cheat Sheets for deep learning and machine learning researchers https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5
CLEVA
[EMNLP 2023 Demo] CLEVA: Chinese Language Models EVAluation Platform
Conference-Acceptance-Rate
Acceptance rates for the major AI conferences
CVPR2020_Poster
Speech2Action CVPR Poster Source Code
VL-PET
[ICCV2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control"
HenryHZY's Repositories
HenryHZY/Awesome-Multimodal-LLM
Research Trends in LLM-guided Multimodal Learning.
HenryHZY/VL-PET
[ICCV2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control"
HenryHZY/glados_auto_checkin
HenryHZY/GlaDOS-auto-checkin
Automatic check-in for GLaDOS, supporting multiple accounts.
HenryHZY/annotated_deep_learning_paper_implementations
🧑🏫 59 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
HenryHZY/Ask-Anything
[CVPR2024][VideoChatGPT] ChatGPT with video understanding! Also supports many more LMs, such as miniGPT4, StableLM, and MOSS.
HenryHZY/bolei_awesome_posters
CVPR and NeurIPS poster examples and templates. May we have in-person poster sessions soon!
HenryHZY/cheatsheets-ai
Essential Cheat Sheets for deep learning and machine learning researchers https://medium.com/@kailashahirwar/essential-cheat-sheets-for-machine-learning-and-deep-learning-researchers-efb6a8ebd2e5
HenryHZY/CLEVA
[EMNLP 2023 Demo] CLEVA: Chinese Language Models EVAluation Platform
HenryHZY/Conference-Acceptance-Rate
Acceptance rates for the major AI conferences
HenryHZY/CVPR2020_Poster
Speech2Action CVPR Poster Source Code
HenryHZY/HawkEye
HenryHZY/helm
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110).
HenryHZY/latex_paper_writing_tips
Tips for Writing a Research Paper using LaTeX
HenryHZY/LLaMA-VID
Official Implementation for LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models
HenryHZY/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning: LLaVA (Large Language-and-Vision Assistant) built towards GPT-4V level capabilities.
HenryHZY/MyArxiv
HenryHZY/pydata-sphinx-theme
A clean, three-column Sphinx theme with Bootstrap for the PyData community
HenryHZY/sglang
SGLang is a structured generation language designed for large language models (LLMs). It makes your interaction with models faster and more controllable.
HenryHZY/ST-LLM
Official implementation of the paper "ST-LLM: Large Language Models Are Effective Temporal Learners"
HenryHZY/TempCompass
A benchmark to evaluate the temporal perception ability of Video LLMs
HenryHZY/Video-ChatGPT
"Video-ChatGPT" is a video conversation model capable of generating meaningful conversation about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representation. We also introduce a rigorous 'Quantitative Evaluation Benchmarking' for video-based conversational models.
HenryHZY/Visual-Table
Stay tuned!