Pinned Repositories
DeepSeek-MoE
DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
Anim-Director
The code for the SIGGRAPH Asia 2024 paper "Anim-Director: A Large Multimodal Model Powered Agent for Controllable Animation Video Generation"
Cognitive-Visual-Language-Mapper
The code and datasets for our ACL 2024 main conference paper "Cognitive Visual-Language Mapper: Advancing Multimodal Comprehension with Enhanced Visual Knowledge Alignment"
UMOE-Scaling-Unified-Multimodal-LLMs
The code for "Uni-MoE: Scaling Unified Multimodal Models with Mixture of Experts"
VisionGraph
The benchmark and datasets for the ICML 2024 paper "VisionGraph: Leveraging Large Multimodal Models for Graph Theory Problems in Visual Context"
LingCloud
Attaching human-like eyes to large language models. The code for the IEEE TMM paper "LMEye: An Interactive Perception Network for Large Language Model"
Multimodal-Context-Reasoning
A multimodal context reasoning approach that introduces multi-view semantic alignment information via prefix tuning.
NDCR
A Neural Divide-and-Conquer Reasoning Framework for Multimodal Reasoning on Linguistically Complex Text and Similar Images
Training-LLMs-Towards-Holistic-Learning
Training Language Models from Fragmented Learning to Holistic Learning
yunxinli.github.io
Introduction to Yunxin Li