Pinned Repositories
ATP-AMR
Source code for paper "ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs" @NAACL-2022
CGAN
Implementation of the cGAN anime-character avatar generation assignment from Hung-yi Lee's GAN course (training data included)
DnD-Transformer
Source code for paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation"
MLS
Source code of our paper "Focus on the Target’s Vocabulary: Masked Label Smoothing for Machine Translation" @ACL-2022
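The core idea of Masked Label Smoothing is to keep smoothing probability mass away from tokens that cannot be correct targets (e.g. source-language tokens in translation). A minimal sketch with a toy vocabulary; the function name and the flat-list distribution are illustrative, not the paper's actual implementation:

```python
def masked_label_smoothing(target_idx, vocab_size, masked_ids, eps=0.1):
    """Build a smoothed target distribution that assigns zero probability
    to masked (e.g. source-side) tokens instead of smoothing over them."""
    # Tokens eligible to receive smoothing mass: not masked, not the target.
    allowed = [i for i in range(vocab_size)
               if i not in masked_ids and i != target_idx]
    dist = [0.0] * vocab_size
    dist[target_idx] = 1.0 - eps
    for i in allowed:
        dist[i] = eps / len(allowed)   # spread eps only over allowed tokens
    return dist
```

Ordinary label smoothing would give every non-target token `eps / (vocab_size - 1)`; here the masked tokens stay at exactly zero and the rest absorb the mass.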
MMEvalPro
Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs
ParetoMNMT
Source code for paper "On the Pareto Front of Multilingual Neural Machine Translation" @ NeurIPS 2023
MIC
MMICL, a state-of-the-art VLM with in-context learning ability, from ICL, PKU
Awesome-Multimodal-Next-Token-Prediction
Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey
FastV
[ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models
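FastV's observation is that most visual tokens receive little attention after the first couple of layers, so they can be dropped at inference time. A minimal sketch of attention-ranked token pruning, assuming per-token attention scores have already been aggregated; all names are illustrative:

```python
def prune_visual_tokens(attn_scores, visual_idx, keep_ratio=0.5):
    """Keep only the top `keep_ratio` fraction of visual tokens,
    ranked by the attention score each token receives."""
    # Rank visual token positions by received attention, highest first.
    ranked = sorted(visual_idx, key=lambda i: attn_scores[i], reverse=True)
    k = max(1, int(len(visual_idx) * keep_ratio))
    kept = set(ranked[:k])
    # Preserve the original sequence order of the surviving tokens.
    return [i for i in visual_idx if i in kept]
```

Applied after an early layer, this shrinks the KV cache and FLOPs for every subsequent layer while leaving text tokens untouched.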
PCA-EVAL
[ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain
chenllliang's Repositories
chenllliang/DnD-Transformer
Source code for paper "A Spark of Vision-Language Intelligence: 2-Dimensional Autoregressive Transformer for Efficient Finegrained Image Generation"
chenllliang/CGAN
Implementation of the cGAN anime-character avatar generation assignment from Hung-yi Lee's GAN course (training data included)
chenllliang/MMEvalPro
Source code for MMEvalPro, a more trustworthy and efficient benchmark for evaluating LMMs
chenllliang/MLS
Source code of our paper "Focus on the Target’s Vocabulary: Masked Label Smoothing for Machine Translation" @ACL-2022
chenllliang/ParetoMNMT
Source code for paper "On the Pareto Front of Multilingual Neural Machine Translation" @ NeurIPS 2023
chenllliang/ATP-AMR
Source code for paper "ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs" @NAACL-2022
chenllliang/Gradient-Vaccine
(Unofficial) Implementation of ICLR 2021 paper "Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models"
chenllliang/FastV
Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models
chenllliang/Two-Stage-CAMRP
Source code for paper "A Two-Stage Method for Chinese AMR Parsing" @ CAMRP-2022 & CCL-2022
chenllliang/Off-Target-MNMT
Code For Paper "On the Off-Target Problem of Zero-Shot Multilingual Neural Machine Translation" @ACL2023
chenllliang/EnvInteractiveLMPapers
A collection of papers on methods that use language to interact with an environment, whether the real world, a simulated world, or the WWW (🏄).
chenllliang/Robust-Diffusion
Source code for project report "On The Robustness of Diffusion-Based Text-to-Image Generation" in CV-2022-Fall.
chenllliang/chenllliang
chenllliang/pkunlp-icler.github.io
chenllliang/Qwen2-VL
Qwen2-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud.
chenllliang/AMRBART
Code for our paper "Graph Pre-training for AMR Parsing and Generation" in ACL2022
chenllliang/AntiFraudChatBot
A simple prompt-based chat AI built on Wechaty and a fine-tuned NLP model
chenllliang/Awesome-Multimodal-Large-Language-Models
:sparkles::sparkles:Latest Papers and Datasets on Multimodal Large Language Models, and Their Evaluation.
chenllliang/camel
🐫 CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society
chenllliang/chenliang.github.io
Github Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
chenllliang/chenllliang.github.io
chenllliang/ChID_baseline
Baseline implementation for the course project of Computational Linguistics, Fall semester 2022–23
chenllliang/datasets
🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
chenllliang/FSQ-pytorch
A Pytorch Implementation of Finite Scalar Quantization
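Finite Scalar Quantization replaces a learned VQ codebook with per-dimension rounding: each latent dimension is bounded (e.g. via tanh) and rounded to one of a few integer levels, so the codebook of size `prod(levels)` is implicit. A minimal pure-Python sketch, simplified to odd level counts and omitting the straight-through gradient used in training; names are illustrative:

```python
import math

def fsq_quantize(z, levels):
    """Quantize each dimension of z to one of levels[i] integer values.
    Sketch for odd level counts; training uses a straight-through estimator."""
    assert len(z) == len(levels)
    quantized = []
    for zi, L in zip(z, levels):
        half = (L - 1) / 2            # tanh output scaled to span L levels
        bounded = half * math.tanh(zi)
        quantized.append(round(bounded))
    return quantized

def code_index(q, levels):
    """Map a quantized vector to a single integer code (mixed-radix),
    giving an index into the implicit codebook of size prod(levels)."""
    idx = 0
    for qi, L in zip(q, levels):
        idx = idx * L + (qi + (L - 1) // 2)  # shift -half..half to 0..L-1
    return idx
```

For example, `levels = [3, 3]` yields an implicit codebook of 9 entries with no codebook-collapse or commitment-loss machinery to tune.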
chenllliang/Llama-X
Open Academic Research on Improving LLaMA to SOTA LLM
chenllliang/LLM-Agent-Paper-List
The paper list of the 86-page paper "The Rise and Potential of Large Language Model Based Agents: A Survey" by Zhiheng Xi et al.
chenllliang/lmms-eval
Accelerating the development of large multimodal models (LMMs) with lmms-eval
chenllliang/Open-Sora-Plan
This project aims to reproduce Sora (OpenAI's text-to-video model); we hope the open-source community will contribute to it.
chenllliang/UltraEdit
Source code for "UltraEdit: Instruction-based Fine-Grained Image Editing at Scale"
chenllliang/xtreme
XTREME is a benchmark for the evaluation of the cross-lingual generalization ability of pre-trained multilingual models that covers 40 typologically diverse languages and includes nine tasks.