Lidong Bing
Lidong Bing is a research director at Alibaba DAMO Academy, Singapore Office, where he is leading the multilingual NLP team of the Language Technology Lab.
Alibaba DAMO Academy, Singapore
Lidong Bing's Stars
DAMO-NLP-SG/Video-LLaMA
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
DAMO-NLP-SG/VideoLLaMA2
VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
DAMO-NLP-SG/Auto-Arena-LLMs
DAMO-NLP-SG/SeaExam
SeaExam: Benchmarking LLMs for Southeast Asian Languages with Human Exam Questions
Auto-Arena/Auto-Arena-LLMs
DAMO-NLP-SG/chain-of-knowledge
[ICLR2024] Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources
DAMO-NLP-SG/VCD
[CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding
DAMO-NLP-SG/contrastive-cot
Contrastive Chain-of-Thought Prompting
DAMO-NLP-SG/SeaLLMs
[ACL 2024 Demo] SeaLLMs - Large Language Models for Southeast Asia
SeaLLMs/SeaLLMs
SeaLLMs - Large Language Models for Southeast Asia
DAMO-NLP-SG/CLEX
[ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models
DAMO-NLP-SG/multilingual-safety-for-LLMs
[ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models"
DAMO-NLP-SG/LLM-Zoo
LLM Zoo collects information about various open- and closed-source LLMs
DAMO-NLP-SG/GPT4-as-DataAnalyst
Data and code for the paper "Is GPT-4 a Good Data Analyst?".
DAMO-NLP-SG/LLM-Data-Annotator
DAMO-NLP-SG/SSTuning
Code for ACL paper "Zero-Shot Text Classification via Self-Supervised Tuning"
DAMO-NLP-SG/LLM-Sentiment
[NAACL 2024] Data and code for our paper "Sentiment Analysis in the Era of Large Language Models: A Reality Check"
DAMO-NLP-SG/M3Exam
Data and code for paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models"
DAMO-NLP-SG/BGCA
[ACL 2023] Code and Data for "Bidirectional Generative Framework for Cross-domain Aspect-based Sentiment Analysis"
DAMO-NLP-SG/IE-E2H
Easy-to-Hard Learning for Information Extraction (ACL 2023 Findings)
DAMO-NLP-SG/PeerDA
Source code of "PeerDA: Data Augmentation via Modeling Peer Relation for Span Identification Tasks" (ACL23)
DAMO-NLP-SG/PMR
[NeurIPS 2023] Pre-training Machine-Reader (Instead of Masked Language Model) at Scale
DAMO-NLP-SG/MVCR
Source code of "Towards Robust Low-Resource Fine-Tuning with Multi-View Compressed Representation" (ACL23)
DAMO-NLP-SG/MT-LLaMA
Multi-task instruction-tuned LLaMA
AGI-Edgerunners/LLM-Adapters
Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models"
Yifan-Gao/Distractor-Generation-RACE
[AAAI 2019] Generating Distractors for Reading Comprehension Questions from Real Examinations
fuzihaofzh/most-cited-papers
Statistics on the most-cited papers of recent years at each conference
lixin4ever/TNet
Transformation Networks for Target-Oriented Sentiment Classification (ACL 2018)
thunlp/OpenNRE
An Open-Source Package for Neural Relation Extraction (NRE)
gaussic/text-classification-cnn-rnn
CNN/RNN Chinese text classification, based on TensorFlow