number | Title | Speaker | Year | Keywords | Quality |
---|---|---|---|---|---|
1 | Pre-training, Instruction Tuning, Alignment, Specialization: On the Source of Large Language Model Abilities (bilibili) | Yao Fu, University of Edinburgh | 2023.02 | LLM, pre-training, instruction tuning, alignment, specialization | ★★★★★ |

number | Title | Conference/Journal + Year | Code | Keywords | Benefit for us |
---|---|---|---|---|---|
5 | BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs (paper) | arXiv 2023.07 | code | output with position | new setting |
4 | ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning (paper) | arXiv 2023.07 | demo | input with position | new setting |
3 | Kosmos-2: Grounding Multimodal Large Language Models to the World (paper) | arXiv 2023.06 | code | grounding | new setting |
2 | Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic (paper) | arXiv 2023.06 | code | both input and output with position | new setting |
1 | LLaVA: Large Language and Vision Assistant (paper) | arXiv 2023.04 | code | new dataset, novel method | the pioneering work |

number | Title | Conference/Journal + Year | Code | Keywords | Benefit for us |
---|---|---|---|---|---|
4 | InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models (paper) | arXiv 2023.06 | | distill | good idea |
3 | Self-Instruct: Aligning Language Models with Self-Generated Instructions (paper) | arXiv 2023.08 | | novel method for dataset generation | good idea |
2 | InstructEval: Towards Holistic Evaluation of Instruction-Tuned Large Language Models (paper) | arXiv 2023.06 | | | |
1 | Scaling Instruction-Finetuned Language Models | | | | |

number | Title | Conference/Journal + Year | Code | Keywords | Benefit for us |
---|---|---|---|---|---|
1 | Specializing Smaller Language Models towards Multi-Step Reasoning (paper) | ICML 2023 | | multi-step reasoning | template |