In this seminar, we will explore the fundamentals of conversational AI, from understanding the underlying technologies to hands-on demonstrations of building chatbot applications.
Title | Paper / Resource | Year | Why is it interesting? | Assignee | Recording | Slides |
---|---|---|---|---|---|---|
Large Language Models | GPT-2, GPT-4, InstructGPT | 2023 | A review of the latest and greatest LLMs. | @ganitk | zoom(oY$3#=&W) | slides |
Large Language Models | Llama 2 | 2023 | A review of the latest and greatest LLMs. | @Tal Ben Haim | zoom(cH?85a^6) | slides |
Hands-on vector DB (Pinecone) | Vector db summary, Code example | 2023 | A short tutorial on using open-source libraries to retrieve documents (see the retrieval sketch below). | Self-work | zoom(code) | slides |
LangChain & AutoGPT | Introduction to langchain, AutoGPT | 2023 | A tutorial on the latest and greatest APIs for conversational AI (see the LangChain sketch below). | @Sagi | zoom(code) | slides |
Adapter models | K-adapters, AdapterHub | 2020 | A model-specialization technique that trains only small components on top of the existing, frozen model layers (see the adapter sketch below). | @Shira | zoom | slides |
Parameter-efficient fine-tuning (PEFT) | LoRA, QLoRA, AdaLoRA | 2021-2023 | Fine-tuning techniques that do not require updating the full model. The idea behind LoRA is that fine-tuning a foundation model on a downstream task does not require updating all of its parameters: a low-rank matrix can represent the space of the downstream task with very high accuracy (see the LoRA sketch below). | @Mengi | zoom(code) | slides |
Retrieval-Augmented Language Modeling (RALM) | In-Context Retrieval-Augmented Language Models | 2023 | A method for incorporating retrieved documents into the LM's generation process (see the RALM sketch below). | AI21 Presenter | zoom(code) | slides |
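
The hands-on vector DB session is easiest to follow with a concrete retrieval loop in mind. Below is a minimal sketch of dense retrieval using the open-source `sentence-transformers` library and plain cosine similarity; the model name, toy documents, and `retrieve` helper are illustrative assumptions, and the session's actual code example (e.g. against Pinecone) may differ.

```python
# A minimal dense-retrieval sketch (assumption: sentence-transformers is
# installed; the seminar's own code example may use Pinecone instead).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source encoder

docs = [
    "LoRA fine-tunes a model by training low-rank update matrices.",
    "Pinecone is a managed vector database for similarity search.",
    "LangChain chains LLM calls with tools and retrievers.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)  # unit-norm vectors

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # dot product == cosine similarity for unit vectors
    return [docs[i] for i in np.argsort(-scores)[:k]]

print(retrieve("How does parameter-efficient fine-tuning work?"))
```

A vector database replaces the in-memory `doc_vecs` matrix with a persistent, indexed store, but the embed-then-nearest-neighbor loop stays the same.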
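For the LangChain session, a minimal chain looks like the sketch below. It assumes the 2023-era LangChain API (`PromptTemplate`, `LLMChain`, and the `OpenAI` LLM wrapper) plus an `OPENAI_API_KEY` in the environment; newer LangChain releases have reorganized these imports.

```python
# A minimal LangChain sketch, assuming the 2023-era API covered in the session
# (later LangChain versions moved these classes). Requires OPENAI_API_KEY.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Explain {topic} in two sentences.",
)
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(topic="retrieval-augmented generation"))
```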
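The adapter session covers inserting small trainable modules into a frozen pretrained network. The sketch below is an illustrative bottleneck adapter in PyTorch, not the exact K-adapters or AdapterHub implementation; the dimensions and the toy frozen layer are assumptions.

```python
# An illustrative bottleneck adapter in PyTorch: down-project, nonlinearity,
# up-project, added residually. Only the adapter's parameters are trained.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual connection

# Freeze a (toy) pretrained layer and train only the adapter on top of it.
pretrained = nn.Linear(768, 768)
for p in pretrained.parameters():
    p.requires_grad = False
adapter = Adapter(hidden_dim=768)

x = torch.randn(4, 768)
out = adapter(pretrained(x))  # only adapter parameters receive gradients
```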
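The PEFT session's core idea can be written down directly: keep the pretrained weight W frozen and learn a low-rank update BA, so only the two small factors are trained. The PyTorch sketch below is illustrative and not the `peft` library's implementation; the rank, scaling, and zero-initialization of B follow the LoRA paper's convention, but the layer itself is an assumption.

```python
# An illustrative LoRA layer: the frozen base weight W is augmented with a
# trainable low-rank update B @ A, so the effective weight is W + (alpha/r)*BA.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad = False  # freeze pretrained weights
        self.base.bias.requires_grad = False
        # A is small random, B is zero, so BA starts as a zero update and
        # training only gradually perturbs the base model.
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768, rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 768 * 8 = 12,288 trainable vs ~590k frozen parameters
```

The parameter count is the whole point: at rank 8, the trainable factors are roughly 2% of the frozen 768x768 weight they adapt.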
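In-context RALM needs no model changes at all: retrieved documents are simply prepended to the prompt, so a frozen LM conditions on them at generation time. The sketch below reuses the `retrieve` helper from the retrieval sketch above; the prompt format and the commented-out LM call are illustrative assumptions.

```python
# A minimal in-context RALM sketch: prepend retrieved evidence to the prompt
# and let an unmodified LM generate conditioned on it.
def build_ralm_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Format retrieved documents as context ahead of the question."""
    context = "\n".join(f"- {d}" for d in retrieved_docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

question = "What does LoRA train?"
docs = retrieve(question)                    # from the retrieval sketch above
prompt = build_ralm_prompt(question, docs)
# completion = some_llm(prompt)              # hypothetical LM call
print(prompt)
```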