colintoal's Stars
nmslib/nmslib
Non-Metric Space Library (NMSLIB): An efficient similarity search library and a toolkit for evaluation of k-NN methods for generic non-metric spaces.
langchain-ai/langchain
Build context-aware reasoning applications
ggerganov/llama.cpp
LLM inference in C/C++
SinMDM/SinMDM
Single Motion Diffusion Model
fixie-ai/fixie-examples
Examples of how to use the Fixie AI platform.
guardrails-ai/guardrails
Adding guardrails to large language models.
dair-ai/Prompt-Engineering-Guide
Guides, papers, lectures, notebooks, and resources for prompt engineering
NVIDIA/NeMo
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
NEUIR/P3Ranker
Code for our SIGIR 2022 paper: P3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning
thunlp/Prompt-Transferability
On Transferability of Prompt Tuning for Natural Language Processing
Zeng-WH/PrefixTuning-Fix
Completes the code for prefix-tuning in the low-data setting
PaddlePaddle/PaddleNLP
Easy-to-use and powerful NLP and LLM library with an awesome model zoo, supporting a wide range of NLP tasks from research to industrial applications, including Text Classification, Neural Search, Question Answering, Information Extraction, Document Intelligence, Sentiment Analysis, etc.
OFA-Sys/OFA
Official repository of OFA (ICML 2022). Paper: OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework
thunlp/OpenPrompt
An Open-Source Framework for Prompt-Learning.
alibaba/EasyNLP
EasyNLP: A Comprehensive and Easy-to-use NLP Toolkit
rtmaww/EntLM
Codes for "Template-free Prompt Tuning for Few-shot NER".
microsoft/UniSumm
UNISUMM: Unified Few-shot Summarization with Multi-Task Pre-Training and Prefix-Tuning
minicheshire/Robust-Prefix-Tuning
Code for the ICLR 2022 paper: On Robust Prefix-Tuning for Text Classification
Zeng-WH/DOP-Tuning
[NAACL 2022] Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable Fine-tuning for Zero-Shot Dialogue Summarization
cooelf/CompassMTL
Task Compass: Scaling Multi-task Pre-training with Task Prefix (Findings of EMNLP 2022; more updates to come)
ychen-stat-ml/kernel-adapters
Code for "Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning" (EMNLP 2022) and "Empowering Parameter-Efficient Transfer Learning by Recognizing the Kernel Structure in Attention" (NAACL 2022 Findings)
jordiclive/ControlPrefixes
google-research/prompt-tuning
Original implementation of Prompt Tuning from Lester et al., 2021
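Several repos in this list (google-research/prompt-tuning, thunlp/OpenPrompt, AkariAsai/ATTEMPT) build on soft prompt tuning: a small matrix of trainable continuous vectors is prepended to the frozen model's input embeddings, and only those vectors are updated during training. A minimal NumPy sketch of the input construction, with toy dimensions and random stand-ins for a real model's embeddings (not any repo's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 8      # embedding dimension (toy size)
prompt_len = 4   # number of learnable soft-prompt "virtual tokens"
seq_len = 5      # tokens in the actual input

# Frozen token embeddings for the input sequence (stand-in for a real
# model's embedding lookup; these stay fixed during training).
token_embeds = rng.normal(size=(seq_len, d_model))

# The only trainable parameters in prompt tuning: one continuous
# vector per virtual token, prepended to every input.
soft_prompt = rng.normal(size=(prompt_len, d_model))

# The backbone consumes the soft prompt followed by the token
# embeddings; gradients flow only into `soft_prompt`.
model_input = np.concatenate([soft_prompt, token_embeds], axis=0)

print(model_input.shape)  # (prompt_len + seq_len, d_model) -> (9, 8)
```

Because the backbone is frozen, the same model can serve many tasks by swapping in a different `soft_prompt` per task, which is the angle ATTEMPT and thunlp/Prompt-Transferability explore.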
rmokady/CLIP_prefix_caption
Simple image captioning model
AkariAsai/ATTEMPT
This is the official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022)
HappyGu0524/MultiControl
zjunlp/MolGen
[ICLR 2024] Domain-Agnostic Molecular Generation with Chemical Feedback
woojeongjin/FewVLM
A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models (ACL 2022)
shizhediao/DaVinci
Source code for the paper "Prefix Language Models are Unified Modal Learners"
THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks