jyhong836's Stars
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
openai/chatgpt-retrieval-plugin
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
jina-ai/clip-as-service
🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
ShishirPatil/gorilla
Gorilla: An API store for LLMs
huggingface/chat-ui
Open source codebase powering the HuggingChat app
openai/consistency_models
Official repo for consistency models.
microsoft/promptbench
A unified evaluation framework for large language models
Maluuba/nlg-eval
Evaluation code for various unsupervised automated metrics for Natural Language Generation.
MLGroupJLU/LLM-eval-survey
The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models".
keirp/automatic_prompt_engineer
chatarena/chatarena
ChatArena (or Chat Arena) provides multi-agent language game environments for LLMs. The goal is to develop the communication and collaboration capabilities of AIs.
Victorwz/LongMem
Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory".
privacytrustlab/ml_privacy_meter
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
ofirpress/attention_with_linear_biases
Code for the ALiBi method for transformer language models (ICLR 2022)
mit-han-lab/offsite-tuning
Offsite-Tuning: Transfer Learning without Full Model
urvashik/knnlm
alexa/dialoglue
DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue
nelson-liu/lost-in-the-middle
Code and data for "Lost in the Middle: How Language Models Use Long Contexts"
AI-secure/DecodingTrust
A Comprehensive Assessment of Trustworthiness in GPT Models
mgalley/DSTC7-End-to-End-Conversation-Modeling
Grounded conversational dataset for end-to-end conversational AI (official DSTC7 data)
sachit-menon/classify_by_description_release
ryanwebster90/snip-dedup
microsoft/deep-language-networks
We view large language models as stochastic language layers in a network, where the learnable parameters are the natural-language prompts at each layer. We stack two such layers, feeding the output of one layer into the next, and call the stacked architecture a Deep Language Network (DLN).
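The two-layer stacking described above can be sketched as follows. This is a minimal illustration, not the repo's actual API: `call_llm` is a hypothetical stand-in for any LLM completion call, and the "learnable parameters" are the natural-language prompts `p1` and `p2`.

```python
def call_llm(prompt: str, text: str) -> str:
    # Placeholder for a real LLM query; here we just tag the text with the
    # prompt so the data flow through the two layers is visible.
    return f"[{prompt}] {text}"

def dln_forward(p1: str, p2: str, x: str) -> str:
    # Layer 1: apply prompt p1 to the input x.
    h = call_llm(p1, x)
    # Layer 2: apply prompt p2 to layer 1's output.
    return call_llm(p2, h)

print(dln_forward("Summarize", "Answer the question", "What is 2+2?"))
```

In the actual DLN work, the prompts would be optimized (e.g., via prompt search) rather than fixed strings.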
liuyugeng/ML-Doctor
Code for ML Doctor
exe1023/DialEvalMetrics
facebookresearch/online_dialog_eval
Code for the paper "Learning an Unreferenced Metric for Online Dialogue Evaluation", ACL 2020
ganeshdg95/Leveraging-Adversarial-Examples-to-Quantify-Membership-Information-Leakage
xu1998hz/SEScore2
FabienRoger/concistency-lenses