l0he1g's Stars
unslothai/unsloth
Finetune Llama 3.3, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
netease-youdao/QAnything
Question and Answer based on Anything.
Unstructured-IO/unstructured
Open source libraries and APIs to build custom preprocessing pipelines for labeling, training, or production machine learning workflows.
explodinggradients/ragas
Supercharge Your LLM Application Evaluations 🚀
run-llama/rags
Build ChatGPT over your data, all with natural language
mlfoundations/open_flamingo
An open-source framework for training large multimodal models.
weaigc/bingo
Bingo, a New Bing that lets you breathe easy.
openai/weak-to-strong
truera/trulens
Evaluation and Tracking for LLM Experiments
ha0z1/New-Bing-Anywhere
💬 Source for the New-Bing-Anywhere extension. Always use Bing GPT-4.
Tongji-KGLLM/RAG-Survey
deepseek-ai/DeepSeek-LLM
DeepSeek LLM: Let there be answers
stanford-oval/WikiChat
WikiChat is an improved RAG system that curbs hallucination in large language models by retrieving data from a corpus.
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
ContextualAI/HALOs
A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs).
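As a rough illustration of the kind of human-aware loss HALOs implements, here is a minimal sketch of the DPO objective for a single preference pair. This is not the library's API; the function name, arguments, and scalar (non-batched) form are assumptions for exposition only. It takes summed log-probabilities of the chosen and rejected responses under the policy and a frozen reference model.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair (illustrative, scalar form).

    logp_w / logp_l     : log-prob of the chosen / rejected response
                          under the policy being trained.
    ref_logp_w / ref_logp_l : same quantities under the frozen reference.
    beta                : temperature scaling the implicit reward.
    """
    # Implicit reward margin: how much more the policy favors the chosen
    # response over the rejected one, relative to the reference model.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Negative log-sigmoid of the margin; the loss shrinks as the
    # margin grows.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When policy and reference agree exactly, the margin is 0 and the
# loss is log(2).
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
```

KTO, PPO, and ORPO in the library swap in different human-aware objectives over the same policy/reference log-probabilities.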
TencentARC/LLaMA-Pro
[ACL 2024] Progressive LLaMA with Block Expansion.
wangcunxiang/LLM-Factuality-Survey
The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity".
shmsw25/FActScore
A package to evaluate factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation"
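Conceptually, FActScore is the fraction of a generation's atomic facts that the knowledge source supports. A minimal sketch of that aggregation step, assuming the per-fact support judgments (which the real package produces by extracting atomic facts and verifying them with an LM against Wikipedia) are already available as booleans; the function name and signature here are hypothetical:

```python
def factscore(supported):
    """Aggregate FActScore for one generation (illustrative).

    supported: list of booleans, one per extracted atomic fact,
               True if the knowledge source supports that fact.
    Returns the fraction of supported facts, or 0.0 if no facts
    were extracted.
    """
    if not supported:
        return 0.0
    return sum(supported) / len(supported)

# 3 of 4 atomic claims supported -> score 0.75
print(factscore([True, True, False, True]))  # 0.75
```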
ofirpress/self-ask
Code and data for "Measuring and Narrowing the Compositionality Gap in Language Models"
Re-Align/URIAL
lm-sys/llm-decontaminator
Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples"
fenwii/huaweimind
Seeing the world from Huawei's perspective and approaching problems with Ren Zhengfei's thinking: the path of Huawei's Ren Zhengfei. Huawei Ren Zhengfei speeches, emails, and articles, compiled from talks dating back to 1994, covering finance, human resources, strategy, internal controls, and public relations; from switches, telecom equipment, and mobile devices to AI and IoT; from 2G and 3G to 4G and 5G; from physics, chemistry, and mathematics to psychology and philosophy. Benchmark material for entrepreneurship and learning.
StonyBrookNLP/ircot
Repository for "Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions" (ACL 2023)
AlexTMallen/adaptive-retrieval
liziniu/ReMax
Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models"
thu-coai/CritiqueLLM
snu-mllab/DPPO
Official implementation of "Direct Preference-based Policy Optimization without Reward Modeling" (NeurIPS 2023)
liziniu/policy_optimization
Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data"
syncdoth/Chain-of-Hindsight-PyTorch
Unofficial implementation of Chain of Hindsight (https://arxiv.org/abs/2302.02676) using PyTorch and Hugging Face Trainers.
najoungkim/QAQA
Repository for the paper "(QA)^2: Question Answering with Questionable Assumptions"