berktinaz's Stars
langchain-ai/langchain
🦜🔗 Build context-aware reasoning applications
nomic-ai/gpt4all
GPT4All: Chat with Local LLMs on Any Device
tloen/alpaca-lora
Instruct-tune LLaMA on consumer hardware
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Hannibal046/Awesome-LLM
Awesome-LLM: a curated list of Large Language Model resources
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
unslothai/unsloth
Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
huggingface/trl
Train transformer language models with reinforcement learning.
facebookresearch/ImageBind
ImageBind One Embedding Space to Bind Them All
voxel51/fiftyone
The open-source tool for building high-quality datasets and computer vision models
openlm-research/open_llama
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
EleutherAI/lm-evaluation-harness
A framework for few-shot evaluation of language models.
Lightning-AI/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
yizhongw/self-instruct
Aligning pretrained language models with instruction data generated by themselves.
FreedomIntelligence/LLMZoo
⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡
johnma2006/mamba-minimal
Simple, minimal implementation of the Mamba SSM in one file of PyTorch.
FranxYao/chain-of-thought-hub
Benchmarking large language models' complex reasoning ability with chain-of-thought prompting
noahshinn/reflexion
[NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning
openai/consistencydecoder
Consistency Distilled Diff VAE
northwesternfintech/2025QuantInternships
Public quant internship repository, maintained by NUFT but available for everyone.
SinclairCoder/Instruction-Tuning-Papers
Reading list of instruction tuning. A trend that starts from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
dingran/quant-notes
Quantitative Interview Preparation Guide.
sylinrl/TruthfulQA
TruthfulQA: Measuring How Models Imitate Human Falsehoods
madaan/self-refine
LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
teacherpeterpan/self-correction-llm-papers
A collection of research papers on self-correcting large language models with automated feedback.
yxuansu/OpenAlpaca
OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA
bupticybee/FastLoRAChat
Instruct-tune LLaMA on consumer hardware with shareGPT data
LituRout/stsl-inverse-edit
Second-order Tweedie from Surrogate Loss
kanishkg/stream-of-search
Repository for the paper Stream of Search: Learning to Search in Language