finetuning
There are 209 repositories under the finetuning topic.
unslothai/unsloth
Finetune Llama 3, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
meta-llama/llama-recipes
Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A, plus a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Includes demo apps showcasing Meta Llama3 for WhatsApp & Messenger.
microsoft/FLAML
A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
h2oai/h2o-llmstudio
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Dataherald/dataherald
Interact with your SQL database: natural language to SQL using LLMs
learnables/learn2learn
A PyTorch Library for Meta-learning Research
stochasticai/xTuring
Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6
jina-ai/finetuner
:dart: Task-oriented embedding tuning for BERT, CLIP, etc.
eosphoros-ai/Awesome-Text2SQL
Curated tutorials and resources for Large Language Models, Text2SQL, Text2DSL, Text2API, Text2Vis and more.
georgian-io/LLM-Finetuning-Toolkit
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
minosvasilias/godot-dodo
Finetuning large language models for GDScript generation.
xing61/zzz-api
A high-quality, stable proxy for the OpenAI API, for enterprises and developers. Supports ChatGPT API calls and the OpenAI API, including gpt-4 and gpt-3.5. No OpenAI key, OpenAI account, or USD bank card required: just call it directly. Stable and easy to use. (智增增)
junxia97/awesome-pretrain-on-molecules
[IJCAI 2023 survey track] A curated list of resources for chemical pre-trained models
Xirider/finetune-gpt2xl
Guide: finetune GPT2-XL (1.5 billion parameters) and GPT-NEO (2.7B) on a single GPU with Hugging Face Transformers using DeepSpeed
daswer123/xtts-webui
Web UI for using XTTS and for finetuning it
microsoft/AzureML-BERT
End-to-End recipes for pre-training and fine-tuning BERT using Azure Machine Learning Service
helixml/helix
Multi-node production AI stack. Run the best of open source AI easily on your own servers. Create your own AI by fine-tuning open source models. Integrate LLMs with APIs. Run gptscript securely on the server
sozercan/aikit
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
kingTLE/literary-alpaca2
From the vocabulary to fine-tuning, this is all you need
baidubce/bce-qianfan-sdk
Provides best practices for LLMOps, as well as elegant and convenient access to the features of the Qianfan MaaS Platform.
gyunggyung/KoGPT2-FineTuning
🔥 Korean GPT-2: KoGPT2 fine-tuning, trained on Korean lyrics data 🔥
LHRLAB/ChatKBQA
[ACL 2024] Official resources of "ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models".
promptslab/LLMtuner
Tune LLMs in a few lines of code
rasbt/dora-from-scratch
LoRA and DoRA from Scratch Implementations
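The core idea behind such from-scratch LoRA implementations can be sketched in a few lines: a frozen weight matrix W is augmented with a low-rank update B·A scaled by alpha/r, with B zero-initialized so the adapter starts as a no-op. A minimal NumPy sketch (function and variable names are illustrative, not from the repository):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16.0):
    """Linear layer with a LoRA adapter.

    W: frozen (out, in) weight; A: (r, in) down-projection;
    B: (out, r) up-projection, zero-initialized so the adapter
    initially contributes nothing. Output = Wx + (alpha/r) * B A x.
    """
    r = A.shape[0]
    return W @ x + (alpha / r) * (B @ (A @ x))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))           # frozen pretrained weight
A = rng.normal(size=(2, 8)) * 0.01    # rank r = 2, small random init
B = np.zeros((4, 2))                  # zero init: update starts at zero
x = rng.normal(size=8)

# At initialization the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x, W, A, B), W @ x)
```

During fine-tuning only A and B receive gradients, which is why LoRA trains a tiny fraction of the parameters of the full model.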
git-cloner/llama2-lora-fine-tuning
Llama 2 finetuning with DeepSpeed and LoRA
ssbuild/chatglm2_finetuning
ChatGLM2-6B finetuning and Alpaca finetuning
woctezuma/finetune-detr
Fine-tune Facebook's DETR (DEtection TRansformer) on Colaboratory.
git-cloner/llama-lora-fine-tuning
LLaMA fine-tuning with LoRA
adithya-s-k/LLM-Alchemy-Chamber
A friendly neighborhood repository with diverse experiments and adventures in the world of LLMs
kuutsav/llm-toys
Small (7B and below) finetuned LLMs for a diverse set of useful tasks
Trainy-ai/llm-atc
Fine-tuning and serving LLMs on any cloud
US-Artificial-Intelligence/praetor-data
Praetor is a lightweight finetuning data and prompt management tool
kamalkraj/e5-mistral-7b-instruct
Finetune mistral-7b-instruct for sentence embeddings
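Projects like this turn a decoder LLM into a sentence encoder; downstream, the usual pattern is pooling token embeddings into one vector and comparing sentences by cosine similarity. A minimal sketch of that pooling-and-scoring step (names and toy data are illustrative, not from the repository):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Mean-pool per-token embeddings into one sentence vector,
    ignoring padded positions (mask == 0)."""
    mask = attention_mask[:, None].astype(float)
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy example: 4 token vectors of dim 3, last position is padding.
emb = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [1.0, 1.0, 0.0],
                [9.0, 9.0, 9.0]])   # padding junk, masked out below
mask = np.array([1, 1, 1, 0])
sent = mean_pool(emb, mask)
assert np.allclose(sent, [2 / 3, 2 / 3, 0.0])
assert abs(cosine(sent, sent) - 1.0) < 1e-9
```

Masking before pooling matters: without it, padding positions would pull the sentence vector toward arbitrary values.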
yifanzhang-pro/AutoMathText
Official implementation of DPFM @ ICLR 2024 paper "AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts" (Huggingface Daily Papers: https://huggingface.co/papers/2402.07625)
speediedan/finetuning-scheduler
A PyTorch Lightning extension that accelerates and enhances foundation model experimentation with flexible fine-tuning schedules.
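A flexible fine-tuning schedule generally means phased, gradual unfreezing: later layers first, earlier layers in subsequent phases. The actual finetuning-scheduler API differs; this is only a generic sketch of the idea, with a hypothetical epoch-to-parameter-group mapping:

```python
# Hypothetical phased schedule: epoch at which each parameter group unfreezes.
SCHEDULE = {
    0: ["classifier"],
    2: ["encoder.layer.11", "encoder.layer.10"],
    4: ["encoder.layer.9", "encoder.layer.8"],
}

def unfrozen_groups(epoch, schedule=SCHEDULE):
    """Return every parameter-group name unfrozen by the given epoch."""
    groups = []
    for start_epoch in sorted(schedule):
        if epoch >= start_epoch:
            groups.extend(schedule[start_epoch])
    return groups

assert unfrozen_groups(0) == ["classifier"]        # head only at the start
assert "encoder.layer.10" in unfrozen_groups(3)    # top encoder layers by epoch 2
assert "encoder.layer.8" not in unfrozen_groups(3) # deeper layers still frozen
```

In a real training loop, the trainer would set `requires_grad = True` on each group's parameters when its phase begins.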
kyegomez/Finetuning-Suite
Finetune any model on HF in less than 30 seconds