Pherenice1125's Stars
Pherenice1125/Mini-PEFT
A simplified version of MoE-PEFT, good for newcomers.
TsinghuaDatabaseGroup/AIDB
AI4DB and DB4AI work.
scu-covariant/CSGuidance
This repository aims to help students who are new to computer science with the many environment-setup problems they face, and with the difficulty of not knowing where to start learning general technical skills. It provides an index-style library of tutorials to help them get started quickly and lower the barrier to entry.
mlabonne/llm-course
Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks.
TUDB-Labs/MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
TUDB-Labs/MixLoRA
State-of-the-art Parameter-Efficient MoE Fine-tuning Method
Pherenice1125/mLoRA
An attempt to support more model adaptations and algorithm optimizations.
zhangbihan999/Stream-A-Steam-Like-Game-Recommendation-Platform
Try our project via the URL below:
huggingface/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
windingwind/zotero-pdf-translate
Translate PDFs, EPUBs, webpages, metadata, annotations, and notes into the target language. Supports 20+ translation services.
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
covscript/covscript
Make Programming Easier
mikecovlee/mLoRA
This repository has been transferred to https://github.com/TUDB-Labs/MoE-PEFT
karpathy/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
hunkim/PyTorchZeroToAll
Simple PyTorch Tutorials Zero to ALL!
openai/openai-python
The official Python library for the OpenAI API
TUDB-Labs/mLoRA
An Efficient "Factory" to Build Multiple LoRA Adapters