StyxXuan's Stars
EricLBuehler/xlora
X-LoRA: Mixture of LoRA Experts
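The core idea, input-conditioned mixing of several frozen LoRA adapters, fits in a few lines. A minimal sketch, assuming a gate that softmax-weights per-expert low-rank deltas (class and parameter names here are illustrative, not the repo's API):

```python
# Hypothetical sketch of the mixture-of-LoRA-experts idea: a frozen base
# linear layer plus several LoRA adapters, mixed per-input by a learned gate.
# MixedLoRALinear, n_experts, and rank are illustrative names, not X-LoRA's API.
import torch
import torch.nn as nn

class MixedLoRALinear(nn.Module):
    def __init__(self, d_in, d_out, n_experts=4, rank=8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)           # frozen base weight
        self.A = nn.Parameter(torch.randn(n_experts, rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(n_experts, d_out, rank))
        self.gate = nn.Linear(d_in, n_experts)           # per-expert scalings

    def forward(self, x):                                # x: (batch, d_in)
        scales = torch.softmax(self.gate(x), dim=-1)     # (batch, n_experts)
        down = torch.einsum("erd,bd->ber", self.A, x)    # down-project per expert
        delta = torch.einsum("eor,ber->beo", self.B, down)  # up-project
        return self.base(x) + torch.einsum("be,beo->bo", scales, delta)
```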
JayZhang42/FederatedGPT-Shepherd
Shepherd: A foundational framework enabling federated instruction tuning for large language models
aHuiWang/plot_demo
Examples of experiment plots that can be used in papers
JetRunner/TuPaTE
Code for EMNLP 2022 paper "Efficiently Tuned Parameters are Task Embeddings"
TUDB-Labs/mLoRA
An Efficient "Factory" to Build Multiple LoRA Adapters
thunlp/OpenNE
An Open-Source Package for Network Embedding (NE)
XueFuzhao/awesome-mixture-of-experts
A collection of AWESOME things about mixture-of-experts
predibase/lorax
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
xlang-ai/instructor-embedding
[ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings
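Typical usage, following the project's README: each input pairs a task instruction with the text to embed, so one model serves many embedding tasks.

```python
# Usage sketch based on the repo's README (pip install InstructorEmbedding);
# the model name is the project's released checkpoint.
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")
embeddings = model.encode([
    ["Represent the Science title:", "3D ActionSLAM: wearable person tracking"],
    ["Represent the Medicine sentence for retrieval:", "Aspirin reduces fever."],
])
print(embeddings.shape)
```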
punica-ai/punica
Serving multiple LoRA-finetuned LLMs as one
S-LoRA/S-LoRA
S-LoRA: Serving Thousands of Concurrent LoRA Adapters
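The trick shared by punica and S-LoRA is batching requests that target different adapters through a single base-model pass, gathering each request's low-rank delta by adapter index. A minimal sketch of that batching idea (not their optimized CUDA kernels):

```python
# Illustrative sketch of batched multi-adapter inference: one shared base
# matmul, plus a per-request low-rank delta selected by adapter index.
import torch

def batched_lora_linear(x, W, A, B, adapter_ids):
    """x: (batch, d_in); W: (d_out, d_in); A: (n_adapters, rank, d_in);
    B: (n_adapters, d_out, rank); adapter_ids: (batch,) long tensor."""
    base = x @ W.T                               # shared base computation
    Ai, Bi = A[adapter_ids], B[adapter_ids]      # gather each request's adapter
    delta = torch.einsum("bor,brd,bd->bo", Bi, Ai, x)
    return base + delta
```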
calpt/awesome-adapter-resources
Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning
for-ai/parameter-efficient-moe
SkunkworksAI/hydra-moe
allenai/hyperdecoders
Codebase for Hyperdecoders https://arxiv.org/abs/2203.08304
allenai/hyper-task-descriptions
Learning adapter weights from task descriptions
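Both of these repos build on the same hypernetwork idea: a small generator maps a task(-description) embedding to the weights of an adapter. A hypothetical sketch, with all names and shapes illustrative rather than taken from either codebase:

```python
# Hypothetical hypernetwork sketch: generate bottleneck-adapter weights
# from a task embedding, then apply the adapter residually.
import torch
import torch.nn as nn

class AdapterHypernet(nn.Module):
    def __init__(self, task_dim, d_model, bottleneck=16):
        super().__init__()
        n_params = 2 * d_model * bottleneck          # down + up projections
        self.generator = nn.Linear(task_dim, n_params)
        self.d_model, self.bottleneck = d_model, bottleneck

    def forward(self, task_emb, h):
        """task_emb: (task_dim,); h: (batch, d_model) hidden states."""
        flat = self.generator(task_emb)
        down, up = flat.split(self.d_model * self.bottleneck)
        down = down.view(self.bottleneck, self.d_model)
        up = up.view(self.d_model, self.bottleneck)
        return h + torch.relu(h @ down.T) @ up.T     # generated adapter
```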
lucidrains/mixture-of-experts
A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models
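A minimal sketch of the routing core (the repo layers noisy top-k gating and load-balancing losses on top of this idea):

```python
# Minimal top-k routing sketch of a sparsely-gated MoE layer; the dense
# per-expert loop is for clarity, not efficiency.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, dim, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)
        self.k = k

    def forward(self, x):                            # x: (batch, dim)
        logits = self.gate(x)
        weights, idx = logits.topk(self.k, dim=-1)   # route to top-k experts
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(1) * expert(x[mask])
        return out
```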
freshtaste/CausalModel
CausalModel implements widely used causal inference methods, as well as an interference-based method proposed by our paper.
bigscience-workshop/promptsource
Toolkit for creating, sharing and using natural language prompts.
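A short usage sketch based on the toolkit's README: load a dataset's community-written templates and render one example (the return shape varies by template, so treat the output handling as illustrative).

```python
# Usage sketch following the toolkit's README (pip install promptsource):
# fetch the templates for a dataset and render one example into a prompt.
from promptsource.templates import DatasetTemplates

templates = DatasetTemplates("ag_news")              # prompts written for ag_news
template = templates[templates.all_template_names[0]]
example = {"text": "Stocks rallied after the Fed held rates steady.", "label": 2}
rendered = template.apply(example)                   # usually [input, target]
print(rendered)
```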
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
AI4Finance-Foundation/FinGPT
FinGPT: Open-Source Financial Large Language Models. 🔥 We release the trained models on HuggingFace.
kojima-takeshi188/zero_shot_cot
Code for the NeurIPS 2022 paper "Large Language Models are Zero-Shot Reasoners" (zero-shot chain-of-thought prompting)
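The paper's method is two-stage prompting: a trigger phrase elicits a reasoning chain, then a second prompt extracts the answer. Sketched below with a placeholder `llm(prompt)` completion function (hypothetical; any LLM client fits):

```python
# Two-stage zero-shot CoT prompting as described in the paper; `llm` is a
# hypothetical prompt -> completion function, not part of the repo's API.
def zero_shot_cot(question, llm):
    # Stage 1: elicit a reasoning chain with the paper's trigger phrase.
    reasoning = llm(f"Q: {question}\nA: Let's think step by step.")
    # Stage 2: extract the final answer conditioned on the reasoning.
    answer = llm(
        f"Q: {question}\nA: Let's think step by step. {reasoning}\n"
        "Therefore, the answer is"
    )
    return answer
```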
Anni-Zou/Meta-CoT
Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models
kbressem/medAlpaca
LLM finetuned for medical question answering
google-research/FLAN
suzgunmirac/BIG-Bench-Hard
Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them
InfiAgent/InfiAgent.github.io
InfiAgent website
sail-sg/lorahub
[COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition
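LoraHub's composition step reduces to a weighted merge of existing LoRA modules, ΔW = Σᵢ wᵢ Bᵢ Aᵢ, with the weights found by gradient-free search on a few examples. A sketch of the merge itself (weights given; the search is omitted):

```python
# Illustrative LoraHub-style composition: merge k existing LoRA adapters
# into one weight delta with scalar weights w (tuned elsewhere).
import torch

def compose_lora(As, Bs, w):
    """As: list of (rank, d_in) tensors; Bs: list of (d_out, rank) tensors;
    w: iterable of k scalars. Returns the merged delta (d_out, d_in)."""
    return sum(wi * (B @ A) for wi, A, B in zip(w, As, Bs))
```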
abacusai/Long-Context
Code and tooling for the Abacus.AI LLM Context Expansion project, including evaluation scripts and benchmark tasks that measure a model's information-retrieval capabilities under context expansion, plus key experimental results and instructions for reproducing and building on them.
CStanKonrad/long_llama
LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method.