Pinned Repositories
NanoDSP
Audio enhancer with bass amplification using a quadratic curve
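"Bass amplification using a quadratic curve" could take several forms; a minimal sketch under one plausible reading is below, where low-frequency bins are boosted by a gain that falls off quadratically toward a cutoff. All function names, the 200 Hz cutoff, and the boost factor are illustrative assumptions, not NanoDSP's actual API.

```python
# Hypothetical sketch of quadratic-curve bass amplification (not NanoDSP's code):
# boost gain is max_boost at DC, 1.0 at/above cutoff_hz, quadratic in between.
import numpy as np

def quadratic_bass_gain(freqs_hz, cutoff_hz=200.0, max_boost=2.0):
    """Per-bin gain following a quadratic curve below the cutoff."""
    t = np.clip(freqs_hz / cutoff_hz, 0.0, 1.0)
    return 1.0 + (max_boost - 1.0) * (1.0 - t) ** 2

def enhance_bass(samples, sample_rate, cutoff_hz=200.0, max_boost=2.0):
    """Apply the quadratic bass curve in the frequency domain."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    spectrum *= quadratic_bass_gain(freqs, cutoff_hz, max_boost)
    return np.fft.irfft(spectrum, n=len(samples))

# A 100 Hz tone is boosted; a 1 kHz tone (above the cutoff) passes unchanged.
sr = 8000
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 100 * t)
high = np.sin(2 * np.pi * 1000 * t)
```

A real-time implementation would more likely use a low-shelf IIR filter than a whole-signal FFT, but the gain-versus-frequency shape is the same idea.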
RWKV-infctx-trainer-LoRA
RWKV v5/v6 infctx LoRA trainer with 4-bit quantization. CUDA and ROCm supported. Trains arbitrary context sizes, to 10k and beyond!
RWKV-LM-LISA
Layerwise Importance Sampled AdamW for RWKV, targeting RWKV-5 and -6. SFT and alignment (DPO, ORPO). CUDA and ROCm 6.0. Can train a 7B model on a 24GB GPU!
RWKV-LM-RLHF-DPO-LoRA
Direct Preference Optimization (DPO) LoRA for RWKV, targeting RWKV-5 and -6.
RWKV-LM-State-4bit-Orpo
State tuning of RWKV v6 with ORPO can be performed with 4-bit quantization. Every model size can be trained with ORPO on a single 24GB GPU!
rwkv.cpp
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
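Low-bit CPU inference of this kind stores weights as small integers plus a scale. As a hedged illustration only (rwkv.cpp's actual ggml formats are block-wise and more elaborate), symmetric per-tensor INT8 quantization can be sketched as:

```python
# Minimal sketch of symmetric INT8 weight quantization (illustrative only;
# not rwkv.cpp's on-disk format).
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 values plus one per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for use in matmuls."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, at a quarter of fp32 storage
```

The rounding error per weight is at most half a quantization step (`scale / 2`), which is why narrower formats like INT4/INT5 use small blocks with their own scales to keep that step small.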
RWKV5-LM-LoRA
RWKV v5/v6 LoRA trainer for the CUDA and ROCm platforms. RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
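The "RNN with transformer-level performance" claim rests on RWKV's WKV token mixing, which replaces attention over all past tokens with an O(1) recurrent state. A deliberately simplified, numerically naive sketch of that recurrence (real RWKV adds a first-token bonus term and max-shifting for numerical stability, and v5/v6 use matrix-valued states):

```python
# Simplified sketch of an RWKV-style WKV recurrence: each step keeps only a
# (numerator, denominator) state per channel instead of attending over the
# whole history. Illustrative, not the production kernel.
import numpy as np

def wkv_recurrent(keys, values, w):
    """keys, values: (T, C) arrays; w: per-channel decay >= 0.
    Returns a (T, C) array of exponentially-weighted averages of values."""
    T, C = keys.shape
    num = np.zeros(C)
    den = np.zeros(C)
    out = np.empty((T, C))
    decay = np.exp(-w)  # older tokens fade geometrically
    for t in range(T):
        num = decay * num + np.exp(keys[t]) * values[t]
        den = decay * den + np.exp(keys[t])
        out[t] = num / den  # weighted average over all tokens seen so far
    return out
```

Because the state is fixed-size, inference cost per token is constant in sequence length, which is what the "infinite ctx_len" and low-VRAM claims refer to; during training the same computation can be unrolled in parallel over the time axis.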
web-rwkv-inspector