Pinned Repositories
Adam-mini-rwkv
Bone
opencompass-rwkv
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama 3, Mistral, InternLM2, GPT-4, Llama 2, Qwen, GLM, Claude, etc.) over 100+ datasets.
RWKV-batch-infer
rwkv-demo-annotation
RWKV-PEFT
RWKV5-infctxLM
rwkv_cuda
rwkv_numba
rwkv_padding
JL-er's Repositories
JL-er/RWKV-PEFT
JL-er/RWKV5-infctxLM
JL-er/Bone
JL-er/rwkv_cuda
JL-er/rwkv_padding
JL-er/RWKV-batch-infer
JL-er/rwkv-demo-annotation
JL-er/rwkv_numba
JL-er/opencompass-rwkv
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama 3, Mistral, InternLM2, GPT-4, Llama 2, Qwen, GLM, Claude, etc.) over 100+ datasets.
JL-er/Adam-mini-rwkv
JL-er/ChatRWKV
ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model, and it is open source.
JL-er/faster-whisper
JL-er/flash-linear-attention
Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton.
JL-er/RWKV5-infctx-fallback
JL-er/LLaMA-Factory
Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
JL-er/PEFT-Bone
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
JL-er/PiSSA
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
JL-er/RWKV-LM-statetuning
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, VRAM savings, fast training, "infinite" ctx_len, and free sentence embedding.
JL-er/rwkv5-infctx-kernel
JL-er/RWKV_LM_EXT
This project extends RWKV LM's capabilities, including sequence classification, embedding, PEFT, cross-encoder, bi-encoder, multimodality, etc.