zwshan's Stars
howard-hou/RWKV-TS
RWKV-TS: Beyond Traditional Recurrent Neural Network for Time Series Tasks
hithqd/PointRWKV
Yaziwel/Restore-RWKV
Restore-RWKV: Efficient and Effective Medical Image Restoration with RWKV
00ffcc/chunkRWKV6
Continuous batching and parallel acceleration for RWKV6
One-sixth/flash-linear-attention-pytorch
A Python implementation of the flash linear attention operators from TransnormerLLM.
HqWu-HITCS/Awesome-Chinese-LLM
A curated collection of open-source Chinese LLMs, focusing on smaller models that can be privately deployed and trained at low cost, covering base models, vertical-domain fine-tunes and applications, datasets, and tutorials.
ray-project/llmperf
LLMPerf is a library for validating and benchmarking LLMs
ninehills/llm-inference-benchmark
LLM Inference benchmark
yale-sys/prompt-cache
Modular and structured prompt caching for low-latency LLM inference
SpursGoZmy/Tabular-LLM
This project collects open-source datasets for table-intelligence tasks (e.g., table QA and table-to-text generation), converts the raw data into instruction-tuning format, and fine-tunes LLMs on it to strengthen their understanding of tabular data, with the goal of building a large language model specialized for table-intelligence tasks.
JL-er/RWKV-batch-infer
BBuf/flash-rwkv
macrozheng/mall-admin-web
mall-admin-web is the front-end of an e-commerce back-office management system, built with Vue and Element. Its main features include product, order, member, promotion, operations, content, statistical reporting, finance, permission, and settings management.
lin-xin/vue-manage-system
A back-office management system built with Vue 3, Element Plus, and TypeScript.
sustcsonglin/flash-linear-attention
Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton.
EleutherAI/lm-evaluation-harness
A framework for few-shot evaluation of language models.
JL-er/RWKV-PEFT
ssbuild/rwkv_finetuning
RWKV fine-tuning
pappacena/pytorch-rocm-gtt
Patches PyTorch to allow ROCm APUs (such as the iGPU in Ryzen processors) to use more memory than the reserved amount.
datawhalechina/llms-from-scratch-cn
Build a large language model from scratch with only basic Python: step-by-step construction of GLM4, Llama3, and RWKV6 to gain a deep understanding of how large models work.
yuunnn-w/RWKV_Pytorch
This is an inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation is overly complex and lacks extensibility. Let's join the flexible PyTorch ecosystem and open-source it together!
AnshulRanjan2004/MicroRWKV
Implementation of a custom architecture on nanoRWKV: A nanoGPT-style adaptation of the RWKV Language Model, which combines the simplicity of RNNs with GPT-level performance for large language models (LLMs).
Hannibal046/nanoRWKV
The nanoGPT-style implementation of RWKV Language Model - an RNN with GPT-level LLM performance.
deepglint/RWKV-CLIP
[EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner
BlinkDL/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
Hannibal046/RWKV-howto
Possibly useful materials for learning the RWKV language model.
Sharpiless/Yolov5-Flask-VUE
A YOLOv5 object detection model deployed to a public web endpoint on Alibaba Cloud, with a Flask backend and Vue frontend.
charent/ChatLM-mini-Chinese
A small 0.2B-parameter Chinese conversational model (ChatLM-Chinese-0.2B), open-sourcing all code for the full pipeline: dataset sources, data cleaning, tokenizer training, model pre-training, SFT instruction fine-tuning, and RLHF optimization. Supports downstream SFT fine-tuning, with a worked example of fine-tuning for triple (subject-relation-object) extraction.
Hello-MLClub/ChatGLM-Finetuning
This project fine-tunes the ChatGLM, ChatGLM2, and ChatGLM3 models with different methods (Freeze, LoRA, P-Tuning, full-parameter, etc.) and compares their effectiveness, focusing on information extraction, generation, and classification tasks.
AmberLJC/LLMSys-PaperList
Large Language Model (LLM) Systems Paper List