Unable to run on Linux?
Bills135 opened this issue · 31 comments
According to the docs, only mac / windows are supported, is that right?
Linux is supported: install the Python dependencies and compile the executable, and it works out of the box. I haven't written the documentation yet.
I'll update the installation instructions soon.
Thanks, looking forward to the good news.
Really looking forward to the installation instructions! Thanks.
@Bills135 @Karen0103 After downloading the Linux build from the link above,
follow the instructions here to run the commands and install the dependencies: https://github.com/josStorer/RWKV-Runner/blob/master/build/linux/Readme_Install.txt. Then it can be used directly.
If you need to use it in an environment without a GUI, simply run python3 main.py after installing the dependencies,
then call http://127.0.0.1:8000/switch-model, passing in the model and configuration to load (a minimal example is sketched below).
Later I will provide examples of server deployment for use with ChatGPT applications.
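For example, a minimal sketch of such a call (the model filename and strategy here are placeholders; use the model you actually downloaded and a strategy that matches your hardware):

curl http://127.0.0.1:8000/switch-model -X POST -H "Content-Type: application/json" -d '{"model":"./models/your-model.pth","strategy":"cpu fp32"}'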
Can it be installed from source? There's no configure file.
There's no configure script at the moment; you need to install wails yourself: https://wails.io/docs/gettingstarted/installation
Hi, it looks like the Linux tutorial hasn't been fully written yet. I'm trying to deploy it now; could you give me some pointers? https://github.com/josStorer/RWKV-Runner/blob/master/build/linux/Readme_Install.txt
@baofengqqwwff For the GUI, just install the dependencies following those instructions and it will work.
Is there a way to start the webui via Python? Starting the executable directly gives an error: Gtk-WARNING **: 16:57:37.766: cannot open display:
@baofengqqwwff Instructions will follow.
@Bills135 @Karen0103 @baofengqqwwff
Server deployment example scripts. Note that the model in the scripts is the smallest one (0.1B), running on CPU only:
https://github.com/josStorer/RWKV-Runner/tree/master/deploy-examples
I wrote an AUR pkg config: https://gist.github.com/BoyanXu/9961a27587984073458d15cfa47a0ab0.
The problem I ran into is that the strict requirement on the torch version (torch-1.13.1+cu117) triggers issue #17 on my machine, which has torch-2.0.1-2.
The program seems to rely on the default python3 interpreter at /usr/bin/python, whose package is managed at the system level by pacman. Keeping an outdated PyTorch at the system level with a package manager like pacman is extremely tricky, as that will break the dependencies of other packages.
A workaround for now might be to use a virtual environment and remove the PyTorch dependency from the PKGBUILD accordingly (see the sketch below). But I wonder what prevents the project from supporting the latest PyTorch. Will PyTorch 2.0.x be supported in the future?
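Roughly the workaround I have in mind (an untested sketch; the venv path is arbitrary):

# create an isolated environment so the pacman-managed system torch stays untouched
python3 -m venv ~/.venvs/rwkv-runner
source ~/.venvs/rwkv-runner/bin/activate
pip install torch==1.13.1 --index-url https://download.pytorch.org/whl/cu117
pip install -r backend-python/requirements.txt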
@boyanxu
Actually, this program does not have a strict requirement on the torch version. The requirement mentioned in #17 is limited to Windows, because the Windows version has a built-in custom CUDA kernel accelerator, and that kernel is compiled under torch-1.13.1+cu117. You can customize the Python interpreter used in the settings page and use other torch versions. Linux users must compile the CUDA kernel themselves or forgo the acceleration.
Visit https://github.com/BlinkDL/ChatRWKV to learn how to compile the kernel.
@boyanxu
Just tried your gist; on Arch WSL it works fine.
Perhaps you need to disable "Use Custom CUDA kernel to Accelerate" on the Configs page.
➜ rwkv-runner makepkg -si
==> Making package: RWKV-Runner 1.2.0-1 (Fri 14 Jul 2023 09:50:56 PM CST)
==> Checking runtime dependencies...
==> Installing missing dependencies...
[sudo] password for greenhandzdl:
error: target not found: python-sse-starlette
error: target not found: python-gputil
==> ERROR: 'pacman' failed to install missing dependencies.
==> Missing dependencies:
-> python-pytorch
-> python-sse-starlette
-> python-gputil
==> Checking buildtime dependencies...
==> ERROR: Could not resolve all dependencies.
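A possible workaround, sketched under the assumption that pulling the Python dependencies from PyPI instead of pacman is acceptable (sse-starlette and gputil are the PyPI names of the missing packages):

# install the dependencies pacman cannot resolve from PyPI instead
python3 -m pip install sse-starlette gputil torch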
@josStorer I'm currently trying to deploy the backend-only Runner on a 揽睿星舟 compute server. The base environment I created is Linux / PyTorch / official-1.12.1-cuda11.6-cudnn8-devel. Do I still need to compile CUDA myself?
If so, I'm not sure how this compilation should be done (even though I have read the docs carefully and can compile CUDA on my own in BlinkDL's ChatRWKV project, I still don't know how to apply it to the Runner project).
@eyaeya Compilation is optional. In the runner's backend inference service, when you call /switch-model to load a model, pass customCuda: true to enable the custom CUDA kernel. It will then be compiled automatically using the environment you have installed; you only need to make sure gcc, ninja, the Python dependencies, and the CUDA libraries are correctly installed. A sketch of such a call follows below.
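For example (a sketch; the model path below is a placeholder):

curl http://127.0.0.1:8000/switch-model -X POST -H "Content-Type: application/json" -d '{"model":"./models/your-model.pth","strategy":"cuda fp16","customCuda":true}'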
Problem 1:
@josStorer A question: after deploying the backend-only Runner on the 揽睿星舟 compute server (without switching to a model), I get an error when trying to verify that the API is up via URL/docs.
Steps to reproduce:
1. Run the following commands per the documentation; they completed successfully.
sudo apt install python3-dev
git clone https://github.com/josStorer/RWKV-Runner --depth=1
python3 -m pip install torch torchvision torchaudio
python3 -m pip install -r RWKV-Runner/backend-python/requirements.txt
cd RWKV-Runner
python3 ./backend-python/main.py --webui > log.txt &
2. Open http://URL:8000/docs in a browser; the following message is shown.
Problem 2:
After deploying the backend only on a Linux server, switching the model reports an error.
Environment:
- GPU: NVIDIA 3090
- CPU: Intel Xeon (Icelake) 2.6 GHz, 12 cores
- RAM: 40 GB
- VRAM: 24 GB
- System disk: 150 GB SSD
- FP compute: 35.6 TFLOPS
CUDA version
nvcc --version
- nvcc: NVIDIA (R) Cuda compiler driver
- Copyright (c) 2005-2022 NVIDIA Corporation
- Built on Wed_Jun__8_16:49:14_PDT_2022
- Cuda compilation tools, release 11.7, V11.7.99
- Build cuda_11.7.r11.7/compiler.31442593_0
Steps to reproduce:
1. In the freshly created server environment, after installing the dependencies, start the backend service:
user@lsp-ws:~/netdisk/data/RWKV-Runner$ python3 ./backend-python/main.py --webui > log.txt &
[1] 7957
user@lsp-ws:~/netdisk/data/RWKV-Runner$ INFO: Started server process [7957]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
2. Switch the model:
user@lsp-ws:~/netdisk/data/ninja$ curl http://127.0.0.1:8000/switch-model -X POST -H "Content-Type: application/json" -d '{"model":"./models/rwkv_v5.2_7B_role_play_16k.pth","strategy":"cuda fp32","customCuda":"true","deploy":"true"}'
{"detail":"failed to load: CUDA out of memory. Tried to allocate 224.00 MiB (GPU 0; 23.70 GiB total capacity; 21.72 GiB already allocated; 202.56 MiB free; 22.76 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"}
Attachment:
The server-side deployment method I followed is as follows.
#install git python3.10 npm by yourself
#change model and strategy according to your hardware
#The next step was skipped because the environment already ships with Python
#sudo apt install python3-dev
git clone https://github.com/josStorer/RWKV-Runner --depth=1
python3 -m pip install torch torchvision torchaudio
python3 -m pip install -r RWKV-Runner/backend-python/requirements.txt
cd RWKV-Runner
python3 ./backend-python/main.py --webui > log.txt &
#Install CUDA support
#1. Install re2c
sudo apt install re2c
#2. Clone the Ninja source
git clone http://github.com/ninja-build/ninja
#3. Configure
./configure.py --bootstrap
#4. Copy the binary into the system path
sudo cp ninja /usr/bin/
#5. Build the project
apt-get install ninja-build
curl http://127.0.0.1:8000/switch-model -X POST -H "Content-Type: application/json" -d '{"model":"./models/rwkv_v5.2_7B_role_play_16k.pth","strategy":"cuda fp32","customCuda":"true","deploy":"true"}'
> log.txt &

- Don't do this; that's just an example script. You should use a tool like screen or tmux to keep the program running in the background (a sketch follows below).
- Don't use cuda fp32; it is pointless here. You should use cuda fp16. The error says "CUDA out of memory", meaning you ran out of VRAM.
- Even with Python installed, you still need python3-dev; it is used to compile cyac, which is required to enable the state cache.
- Simply run apt-get install gcc ninja-build.
- To compile the CUDA kernel you also need to install this: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=20.04&target_type=deb_local. Be sure to select the correct version for your system.
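For example, one way to keep the backend alive in the background (a sketch using tmux; the session name rwkv is arbitrary):

# run the backend in a detached tmux session; reattach later with: tmux attach -t rwkv
tmux new-session -d -s rwkv 'python3 ./backend-python/main.py'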
Thanks @josStorer for the late-night reply. I'll try again tomorrow following this and report back. Merry Christmas 🎄
Feedback: it runs successfully.
Below is the run record.
Environment
If you need to register on this platform, you can use my invitation code to get a coupon.
Invitation code: 1006359338
- A 3090 GPU server in zone D1 of the 揽睿星舟 compute platform.
- Selected environment: base image / PyTorch / official-torch2.0-cu1117
- GPU: NVIDIA 3090
- CPU: Intel Xeon (Icelake) 2.6 GHz, 12 cores
- RAM: 40 GB
- VRAM: 24 GB
- System disk: 150 GB SSD
- FP compute: 35.6 TFLOPS
Enter the VSCode online debugging interface.
Run the commands
This environment already ships with python3-dev, so there is no need to install it:
#sudo apt install python3-dev
Install the backend
git clone https://github.com/josStorer/RWKV-Runner --depth=1
python3 -m pip install torch torchvision torchaudio
python3 -m pip install -r RWKV-Runner/backend-python/requirements.txt
Install the frontend
cd RWKV-Runner/frontend
npm ci
npm run build
cd ..
Install the CUDA build dependencies
sudo apt-get install gcc ninja-build
Run the server
Note: on the 揽睿星舟 compute platform, the port must be set to 27777 and the host to 0.0.0.0 to expose the service for external access.
python3 ./backend-python/main.py --port 27777 --host 0.0.0.0 --webui
Switch the model
Be sure to change the model name in the command below to the one you are using.
Open a new terminal and run:
curl http://127.0.0.1:27777/switch-model -X POST -H "Content-Type: application/json" -d '{"model":"./models/rwkv_v5.2_7B_role_play_16k.pth","strategy":"cuda fp16","customCuda":"true","deploy":"true"}'
API calls
- Open the "workspace" of the 揽睿星舟 compute platform and copy your server's debug address.
- Open the WebUI you need (for example the role-play platform https://risuai.xyz) and, under Settings / Reverse Proxy, set the API address to the debug address copied in the previous step.
- Finish the remaining settings and model tuning in the WebUI, then start playing.
Notes
On this platform and environment, CUDA is already included, so the following steps are unnecessary.
Check the Ubuntu version
lsb_release -a
Choose the version matching your own environment and adjust the CUDA installation commands below accordingly:
Install the CUDA toolkit
Base Installer
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.3.1/local_installers/cuda-repo-ubuntu2204-12-3-local_12.3.1-545.23.08-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-12-3-local_12.3.1-545.23.08-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-12-3-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-3
Driver Installer
sudo apt-get install -y nvidia-kernel-open-545
sudo apt-get install -y cuda-drivers-545
@josStorer But I still haven't resolved the error when opening http://URL:8000/docs, though it doesn't seem to affect API calls.
@josStorer Question: on Linux with cuda fp16, do I still need to set os.environ["RWKV_CUDA_ON"]='1' to get the faster compiled CUDA kernel? If so, how should I set it?
Could you write a complete guide? I think this Linux installation has really been made far too confusing.
%cd /content/RWKV-Runner/frontend
!npm ci
!npm run build
!npm install -g typescript
!npm run build
%cd ..
This part errors out: tsc cannot be found.
Traceback (most recent call last):
  File "/content/RWKV-Runner/backend-python/main.py", line 114, in <module>
    from webui_server import webui_server
  File "/content/RWKV-Runner/backend-python/webui_server.py", line 10, in <module>
    "/", StaticFiles(directory="frontend/dist", html=True), name="static"
  File "/usr/local/lib/python3.10/dist-packages/starlette/staticfiles.py", line 57, in __init__
    raise RuntimeError(f"Directory '{directory}' does not exist")
RuntimeError: Directory 'frontend/dist' does not exist
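For what it's worth, the traceback shows webui_server.py serving StaticFiles(directory="frontend/dist") as a path relative to the working directory, so the frontend build must succeed (it is what creates frontend/dist) and main.py must be started from the repository root. A sketch of that sequence, assuming the global typescript install now provides tsc:

%cd /content/RWKV-Runner/frontend
!npm install -g typescript
# rebuild; this should create frontend/dist
!npm run build
%cd ..
!python3 ./backend-python/main.py --webui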