shengxia/RWKV_Role_Playing

No matching distribution found for torch==1.13.1 when installing dependencies

Closed this issue · 15 comments

Installing the dependency:
pip install torch==1.13.1 --extra-index-url https://download.pytorch.org/whl/cu117 --upgrade
Log output:
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu117
ERROR: Could not find a version that satisfies the requirement torch==1.13.1 (from versions: 2.0.0, 2.0.0+cu117)
ERROR: No matching distribution found for torch==1.13.1
It can't find version 1.13.1. Can I use the latest version instead?

I tried this command on my end and it downloads fine, so could you try again? It may be a network issue. I don't recommend torch 2.0; it has a lot of compatibility problems.

Thanks for the reply. It was indeed a network issue; going through a proxy fixed it.
But I've hit another problem. Since I only have 4 GB of VRAM, I downloaded the 1B5 model.
Model link: https://huggingface.co/BlinkDL/rwkv-4-pile-1b5/blob/main/RWKV-4b-Pile-1B5-20230217-7954.pth
After downloading it I put it in the model directory and ran: python webui.py --listen --model model/RWKV-4b-Pile-1B5-20230217-7954
It fails with:
Traceback (most recent call last):
File "D:\projects\RWKV_Role_Playing\webui.py", line 17, in
from modules.model_utils import ModelUtils
File "D:\projects\RWKV_Role_Playing\modules\model_utils.py", line 6, in
from rwkv.model import RWKV
ModuleNotFoundError: No module named 'rwkv'
What else am I missing? I've already pulled the latest code.

It's probably because I installed the requirements before installing torch. Let me try installing the requirements again.

This looks like the rwkv package is missing. I did put it in requirements, but could you try running pip install rwkv manually?
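To quickly check whether the install worked, you can try the same import that failed in your traceback directly from the command line:
python -c "from rwkv.model import RWKV; print('rwkv ok')"
If that prints without an error, the webui should get past this point.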

Yep, it works after reinstalling, but I've run into yet another problem orz
The GPU has 4 GB of VRAM, and about 3 GB is in use while running.
With the 1B5 model the web page opens fine.
After loading the catgirl character I can see the first greeting message,
but no matter what I reply it throws an error. Log below:
Traceback (most recent call last):
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\routes.py", line 395, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1193, in process_api
result = await self.call_function(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 916, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\projects\RWKV_Role_Playing\modules\chat.py", line 54, in on_message
return self.gen_msg(out, chatbot, top_p, temperature, presence_penalty, frequency_penalty)
File "D:\projects\RWKV_Role_Playing\modules\chat.py", line 57, in gen_msg
new_reply, out, self.model_tokens, self.model_state = self.model_utils.get_reply(self.model_tokens, self.model_state, out, temperature, top_p, presence_penalty, frequency_penalty)
File "D:\projects\RWKV_Role_Playing\modules\model_utils.py", line 76, in get_reply
token = self.pipeline.sample_logits(out, temperature=x_temp, top_p=x_top_p)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\rwkv\utils.py", line 78, in sample_logits
out = torch.multinomial(probs, num_samples=1)[0]
RuntimeError: probability tensor contains either inf, nan or element < 0

May I ask which GPU you have? My default launch strategy is fp16i8, which apparently only suits 20-series and newer cards; you may need a strategy like fp16 instead.
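For context, the strategy string is just what gets handed to the rwkv package when the model is loaded. A minimal sketch of what that usually looks like (the model path is a placeholder and the tokenizer file name is an assumption, check the project's own code/config for the exact values):
from rwkv.model import RWKV
from rwkv.utils import PIPELINE
# "cuda fp16i8" quantizes the weights to int8 on the GPU; "cuda fp16" keeps plain fp16
model = RWKV(model='model/RWKV-4-Pile-1B5-20230217-7954', strategy='cuda fp16')  # path without the .pth suffix
pipeline = PIPELINE(model, '20B_tokenizer.json')  # tokenizer file name is an assumption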

My GPU is a 1650 and my CPU is a 12400.
I changed the strategy to cuda fp16 *20 -> cpu fp32, and it runs with 3 GB of VRAM in use.
After replying to the first message, the error changed to this:
Traceback (most recent call last):
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\routes.py", line 395, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1193, in process_api
result = await self.call_function(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 916, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\projects\RWKV_Role_Playing\modules\chat.py", line 54, in on_message
return self.gen_msg(out, chatbot, top_p, temperature, presence_penalty, frequency_penalty)
File "D:\projects\RWKV_Role_Playing\modules\chat.py", line 57, in gen_msg
new_reply, out, self.model_tokens, self.model_state = self.model_utils.get_reply(self.model_tokens, self.model_state, out, temperature, top_p, presence_penalty, frequency_penalty)
File "D:\projects\RWKV_Role_Playing\modules\model_utils.py", line 76, in get_reply
token = self.pipeline.sample_logits(out, temperature=x_temp, top_p=x_top_p)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\rwkv\utils.py", line 65, in sample_logits
out = np.random.choice(a=len(probs), p=probs)
File "mtrand.pyx", line 935, in numpy.random.mtrand.RandomState.choice
ValueError: probabilities contain NaN

I originally thought this happens when top_p or temperature is 0 in config.json, but I tested it: temperature = 0 does raise an error, just not this one. Also, the line out = np.random.choice(a=len(probs), p=probs) in my rwkv utils.py isn't on the same line number as in your traceback, so I'm not sure whether our rwkv versions match. Mine is 0.7.0; maybe try pip install rwkv==0.7.0.
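For what it's worth, that ValueError just means the softmax over the logits came out as NaN, which is what happens when the forward pass already produced inf/NaN values (for example from a numerically unstable setup). A tiny standalone illustration, not code from the rwkv package:
import numpy as np
logits = np.array([0.5, np.inf, -1.0])          # one bad value coming out of the model...
probs = np.exp(logits) / np.exp(logits).sum()   # ...and the normalized distribution becomes [0, nan, 0]
np.random.choice(a=len(probs), p=probs)         # raises ValueError: probabilities contain NaN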

My config.json is unmodified, still the defaults 0.7, 2, 0.5, 0.5.
The rwkv version I had been using was 0.7.3.
I downgraded to 0.7.0 and it still errors, just on a different line number:
Traceback (most recent call last):
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\routes.py", line 395, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1193, in process_api
result = await self.call_function(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 916, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\projects\RWKV_Role_Playing\modules\chat.py", line 54, in on_message
return self.gen_msg(out, chatbot, top_p, temperature, presence_penalty, frequency_penalty)
File "D:\projects\RWKV_Role_Playing\modules\chat.py", line 57, in gen_msg
new_reply, out, self.model_tokens, self.model_state = self.model_utils.get_reply(self.model_tokens, self.model_state, out, temperature, top_p, presence_penalty, frequency_penalty)
File "D:\projects\RWKV_Role_Playing\modules\model_utils.py", line 76, in get_reply
token = self.pipeline.sample_logits(out, temperature=x_temp, top_p=x_top_p)
File "C:\Users\ASUS\AppData\Local\Programs\Python\Python310\lib\site-packages\rwkv\utils.py", line 58, in sample_logits
out = np.random.choice(a=len(probs), p=probs)
File "mtrand.pyx", line 935, in numpy.random.mtrand.RandomState.choice
ValueError: probabilities contain NaN

I also tried updating rwkv to the latest version here, but I still can't hit this problem. Tomorrow I'll try your launch strategy and see if I can reproduce it on my side.

I'm running into the same problem as you. A proxy doesn't solve it either; I still get this error. Is there another way to install this dependency?
I went to https://download.pytorch.org/whl/cu117 and downloaded the matching .whl file. Where should I put it?

I'm honestly not that familiar with Python either, but from what I've found, you can install the whl file you downloaded with the following command:
pip install package_name.whl
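For example, if you're on Windows with Python 3.10 and grabbed the cp310 build (the exact file name depends on your Python version and platform), run it from the folder you saved it in:
pip install torch-1.13.1+cu117-cp310-cp310-win_amd64.whl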

I tried your strategy on my end and didn't get this error either. Also, to correct what I said earlier: 10/16-series cards can use the fp16i8 strategy too. You could try this project:

https://github.com/BlinkDL/ChatRWKV

That's the official RWKV project; see whether you can get it running.

Also, for reference, my environment: CUDA 11.8, a conda virtual environment, Python 3.10.0. Hope that helps.
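If you want to reproduce that setup, something along these lines should do it (the environment name is arbitrary):
conda create -n rwkv_rp python=3.10
conda activate rwkv_rp
pip install torch==1.13.1 --extra-index-url https://download.pytorch.org/whl/cu117 --upgrade
pip install -r requirements.txt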

Sorry, it turns out I had downloaded the wrong model and missed this note:
Use RWKV-4 models (NOT RWKV-4a, NOT RWKV-4b) unless you know what you are doing.
After switching models it can chat now, but the replies are completely off-topic. Is that because it's the 1B5 model?

Haha... 1B5 is indeed a bit lacking. My feeling with this model is that 3B is where it can hold a conversation and 7B is where it starts to feel like a person; following that trend 14B should be even stronger, but unfortunately there's no Chinese version of it yet.