Clouditera/SecGPT

Source installation and deployment: device type error

Opened this issue · 5 comments

if torch.cuda.is_available():
    device = "auto"
else:
    device = "CPU"

When device is set to "auto", the following error is raised:

RuntimeError: Expected one of cpu, cuda, ipu, xpu, mkldnn, opengl, opencl, ideep, hip, ve, fpga, ort, xla, lazy, vulkan, mps, meta, hpu, mtia, privateuseone device type at start of device string: auto
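For context, "auto" is not a valid torch device string, which is exactly what the RuntimeError is saying; in the Hugging Face transformers stack, "auto" is only meaningful as the device_map argument of from_pretrained. A minimal sketch of that distinction, assuming a transformers-based loading path (the model path below is a placeholder, not the repo's actual one):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder path for illustration; substitute the SecGPT checkpoint actually used.
    model_path = "path/to/secgpt"

    # "auto" is only valid as a device_map for from_pretrained (accelerate then
    # decides where to place the layers); it is NOT a torch.device string.
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        device_map="auto" if torch.cuda.is_available() else None,
        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    )
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    # Tensors, by contrast, need a concrete device such as "cuda" or "cpu".
    device = "cuda" if torch.cuda.is_available() else "cpu"
    inputs = tokenizer("test prompt", return_tensors="pt").to(device)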

Setting device to "cuda" instead raises torch.cuda.OutOfMemoryError. It appears memory is only being allocated on a single 4090; the second 4090 is never used.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 268.00 MiB (GPU 0; 23.65 GiB total capacity; 23.04 GiB already allocated; 169.81 MiB free; 23.04 GiB reserved in total by PyTorch)
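If the model does not fit on one 24 GiB 4090, one possible workaround is to let accelerate shard the weights across both cards via device_map="auto". This is only a sketch, not the repo's actual loading code; it assumes transformers plus accelerate are installed, and the path and max_memory caps are placeholders:

    import torch
    from transformers import AutoModelForCausalLM

    # Placeholder path; requires the `accelerate` package for device_map support.
    model_path = "path/to/secgpt"

    # With device_map="auto", accelerate spreads the weights across every visible GPU,
    # so a model too large for one 24 GiB 4090 can span both cards.
    model = AutoModelForCausalLM.from_pretrained(
        model_path,
        device_map="auto",
        torch_dtype=torch.float16,
        # Optional: cap per-GPU usage to leave headroom for activations / KV cache.
        max_memory={0: "22GiB", 1: "22GiB"},
    )

    print(model.hf_device_map)  # shows which layers landed on which GPU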

Same problem here. OP, did you ever solve it?

@xky1998 Try forcing it: at line 108 of webdemo/webdemo.py, change inputs = tokenizer(prompt, return_tensors="pt").to(device) to inputs = tokenizer(prompt, return_tensors="pt").to('cuda') and see if that works.
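A small note on that suggestion: hard-coding 'cuda' works on a GPU machine, but sending the inputs to the model's own device is slightly more portable. A sketch, assuming model, tokenizer and prompt are the objects the webdemo already defines around that line:

    # Move the tokenized inputs to whatever device holds the model's parameters,
    # so the same line also works on a CPU-only machine or a sharded model.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)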

Finally got it running. That wasn't easy.

Was this SecGPT deployed directly, running the webdemo?

Was this SecGPT deployed directly, running the webdemo?

No, it was SecGPT-mini deployed directly.