tensorchord/modelz-llm

bug: Install cuda again in the image


```
#13 62.52 Installing collected packages: tokenizers, sentencepiece, msgpack, mpmath, lit, cpm_kernels, cmake, zipp, typing-extensions, sympy, regex, pyyaml, psutil, Pillow, packaging, nvidia-nvtx-cu11, nvidia-nccl-cu11, nvidia-cusparse-cu11, nvidia-curand-cu11, nvidia-cufft-cu11, nvidia-cuda-runtime-cu11, nvidia-cuda-nvrtc-cu11, nvidia-cuda-cupti-cu11, nvidia-cublas-cu11, numpy, networkx, msgspec, mosec, MarkupSafe, h11, fsspec, filelock, falcon, click, uvicorn, nvidia-cusolver-cu11, nvidia-cudnn-cu11, llmspec, jinja2, importlib-metadata, huggingface-hub, transformers, diffusers, triton, torch, accelerate
```

The image already has CUDA installed, yet the build downloads the nvidia-*-cu11 CUDA packages again via pip.
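For reference, the nvidia-*-cu11 packages in the log are declared as dependencies of the torch wheel on PyPI, which is why pip pulls CUDA libraries on top of the CUDA already in the base image. Below is only a rough sketch of one possible workaround, not a tested change to this repo's Dockerfile: the base image tag and the package list are placeholders, and it assumes (as far as I can tell) that the +cu118 builds on the PyTorch wheel index do not declare the separate nvidia-* packages as dependencies.

```dockerfile
# Sketch only: hypothetical base image tag and package list, not this repo's Dockerfile.
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Install torch from the PyTorch cu118 wheel index. Assumption: these +cu118
# wheels ship their CUDA libraries inside the torch wheel instead of depending
# on the standalone nvidia-*-cu11 packages, so pip should not download those
# packages again on top of the CUDA already in the base image.
RUN pip3 install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cu118

# Remaining dependencies come from PyPI as usual; torch is already satisfied,
# so it is not re-resolved. Package names here are illustrative, taken from the
# build log above rather than the project's pinned requirements.
RUN pip3 install --no-cache-dir transformers accelerate mosec llmspec falcon uvicorn
```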

I guess it's related to the version of the CUDA base image?

I think it is related to the conda package.

It's low priority. Please take a look at #62 first; right now we cannot run the ChatGLM model.