tnwei/vqgan-clip-app

How to select the GPU (in a multi-GPU setup)

edstoica opened this issue · 6 comments

Hello,
I'd like to use my 2nd GPU because it has more VRAM.
Where can I configure that?

btw: thanks for sharing this brilliant project

tnwei commented

Hi! At the moment there isn't a way to select specific GPUs, but I think it should be possible to add an option for that. Let's see.

Maybe this will help:
execute export CUDA_VISIBLE_DEVICES=1 before running vqgan...
I will check that later.
(source)

So here is my current workaround:

In the Python script, insert:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # order GPUs by PCI bus ID, matching nvidia-smi
os.environ["CUDA_VISIBLE_DEVICES"] = "1"        # expose only the GPU you want to use

BEFORE import torch or anything else,
where "1" is the ID of the GPU you'd like to use.

tnwei commented

I'm trying to think of a way to incorporate this into the script gracefully. Custom modifications to the Python scripts work, but they can be overridden by the next update. I personally think the export CUDA_VISIBLE_DEVICES=1 method is cleanest, but it can be a bit of a hassle to run repeatedly.

At the moment I'm thinking one of the following:

  • Allow passing the CUDA device as a command-line argument, something like streamlit run app.py -- --device 1 (ref for the additional double dash: streamlit/streamlit#337)
  • Allow storing the CUDA device in the YAML file so no extra commands are needed (rough sketch below)

Let me know what you think
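For concreteness, here is a minimal sketch of the YAML option; the file name defaults.yaml and the gpu key are assumptions for illustration, not taken from the repo.

# Hypothetical sketch: read the GPU index from a YAML config file.
# "defaults.yaml" and the "gpu" key are assumed names, not the repo's actual config.
import yaml
import torch

with open("defaults.yaml") as f:
    config = yaml.safe_load(f)

gpu_id = config.get("gpu", 0)  # fall back to GPU 0 if the key is missing
device = torch.device(f"cuda:{gpu_id}" if torch.cuda.is_available() else "cpu")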

I think a command-line argument is helpful in any case.
In other programs I am used to typing:
--gpu=0,1 to use GPUs 0 and 1
--gpu=1 to use only GPU 1
The default would be gpu=0.

tnwei commented

Implemented; see the added section under Usage in the README:

If you have multiple GPUs, specify the GPU you want to use by adding -- --gpu X. An extra double dash is required to bypass Streamlit argument parsing. Example commands:

# Use 2nd GPU
streamlit run app.py -- --gpu 1
 
# Use 3rd GPU
streamlit run diffusion_app.py -- --gpu 2
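For reference, a minimal sketch of how the --gpu flag could be handled inside the script; this is illustrative and not necessarily the repo's exact implementation. Streamlit forwards everything after the extra double dash to the script, so a plain argparse parser can pick it up.

# Illustrative sketch of parsing "--gpu" inside the app (assumed wiring, not the repo's exact code).
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--gpu", type=int, default=0, help="CUDA device index to use")
args, _ = parser.parse_known_args()  # ignore any arguments Streamlit itself consumes

device = torch.device(f"cuda:{args.gpu}" if torch.cuda.is_available() else "cpu")
# Pass `device` to the usual .to(device) calls on models and tensors.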