Multiple GPUs for full finetune
qiqiApink opened this issue · 3 comments
qiqiApink commented
I want to run full.py on multiple GPUs, but only one GPU is used.
Using bfloat16 Automatic Mixed Precision (AMP)
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
[rank: 0] Global seed set to 1337
Can you help me solve this?
rasbt commented
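Did you set the number of devices in the finetune script? If it is left at 1, Fabric only launches a single process, which would match the "1/1" in your log. As a rough sketch (the exact variable names and precision setting may differ in your copy of finetune/full.py), the launch looks roughly like this:

import lightning as L

devices = 4  # number of GPUs to use; leaving this at 1 reproduces the single-process log above

fabric = L.Fabric(accelerator="cuda", devices=devices, precision="bf16-mixed")
fabric.launch()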
qiqiApink commented
No, all the settings are right. By the way, I use SLURM to run the code. Could that be the problem?
rasbt commented
There might be a SLURM (not Lit-LLaMA-specific) problem with requesting the GPUs. You could add the following PyTorch code at the top of the script to check whether the machine actually has multiple GPUs that are usable from PyTorch:
import torch
num_gpus = torch.cuda.device_count()
print("Number of GPUs available:", num_gpus)