Compute resource requirements
Zi-Jian-Gao opened this issue · 0 comments
Zi-Jian-Gao commented
Thank you for the great work. I tried to run the project with the command "CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --nproc_per_node=8 run_me.py --config ./configs/continual/7task_vl_checklist.yaml --eval_every 1 --agent_type lora --agent_name AdvTextMultiLoRa --mu 16 --external_lr 0.00125 --loss_alpha 2.0 --num_adv_iters 10 --adv_step_sz 0.01 --freeze_text_emb --output_dir _outputs/cvpr/ours" on eight NVIDIA GeForce RTX 3090 GPUs and get the error torch.cuda.OutOfMemoryError: CUDA out of memory.
Could you please tell me the recommended compute resources for running this configuration?