DeepGraphLearning/KnowledgeGraphEmbedding

RuntimeError: CUDA out of memory.

DonnieZhang586 opened this issue · 1 comment

Dear bro, I'm very lucky to be able to read such a good paper with open-source code. When I run the program, though, I hit an error: on the same server, the FB15K dataset works fine, but when I switch to wn18 I get RuntimeError: CUDA out of memory.

My command:
CUDA_VISIBLE_DEVICES=1 python -u codes/run.py --do_train --cuda --do_valid --do_test --data_path data/wn18 --model RotatE -n 256 -b 256 -d 1000 -g 24.0 -a 1.0 -adv -lr 0.0001 --max_steps 80000 -save models/RotatE_wn18_0 --test_batch_size 16 -de
I have also tried -b values of 64, 128, 256, and 512, but I still get the same error. Any help would be appreciated.

Hi Donnie,

wn18 has more entities than fb15k, so it takes more GPU memory. To reproduce the results on wn18, the recommended command is in best_config.sh:
bash run.sh train RotatE wn18 0 0 512 1024 500 12.0 0.5 0.0001 80000 8 -de
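
For a rough sense of the difference, here is a back-of-envelope sketch (not from the original reply) of the size of the embedding tables alone. The entity and relation counts are approximate, float32 parameters trained with Adam are assumed (one gradient copy plus two optimizer-state copies per parameter), and -de is taken to double the entity embedding width for RotatE's real and imaginary parts:

# Rough lower bound on GPU memory used by the embedding tables.
# Dataset sizes are approximate; temporary tensors created while scoring
# (which grow with -b, -n and -d) come on top of this.
def embedding_memory_gb(nentity, nrelation, hidden_dim, double_entity=True):
    entity_dim = hidden_dim * 2 if double_entity else hidden_dim  # -de: real + imaginary parts
    n_params = nentity * entity_dim + nrelation * hidden_dim      # number of float32 values
    return n_params * 4 * 4 / 1024 ** 3                           # params + grads + 2 Adam states

for name, (ne, nr) in {"FB15k": (14951, 1345), "wn18": (40943, 18)}.items():
    print(name,
          "-d 1000: %.2f GB" % embedding_memory_gb(ne, nr, 1000),
          "| -d 500: %.2f GB" % embedding_memory_gb(ne, nr, 500))

With -d 1000 and -de, wn18's tables alone already take noticeably more memory than FB15k's, which is presumably part of why the recommended config drops -d to 500.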

The run.sh command above is equivalent to:
CUDA_VISIBLE_DEVICES=$GPU_DEVICE python -u codes/run.py --do_train \
    --cuda \
    --do_valid \
    --do_test \
    --data_path data/wn18 \
    --model RotatE \
    -n 1024 -b 512 -d 500 \
    -g 12.0 -a 0.5 -adv \
    -lr 0.0001 --max_steps 80000 \
    -save $SAVE --test_batch_size 8 \
    -de
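
If a configuration still runs out of memory on your card, the flags to lower first are -b, -n, -d, and --test_batch_size. As a side note (a minimal sketch using standard PyTorch counters, not something from codes/run.py), you can check how close a given setting gets to the limit by printing the peak allocation after a few training steps:

import torch

torch.cuda.reset_peak_memory_stats()   # reset counters before the steps you want to measure
# ... run a few training steps here ...
print("peak allocated: %.2f GB" % (torch.cuda.max_memory_allocated() / 1024 ** 3))
print("peak reserved:  %.2f GB" % (torch.cuda.max_memory_reserved() / 1024 ** 3))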