YoungXiyuan/DCA

Unable to run on google colab

Closed this issue · 2 comments

I am trying to run the code on Google Colab. CUDA exits with the error: CUDA out of memory. Could you please tell me which parameters I could change to work around this error?

Result:

load conll at ../data/generated/test_train_data
load csv
370United News of India
process coref
load conll
reorder mentions within the dataset
create model
tcmalloc: large alloc 1181786112 bytes == 0xb04c000 @ 0x7efca71911e7 0x7efca15535e1 0x7efca15bc90d 0x7efca15bd522 0x7efca1654bce 0x50a7f5 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50c1f4 0x509918 0x50a64d 0x50c1f4 0x507f24 0x50b053 0x634dd2 0x634e87 0x63863f 0x6391e1 0x4b0dc0 0x7efca6d8eb97 0x5b26fa
--- create EDRanker model ---
prerank model
--- create NTEE model ---
--- create AbstractWordEntity model ---
main model
create new model
--- create MulRelRanker model ---
--- create LocalCtxAttRanker model ---
--- create AbstractWordEntity model ---
^C
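For scale, the tcmalloc line in the log reports a single host-side allocation of 1,181,786,112 bytes. A quick conversion (plain arithmetic, nothing specific to DCA) shows that this one allocation is already just over 1 GiB, which helps explain why a memory-constrained Colab runtime can struggle:

```python
# Size of the large allocation reported by tcmalloc in the log above.
alloc_bytes = 1181786112

# Convert bytes to GiB (1 GiB = 2**30 bytes).
alloc_gib = alloc_bytes / 2**30
print(f"{alloc_gib:.2f} GiB")  # roughly 1.10 GiB
```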

Thank you for your interest in our work.

I am sorry, but I am not familiar with Google Colab.

We trained and evaluated the DCA framework on a GeForce GTX 1080 card with 8 GB of memory, which was enough for the whole process.

As for parameters that might influence memory usage: as I remember, the memory usage of the DCA framework stays stable no matter how the parameters are changed.

Maybe you could give it a try on a local workstation. Feel free to contact me if you have any questions. (:

I changed nothing, and all of a sudden I am able to run it now. Thank you so much for the response. :-) Closing the issue.