markdtw/soft-attention-image-captioning

MemoryError when trying to save the features in extract_all


As the title suggests, I am getting a MemoryError when trying to save the features in extract_all.
My machine has an i7-6402P, a GTX 1060 (6 GB), and 8 GB of RAM. The next question might be silly, as I am just a beginner.

I was running utils.py with TensorFlow on the GPU. I am assuming that system RAM is used when saving the features; if so, am I correct that I would get the same error when attempting to save the file if I ran utils.py with TensorFlow on the CPU instead?
I should also mention that on the GPU I receive some notifications/errors that do not show up on the CPU.
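(A minimal sketch of how to test that, using TensorFlow's general `CUDA_VISIBLE_DEVICES` mechanism rather than anything specific to this repo: hiding the GPUs before importing TensorFlow forces the whole graph onto the CPU.)

```python
import os

# Hide all GPUs *before* importing TensorFlow so the graph runs on the CPU.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf  # now sees no GPU devices

# ...run the feature extraction from utils.py as usual. The extracted
# features accumulate in host RAM either way before np.save writes them
# to disk, so the same MemoryError is expected on the CPU path.
```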

[screenshot of the GPU notifications/errors: notiferrors]

How could I fix this issue? Would it be possible for you to share the .npy file?

Thank you for your time; I look forward to your response.

Hi,

Extracting features with a batch size of 64 requires at least 12 GB of GPU memory (the 1060 only has 6 GB) and 16 GB+ of CPU RAM (you have 8 GB), so a normal PC would have trouble extracting the features.

I'll upload my .npy file to the cloud in a few days for you to download.

If you want to extract the features yourself, decrease the batch size in extract_all and save the features to several smaller .npy files, as sketched below.
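(A minimal sketch of that chunked approach, assuming a generic `run_feature_extractor` callable standing in for this repo's model and preprocessing; the function names, file names, and feature shape here are illustrative, not the repo's actual API.)

```python
import numpy as np

def save_features_in_chunks(image_paths, run_feature_extractor,
                            batch_size=16, chunk_size=1024,
                            out_prefix='features'):
    """Extract features in small batches and flush every ~chunk_size
    images to its own .npy file, so the full array never sits in RAM."""
    buf, n_buffered, chunk_idx = [], 0, 0
    for start in range(0, len(image_paths), batch_size):
        batch = image_paths[start:start + batch_size]
        feats = run_feature_extractor(batch)  # e.g. shape (len(batch), 196, 512)
        buf.append(feats)
        n_buffered += len(feats)
        if n_buffered >= chunk_size:
            np.save('%s_%d.npy' % (out_prefix, chunk_idx),
                    np.concatenate(buf, axis=0))
            buf, n_buffered = [], 0
            chunk_idx += 1
    if buf:  # flush whatever is left over
        np.save('%s_%d.npy' % (out_prefix, chunk_idx),
                np.concatenate(buf, axis=0))
```

Each chunk can then be loaded back one file at a time with np.load during training, so no single array ever has to fit in 8 GB of RAM.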

Thanks,

Ok, thank you, I will wait for the .npy file. Have a nice day :)

No problem, the link is now updated in README.md, good luck!