facebookresearch/SentAugment

High RAM usage during sentence encoding

LukasDBaker opened this issue

Hello,
I've been running the example from the README on a corpus of 100M sentences, using the following command:

python src/sase.py --input $input --model data/sase.pth --spm_model data/sase.spm --batch_size 64 --cuda "True" --output $output

Within sase.py, the program reaches the loop at line 70 but never completes it: it processes about 12 million sentences before exhausting all available RAM (24 GB), at which point the program quits. I've tried different batch sizes, but since that portion of the code doesn't reference the batch size, it clearly isn't the cause.
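
For context, the pattern I'd expect to keep memory bounded is something like the rough sketch below: read the input in fixed-size chunks, encode each chunk, and write its embeddings to disk right away instead of holding every sentence (and embedding) in memory at once. This is my own sketch, not the actual sase.py code, and encode_fn is just a placeholder for whatever sase.py uses to embed a list of sentences.

import torch

def encode_streaming(input_path, output_prefix, encode_fn, chunk_size=100_000):
    chunk, idx = [], 0
    with open(input_path, encoding="utf-8") as f:
        for line in f:
            chunk.append(line.rstrip("\n"))
            if len(chunk) == chunk_size:
                # encode this chunk and write it out before reading more input
                torch.save(encode_fn(chunk).cpu(), f"{output_prefix}.{idx}.pt")
                chunk, idx = [], idx + 1
    if chunk:  # leftover sentences that didn't fill a whole chunk
        torch.save(encode_fn(chunk).cpu(), f"{output_prefix}.{idx}.pt")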

Is there any workaround for this (besides working with fewer sentences)?
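
The only stopgap I've come up with so far is to shard the input file myself and run sase.py once per shard, so that each run only ever holds one shard's sentences in memory. A sketch of that is below; the shard file names are arbitrary, and it assumes the per-shard output files can be merged afterwards (e.g. concatenating the saved tensors), which I haven't verified against the actual output format.

import itertools
import subprocess

def encode_in_shards(input_path, shard_size=1_000_000):
    with open(input_path, encoding="utf-8") as f:
        for idx in itertools.count():
            # take the next shard_size lines; stop when the file is exhausted
            shard = list(itertools.islice(f, shard_size))
            if not shard:
                break
            shard_path = f"shard_{idx:04d}.txt"
            with open(shard_path, "w", encoding="utf-8") as out:
                out.writelines(shard)
            # encode this shard with the same command as above
            subprocess.run(
                ["python", "src/sase.py", "--input", shard_path,
                 "--model", "data/sase.pth", "--spm_model", "data/sase.spm",
                 "--batch_size", "64", "--cuda", "True",
                 "--output", f"shard_{idx:04d}.pt"],
                check=True,
            )

If there is a supported way to do this within sase.py itself, that would obviously be preferable.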