Textual Entailment on entire retrieved document using pretrained model with decomposable attention
j6mes opened this issue · 2 comments
j6mes commented
This can be run with:
PYTHONPATH=src:lib/allennlp/:lib/DrQA/ python src/scripts/rte/da/train.py data/fever/drqa.db config/fever.json logs
You might have to adjust the batch size depending on your GPU memory. (I had to turn it down from 64 to 4 to fit on Maki's 1080 Ti.)
Currently training; I should have an indication of the dev accuracy soon.
@glampouras - thanks for letting me borrow your PC. Without the GPU it takes 1.5 hours per epoch; with the GPU it takes 6 minutes.
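For reference, the model being trained here follows the attend-compare-aggregate structure of the decomposable attention model (Parikh et al., 2016). The sketch below is a minimal NumPy illustration of that structure only; the dimensions, random weights, and two-layer ReLU networks are placeholders, not the actual FEVER/AllenNLP configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mlp(x, W1, W2):
    # Stand-in for the paper's feed-forward networks F, G, H (ReLU, no biases here).
    return np.maximum(x @ W1, 0) @ W2

d, h, n_classes = 8, 16, 3          # embedding dim, hidden dim, 3 entailment labels
la, lb = 5, 7                       # premise / hypothesis lengths (arbitrary)
a = rng.normal(size=(la, d))        # premise word embeddings (placeholder)
b = rng.normal(size=(lb, d))        # hypothesis word embeddings (placeholder)

# Random placeholder weights for F (attend), G (compare), H (aggregate/classify).
F = (rng.normal(size=(d, h)), rng.normal(size=(h, h)))
G = (rng.normal(size=(2 * d, h)), rng.normal(size=(h, h)))
H = (rng.normal(size=(2 * h, h)), rng.normal(size=(h, n_classes)))

# 1. Attend: unnormalised alignment scores e_ij = F(a_i) . F(b_j),
#    then soft-align each sentence against the other.
e = mlp(a, *F) @ mlp(b, *F).T       # shape (la, lb)
beta = softmax(e, axis=1) @ b       # subphrase of b aligned to each a_i
alpha = softmax(e, axis=0).T @ a    # subphrase of a aligned to each b_j

# 2. Compare: each word concatenated with its aligned subphrase, through G.
v1 = mlp(np.concatenate([a, beta], axis=1), *G)
v2 = mlp(np.concatenate([b, alpha], axis=1), *G)

# 3. Aggregate: sum over words, then classify with H.
logits = mlp(np.concatenate([v1.sum(0), v2.sum(0)]), *H)
probs = softmax(logits)             # class probabilities, sum to 1
```

Because attention is computed over word pairs rather than a recurrent encoding, the model is cheap per example, which is what makes the per-epoch times above feasible even on a single consumer GPU.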
andreasvlachos commented
Sounds promising! Glad to hear the money was well spent on that GPU! Will give it a spin myself too when I am back!
On Wed, 6 Dec 2017 at 10:32, James Thorne wrote:
@andreasvlachos
This can be run with:
PYTHONPATH=src:lib/allennlp/:lib/DrQA/ python src/scripts/rte/da/train.py data/fever/drqa.db config/fever.json logs
You might have to adjust the batch size depending on your GPU. (I had to turn it down from 64 to 4 to fit on Maki's 1080 Ti.)
At epoch 3, we're getting dev accuracy of >.79
@glampouras - thanks for letting me borrow your PC. Without the GPU it takes 1.5 hours per epoch; with the GPU it takes 6 minutes.