llamazing/numnet_plus

Warning: masked_fill_

sabyasachibisoyi opened this issue · 1 comment

While training, I get this warning repeatedly and the run eventually times out.

[W LegacyDefinitions.cpp:28] Warning: masked_fill_ received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (function masked_fill__cuda)
It seems to be a version mismatch issue.
How can I fix this?

I was able to solve this by replacing
"mask = something.byte()"
with
"mask = something.bool()"
in tools/allennlp.py
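
A minimal sketch of the change, for anyone hitting the same warning. The variable names here are illustrative and not the actual code in tools/allennlp.py; the point is that any mask passed to masked_fill_ should be created with .bool() (or a comparison that yields a bool tensor) instead of .byte():

import torch

scores = torch.randn(2, 4)
pad = torch.tensor([[0, 0, 1, 1],
                    [0, 1, 1, 1]])

# Deprecated: a uint8 mask triggers the masked_fill_ warning on newer PyTorch.
# mask = pad.byte()

# Fixed: build a boolean mask instead.
mask = pad.bool()

scores.masked_fill_(mask, float("-inf"))
print(scores)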