RuntimeError: expected type torch.cuda.FloatTensor but got torch.cuda.HalfTensor
manjunaths opened this issue · 3 comments
manjunaths commented
Hello,
When I run the MNIST example with the command line below, I get the following error. Is fp16 not supported yet?
# python main.py --fp16 --data=cifar10 --model=wrn-28-2
<snip>
Traceback (most recent call last):
File "main.py", line 235, in <module>
main()
File "main.py", line 211, in main
mask.add_module(model, density=args.density)
File "/root/workdir/SparseLearning/sparse_learning/sparselearning/core.py", line 206, in add_module
self.init(mode=sparse_init, density=density)
File "/root/workdir/SparseLearning/sparse_learning/sparselearning/core.py", line 115, in init
self.apply_mask()
File "/root/workdir/SparseLearning/sparse_learning/sparselearning/core.py", line 240, in apply_mask
tensor.data = tensor.data*self.masks[name]
RuntimeError: expected type torch.cuda.FloatTensor but got torch.cuda.HalfTensor
Regards.
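For context, the traceback points at the mask multiplication in apply_mask: with --fp16 the weights are torch.cuda.HalfTensor, but the masks are created in full precision, and older PyTorch versions refuse to multiply mismatched types. A minimal sketch of a workaround (not the library's actual fix; tensor names are illustrative) is to cast the mask to the weight's dtype before applying it:

```python
import torch

# Weight tensor in half precision, as produced by the --fp16 path
weight = torch.randn(4, 4).half()

# Binary mask created in full precision, as in core.py's init()
mask = (torch.rand(4, 4) > 0.5).float()

# Casting the mask to the weight's dtype avoids the type mismatch
weight.data = weight.data * mask.to(weight.dtype)

print(weight.dtype)  # torch.float16
```

The same one-line cast would apply at core.py line 240 (tensor.data * self.masks[name].to(tensor.dtype)), at the cost of an extra cast per masked tensor each time the mask is applied.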
TimDettmers commented
I implemented fp16 support before and it worked for CIFAR-10 and MNIST, but not for the ImageNet code. I removed this functionality for now (and forgot to remove this artifact). I plan to add fp16 support in an upcoming release.
TimDettmers commented
I verified that FP16 yields the same performance across models for MNIST and CIFAR-10. The library now has fully functional and correct FP16 support.