Evaluating 300W with the pre-trained model on CPU
john-bao-git opened this issue · 6 comments
Hello,
Thank you so much for your code and paper. I'm working on evaluating the pre-trained model you provided on the 300W dataset. However, I don't have a GPU, so I can't use CUDA, and the evaluation fails with:
AssertionError: Torch not compiled with CUDA enabled
I found this part here in evaluate_detector.py:
target = target.cuda()
mask = mask.cuda()
I've already edited the code in train_detector.py to load the checkpoint on the CPU:
torch.load(opt.resume, map_location=torch.device('cpu'))
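For reference, the edit is roughly the following (just a sketch of what I changed; opt.resume is the checkpoint path already used in the script):

import torch

# map_location forces every tensor in the checkpoint onto the CPU;
# without it, torch.load tries to restore CUDA tensors and raises
# "Torch not compiled with CUDA enabled" on a CPU-only machine.
checkpoint = torch.load(opt.resume, map_location=torch.device('cpu'))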
I've also edited the 300W-EVAL.sh file to disable GPUs by passing:
--gpu_ids -1
I can evaluate on single images, but I'd like to compute the NME myself by evaluating on the whole dataset.
I'm not sure how to proceed from here. If this is possible, please let me know.
Thank you very much for your time.
Sincerely,
John
Which project are you using?
If you are using san_eval.py to evaluate the model, you can use the --cpu argument to evaluate on CPU, following https://github.com/D-X-Y/landmark-detection/tree/master/SAN#evaluation-on-the-single-image. If you want to evaluate on the whole 300W test set, you should use san_main.py instead, and some code needs to be modified.
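The needed changes are mostly of one kind: replace the hard-coded .cuda() calls with device-aware transfers. A minimal sketch, using the target and mask tensors mentioned above (the actual names in san_main.py may differ):

import torch

# Pick the device once, then move tensors with .to(device) so the same
# code runs on both GPU and CPU-only machines.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
target = target.to(device)
mask = mask.to(device)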
I finally got it to work. I didn't edit san_main.py, but changed evaluate_detector.py instead. I commented out:
target = target.cuda()
mask = mask.cuda()
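Commenting them out works on a CPU-only machine; a slightly more portable sketch (same variable names) would guard them instead, so the file still runs with a GPU:

import torch

# Only move tensors to the GPU when CUDA is actually available.
if torch.cuda.is_available():
    target = target.cuda()
    mask = mask.cuda()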
Nice!