dwofk/fast-depth

Evaluation error running the package "torch.device"

EstebanTlelo opened this issue · 1 comment

Hi, when I run model evaluation with CUDA 9.2 and Python 3, I get the following error:

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

I tried to solve this by changing the following line in the main code:

checkpoint = torch.load(args.evaluate)

to:

checkpoint = torch.load('modelpath', map_location=torch.device('cpu'))

After that change, the error becomes:

best_result = checkpoint['best_result']
KeyError: 'best_result'

Can anybody help me solve this?

dwofk commented

@EstebanTlelo If this was not resolved, it may be that the object loaded with torch.load(...) was itself the model and not a dictionary containing the model. If indexing checkpoint with the best_result key was leading to errors, you could remove that line and instead set best_result = checkpoint directly.
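For example, keeping the CPU map_location change from above and assuming args.evaluate points at the checkpoint file as in the original line, a minimal sketch of that workaround could look like this (a sketch, not the repository's exact evaluation code):

```python
import torch

# Map GPU-saved tensors to the CPU so the checkpoint loads on a CPU-only machine
checkpoint = torch.load(args.evaluate, map_location=torch.device('cpu'))

if isinstance(checkpoint, dict):
    # Checkpoint was saved as a dictionary, as the original code assumes
    best_result = checkpoint['best_result']
else:
    # The loaded object is the model itself: use it directly instead of
    # indexing it with the 'best_result' key
    best_result = checkpoint
```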