awslabs/gluonts

Deserialize on CPU-only machine a model trained in colab (using gpu)

gmcaixeta opened this issue · 1 comments

Description

Hi, I trained a model using PyTorch and GluonTS on a Colab GPU (T4). I saved the model using:

predictor.serialize(model_path)

When I tried to load the model on a CPU-only machine to perform inference, using:

Predictor.deserialize(path)

Error message or code output

 File "/usr/local/lib/python3.9/site-packages/torch/serialization.py", line 258, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU

Environment

  • Operating system: Ubuntu
  • Python version: 3.9
  • GluonTS version: 0.14

How can I pass map_location into the GluonTS deserialize function? For a plain PyTorch model I would do:
my_model = net.load_state_dict(torch.load('classifier.pt', map_location=torch.device('cpu')))

For those hitting the same problem, pass the target device to deserialize:

    m = Predictor.deserialize(Path("./app/train/"), device=torch.device('cpu'))
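For context, the fix above works because it applies the remedy the error message suggests: mapping CUDA-saved storages to the CPU at load time. A minimal sketch of that underlying mechanism in plain PyTorch (the helper name `load_on_cpu` and the checkpoint path are illustrative, not GluonTS API):

```python
from pathlib import Path

import torch


def load_on_cpu(checkpoint: Path) -> dict:
    """Load a (possibly GPU-saved) state dict, mapping all storages to CPU.

    map_location=torch.device('cpu') is the fix the RuntimeError suggests;
    Predictor.deserialize(..., device=torch.device('cpu')) applies the same
    idea for a full GluonTS predictor. (Hypothetical helper, sketch only.)
    """
    return torch.load(checkpoint, map_location=torch.device("cpu"))


# Example: save a state dict, then reload it pinned to the CPU.
# On a CUDA machine the saved tensors could live on the GPU; either way,
# every tensor in the returned dict ends up on the CPU.
if __name__ == "__main__":
    path = Path("classifier.pt")
    torch.save({"weight": torch.zeros(3)}, path)
    state = load_on_cpu(path)
    print(state["weight"].device)  # cpu
```

With this in mind, the `device=` argument to `Predictor.deserialize` is the supported way to get the same behavior without touching `torch.load` yourself.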