Change model saving/loading in `Classifier`
jason-fries opened this issue · 1 comments
jason-fries commented
Currently, `load` and `save` operate directly on pickles. This causes issues when trying to load models across devices (GPU -> CPU). These calls should wrap `torch.load` and `load_state_dict`, configured according to the `use_cuda` flag provided to the model.

See: https://pytorch.org/tutorials/beginner/saving_loading_models.html#saving-loading-model-across-devices
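A minimal sketch of the device-aware pattern the tutorial above describes (the model and file name here are illustrative, not the actual `Classifier` API): save only the `state_dict`, then pass `map_location` to `torch.load` based on the available device, so a checkpoint written on a GPU machine loads cleanly on CPU.

```python
import tempfile

import torch
import torch.nn as nn

# Stand-in for the real Classifier; any nn.Module works the same way.
model = nn.Linear(10, 2)

# Save the state dict rather than pickling the whole module object.
ckpt_path = tempfile.mktemp(suffix=".pt")
torch.save(model.state_dict(), ckpt_path)

# Choose the target device the way a use_cuda flag would.
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

# map_location remaps tensors saved on GPU onto the current device.
state = torch.load(ckpt_path, map_location=device)

restored = nn.Linear(10, 2)
restored.load_state_dict(state)
restored.to(device)
```

Because only tensors are serialized, this also sidesteps pickling the module's Python object graph, and `map_location` handles the GPU->CPU case directly.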
scottfleming commented
Bumping this, as saving/logging models with `pickle` currently fails for any model > 4GB: https://stackoverflow.com/questions/29704139/pickle-in-python3-doesnt-work-for-large-data-saving. This is especially problematic for models with high-dimensional outputs (e.g., label models that emit ~1000 different label types).