AutoViML/deep_autoviml

How do you load model and cat_vocab_dict after training?

coderdoufu opened this issue · 2 comments

Thanks for your contributions! I have trained my model and I would like to know if there is any way to load the trained model from my saved_path.

Whenever I try to load the model from my saved_model path, I always receive this message:

ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements get_config and from_config when saving. In addition, please use the custom_objects arg when calling load_model().
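
For reference, this is roughly how I am trying to load it (saved_path is a placeholder for my saved model directory):

import tensorflow as tf

# saved_path is a placeholder for the directory containing saved_model.pb
saved_path = "path/to/saved_model"
model = tf.keras.models.load_model(saved_path)   # this is the call that raises the ValueError above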

Very simple.
Your model is saved under the project_name folder in .pb (protobuf) format, which is the new TensorFlow SavedModel format.
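
To confirm the model was saved, you can list the contents of that folder and look for saved_model.pb. The folder name below is a placeholder, and the exact subfolder layout depends on your keras_model_type, so just walk the whole tree:

import os

# placeholder -- use the same project_name folder you passed when training
project_name = "my_project"
for root, dirs, files in os.walk(project_name):
    for name in files:
        # expect to see saved_model.pb plus files under variables/ and assets/
        print(os.path.join(root, name))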

You can load the model back by giving the same project_name folder. deep_autoviml will automatically find the saved model and load the associated artifacts needed for predictions. The keras_model_type below must be the same type of model that you built earlier; that way, you can keep multiple model types under different names. The model and cat_vocab_dict arguments are not needed, so you can pass them as empty strings.

Here is the call:
y_preds = deep.predict(model="", project_name=project_name, test_dataset=test, keras_model_type=keras_model_type, cat_vocab_dict="")
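
For example, after restarting Python, a minimal sketch would look like the following. The CSV path, project_name and keras_model_type values are placeholders (use the same project_name and keras_model_type you trained with), and the import alias follows the pattern in the deep_autoviml README:

import pandas as pd
from deep_autoviml import deep_autoviml as deep

project_name = "my_project"        # same project_name folder you used for training
keras_model_type = "fast"          # same model type you trained earlier (placeholder)
test = pd.read_csv("test.csv")     # placeholder path to your test data

# model and cat_vocab_dict are left as empty strings: deep_autoviml reloads both
# from the artifacts it saved under the project_name folder
y_preds = deep.predict(model="", project_name=project_name, test_dataset=test,
                       keras_model_type=keras_model_type, cat_vocab_dict="")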

If you are still having trouble with it, please send an email to the email address in this GitHub repo with your details and we will fix it. Before you send the email, please make sure you run:
pip install deep_autoviml --upgrade
and test it again with the latest version.