jakeret/unet

Unable to restore custom object of type _tf_keras_metric currently while loading previously saved model without custom layers

lsl1229840757 opened this issue · 5 comments

I ran scripts/oxford_iiit_pet.py and obtained a saved model at model_path.

Now I would like to load this model with:
model = tf.keras.models.load_model(model_path)

but I get:

ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements get_config and from_config when saving. In addition, please use the custom_objects arg when calling load_model().

I searched for this on Stack Overflow and found that loading works once the custom metrics are passed explicitly:
model = tf.keras.models.load_model(model_path, custom_objects={"mean_iou": mean_iou, "dice_coefficient": dice_coefficient})

So I think that, to make deserialization more convenient, these two metrics should subclass tf.keras.metrics.Metric.
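The serialization contract Keras relies on can be illustrated without TensorFlow. The sketch below is a plain-Python stand-in, not the real tf.keras implementation: the class names and the get_config/from_config method pair mirror the Keras convention, and the round trip at the bottom is what load_model does internally when it rebuilds a saved metric (looking the class up either in Keras' own registry or in the custom_objects dict you pass in).

```python
# Minimal sketch (plain Python, no TensorFlow) of the get_config /
# from_config contract Keras uses to save and restore custom objects.
# MeanIoU here is a hypothetical stand-in for a real metric class.

class SerializableMetric:
    def __init__(self, name="metric"):
        self.name = name

    def get_config(self):
        # Return everything needed to rebuild the object from plain data.
        return {"name": self.name}

    @classmethod
    def from_config(cls, config):
        # Rebuild the object from that plain data.
        return cls(**config)


class MeanIoU(SerializableMetric):
    def __init__(self, num_classes=2, name="mean_iou"):
        super().__init__(name=name)
        self.num_classes = num_classes

    def get_config(self):
        # Extend the parent config with this class's own constructor args.
        config = super().get_config()
        config["num_classes"] = self.num_classes
        return config
    # from_config is inherited: cls(**config) recreates the metric.


# Round trip: serialize to a dict, then rebuild from it.
original = MeanIoU(num_classes=3)
restored = MeanIoU.from_config(original.get_config())
```

A metric implemented as a bare function has no such config, which is why a bare mean_iou cannot be restored without help from custom_objects.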

Hi @lsl1229840757, thanks for letting me know.
I'll have a look

When I tried the same fix, I got NameError: name 'mean_iou' is not defined, so I imported metrics from unet and used:
model = tf.keras.models.load_model(model_path, custom_objects={"mean_iou": metrics.mean_iou, "dice_coefficient": metrics.dice_coefficient})
Unfortunately, that raises a new error: ValueError: Unknown metric function: mean_iou
I'm not running the existing examples, but since I'm using the unet virtual environment I'm confident the setups are the same.

Why was this issue closed? I have a similar problem with the HammingLoss metric from TensorFlow Addons. What was the solution that worked?

Hi @NikosSpanos, I've merged a PR that resolves this issue (here)

Essentially, it requires passing a dict with the custom objects when loading the model

@jakeret thanks for the response. Indeed, custom objects could have been a solution, but it didn't work in my case since I use Hamming Loss from TensorFlow Addons (here). The trick for me was to set the compile argument to False and then re-compile the model after loading. Maybe this approach will help others who use metrics from TensorFlow Addons.
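The load-then-recompile workaround can be sketched as follows, assuming TensorFlow 2.x. To keep the example self-contained, a tiny throwaway model stands in for the saved unet, an HDF5 file stands in for model_path, and a built-in metric stands in for tfa.metrics.HammingLoss:

```python
# Sketch of the compile=False workaround, assuming TensorFlow 2.x.
import os
import tempfile

import tensorflow as tf

# Build and save a minimal model so the example runs on its own
# (stand-in for the trained unet saved at model_path).
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model_path = os.path.join(tempfile.mkdtemp(), "model.h5")
model.save(model_path)

# The workaround: skip compilation on load, so Keras never tries to
# deserialize the saved metrics, then compile again with fresh objects.
restored = tf.keras.models.load_model(model_path, compile=False)
restored.compile(
    optimizer="adam",
    loss="mse",
    # Re-attach whatever metrics you need here, e.g.
    # tfa.metrics.HammingLoss(mode="multilabel") from TensorFlow Addons.
    metrics=["mae"],
)
```

Since compile=False skips metric deserialization entirely, this works even for metrics Keras cannot look up by name, at the cost of having to restate the optimizer, loss, and metrics yourself.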