Not all checkpoint values were used.
Closed this issue · 2 comments
tensorflow=2.1.0
TensorRT=6.1.0
I saved tf.keras.applications.resnet50
as a SavedModel and converted it with TF-TRT, calling converter.build()
as below.
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def input_fn():
    for _ in range(16):
        input1 = np.random.normal(size=(64, 224, 224, 3)).astype(np.float32)
        yield [input1]

print('Converting to TF-TRT FP32...')
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP32)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='resnet50_saved_model',
    conversion_params=conversion_params)
converter.convert()
converter.build(input_fn=input_fn)
converter.save(output_saved_model_dir='resnet50_saved_model_TFTRT_FP32')
print('Done Converting to TF-TRT FP32')
I see a performance increase, but the .pb size also increased. I suspect converter.save()
after converter.build()
stores unnecessary checkpoint data inside, as the WARNING below suggests.
Step 0: 3.9ms
Step 50: 3.7ms
Step 100: 3.5ms
Step 150: 3.5ms
Step 200: 3.4ms
Step 250: 3.4ms
Throughput: 18351 images/s
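As a sanity check on these numbers, throughput is just batch size divided by average step time. The batch size of 64 comes from input_fn above; the ~3.49 ms average step time used here is not from the log, it is backed out of the reported throughput:

```python
# Throughput (images/s) = batch_size / average step time (s).
# Batch size 64 matches input_fn above; the implied average step
# time is derived from the reported 18351 images/s.
batch_size = 64
implied_step_time_s = batch_size / 18351
print(f"implied step time: {implied_step_time_s * 1e3:.2f} ms")
print(f"throughput at 3.4 ms/step: {batch_size / 0.0034:.0f} images/s")
```

This is consistent with the steady-state step times of 3.4-3.5 ms printed above.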
WARNING:tensorflow:Unresolved object in checkpoint: (root).trt_engine_resources.TRTEngineOp_0._serialized_trt_resource_filename
WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
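The warning's suggested fix, expect_partial(), applies when you control the restore call. A minimal, self-contained illustration of the same loading mechanic (this uses plain tf.Variable checkpoints, not TF-TRT, and /tmp/ckpt_demo is an arbitrary path):

```python
import tensorflow as tf

# Save a checkpoint containing two variables...
v1 = tf.Variable(1.0)
v2 = tf.Variable(2.0)
path = tf.train.Checkpoint(v1=v1, v2=v2).save('/tmp/ckpt_demo')

# ...then restore into an object that only tracks one of them.
# Without expect_partial(), discarding the status object logs the same
# "not all checkpointed values were used" warning seen above.
restored = tf.Variable(0.0)
status = tf.train.Checkpoint(v1=restored).restore(path)
status.expect_partial()  # silence the unresolved-object warning
print(restored.numpy())
```

In the TF-TRT case the restore happens inside converter.save()/loading, so the warning can't be silenced this way by the user; it just flags that the serialized engine resource isn't a regular model variable.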
I'm also seeing this warning with TF 2.1.0, but with TRT 7, a frozen graph, and only for INT8 with ResNet50. I'm getting decent accuracy as well, so I think this warning points to a different issue.
I see the same behavior with a custom model, TensorRT 7.1.3, and TF 2.3.0.
If I build the model before saving, I get an identical message when I load it. (WARNING:tensorflow:Unresolved object in checkpoint: (root).trt_engine_resources.TRTEngineOp_0._serialized_trt_resource_filename)
If I build the model at runtime, I don't see the warning. (But I do have to wait for it to build.)
It's not a big deal. Everything seems to work fine, including the TensorRT-optimized model.