tensorflow/tensorrt

How to write my input_fn when I convert my TF model to TRT?

johnsGuo opened this issue · 5 comments

I use nvcr.io/nvidia/tensorflow:19.12-tf2-py3 in Docker.
My model config is:

max_batch_size: 512
input [{
	name: "dense_input"
	data_type: TYPE_FP32
	format: FORMAT_NONE
	dims: [-1]
	is_shape_tensor: false
	allow_ragged_batch: false
}, {
	name: "sparse_ids_input"
	data_type: TYPE_INT32
	format: FORMAT_NONE
	dims: [-1]
	is_shape_tensor: false
	allow_ragged_batch: false
}, {
	name: "seq_input"
	data_type: TYPE_INT32
	format: FORMAT_NONE
	dims: [-1, -1]
	is_shape_tensor: false
	allow_ragged_batch: false
}, {
	name: "sparse_wgt_input"
	data_type: TYPE_FP32
	format: FORMAT_NONE
	dims: [-1]
	is_shape_tensor: false
	allow_ragged_batch: false
}]
output: [{
	name: "tf_op_layer_Sigmoid"
	data_type: TYPE_FP32
	dims: [1]
	reshape: {
		shape: []
	}
	label_filename: ""
	is_shape_tensor: false
}, {
	name: "tf_op_layer_pctr"
	data_type: TYPE_FP32
	dims: [1]
	reshape: {
		shape: []
	}
	label_filename: ""
	is_shape_tensor: false
}, {
	name: "tf_op_layer_dapan_action"
	data_type: TYPE_FP32
	dims: [1]
	reshape: {
		shape: []
	}
	label_filename: ""
	is_shape_tensor: false
}, {
	name: "tf_op_layer_pcvr_ctr"
	data_type: TYPE_FP32
	dims: [2]
	label_filename: ""
	is_shape_tensor: false
}, {
	name: "tf_op_layer_pctcvr_1"
	data_type: TYPE_FP32
	dims: [1]
	reshape: {
		shape: []
	}
	label_filename: ""
	is_shape_tensor: false
}, {
	name: "tf_op_layer_delay_time"
	data_type: TYPE_FP32
	dims: [2]
	label_filename: ""
	is_shape_tensor: false
}, {
	name: "tf_op_layer_pcvr"
	data_type: TYPE_FP32
	dims: [1]
	reshape: {
		shape: []
	}
	label_filename: ""
	is_shape_tensor: false
}, {
	name: "tf_op_layer_pctcvr"
	data_type: TYPE_FP32
	dims: [1]
	reshape: {
		shape: []
	}
	label_filename: ""
	is_shape_tensor: false
}, {
	name: "tf_op_layer_Sigmoid_1"
	data_type: TYPE_FP32
	dims: [1]
	reshape: {
		shape: []
	}
	label_filename: ""
	is_shape_tensor: false
}]

I want to know how to write my input_fn when I convert it to TRT. My code is below, but it doesn't work:

# -*- coding: utf-8 -*-
import os

from tensorflow import make_tensor_proto
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import tensorflow as tf
import numpy as np

os.environ['TF_CPP_MIN_LOG_LEVEL'] = "3"

if __name__ == "__main__":
    input_saved_model_dir = "./TF-recommend/1634309909"
    output_saved_model_dir = "./TF-recommend-trt/"

    conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS
    conversion_params = conversion_params._replace(
        max_workspace_size_bytes=(1 << 32))
    conversion_params = conversion_params._replace(precision_mode="FP16")
    conversion_params = conversion_params._replace(
        maximum_cached_engines=100)

    converter = trt.TrtGraphConverterV2(
        input_saved_model_dir=input_saved_model_dir,
        conversion_params=conversion_params)

    converter.convert()


    def my_input_fn():
        dense_input = np.ones([438]).astype('float32')
        sparse_ids_input = np.ones([79]).astype('int32')
        sparse_wgt_input = np.ones([79]).astype('float32')

        req = {
            "dense_input": make_tensor_proto(dense_input, shape=(dense_input.shape)),
            "sparse_ids_input": make_tensor_proto(sparse_ids_input, shape=(sparse_ids_input.shape)),
            "sparse_wgt_input": make_tensor_proto(sparse_wgt_input, shape=(sparse_wgt_input.shape)),
        }
        yield req


    converter.build(input_fn=my_input_fn)
    converter.save(output_saved_model_dir)
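
For context, `TrtGraphConverterV2.build` calls the converted signature with whatever `input_fn` yields, so a dict of `TensorProto`s does not match the signature's tensor arguments; note also that `seq_input` from the config above is missing from this attempt. A minimal sketch of an `input_fn` that yields plain arrays instead, with per-sample sizes taken from the code above (the leading batch dimension of 1 and both inner sizes of `seq_input` are assumptions):

import numpy as np

def my_input_fn():
    # Per-sample sizes follow the question's code; the leading batch
    # dimension of 1 and the `seq_input` sizes are assumptions.
    dense_input = np.ones([1, 438], dtype=np.float32)
    sparse_ids_input = np.ones([1, 79], dtype=np.int32)
    seq_input = np.ones([1, 79, 8], dtype=np.int32)
    sparse_wgt_input = np.ones([1, 79], dtype=np.float32)

    # A tuple/list is passed positionally, so the order must match the
    # converted signature's inputs. Some newer TF releases also accept a
    # dict of plain arrays keyed by input name; check the `build`
    # docstring for your TF version.
    yield (dense_input, sparse_ids_input, seq_input, sparse_wgt_input)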

Any update? I have the same issue with multiple inputs organized as a dict. The error looks like:

tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute __inference_pruned_11313 as input #0(zero-based) was expected to be a int64 tensor but is a string tensor [Op:__inference_pruned_11313]

My inputs are:

inputs = {"10001": np.array(BATCH_SIZE).astype(np.int64), "10002": np.array([1]).astype(np.int64), ...}

"10001" and "10002" are the input names defined in savedmodel. There isn't any string tensor at all, it's very confusing.

@DEKHTIARJonathan Looking forward to your help, greatly appreciated.

@MuYu-zhi what is likely happening is that the "string" tensors are your dictionary keys ;)

data = {
    "input_A": 1,
    "input_B": 2,
    "input_C": 3,
}

for x in data:
    print(f"x = `{x}`")

>>> x = `input_A`
>>> x = `input_B`
>>> x = `input_C`

https://www.online-python.com/SgNJL0pjbv
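
Concretely, older versions of `build` do roughly `func(*map(tf.convert_to_tensor, inp))` with whatever `input_fn` yields, and star-unpacking a dict passes its keys, which is likely where the unexpected string tensors come from (`f` below is a hypothetical stand-in for the converted function):

def f(a, b, c):
    print(type(a), type(b), type(c))

f(*data)  # star-unpacking a dict passes its KEYS, not its values

>>> <class 'str'> <class 'str'> <class 'str'>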

We use the following for our benchmarks:

def engine_build_input_fn(num_batches, model_phase):
    dataset, _ = get_dataset()  # you need to implement a dataloader in `get_dataset`

    for idx, data_batch in enumerate(dataset):
        x, y = data_batch

        # `build()` accepts a list/tuple (or, on newer TF, a dict) per step,
        # so wrap a single tensor in a list.
        if not isinstance(x, (tuple, list, dict)):
            x = [x]

        yield x

        # Stop after `num_batches` engine-build/calibration iterations.
        if (idx + 1) >= num_batches:
            break
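
`build` expects a zero-argument callable, so a parameterized generator like this one would be bound first, for example with `functools.partial` (the argument values below are placeholders):

from functools import partial

converter.build(input_fn=partial(engine_build_input_fn,
                                 num_batches=10,
                                 model_phase="inference"))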

@GuoGuiRong please use a more recent container than nvcr.io/nvidia/tensorflow:19.12-tf2-py3; it's really outdated.

Closing because the issue is very old.

@DEKHTIARJonathan Hi, thanks for your reply. But I still have no idea how to write the input_fn with inputs in key-value format.
I have tried:

req = {"dense_input": make_tensor_proto(dense_input, shape=(dense_input.shape)), 
       "sparse_ids_input": make_tensor_proto(sparse_ids_input, shape=(sparse_ids_input.shape)),  
       "sparse_wgt_input": make_tensor_proto(sparse_wgt_input, shape=(sparse_wgt_input.shape)),  }

which raises the same error as my first try.

I also found the following demo in the TF-TRT documentation:

def my_input_fn():
    inp1 = np.random.normal(size=(8, 16, 16, 3)).astype(np.float32)
    inp2 = np.random.normal(size=(8, 16, 16, 3)).astype(np.float32)
    yield (inp1, inp2)

but I have multiple inputs, and in my tests the input order after TRT conversion is not deterministic; one way to inspect the expected order is sketched below.

Any suggestions?
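
One way to pin down which inputs the converted signature expects, and under what names, is to inspect it after loading; a sketch, assuming TF 2.x and a `serving_default` signature (the path is the one from the original question):

import tensorflow as tf

saved_model = tf.saved_model.load("./TF-recommend/1634309909")
sig = saved_model.signatures["serving_default"]

# (args, kwargs) of TensorSpecs; signature inputs are keyword arguments,
# so they can be passed by name instead of relying on positional order.
print(sig.structured_input_signature)

Since signature inputs are named, calling the signature by keyword (and, on TF versions whose `build` accepts dicts, yielding a dict keyed by those names from `input_fn`) sidesteps the ordering problem entirely.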