NVIDIA-AI-IOT/tf_to_trt_image_classification

Build issues with modern libraries

Anthony-Jacques opened this issue · 2 comments

Is this repository still maintained / a recommended way to do things?

Having installed the latest JetPack (4.5.1) and other contemporary packages, I notice that this code no longer builds as-is.

There are a couple of references to OpenCV's cv::imread() that fail to compile because cv::IMREAD_COLOR is now needed instead of CV_LOAD_IMAGE_COLOR, and the call to IUffParser::registerInput fails because the input dimension order now has to be specified explicitly (I guess nvuffparser::UffInputOrder::kNCHW).
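
For what it's worth, my local fixes look roughly like this; the input name and the 3x224x224 shape are placeholders rather than the repo's actual values:

```cpp
#include <string>
#include <opencv2/opencv.hpp>
#include <NvInfer.h>
#include <NvUffParser.h>

// OpenCV 4 drops the old C-style macro, so CV_LOAD_IMAGE_COLOR becomes cv::IMREAD_COLOR.
cv::Mat loadImage(const std::string &path)
{
    return cv::imread(path, cv::IMREAD_COLOR);
}

// Newer TensorRT releases require the input order as a third argument; kNCHW
// matches the planar CHW layout the sample otherwise assumes. The input name
// and shape here are placeholders, not the repository's actual values.
bool registerNetworkInput(nvuffparser::IUffParser &parser)
{
    return parser.registerInput("input",
                                nvinfer1::DimsCHW(3, 224, 224),
                                nvuffparser::UffInputOrder::kNCHW);
}
```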

I notice various other things are marked as deprecated (the use of DimsCHW, for example), so the code should probably be updated to match the latest NVIDIA interfaces.
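
For illustration (again with a placeholder shape), that particular change is essentially:

```cpp
#include <NvInfer.h>

// Deprecated in recent TensorRT releases:
//   nvinfer1::DimsCHW inputDims(3, 224, 224);
// nvinfer1::Dims3 is the like-for-like replacement for a CHW shape:
nvinfer1::Dims3 inputDims(3, 224, 224);
```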

The registerInput change is already covered by a pending pull request: #40

There isn't an outstanding pull request for the OpenCV changes, though.

As I've yet to make this "work", I'm not going to submit pull requests for my local fixes, especially as I've also hit further issues outside this repository related to TF1 vs TF2 API breakages in "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py" and elsewhere.

I guess the answer to my initial question of "is this still the recommended way to do things?" is "no".

See https://forums.developer.nvidia.com/t/softmax-layer-in-tensorrt7-0-has-wrong-inference-results/112689/2

I got as far as hitting what appears to be the same problem as in that link (my model converted and ran successfully, but returned incorrect results, with a Softmax layer apparently not behaving the way I expected).

I've now switched to using tf2onnx and am able to load the resulting ONNX model and run inference with TensorRT that way (the inference speedup relative to TensorFlow is very significant).
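
In case it helps anyone landing here, the ONNX route I ended up with is roughly the sketch below; the model path, workspace size and logger are my own choices rather than anything from this repo, and the ONNX file itself came from something like `python -m tf2onnx.convert --saved-model <dir> --output model.onnx`:

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdint>
#include <iostream>

// Minimal TensorRT logger; prints warnings and errors only.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char *msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
};

int main()
{
    Logger logger;

    // Build a network definition from the ONNX file produced by tf2onnx.
    auto builder = nvinfer1::createInferBuilder(logger);
    const uint32_t flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);
    auto parser = nvonnxparser::createParser(*network, logger);

    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28); // 256 MiB, an arbitrary choice

    auto engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine)
    {
        std::cerr << "Failed to build TensorRT engine" << std::endl;
        return 1;
    }

    // ... create an execution context, copy inputs/outputs, and run inference
    // as usual (cleanup omitted for brevity) ...
    return 0;
}
```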

Leaving this ticket open, as it seems to me that at some point someone from NVIDIA might want to mark this repository as deprecated/obsolete.