tensorflow/tensorrt

Segmentation fault when converting TF model to TF-TRT in TF2.1

yuanzhedong opened this issue · 10 comments

Env

Code

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert the TF2 SavedModel in model_dir to TF-TRT (FP32) and save the result.
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP32,
    max_workspace_size_bytes=4000000000)
converter = trt.TrtGraphConverterV2(input_saved_model_dir=model_dir,
                                    conversion_params=conversion_params)
converter.convert()
converter.save(output_saved_model_dir=FLAGS.tftrt_model_dir)

Error message


....Converting to TF-TRT FP32...
2020-02-22 06:24:54.194963: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
INFO:tensorflow:Linked TensorRT version: (6, 0, 1)
I0222 06:24:54.195068 140612687255360 trt_convert.py:200] Linked TensorRT version: (6, 0, 1)
INFO:tensorflow:Loaded TensorRT version: (6, 0, 1)
I0222 06:24:54.195302 140612687255360 trt_convert.py:201] Loaded TensorRT version: (6, 0, 1)
2020-02-22 06:24:57.597559: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-22 06:24:57.597931: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2020-02-22 06:24:57.598014: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-02-22 06:24:57.598443: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-02-22 06:24:57.598817: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: GeForce RTX 2080 with Max-Q Design computeCapability: 7.5
coreClock: 1.095GHz coreCount: 46 deviceMemorySize: 7.79GiB deviceMemoryBandwidth: 357.69GiB/s
2020-02-22 06:24:57.598843: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-02-22 06:24:57.598854: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-02-22 06:24:57.598897: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-02-22 06:24:57.598910: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-02-22 06:24:57.598926: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-02-22 06:24:57.598939: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-02-22 06:24:57.598950: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
....
2020-02-22 06:24:59.442725: I tensorflow/compiler/tf2tensorrt/segment/segment.cc:460] There are 416 ops of 8 different types in the graph that are not converted to TensorRT: BatchToSpaceND, SpaceToBatchND, Placeholder, NoOp, Switch, Identity, Merge, StridedSlice, (For more information see https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#supported-ops).
2020-02-22 06:24:59.590980: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:636] Number of TensorRT candidate segments: 77
2020-02-22 06:24:59.623082: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:737] Replaced segment 0 consisting of 35 nodes by StatefulPartitionedCall/TRTEngineOp_0.
2020-02-22 06:24:59.623329: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:737] Replaced segment 1 consisting of 57 nodes by StatefulPartitionedCall/TRTEngineOp_1.
2020-02-22 06:24:59.623489: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:737] Replaced segment 2 consisting of 9 nodes by StatefulPartitionedCall/single_stage_model0/resnet_block1/conv1d_3/TRTEngineOp_2.
2020-02-22 06:24:59.623567: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:737] Replaced segment 3 consisting of 14 nodes by StatefulPartitionedCall/single_stage_model0/resnet_block1/TRTEngineOp_3.
2020-02-22 06:24:59.623652: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:737] Replaced segment 4 consisting of 9 nodes by StatefulPartitionedCall/single_stage_model0/resnet_block2/conv1d_5/TRTEngineOp_4.

.....



2020-02-22 06:09:26.968512: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:737] Replaced segment 58 consisting of 35 nodes by TRTEngineOp_58.
2020-02-22 06:09:26.968593: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:737] Replaced segment 59 consisting of 9 nodes by StatefulPartitionedCall/single_stage_model3/resnet_block1/conv1d_69/TRTEngineOp_59.
Fatal Python error: Segmentation fault

Current thread 0x00007f0f27e09740 (most recent call first):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/grappler/tf_optimizer.py", line 59 in OptimizeGraph
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 935 in _run_conversion
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/compiler/tensorrt/trt_convert.py", line 992 in convert
  File "/app/models/ms_tcn/tensorflow/inference.py", line 192 in convert_to_tftrt
  File "/app/models/ms_tcn/tensorflow/inference.py", line 198 in run_inference
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250 in _run_main
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299 in run
  File "/app/models/ms_tcn/tensorflow/inference.py", line 332 in <module>
  File "/usr/lib/python3.6/runpy.py", line 85 in _run_code
  File "/usr/lib/python3.6/runpy.py", line 193 in _run_module_as_main
Segmentation fault (core dumped)

More logs captured with gdb:

Thread 1 "inference.py" received signal SIGSEGV, Segmentation fault.
0x00007fff921bb560 in tensorflow::Node::name() const () from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.2
(gdb) where
#0  0x00007fff921bb560 in tensorflow::Node::name() const () from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/../libtensorflow_framework.so.2
#1  0x00007fff9990cf66 in tensorflow::tensorrt::convert::UpdateToEngineNode(std::vector<tensorflow::tensorrt::convert::EngineInfo, std::allocator<tensorflow::tensorrt::convert::EngineInfo> > const&, unsigned long, std::vector<tensorflow::Node*, std::allocator<tensorflow::Node*> > const&, bool, std::string const&, tensorflow::Node**, int*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#2  0x00007fff9990efc5 in tensorflow::tensorrt::convert::CreateTRTNode(tensorflow::tensorrt::convert::ConversionParams const&, std::vector<tensorflow::tensorrt::convert::EngineInfo, std::allocator<tensorflow::tensorrt::convert::EngineInfo> > const&, int, int, tensorflow::Graph*, nvinfer1::IGpuAllocator*, std::vector<tensorflow::Node*, std::allocator<tensorflow::Node*> >*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#3  0x00007fff99914041 in tensorflow::tensorrt::convert::ConvertAfterShapes(tensorflow::tensorrt::convert::ConversionParams const&) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#4  0x00007fff9994e34b in tensorflow::tensorrt::convert::TRTOptimizationPass::Optimize(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#5  0x00007fff9c2c1937 in tensorflow::grappler::MetaOptimizer::RunOptimizer(tensorflow::grappler::GraphOptimizer*, tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem*, tensorflow::GraphDef*, tensorflow::grappler::MetaOptimizer::GraphOptimizationResult*) () from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#6  0x00007fff9c2c2ccd in tensorflow::grappler::MetaOptimizer::OptimizeGraph(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#7  0x00007fff9c2c4714 in tensorflow::grappler::MetaOptimizer::Optimize(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#8  0x00007fff956bec07 in TF_OptimizeGraph(GCluster, tensorflow::ConfigProto const&, tensorflow::MetaGraphDef const&, bool, std::string const&, bool, TF_Status*) ()
   from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#9  0x00007fff956c37d6 in _wrap_TF_OptimizeGraph () from /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#10 0x00000000005097cf in ?? ()
#11 0x000000000050b4a9 in _PyEval_EvalFrameDefault ()
#12 0x0000000000507125 in ?? ()
#13 0x0000000000508fa0 in ?? ()
#14 0x000000000050999d in ?? ()
#15 0x000000000050c36e in _PyEval_EvalFrameDefault ()
#16 0x0000000000508c69 in ?? ()
#17 0x000000000050999d in ?? ()
#18 0x000000000050b4a9 in _PyEval_EvalFrameDefault ()
#19 0x0000000000507125 in ?? ()
#20 0x0000000000508fa0 in ?? ()
#21 0x000000000050999d in ?? ()
#22 0x000000000050b4a9 in _PyEval_EvalFrameDefault ()
#23 0x0000000000508c69 in ?? ()
#24 0x000000000050999d in ?? ()
#25 0x000000000050b4a9 in _PyEval_EvalFrameDefault ()
#26 0x0000000000507125 in ?? ()
#27 0x0000000000508fa0 in ?? ()
#28 0x000000000050999d in ?? ()
#29 0x000000000050b4a9 in _PyEval_EvalFrameDefault ()
#30 0x0000000000508c69 in ?? ()
#31 0x000000000050999d in ?? ()
#32 0x000000000050b4a9 in _PyEval_EvalFrameDefault ()
#33 0x0000000000507125 in ?? ()
#34 0x0000000000508fa0 in ?? ()
#35 0x000000000050999d in ?? ()
#36 0x000000000050b4a9 in _PyEval_EvalFrameDefault ()
#37 0x0000000000507125 in ?? ()
#38 0x0000000000515904 in ?? ()
#39 0x00000000005097cf in ?? ()
#40 0x000000000050b4a9 in _PyEval_EvalFrameDefault ()
#41 0x0000000000507125 in ?? ()
---Type <return> to continue, or q <return> to quit---

Tried the same code with TF2.0 + CUDA10 + TRT5.1.5 without Docker and still got the segmentation fault:

(gdb) where
#0  0x00007fff963b7820 in tensorflow::Node::name() const () from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/../libtensorflow_framework.so.2
#1  0x00007fff9c1e4186 in tensorflow::tensorrt::convert::UpdateToEngineNode(std::vector<tensorflow::tensorrt::convert::EngineInfo, std::allocator<tensorflow::tensorrt::convert::EngineInfo> > const&, unsigned long, std::vector<tensorflow::Node*, std::allocator<tensorflow::Node*> > const&, bool, std::string const&, tensorflow::Node**, int*) ()
   from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#2  0x00007fff9c1e6270 in tensorflow::tensorrt::convert::CreateTRTNode(tensorflow::tensorrt::convert::ConversionParams const&, std::vector<tensorflow::tensorrt::convert::EngineInfo, std::allocator<tensorflow::tensorrt::convert::EngineInfo> > const&, int, int, tensorflow::Graph*, nvinfer1::IGpuAllocator*, std::vector<tensorflow::Node*, std::allocator<tensorflow::Node*> >*) ()
   from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#3  0x00007fff9c1eb334 in tensorflow::tensorrt::convert::ConvertAfterShapes(tensorflow::tensorrt::convert::ConversionParams const&) ()
   from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#4  0x00007fff9c223b06 in tensorflow::tensorrt::convert::TRTOptimizationPass::Optimize(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) ()
   from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#5  0x00007fff9f5934ac in tensorflow::grappler::MetaOptimizer::RunOptimizer(tensorflow::grappler::GraphOptimizer*, tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem*, tensorflow::GraphDef*, tensorflow::grappler::MetaOptimizer::GraphOptimizationResult*) () from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#6  0x00007fff9f594695 in tensorflow::grappler::MetaOptimizer::OptimizeGraph(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) ()
   from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#7  0x00007fff9f596050 in tensorflow::grappler::MetaOptimizer::Optimize(tensorflow::grappler::Cluster*, tensorflow::grappler::GrapplerItem const&, tensorflow::GraphDef*) ()
   from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#8  0x00007fff995cc48f in TF_OptimizeGraph(GCluster, tensorflow::ConfigProto const&, tensorflow::MetaGraphDef const&, bool, std::string const&, TF_Status*) ()
   from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#9  0x00007fff995d37c4 in _wrap_TF_OptimizeGraph () from /home/landingai/anaconda3/envs/har/lib/python3.6/site-packages/tensorflow_core/python/_pywrap_tensorflow_internal.so
#10 0x0000555555665431 in _PyCFunction_FastCallDict () at /tmp/build/80754af9/python_1578429706181/work/Objects/methodobject.c:234
#11 0x00005555556ecdac in call_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4851
#12 0x000055555570f66a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:3335
#13 0x00005555556e6274 in _PyEval_EvalCodeWithName () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4166
#14 0x00005555556e70f1 in fast_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4992
#15 0x00005555556ece85 in call_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4872
#16 0x0000555555710428 in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:3351
#17 0x00005555556e6ebb in _PyFunction_FastCall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4933
#18 fast_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4968
#19 0x00005555556ece85 in call_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4872
#20 0x000055555570f66a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:3335
#21 0x00005555556e6274 in _PyEval_EvalCodeWithName () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4166
#22 0x00005555556e70f1 in fast_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4992
#23 0x00005555556ece85 in call_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4872
#24 0x000055555570f66a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:3335
#25 0x00005555556e6ebb in _PyFunction_FastCall (globals=<optimized out>, nargs=0, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4933
#26 fast_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4968
#27 0x00005555556ece85 in call_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4872
#28 0x000055555570f66a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:3335
#29 0x00005555556e657e in _PyEval_EvalCodeWithName () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4166
#30 0x00005555556e70f1 in fast_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4992
#31 0x00005555556ece85 in call_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4872
#32 0x000055555570f66a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:3335
#33 0x00005555556e6ebb in _PyFunction_FastCall (globals=<optimized out>, nargs=2, args=<optimized out>, co=<optimized out>) at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4933
#34 fast_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4968
#35 0x00005555556ece85 in call_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4872
#36 0x000055555570f66a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:3335
#37 0x00005555556e6274 in _PyEval_EvalCodeWithName () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4166
#38 0x00005555556e70f1 in fast_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4992
#39 0x00005555556ece85 in call_function () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:4872
#40 0x000055555570f66a in _PyEval_EvalFrameDefault () at /tmp/build/80754af9/python_1578429706181/work/Python/ceval.c:3335
#41 0x00005555556e7c09 in _PyEval_EvalCodeWithName (qualname=0x0, name=<optimized out>, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwstep=2, kwcount=<optimized out>, kwargs=0x0, kwnames=0x0, 
---Type <return> to continue, or q <return> to quit---

Same issue here; I've posted the complete code and more detailed steps to reproduce: tensorflow/tensorflow#37131

@bioothod thanks! Subscribed.

Thanks for the report. The segfault is already corrected here.

But that only treats the first symptom: the conversion will still abort with a fatal error. I am looking into the underlying problem with the conversion.

Heavily simplified code to reproduce the bug - it is all about tf.nn.swish()

Run it with the --crash_me option to crash (or, with your patch applied, to fail with a "Node StatefulPartitionedCall/Identity_1 not found in any engine" error):

docker run --ulimit core=-1 --network=host -ti --user=`id -u`:`id -g` --runtime=nvidia -v /home:/home -v `pwd`:`pwd` -w `pwd` --rm -e TF_CPP_VMODULE=segment=2,convert_graph=2,convert_nodes=2,trt_engine=1,trt_logger=2 -e CUDA_DEVICE_ORDER=PCI_BUS_ID -e CUDA_VISIBLE_DEVICES="2" nvcr.io/nvidia/tensorflow:20.02-tf2-py3 python3 ./test.py --output_dir results/test/test --crash_me
import argparse
import logging
import os

import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

logger = logging.getLogger('test')

parser = argparse.ArgumentParser()
parser.add_argument('--output_dir', type=str, required=True, help='Output dir where saved models will be stored')
parser.add_argument('--crash_me', action='store_true', help='When present, TRT will crash or exit with "Node StatefulPartitionedCall/Identity_1 not found in any engine." message')
FLAGS = parser.parse_args()

def main():
    image_size = 224

    class Model(tf.keras.Model):
        def __init__(self, **kwargs):
            super().__init__(**kwargs)

            self._conv_stem = tf.keras.layers.Conv2D(
                filters=128,
                kernel_size=[3, 3],
                strides=[2, 2],
                padding='same',
                use_bias=False)

        @tf.function(input_signature=[tf.TensorSpec([None, image_size * image_size * 3], tf.uint8, name='model_input_images')])
        def __call__(self, inputs):
            images = tf.reshape(inputs, [-1, image_size, image_size, 3])
            images = tf.cast(images, tf.float32)
            images -= 128
            images /= 128

            x = self._conv_stem(images)
            x = tf.nn.swish(x)

            if FLAGS.crash_me:
                return x + 1

            # this works fine though
            return x

    m = Model()

    output_dir = os.path.join(FLAGS.output_dir, 'test_saved_model')

    tf.saved_model.save(m, output_dir)

    logger.info('Saved model into {}'.format(output_dir))

    converter = trt.TrtGraphConverterV2(input_saved_model_dir=output_dir)
    converter.convert()
    converter.save('{}_trt'.format(output_dir))

if __name__ == '__main__':
    main()

Thanks @bioothod for the smaller reproducer. This is actually a subtle problem that can occur when there are control edges between TRT nodes. I have filed a PR with a fix: tensorflow/tensorflow#37294
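For anyone who wants to check whether their graph hits this pattern, here is a minimal inspection sketch. It lists nodes whose NodeDef inputs contain control edges (entries prefixed with '^'), including nodes inside the SavedModel's function library, where the TF-TRT engine ops end up. The helper name and the model path are illustrative only (the path follows the reproducer above); this is not part of the fix itself.

import tensorflow as tf

def nodes_with_control_inputs(graph_def):
    """Yield (node_name, control_inputs) for every node that has a control edge."""
    nodes = list(graph_def.node)
    # Also walk library functions: converted graphs keep their ops there.
    for func in graph_def.library.function:
        nodes.extend(func.node_def)
    for node in nodes:
        ctrl = [inp for inp in node.input if inp.startswith('^')]
        if ctrl:
            yield node.name, ctrl

# Example usage on the saved model produced by the reproducer (path is illustrative):
loaded = tf.saved_model.load('results/test/test/test_saved_model')
graph_def = loaded.signatures['serving_default'].graph.as_graph_def()
for name, ctrl in nodes_with_control_inputs(graph_def):
    print(name, ctrl)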

Thanks for the patch, but I'm afraid it is not enough: my test above still segfaults with it. I cannot tell at the moment whether it is because of the same issue mentioned here, but even if the crash is the same node name dereference, this constant fix is not enough for the tf.nn.swish test case.

No, I was wrong: your patch does fix the crash. Thank you @tfeher.

Closing this, as the fix is merged in TF. If you use NGC containers, the 20.05 version will contain the fix.
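For reference, once that release is published it should be pullable with the usual NGC command (the exact tag is an assumption based on NGC's naming scheme):

docker pull nvcr.io/nvidia/tensorflow:20.05-tf2-py3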

@tfeher thanks for your response. Do you know when the NGC 20.05 containers will be available?