NVIDIA/tensorrt-laboratory

run multiple models at one time on xavier

jeansely opened this issue · 1 comments

Hi,
How can I run two models at the same time on an NVIDIA Jetson Xavier? With the latest update I get this error:
[cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
pycuda._driver.LogicError: cuMemcpyHtoDAsync failed: invalid argument
Exception ignored in: <module 'threading' from '/usr/lib/python3.6/threading.py'>
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 1294, in _shutdown
t.join()
File "/usr/lib/python3.6/threading.py", line 1056, in join
self._wait_for_tstate_lock()
File "/usr/lib/python3.6/threading.py", line 1072, in _wait_for_tstate_lock
elif lock.acquire(block, timeout):
KeyboardInterrupt
FATAL: exception not rethrown
Aborted (core dumped)

Jetson is outside the scope of this project. However, you can use the examples in this project to use TensorRT efficiently on Jetson. See Issue #35.

That said, your example code above looks to have some issues with its use of pycuda.
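A common cause of `cuMemcpyHtoDAsync failed: invalid argument` when running multiple engines from multiple threads is that CUDA contexts are thread-local in pycuda: device memory allocated under one thread's context cannot be used from another, and async host-to-device copies also require page-locked host buffers. Below is a minimal sketch of one way to structure this, assuming the TensorRT Python API and two hypothetical serialized engine files (`model_a.engine`, `model_b.engine` are placeholders, not files from this project). Each worker thread makes its own context current, creates its stream and buffers inside that context, and pops the context when done:

```python
import threading
import numpy as np
import pycuda.driver as cuda
import tensorrt as trt

cuda.init()  # initialize the driver API before creating contexts

# Hypothetical engine paths -- replace with your own serialized engines.
ENGINE_PATHS = ["model_a.engine", "model_b.engine"]

def worker(engine_path, batch):
    # Make a CUDA context current in *this* thread. Allocations made
    # under another thread's context trigger "invalid argument" errors
    # in cuMemcpyHtoDAsync.
    ctx = cuda.Device(0).make_context()
    try:
        logger = trt.Logger(trt.Logger.WARNING)
        runtime = trt.Runtime(logger)
        with open(engine_path, "rb") as f:
            engine = runtime.deserialize_cuda_engine(f.read())
        exec_ctx = engine.create_execution_context()
        stream = cuda.Stream()

        # Page-locked host buffer and device buffer, both created in
        # this thread's context, so the async copy is valid.
        h_in = cuda.pagelocked_empty(batch.size, dtype=np.float32)
        np.copyto(h_in, batch.ravel())
        d_in = cuda.mem_alloc(h_in.nbytes)

        cuda.memcpy_htod_async(d_in, h_in, stream)
        # ... bind outputs, run inference, copy results back ...
        stream.synchronize()
    finally:
        # Always pop, or the context leaks and later threads misbehave.
        ctx.pop()

threads = [
    threading.Thread(target=worker,
                     args=(path, np.zeros((1, 3, 224, 224), np.float32)))
    for path in ENGINE_PATHS
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This only sketches the input side; output bindings and the inference call are elided, and the engine paths and input shape are placeholders for your own models. The key points are the per-thread `make_context()` / `ctx.pop()` pairing and allocating the page-locked host buffer in the same context that issues the async copy.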