Convert a PyTorch model into an executable program, not just a TensorRT engine.
Dependencies:
CUDA-11.1.1
cuDNN-8.1.1
TensorRT-7.2.3.4
OpenCV-4.5.3
jsoncpp-1.9.4
First, clone the repository:
git clone https://github.com/CnybTseng/torch2exe.git
It is highly recommended to compile the project with Ninja, so install Ninja first. On Windows, you also need to install the Microsoft Visual C++ Build Tools.
On Windows:
cd torch2exe
mkdir build && cd build
cmake -G"Ninja" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=%cd%\..\install ..
ninja && ninja install
On Linux:
cd torch2exe
mkdir build && cd build
cmake -G"Ninja" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$(pwd)/../install ..
ninja && ninja install
Usually, the output decoder of a model has no ready-made operators on the target platform and must be implemented as a custom plugin. The decoding logic should therefore not be included in the PyTorch-style description file of the model. Take YOLOv5m6 as an example: the network outputs four tensors, which are then parsed to obtain the object detection results. The details of parsing these four tensors are not suitable for direct translation into the computation graph, so the original code should be slightly modified to run the forward pass only up to the output of these four tensors.
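The idea of truncating the forward pass can be sketched as follows. This is a minimal toy example, not the actual YOLOv5 or torch2exe code: the module names, channel counts, and feature-map sizes are illustrative assumptions, but the pattern is the same — return the raw head tensors and leave box decoding to a custom plugin at deploy time.

```python
# A toy four-scale detection head whose forward pass stops at the raw
# output tensors (the export path), mimicking YOLOv5m6's four outputs.
# All layer names and shapes here are placeholders, not YOLOv5 internals.
import torch
import torch.nn as nn

class ToyHead(nn.Module):
    def __init__(self, decode=False):
        super().__init__()
        # Four 1x1 conv heads, one per feature-map scale (e.g. P3..P6).
        self.heads = nn.ModuleList(nn.Conv2d(16, 255, 1) for _ in range(4))
        self.decode = decode

    def forward(self, feats):
        outs = [h(f) for h, f in zip(self.heads, feats)]
        if not self.decode:
            # Export path: stop here and return the four raw tensors;
            # decoding is done later by a custom TensorRT plugin.
            return outs
        # The original inference path would decode boxes here; that logic
        # is deliberately kept out of the exported computation graph.
        raise NotImplementedError("decoding is handled by the plugin")

# Dummy multi-scale features, e.g. for a 640x640 input with strides 8..64.
feats = [torch.randn(1, 16, s, s) for s in (80, 40, 20, 10)]
outs = ToyHead(decode=False)(feats)
print(len(outs))  # four raw tensors
```

Exporting a model truncated this way keeps the traced graph free of plugin-only operations, so it can be translated to TensorRT without unsupported-op errors.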