Cannot build engine
antithing opened this issue · 15 comments
Hi, and thank you for making this code available!
I am building and I see the following error:
Searching for engine file with name: yolov8x.engine.NVIDIAGeForceRTX3090.fp16.1.1.2000000000
Engine not found, generating. This could take a while...
4: [network.cpp::nvinfer1::Network::validate::2671] Error Code 4: Internal Error (Network must have at least one output)
2: [builder.cpp::nvinfer1::builder::Builder::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
What might be causing this?
Are you using one of Ultralytics' models? Or are you using your own model?
See this solution here. You may need to explicitly mark the output using the INetworkDefinition::markOutput method.
Hi, thanks for getting back to me. Where would I put this code?
I am using the Ultralytics yolov8x-seg.pt model.
I imagine you would do it somewhere here, so something like
network->markOutput(*network->getLayer(network->getNbLayers() - 1));
Error (active) E0434 a reference of type "nvinfer1::ITensor &" (not const-qualified) cannot be initialized with a value of type "nvinfer1::ILayer" tensorrt_cpp_api
Try this:
ILayer* outputLayer = network->getLayer(network->getNbLayers() - 1);
ITensor* outputTensor = outputLayer->getOutput(0);
network->markOutput(*outputTensor);
This is as much support as I can provide without knowing more about your model (your issue isn't with my YoloV8 implementation but with your model itself). Perhaps you should ask why your ONNX model doesn't have the output marked correctly? Maybe try to fix the issue at the source.
Thank you, I will try that. This is the model from Ultralytics:
https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt
converted with:
1. Navigate to the [official YoloV8 repository](https://github.com/ultralytics/ultralytics) and download your desired version of the model (e.g. YOLOv8m).
2. `pip3 install ultralytics`
3. Navigate to the `scripts/` directory and modify this line so that it points to your downloaded model: `model = YOLO("../models/yolov8m.pt")`.
4. `python3 pytorch2onnx.py`

After running this command, you should have successfully converted from PyTorch to ONNX.
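For reference, the conversion step boils down to something like the following. This is only a sketch assuming the public ultralytics Python API; the repo's actual scripts/pytorch2onnx.py may differ in details, and the model path here is an assumption:

```python
# Sketch only: assumes the ultralytics package is installed
# (pip3 install ultralytics) and the .pt weights have been downloaded.
from ultralytics import YOLO

# Hypothetical path; point this at your downloaded model.
model = YOLO("../models/yolov8x-seg.pt")

# Export to ONNX; writes yolov8x-seg.onnx alongside the .pt file.
model.export(format="onnx")
```

Note that a `-seg` model exports extra mask-prototype outputs compared to a plain detection model, which is why the C++ side needs explicit segmentation support.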
Should I be using a different model?
Strange, you should not be getting this error if you used the model from the Ultralytics repo. Which model exactly did you use?
Also, can you confirm your TensorRT version?
https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8x-seg.pt
TensorRT-8.5.3.1.Windows10.x86_64.cuda-11.8.cudnn8.6
I see. I haven't yet added support for models with segmentation. Please have a look at this repo for guidance on how to use models with segmentation.
I'll leave this issue open for now as a reminder to come back and add support for segmentation down the line.
Hi @antithing
I have added support for segmentation models in the code.
I tested with the yolov8x-seg.pt model and it is working.
could the segmentation model be deployed on Jetson TX2? @cyrusbehr
Please try the feat/jetson-tx2 branch.
Well, I have tried the feat/jetson-tx2 branch, but I can only deploy the detection function of YOLOv8. When it comes to the segmentation task, I hit this issue: could not find any implementation for node ConvTranspose_177. The segmentation model I tried is the official yolov8n-seg.pt/onnx. @cyrusbehr