TwinLiteNet ONNX Model Inference with ONNX Runtime

This repository contains a C++ implementation for running inference with the TwinLiteNet model using ONNX Runtime. TwinLiteNet is a lightweight and efficient deep learning model for drivable area and lane segmentation. The implementation supports both CUDA and CPU inference, selectable through build options.
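
As a quick orientation, the snippet below is a minimal sketch of loading models/best.onnx with the ONNX Runtime C++ API. It is illustrative only and does not reproduce the actual interface defined in include/twinlitenet_onnxruntime.hpp.

#include <onnxruntime_cxx_api.h>
#include <iostream>

int main() {
    // Create the ONNX Runtime environment and load the exported TwinLiteNet model.
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "twinlitenet");
    Ort::SessionOptions options;
    options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
    Ort::Session session(env, "models/best.onnx", options);

    // Sanity check: TwinLiteNet predicts two segmentation maps (drivable area
    // and lane), so two outputs are expected alongside the image input.
    std::cout << "inputs: " << session.GetInputCount()
              << ", outputs: " << session.GetOutputCount() << std::endl;
    return 0;
}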

Acknowledgment 🌟

I would like to express sincere gratitude to the creators of the TwinLiteNet model for their remarkable work. Their open-source contribution has had a profound impact on the community and has paved the way for numerous applications in autonomous driving, robotics, and beyond. Thank you for your exceptional work.

Project Structure

The project has the following structure:


├── CMakeLists.txt
├── LICENSE
├── README.md
├── assets/
├── images/
├── include/
│   └── twinlitenet_onnxruntime.hpp
├── models/
│   └── best.onnx
└── src/
    ├── main.cpp
    └── twinlitenet_onnxruntime.cpp

Requirements

  • CMake and a C++ compiler
  • ONNX Runtime (C++ package)
  • CUDA Toolkit and a CUDA-capable GPU (only when building with -DENABLE_CUDA=ON)

Build Options

  • CUDA Inference: To enable CUDA support for GPU acceleration, build with the -DENABLE_CUDA=ON CMake option (see the sketch after this list for how this can gate the CUDA execution provider).
  • CPU Inference: For CPU-based inference, no additional options are required.
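
Below is a minimal sketch of how the ENABLE_CUDA option could be wired on the C++ side, assuming it is forwarded to the compiler as a preprocessor definition; the OrtCUDAProviderOptions usage follows the standard ONNX Runtime C++ API and is not code copied from this repository.

#include <onnxruntime_cxx_api.h>

// Assumption: the ENABLE_CUDA CMake option becomes a compile definition;
// the actual wiring lives in CMakeLists.txt.
Ort::SessionOptions makeSessionOptions() {
    Ort::SessionOptions options;
#ifdef ENABLE_CUDA
    // Register the CUDA execution provider so inference runs on the GPU.
    OrtCUDAProviderOptions cuda_options{};
    cuda_options.device_id = 0;
    options.AppendExecutionProvider_CUDA(cuda_options);
#endif
    // Without an appended provider, ONNX Runtime falls back to its default CPU provider.
    return options;
}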

Usage

  1. Clone this repository.
  2. Build the project using CMake with your preferred build options.
mkdir build
cd build
cmake -DENABLE_CUDA=ON ..
make -j8
  3. Run ./main and enjoy accurate lane detection and drivable area segmentation results!
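
For reference, here is a hedged sketch of the kind of preprocessing an input image typically goes through before being handed to the session. The use of OpenCV, the 640x360 resolution, the [0, 1] scaling, and the NCHW layout are assumptions for illustration, not details taken from src/main.cpp.

#include <onnxruntime_cxx_api.h>
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical preprocessing: BGR image -> normalized NCHW float buffer.
std::vector<float> preprocess(const cv::Mat& bgr, int width = 640, int height = 360) {
    cv::Mat resized, rgb, scaled;
    cv::resize(bgr, resized, cv::Size(width, height));
    cv::cvtColor(resized, rgb, cv::COLOR_BGR2RGB);
    rgb.convertTo(scaled, CV_32FC3, 1.0 / 255.0);  // scale pixels to [0, 1]

    // Repack HWC -> CHW, the layout most ONNX vision models expect.
    std::vector<float> tensor(3 * height * width);
    for (int c = 0; c < 3; ++c)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                tensor[c * height * width + y * width + x] = scaled.at<cv::Vec3f>(y, x)[c];
    return tensor;
}

The resulting buffer would then be wrapped with Ort::Value::CreateTensor and passed to Ort::Session::Run together with the model's actual input and output names.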

License

This project is licensed under the MIT License. Feel free to use it in both open-source and commercial applications.
