Radar-Camera Fusion Object Detection System

This project implements a real-time radar-camera fusion system for robust object detection in complex environments. It uses YOLOv8 for camera-based detection, with CUDA-accelerated pre- and post-processing and TensorRT-based deployment for high-performance inference. By fusing the complementary measurements of radar and camera, the system improves perception accuracy and reliability in conditions where either sensor alone is unreliable (e.g., poor lighting or adverse weather).
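The fusion logic itself lives in src/. As a rough, simplified sketch of the general idea (the types and function below are hypothetical, not the project's actual API), each radar return is transformed into the camera frame and projected onto the image plane, where it can later be compared against YOLOv8 bounding boxes:

// Minimal pinhole-projection sketch (hypothetical types; the project defines
// its own calibration and message structures).
#include <array>
#include <optional>

struct RadarPoint { double x, y, z; };   // metres, in the radar frame
struct Pixel      { double u, v;    };   // image coordinates

// Intrinsic matrix K and a rigid radar-to-camera transform (R, t).
struct Calibration {
    std::array<std::array<double, 3>, 3> K;   // fx 0 cx / 0 fy cy / 0 0 1
    std::array<std::array<double, 3>, 3> R;   // rotation, radar -> camera
    std::array<double, 3> t;                  // translation, radar -> camera
};

// Project a radar return into the image; returns std::nullopt if the point
// lies behind the camera.
std::optional<Pixel> projectToImage(const RadarPoint& p, const Calibration& c) {
    // Transform into the camera frame.
    double xc = c.R[0][0] * p.x + c.R[0][1] * p.y + c.R[0][2] * p.z + c.t[0];
    double yc = c.R[1][0] * p.x + c.R[1][1] * p.y + c.R[1][2] * p.z + c.t[1];
    double zc = c.R[2][0] * p.x + c.R[2][1] * p.y + c.R[2][2] * p.z + c.t[2];
    if (zc <= 0.0) return std::nullopt;

    // Pinhole projection with intrinsics K: u = fx*xc/zc + cx, v = fy*yc/zc + cy.
    double u = (c.K[0][0] * xc + c.K[0][2] * zc) / zc;
    double v = (c.K[1][1] * yc + c.K[1][2] * zc) / zc;
    return Pixel{u, v};
}

Points with non-positive depth are discarded because they lie behind the camera and cannot correspond to a visible detection.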


✨ Key Features

  • Radar-Camera Sensor Fusion
  • YOLOv8-Based Detection
  • C++17 Implementation
  • Modular Design
  • Visual Output Support

๐Ÿ“ Project Structure

radar_image/
├── config/
├── include/
├── msgs/
├── src/
├── ultralytics-8.0.40/
├── workspace/
├── CMakeLists.txt
└── README.md

🔧 Build Dependencies

  • C++ compiler with C++17 support (e.g., g++ ≥ 7)
  • CMake ≥ 3.10
  • OpenCV ≥ 4.0
  • jsoncpp (see the config-loading sketch after this list)
  • Protobuf ≥ 3.0
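A plausible reason for the jsoncpp dependency is parsing the parameter files under config/; the sketch below shows the standard jsoncpp reading pattern, with a hypothetical file name and key (the real schema is defined by the project's own config files):

// Hypothetical config-loading sketch using the jsoncpp API.
#include <fstream>
#include <iostream>
#include <json/json.h>

int main() {
    std::ifstream ifs("config/fusion.json");   // hypothetical file name
    Json::Value root;
    Json::CharReaderBuilder builder;
    std::string errs;
    if (!Json::parseFromStream(builder, ifs, &root, &errs)) {
        std::cerr << "Failed to parse config: " << errs << std::endl;
        return 1;
    }
    // Hypothetical key -- the real parameters live in the files under config/.
    const double scoreThreshold = root.get("score_threshold", 0.25).asDouble();
    std::cout << "score_threshold = " << scoreThreshold << std::endl;
    return 0;
}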

โš™๏ธ Build & Run

mkdir build
cd build
cmake .. -DCMAKE_CXX_STANDARD=17
make -j$(nproc)
./yolo_refactor

🧪 Filtering Visualization

The following visual examples show detection results before and after the radar-camera fusion filtering process:

Before Filtering

[image: detection results before radar-camera filtering]

After Filtering

[image: detection results after radar-camera filtering]
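The exact filtering criterion is implemented in src/; one plausible reading of the before/after examples above, sketched here with hypothetical types, is to keep a camera detection only if at least one projected radar return falls inside its bounding box:

// Sketch of a radar-confirmation filter over camera detections
// (hypothetical types; uses OpenCV, which is already a build dependency).
#include <algorithm>
#include <vector>
#include <opencv2/core.hpp>

struct Detection {
    cv::Rect2f box;    // YOLOv8 bounding box in image coordinates
    float      score;  // detection confidence
};

// Keep only detections whose box contains at least one projected radar point.
std::vector<Detection> filterByRadar(const std::vector<Detection>& detections,
                                     const std::vector<cv::Point2f>& radarPixels) {
    std::vector<Detection> kept;
    for (const Detection& det : detections) {
        bool confirmed = std::any_of(radarPixels.begin(), radarPixels.end(),
                                     [&det](const cv::Point2f& p) {
                                         return det.box.contains(p);
                                     });
        if (confirmed) kept.push_back(det);
    }
    return kept;
}

std::any_of stops at the first confirming radar point, so the check stays cheap even with dense radar returns; this only illustrates the association step, not the project's actual rule.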

๐Ÿ™ Acknowledgements

This project makes use of the following open-source resources: