Radar-RGB-Attentive-Multimodal-Object-Detection

Object detection on radar sensor and RGB camera images. Paper: https://ieeexplore.ieee.org/document/9191046. Full thesis: RADAR+RGB Fusion for Robust Object Detection in Autonomous Vehicles. Zenodo: https://doi.org/10.5281/zenodo.13738235


Python 3.7+ | TensorFlow 1.1 | License: MIT

Radar+RGB Attentive Fusion For Robust Object Detection in Autonomous Vehicles (ICIP 2020)

Description:

This repository provides the code for two robust multimodal two-stage object detection networks, BIRANet and RANet. The two modalities used in these architectures are radar signals and RGB camera images. Both networks share the same base architecture and differ in their anchor generation and RPN target generation methods, which are explained in the paper. Evaluation is done on the NuScenes dataset (https://www.nuscenes.org), and results are compared against Faster R-CNN with Feature Pyramid Network (FFPN, https://arxiv.org/pdf/1612.03144.pdf). Both proposed networks are more robust than FFPN. BIRANet also outperforms FFPN on detection accuracy, while RANet works reasonably well with far fewer anchors, placed purely around radar points. For further details, please refer to our paper: https://ieeexplore.ieee.org/document/9191046.

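To make the radar-guided anchor idea concrete, here is a minimal, illustrative sketch: radar detections are projected into the image plane, and anchor boxes are centered on the projected points (the idea behind RANet's radar-based anchors). All function names, the projection matrix P, and the scale/ratio values are assumptions for illustration, not the repository's actual code.

import numpy as np

def radar_anchor_centers(radar_points, P, image_shape):
    """Project 3D radar points into the image plane to get anchor centers.

    radar_points: (N, 3) radar detections in the camera frame.
    P: (3, 4) camera projection matrix (assumed known).
    image_shape: (height, width) of the RGB image.
    """
    # Homogeneous coordinates: (N, 4)
    pts_h = np.hstack([radar_points, np.ones((radar_points.shape[0], 1))])
    # Project and normalize by depth.
    uvw = pts_h @ P.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    # Keep only points that fall inside the image.
    h, w = image_shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside]

def anchors_from_centers(centers, scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
    """Generate (y1, x1, y2, x2) anchor boxes around each projected radar point."""
    boxes = []
    for cx, cy in centers:
        for s in scales:
            for r in ratios:
                bh, bw = s * np.sqrt(r), s / np.sqrt(r)
                boxes.append([cy - bh / 2, cx - bw / 2, cy + bh / 2, cx + bw / 2])
    return np.array(boxes)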

Packing List:

The repository includes:

  • Source code (built on the Mask R-CNN code base structure, but without the mask/segmentation branch, hence equivalent to FFPN)
  • Training code
  • Trained weights for testing/evaluation
  • ParallelModel class for multi-GPU training
  • Evaluation with MS COCO metrics (AP & AR), with the changes mentioned in the paper (a generic evaluation sketch follows this list)
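For orientation, standard pycocotools evaluation looks like the generic sketch below; the repository applies the paper's metric changes on top of this, which are not shown here, and the annotation/detection file names are placeholders.

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: ground-truth annotations and detector output in COCO format.
coco_gt = COCO("annotations/instances_val.json")
coco_dt = coco_gt.loadRes("detections.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP and AR at the standard COCO thresholds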

Getting Started:

Installation

  1. Download the modified small NuScenes dataset (size: 2.7 GB). [ACCESS ISSUES, TRYING TO RETRIEVE DATA]
  2. Install pycocotools using https://github.com/cocodataset/cocoapi.
  3. Install dependencies from requirements.txt:
    pip3 install -r requirements.txt
  4. Run setup from the repository root directory.
    python3 setup.py install
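After installation, a quick sanity check (assuming the package installs under the mrcnn name referenced later in this README):

# Verify that TensorFlow and the installed mrcnn package import cleanly.
import tensorflow as tf
from mrcnn.config import Config

print("TensorFlow:", tf.__version__)  # a 1.x release is expected
print("mrcnn imported OK:", Config.__name__)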

Training and Evaluation

Training and evaluation code is in samples/coco/nucoco.py. You can import this module in a Jupyter notebook (see the sketch after the command examples) or run it directly from the command line:

# Train a new model starting from pre-trained COCO weights
python3 samples/coco/nucoco.py train --dataset=/path/to/nuscenes/ --model=coco

# Train a new model starting from ImageNet weights
python3 samples/coco/nucoco.py train --dataset=/path/to/nuscenes/ --model=imagenet

# Continue training a model that you had trained earlier
python3 samples/coco/nucoco.py train --dataset=/path/to/nuscenes/ --model=/path/to/weights.h5

# Continue training the last model you trained. This will find
# the last trained weights in the model directory.
python3 samples/coco/nucoco.py train --dataset=/path/to/nuscenes/ --model=last

# Run COCO evaluation on the last trained model
python3 samples/coco/nucoco.py evaluate --dataset=/path/to/nuscenes/ --model=last
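For the Jupyter route mentioned above, the sketch below shows one plausible way to drive the model programmatically. It assumes nucoco.py mirrors the upstream Mask R-CNN coco.py sample (a CocoConfig subclass plus the mrcnn.model API); the class name nucoco.CocoConfig, the paths, and the commented detect call are illustrative assumptions, not guaranteed by this repository.

import mrcnn.model as modellib
from samples.coco import nucoco  # assumes the repository root is on PYTHONPATH

# Hypothetical: the actual Config subclass in nucoco.py may be named differently.
class InferenceConfig(nucoco.CocoConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
model = modellib.MaskRCNN(mode="inference", config=config, model_dir="logs/")
model.load_weights("/path/to/weights.h5", by_name=True)

# Detection on one RGB image; in the actual code base the radar channels are
# prepared by the data pipeline before reaching the network.
# results = model.detect([image], verbose=1)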

Optional arguments

# To select the network (default: BIRANet):
--net=BIRANet or --net=RANet
# To select the image resolution (default: 1024):
--resolution=512 or --resolution=1024

# The training schedule, learning rate, and other parameters should be set in `mrcnn/config.py`.
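In the Mask R-CNN code base these parameters live on a Config subclass, so an override looks roughly like the sketch below. The attribute names follow the upstream Mask R-CNN Config; verify them against this repository's mrcnn/config.py before use.

from mrcnn.config import Config

class NucocoTrainingConfig(Config):
    NAME = "nucoco"           # hypothetical experiment name
    LEARNING_RATE = 0.001     # upstream default; tune as needed
    STEPS_PER_EPOCH = 1000
    IMAGES_PER_GPU = 2
    IMAGE_MIN_DIM = 1024      # matches the --resolution default above
    IMAGE_MAX_DIM = 1024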

If you use this work, please cite:

   @INPROCEEDINGS{9191046,
     author={Yadav, Ritu and Vierling, Axel and Berns, Karsten},
     booktitle={2020 IEEE International Conference on Image Processing (ICIP)},
     title={Radar + RGB Fusion For Robust Object Detection In Autonomous Vehicle},
     year={2020},
     pages={1986-1990},
     doi={10.1109/ICIP40778.2020.9191046}}

Contact Information:

Ritu Yadav (Email: er.ritu92@gmail.com)