
MultiCorrupt: A Multi-Modal Robustness Dataset and Benchmark of LiDAR-Camera Fusion for 3D Object Detection

Till Beemelmanns1   Quan Zhang2   Christian Geller1   Lutz Eckstein1  
1Institute for Automotive Engineering, RWTH Aachen University, Germany   2Department of Electrical Engineering and Computer Science, TU Berlin, Germany  

Abstract: Multi-modal 3D object detection models for autonomous driving have demonstrated exceptional performance on computer vision benchmarks like nuScenes. However, their reliance on densely sampled LiDAR point clouds and meticulously calibrated sensor arrays poses challenges for real-world applications. Issues such as sensor misalignment, miscalibration, and disparate sampling frequencies lead to spatial and temporal misalignment in data from LiDAR and cameras. Additionally, the integrity of LiDAR and camera data is often compromised by adverse environmental conditions such as inclement weather, leading to occlusions and noise interference. To address this challenge, we introduce MultiCorrupt, a comprehensive benchmark designed to evaluate the robustness of multi-modal 3D object detectors against ten distinct types of corruptions.

Paper and Poster

Overview

Corruption Types

Missing Camera

Multi-view camera animations at severity levels 1–3.

Motion Blur

BEV and front-camera animations at severity levels 1–3.

Points Reducing

BEV and front-camera animations at severity levels 1–3.

Snow

BEV and front-camera animations at severity levels 1–3.

Temporal Misalignment

BEV and multi-view camera animations at severity levels 1–3.

Spatial Misalignment

BEV and multi-view camera animations at severity levels 1–3.

Beams Reducing

BEV and front-camera animations at severity levels 1–3.

Brightness

Multi-view camera animations at severity levels 1–3.

Dark

Multi-view camera animations at severity levels 1–3.

Fog

BEV and multi-view camera animations at severity levels 1–3.

News

  • [19.07.2024] MultiCorrupt paper is now accessible via IEEE Xplore; poster uploaded
  • [12.07.2024] v0.0.7 IS-Fusion was added to the benchmark
  • [30.03.2024] MultiCorrupt has been accepted to IEEE Intelligent Vehicles Symposium (IV)
  • [28.03.2024] v0.0.3 Changed severity configuration for Brightness, reevaluated all models and metrics
  • [17.02.2024] v0.0.2 Changed severity configuration for Pointsreducing, reevaluated all models and metrics
  • [01.02.2024] v0.0.1 Initial Release with 10 corruption types and 5 evaluated models

Benchmark Results

Resistance Ability (RA) computed with NDS metric

| Model | Clean | Beams Red. | Brightness | Darkness | Fog | Missing Cam. | Motion Blur | Points Red. | Snow | Spatial Mis. | Temporal Mis. | mRA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CMT | 0.729 | 0.786 | 0.937 | 0.948 | 0.806 | 0.974 | 0.841 | 0.925 | 0.833 | 0.809 | 0.788 | 0.865 |
| Sparsefusion | 0.732 | 0.689 | 0.975 | 0.963 | 0.767 | 0.954 | 0.848 | 0.879 | 0.770 | 0.714 | 0.777 | 0.834 |
| BEVfusion | 0.714 | 0.676 | 0.967 | 0.969 | 0.752 | 0.974 | 0.866 | 0.872 | 0.774 | 0.705 | 0.742 | 0.830 |
| IS-Fusion | 0.737 | 0.680 | 0.960 | 0.952 | 0.758 | 0.953 | 0.873 | 0.860 | 0.733 | 0.715 | 0.771 | 0.826 |
| TransFusion | 0.708 | 0.633 | 0.993 | 0.988 | 0.754 | 0.985 | 0.826 | 0.851 | 0.748 | 0.685 | 0.777 | 0.824 |
| DeepInteraction | 0.691 | 0.655 | 0.969 | 0.929 | 0.583 | 0.842 | 0.832 | 0.882 | 0.759 | 0.731 | 0.768 | 0.795 |

Relative Resistance Ability (RRA) computed with NDS metric and baseline BEVfusion

| Model | Clean | Beams Red. | Brightness | Darkness | Fog | Missing Cam. | Motion Blur | Points Red. | Snow | Spatial Mis. | Temporal Mis. | mRRA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| BEVfusion | 0.714 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
| CMT | 0.729 | 18.642 | -1.138 | -0.096 | 9.398 | 2.041 | -0.841 | 8.213 | 9.887 | 17.053 | 8.448 | 7.161 |
| Sparsefusion | 0.732 | 4.264 | 3.179 | 1.821 | 4.429 | 0.297 | 0.280 | 3.242 | 1.887 | 3.699 | 7.228 | 3.033 |
| IS-Fusion | 0.737 | 3.684 | 2.291 | 1.267 | 3.890 | 0.920 | 3.994 | 1.691 | -2.351 | 4.513 | 7.177 | 2.708 |
| TransFusion | 0.708 | -7.210 | 1.799 | 1.146 | -0.552 | 0.340 | -5.412 | -3.296 | -4.220 | -3.626 | 3.850 | -1.718 |
| DeepInteraction | 0.691 | -6.361 | -3.150 | -7.215 | -25.037 | -16.386 | -7.077 | -2.188 | -5.149 | 0.212 | 0.145 | -7.221 |

Metrics

We adhere to the official nuScenes metric definition for computing the NDS and mAP metrics on the MultiCorrupt dataset. To quantitatively compare performance between the corrupted and the clean nuScenes dataset, we use a metric called Resistance Ability (RA), which is computed across the different severity levels as

$$RA_{c,s} = \frac{M_{c,s}}{M_{clean}}, RA_c = \frac{1}{3} \sum_{s=1}^{3} RA_{c,s}$$

$$mRA = \frac{1}{N} \sum_{c=1}^{N} RA_c$$

where $M_{c,s}$ denotes the metric for corruption type $c$ at severity level $s$, $N$ is the total number of corruption types in our benchmark, and $M_{clean}$ is the performance on the clean nuScenes dataset.
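As a small sanity check, the RA and mRA formulas above can be computed in a few lines of Python. The NDS values below are illustrative placeholders, not taken from the benchmark tables:

```python
def resistance_ability(scores_by_severity, m_clean):
    """RA_c: mean over severity levels of M_{c,s} / M_clean."""
    return sum(m / m_clean for m in scores_by_severity) / len(scores_by_severity)

def mean_resistance_ability(scores_by_corruption, m_clean):
    """mRA: mean of RA_c over all corruption types."""
    ras = {c: resistance_ability(s, m_clean) for c, s in scores_by_corruption.items()}
    return ras, sum(ras.values()) / len(ras)

# Hypothetical NDS scores for one model (severities 1..3 per corruption)
m_clean = 0.70
scores = {
    "snow": [0.63, 0.56, 0.49],
    "fog":  [0.66, 0.60, 0.54],
}
ras, mra = mean_resistance_ability(scores, m_clean)
```

Here `ras["snow"]` evaluates to (0.9 + 0.8 + 0.7) / 3 = 0.8, and `mra` averages the per-corruption RA values.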

Relative Resistance Ability ($RRA_{c}$) compares the robustness of each model under a specific type of corruption against a baseline model. A value greater than zero indicates that the model is more robust than the baseline; a value less than zero indicates that it is less robust. We summarize this by computing the Mean Relative Resistance Ability (mRRA), which measures the relative robustness of the candidate model compared to the baseline model across all corruption types

$$RRA_{c} = \frac{\sum\limits_{s=1}^{3} M_{c, s}}{\sum\limits_{s=1}^{3} M_{baseline, c, s}} - 1,$$

$$mRRA = \frac{1}{N} \sum_{c=1}^{N} RRA_c.$$

where $c$ denotes the corruption type, $s$ the severity level, and $N$ the total number of corruption types in our benchmark. $RRA_{c}$ captures the relative robustness of a model under a particular corruption type $c$, while $mRRA$ gives the global view by averaging the relative robustness across all considered corruption types with respect to the baseline model.
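A minimal Python sketch of the RRA and mRRA computation, using hypothetical per-severity NDS scores for a candidate model and a baseline (the benchmark tables above report these values multiplied by 100, i.e. as percentages):

```python
def relative_resistance_ability(model_scores, baseline_scores):
    """RRA_c: ratio of score sums over severities 1..3, minus 1."""
    return sum(model_scores) / sum(baseline_scores) - 1.0

# Hypothetical NDS scores per severity level (illustrative only)
model    = {"snow": [0.63, 0.56, 0.49], "fog": [0.66, 0.60, 0.54]}
baseline = {"snow": [0.60, 0.50, 0.40], "fog": [0.60, 0.55, 0.50]}

rra = {c: relative_resistance_ability(model[c], baseline[c]) for c in model}
mrra = sum(rra.values()) / len(rra)
```

For "snow", the ratio is 1.68 / 1.50, so `rra["snow"]` is 0.12, i.e. the candidate is 12% more robust than the baseline under snow.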

Installation

Clone this repository:

git clone https://github.com/ika-rwth-aachen/MultiCorrupt.git
cd MultiCorrupt

Build the Docker image:

cd docker
docker build -t multicorrupt_create -f Dockerfile .

Download Snowflakes

We use LiDAR_snow_sim to simulate snow in LiDAR point clouds. To run the snow simulation, the snowflake files must be downloaded first:

cd converter
wget https://www.trace.ethz.ch/publications/2022/lidar_snow_simulation/snowflakes.zip
unzip snowflakes.zip
rm snowflakes.zip

Usage

Docker Container Setup

We recommend using run.sh to start the multicorrupt_create container for generating MultiCorrupt. Please modify the following paths according to your local setup.

multicorrupt_data_dir="/work/multicorrupt"
nuscenes_data_dir="/work/nuscenes"

Please make sure that you have downloaded nuScenes to nuscenes_data_dir.

After setting up the container, you can attach to it with VS Code or execute the following scripts directly.

Image Corruption Generation

Run the following script to generate corrupted image data:

usage: img_converter.py [-h] [-c N_CPUS] [-a {snow,fog,temporalmisalignment,brightness,dark,missingcamera,motionblur}] [-r ROOT_FOLDER]
                        [-d DST_FOLDER] [-f SEVERITY] [--seed SEED]

Generate corrupted nuScenes dataset for image data

options:
  -h, --help            show this help message and exit
  -c N_CPUS, --n_cpus N_CPUS
                        number of CPUs that should be used
  -a {snow,fog,temporalmisalignment,brightness,dark,missingcamera,motionblur}, --corruption {snow,fog,temporalmisalignment,brightness,dark,missingcamera,motionblur}
                        corruption type
  -r ROOT_FOLDER, --root_folder ROOT_FOLDER
                        root folder of dataset
  -d DST_FOLDER, --dst_folder DST_FOLDER
                        savefolder of dataset
  -f SEVERITY, --severity SEVERITY
                        severity level {1,2,3}
  --seed SEED           random seed

Example

python converter/img_converter.py \
--corruption snow \
--root_folder /workspace/data/nuscenes \
--dst_folder /workspace/multicorrupt/snow/3 \
--severity 3 \
--n_cpus 24

LiDAR Corruption Generation

Run the following script to generate corrupted LiDAR data:

usage: lidar_converter.py [-h] [-c N_CPUS] [-a {pointsreducing,beamsreducing,snow,fog,copy,spatialmisalignment,temporalmisalignment,motionblur}] [-s SWEEP] [-r ROOT_FOLDER]
                          [-d DST_FOLDER] [-f SEVERITY] [--seed SEED]

Generate corrupted nuScenes dataset for LiDAR

options:
  -h, --help            show this help message and exit
  -c N_CPUS, --n_cpus N_CPUS
                        number of CPUs that should be used
  -a {pointsreducing,beamsreducing,snow,fog,copy,spatialmisalignment,temporalmisalignment,motionblur}, --corruption {pointsreducing,beamsreducing,snow,fog,copy,spatialmisalignment,temporalmisalignment,motionblur}
                        corruption type
  -s SWEEP, --sweep SWEEP
                        if apply for sweep LiDAR
  -r ROOT_FOLDER, --root_folder ROOT_FOLDER
                        root folder of dataset
  -d DST_FOLDER, --dst_folder DST_FOLDER
                        savefolder of dataset
  -f SEVERITY, --severity SEVERITY
                        severity level {1,2,3}
  --seed SEED           random seed

Example

python3 converter/lidar_converter.py \
--corruption snow \
--root_folder /workspace/data/nuscenes \
--dst_folder /workspace/multicorrupt/snow/3/ \
--severity 3 \
--n_cpus 64 \
--sweep true
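To cover all corruption/severity combinations, the converter calls above can be scripted. The helper below is a sketch (not part of the repository) that only builds the command lines from the CLI flags documented above; each command could then be executed with subprocess.run. Note that lidar_converter.py additionally takes --sweep, and img_converter.py follows the same pattern:

```python
def build_converter_commands(corruptions, severities, root_folder, out_root,
                             converter="converter/lidar_converter.py", n_cpus=16):
    """Build one converter command line per (corruption, severity) pair."""
    commands = []
    for corruption in corruptions:
        for severity in severities:
            dst = f"{out_root}/{corruption}/{severity}"
            commands.append([
                "python3", converter,
                "--corruption", corruption,
                "--root_folder", root_folder,
                "--dst_folder", dst,
                "--severity", str(severity),
                "--n_cpus", str(n_cpus),
            ])
    return commands

cmds = build_converter_commands(
    ["snow", "fog", "beamsreducing"], [1, 2, 3],
    "/workspace/data/nuscenes", "/workspace/multicorrupt",
)
```

This also produces the destination folders in the multicorrupt/<corruption>/<severity> layout recommended below.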

MultiCorrupt Folder Structure

We recommend creating the following folder structure for MultiCorrupt using lidar_converter.py and img_converter.py:

-- multicorrupt
    |-- beamsreducing
    |   |-- 1
    |   |-- 2
    |   `-- 3
    |-- brightness
    |   |-- 1
    |   |-- 2
    |   `-- 3
    |-- dark
    |   |-- 1
    |   |-- 2
    |   `-- 3
    |-- fog
    |   |-- 1
    |   |-- 2
    |   `-- 3
    |-- missingcamera
    |   |-- 1
    |   |-- 2
    |   `-- 3
    .
    .
    .

MultiCorrupt Evaluation

If you have created MultiCorrupt in the structure above, we recommend using our simple evaluation script, which iterates over the whole dataset, runs the evaluation, and extracts the NDS and mAP metrics.

In the script, you need to replace the paths for multicorrupt_root, nuscenes_data_dir, and logfile according to your setup.

#!/bin/bash

# List of corruptions and severity levels
corruptions=("beamsreducing" "brightness" "dark" "fog" "missingcamera" "motionblur" "pointsreducing" "snow" "spatialmisalignment" "temporalmisalignment")
severity_levels=("1" "2" "3")

# Directory paths
multicorrupt_root="/workspace/multicorrupt/"
nuscenes_data_dir="/workspace/data/nuscenes"
logfile="/workspace/evaluation_log.txt"

.
.
.

TODOs

  • Add more visualization
  • Add contribution guidelines

Contribution

  • Coming Soon
    • How to contribute
    • How to add a model to the benchmark

Acknowledgments

We thank the authors of

for their open-source contributions, which made this project possible.


This work has received funding from the European Union’s Horizon Europe Research and Innovation Programme under Grant Agreement No. 101076754 - AIthena project.

Citation

@INPROCEEDINGS{10588664,
  author={Beemelmanns, Till and Zhang, Quan and Geller, Christian and Eckstein, Lutz},
  booktitle={2024 IEEE Intelligent Vehicles Symposium (IV)}, 
  title={MultiCorrupt: A Multi-Modal Robustness Dataset and Benchmark of LiDAR-Camera Fusion for 3D Object Detection}, 
  year={2024},
  volume={},
  number={},
  pages={3255-3261},
  keywords={Training;Solid modeling;Three-dimensional displays;Laser radar;Object detection;Detectors;Benchmark testing},
  doi={10.1109/IV55156.2024.10588664}}