A dataset for a 2TX-4RX 77GHz mmWave radar with the raw ADC data recorded. Six main object classes - pedestrian, cyclist, car, motorbike, bus, truck - were collected to fit the automotive object detection scenario.
RAMP-CNN: A Novel Neural Network for Enhanced Automotive Radar Object Recognition,
Xiangyu Gao, Guanbin Xing, Sumit Roy, and Hui Liu,
arXiv technical report (arXiv:2011.08981)
@ARTICLE{9249018, author={Gao, Xiangyu and Xing, Guanbin and Roy, Sumit and Liu, Hui},
journal={IEEE Sensors Journal},
title={RAMP-CNN: A Novel Neural Network for Enhanced Automotive Radar Object Recognition},
year={2021}, volume={21}, number={4}, pages={5119-5132}, doi={10.1109/JSEN.2020.3036047}}
Raw ADC Data of 77GHz MMWave radar for Automotive Object Detection,
Xiangyu Gao, Youchen Luo, Guanbin Xing, Sumit Roy, Hui Liu,
IEEE Dataport
@data{xm40-jx59-22, doi = {10.21227/xm40-jx59}, url = {https://dx.doi.org/10.21227/xm40-jx59},
author = {Gao, Xiangyu and Luo, Youchen and Xing, Guanbin and Roy, Sumit and Liu, Hui},
publisher = {IEEE Dataport},
title = {Raw ADC Data of 77GHz MMWave radar for Automotive Object Detection},
year = {2022} }
(April 28, 2023) Update the description for labels.
(Dec. 11, 2022) Initial release of dataset and tools.
In this dataset, we provide the raw analog-to-digital-converter (ADC) data of a 77GHz mmWave radar for the automotive object detection scenario. The overall dataset contains approximately 19,800 frames of radar data along with synchronized camera images and labels. For each radar frame, the raw data has 4 dimensions: samples (fast time), chirps (slow time), transmitters, and receivers. The experimental radar was built on the TI AWR1843 board, with 2 horizontal transmit antennas and 4 receive antennas. With time-division multiplexing across the transmitters, it forms a 1D-MIMO virtual array with 8 elements.
The data collection was done on campus, on roads, and in parking lots during the daytime, focusing on capturing data for six main object classes: pedestrian, cyclist, car, motorbike, bus, and truck. The collected objects can be either moving (mostly) or static. A single data collection run consisted of multiple objects from the list above moving at normal speed, or remaining static, in front of the testbed for 30 seconds. More information on the dataset structure, format, tools, and radar configuration is provided below.
Download the dataset from the Google Drive link:
https://drive.google.com/file/d/1QgjwdQpY96NAVGdvjjFrXLhb48o15EO_/view?usp=share_link
Or from IEEE Dataport:
https://ieee-dataport.org/documents/raw-adc-data-77ghz-mmwave-radar-automotive-object-detection
The dataset consists of multiple sequences, e.g., "2019_04_09_bms1000" and "2019_04_09_cms1000". Each sequence folder contains an image folder "images_0", a radar data folder "radar_raw_frame", and a label folder "text_labels".
The overall dataset structure is shown below.
Automotive
---2019_04_09_bms1000
---images_0
---radar_raw_frame
---text_labels
---2019_04_09_cms1000
......
The "radar_raw_frame" folder contains raw ADC radar data in .mat format, and "images_0" folder contains camera images in .jpg format, and the "text_labels" contains label files for each frame in .csv format. The detailed data format is explained below.
Match the radar frame, camera image, and labels based on their filenames. There is some redundant data, which you can simply disregard.
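To pair the three modalities, a minimal sketch is given below. It assumes the radar .mat, camera .jpg, and label .csv files belonging to the same frame share the same filename stem; the helper names are illustrative only, and unmatched (redundant) files simply fall out of the intersection.

```python
import glob
import os

def index_by_stem(folder, ext):
    """Map filename stem (no extension) -> full path for all files with this extension."""
    return {os.path.splitext(os.path.basename(p))[0]: p
            for p in glob.glob(os.path.join(folder, '*' + ext))}

def match_frames(seq_dir):
    """Pair radar frames, camera images, and labels that share a filename stem."""
    radar = index_by_stem(os.path.join(seq_dir, 'radar_raw_frame'), '.mat')
    image = index_by_stem(os.path.join(seq_dir, 'images_0'), '.jpg')
    label = index_by_stem(os.path.join(seq_dir, 'text_labels'), '.csv')
    common = sorted(set(radar) & set(image) & set(label))  # redundant/unmatched files are dropped here
    return [(radar[s], image[s], label[s]) for s in common]

# Example: triples = match_frames('Automotive/2019_04_09_bms1000')
```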
- For each radar frame, the raw data (*.mat) has 4 dimensions: samples (128), chirps (255), receivers (4), transmitters (2); a minimal loading sketch is given after this list. All transmitters were operated with time-division multiplexing (TDM), i.e., they transmit chirp signals one after another. The example frame structure is shown below:
- The placement of the 2 transmitters and 4 receivers is plotted in the left figure below (from the TI documentation). Through TDM, they form a 1-by-8 MIMO virtual array, as shown in the right figure below:
- All radar configurations are included in config.
- The camera image for each frame is 1440x1080 pixels.
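As an illustration of how a raw frame could be consumed, a minimal sketch follows. The variable name inside the .mat file ('adcData') and the TX-major ordering of the virtual array are assumptions (check a file with scipy.io.whosmat to confirm the key and shape); the range/Doppler FFTs are a generic processing example, not the released tools.

```python
import numpy as np
import scipy.io as sio

def load_adc_frame(mat_path, key='adcData'):
    """Load one raw ADC frame; 'key' is an assumed variable name inside the .mat file."""
    frame = np.asarray(sio.loadmat(mat_path)[key])  # expected shape (128, 255, 4, 2): samples, chirps, rx, tx
    return frame

def to_virtual_array(frame):
    """Stack the 2 TDM transmitters onto the 4 receivers -> 8-element virtual array (TX-major order assumed)."""
    samples, chirps, n_rx, n_tx = frame.shape
    return np.transpose(frame, (0, 1, 3, 2)).reshape(samples, chirps, n_tx * n_rx)  # (128, 255, 8)

def range_doppler(virtual):
    """Generic range FFT over fast time and Doppler FFT over slow time."""
    rng = np.fft.fft(virtual, axis=0)                         # range bins
    return np.fft.fftshift(np.fft.fft(rng, axis=1), axes=1)   # (range, Doppler, virtual RX)
```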
- Each *.csv file includes the labels for one frame, with each row in the format [uid, class, px, py, wid, len], where:
  - uid: the unique tracking id of the object within this sequence
  - class: the class id of the object, as given in the label map below
  - px: x-axis center of the bounding box in meters, within the range [-20m, 20m]
  - py: y-axis center of the bounding box in meters, within the range [1m, 24m]
  - wid: width of the bounding box in meters (along the x-axis)
  - len: length of the bounding box in meters (along the y-axis)

  The mapping of class ids to objects is: label_map = {0: 'person', 2: 'car', 3: 'motorbike', 5: 'bus', 7: 'truck', 80: 'cyclist'}
Note that there might be a few special cases where the px, py values exceed the stated ranges; you may simply ignore those labels or clip the values.
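A minimal parsing sketch, assuming each .csv row holds plain comma-separated values in the [uid, class, px, py, wid, len] order above (skip a header row if one is present); the clipping of out-of-range centers follows the note above.

```python
import csv

LABEL_MAP = {0: 'person', 2: 'car', 3: 'motorbike', 5: 'bus', 7: 'truck', 80: 'cyclist'}

def read_labels(csv_path):
    """Read one frame's labels and clip box centers to the stated ranges."""
    objects = []
    with open(csv_path, newline='') as f:
        for row in csv.reader(f):
            if not row or not row[0].strip():
                continue                        # skip empty lines
            uid, cls = int(float(row[0])), int(float(row[1]))
            px, py, wid, length = (float(v) for v in row[2:6])
            px = min(max(px, -20.0), 20.0)      # clip rare out-of-range centers
            py = min(max(py, 1.0), 24.0)
            objects.append({'uid': uid, 'class': LABEL_MAP.get(cls, 'unknown'),
                            'px': px, 'py': py, 'wid': wid, 'len': length})
    return objects
```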
Python 3.6 (please refer to INSTALL to set up the libraries).
Under prepare...
This tool is released under MIT license (see LICENSE).
This project was supported by FUNLAB at the University of Washington and by Silkwave Holdings. This project would not be possible without multiple great open-source codebases; we list some notable examples below.