sumo2vision

Simulate perception sensors and object detection based on traffic data generated by Sumo

Primary Language: Python

Downloading a Map

  1. Go to https://www.openstreetmap.org/
  2. Search for the desired area, e.g. Escondido, California
  3. Zoom in to the desired area
  4. Export the area and save it to a specific folder (a scripted alternative is sketched after the tip below)

Tip

You can use https://josm.openstreetmap.de/ to edit the map and remove unwanted elements (e.g. roads or buildings).
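
If you prefer to script the export rather than use the website's Export button, the OpenStreetMap API can return the same .osm data for a bounding box. The sketch below is only an illustration: the bounding box, output folder, and file name are placeholders, and the manual export described above works just as well.

```python
# Scripted alternative to the manual Export button on openstreetmap.org.
# The bounding box, output folder, and file name below are placeholders.
from pathlib import Path

import requests

# The OSM API expects the bbox as min_lon,min_lat,max_lon,max_lat;
# these values cover a small area around Escondido, California.
bbox = "-117.11,33.10,-117.05,33.15"
url = f"https://api.openstreetmap.org/api/0.6/map?bbox={bbox}"

out_dir = Path("maps")  # use the same folder you later point maps_path to
out_dir.mkdir(parents=True, exist_ok=True)

response = requests.get(url, timeout=60)
response.raise_for_status()
(out_dir / "escondido.osm").write_bytes(response.content)
```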

Sumo to Vision

Simulate perception sensors and object detection based on traffic data generated by Sumo. The roads are from downtown Toronto.

Data generation

  1. Set maps_path in initialize_folders.py to the path of the downloaded maps. Optionally, provide the position of a basestation in a text file corresponding to each map (see the configuration sketch after this list).
  2. Set output_dataset_path in initialize_folders.py to the folder where the generated dataset should be written.
  3. Run initialize_folders.py.
  4. To run all scenarios (bulk run), run sumo_visual_all_scenario.py.
  5. To generate data for a single scenario, run sumo_visual_scenario.py.
  6. For step 4 or 5, adjust the parameters in the script before running it.
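
For orientation, here is a minimal sketch of the kind of configuration these steps refer to. The names maps_path and output_dataset_path come from initialize_folders.py; the directory layout, the per-map basestation text file convention, and the helper logic are assumptions rather than the repository's actual implementation.

```python
# Sketch of the configuration that steps 1-3 refer to (assumed layout, not
# the actual contents of initialize_folders.py).
import os

maps_path = "/path/to/downloaded/maps"           # folder with the exported .osm files
output_dataset_path = "/path/to/output/dataset"  # folder the generated dataset is written to

os.makedirs(output_dataset_path, exist_ok=True)

for map_file in os.listdir(maps_path):
    if not map_file.endswith(".osm"):
        continue
    scenario_name = os.path.splitext(map_file)[0]

    # Optional basestation position: a text file next to each map,
    # e.g. toronto.txt next to toronto.osm (assumed naming convention).
    basestation_file = os.path.join(maps_path, scenario_name + ".txt")
    if os.path.exists(basestation_file):
        with open(basestation_file) as f:
            print(f"{scenario_name}: basestation at {f.read().strip()}")

    # One output folder per scenario (assumed layout).
    os.makedirs(os.path.join(output_dataset_path, scenario_name), exist_ok=True)
```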

Data format

  1. The Vehicle class (in vehicle_info.py) holds the information attached to each vehicle.
  2. The output files have the following format:

(cv2x_vehicles, non_cv2x_vehicles, buildings, cv2x_perceived_non_cv2x_vehicles, scores_per_cv2x, los_statuses, vehicles, cv2x_perceived_non_cv2x_vehicles, cv2x_vehicles_perception_visible, tot_perceived_objects, tot_visible_objects)

  3. cv2x_perceived_non_cv2x_vehicles is a dictionary with the cv2x vehicle id as key and the perceived non-cv2x vehicles as value (see the loading sketch below).
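
As a rough sketch of how one such output could be consumed, assuming the tuple is pickled to a single file (the actual serialization and file naming are not stated here; check the generation scripts):

```python
# Sketch: read one output file and unpack the tuple documented above.
# The pickle format and the file name "results.pkl" are assumptions.
# Note that cv2x_perceived_non_cv2x_vehicles appears twice in the documented
# tuple, so the second occurrence is unpacked under a different name here.
import pickle

with open("results.pkl", "rb") as f:
    (cv2x_vehicles, non_cv2x_vehicles, buildings,
     cv2x_perceived_non_cv2x_vehicles, scores_per_cv2x, los_statuses,
     vehicles, cv2x_perceived_non_cv2x_vehicles_2,
     cv2x_vehicles_perception_visible, tot_perceived_objects,
     tot_visible_objects) = pickle.load(f)

# cv2x_perceived_non_cv2x_vehicles maps each cv2x vehicle id to the
# non-cv2x vehicles it perceives.
for cv2x_id, perceived in cv2x_perceived_non_cv2x_vehicles.items():
    print(cv2x_id, len(perceived))
```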