AutowareArchitectureProposal.proj

This is the source code of the feasibility study for Autoware architecture proposal.


Autoware (Architecture Proposal)


A meta-repository for the new Autoware architecture feasibility study created by Tier IV. For more details about the architecture itself, please read this overview.

WARNING: All source code relating to this meta-repository is intended solely to demonstrate a potential new architecture for Autoware, and should not be used to autonomously drive a real car!

NOTE: Some, but not all, of the features in the AutowareArchitectureProposal.iv repository are planned to be merged into Autoware.Auto. This is because Autoware.Auto has its own scope and ODD (Operational Design Domain) that it needs to achieve, and so not all of the features in this architecture proposal will be required.

Installation Guide

Minimum Requirements

Hardware

  • x86 CPU (8 cores)
  • 16GB RAM
  • [Optional] Nvidia GPU (4GB RAM)
    • Although not required for basic functionality, a GPU is mandatory to run the following components:
      • lidar_apollo_instance_segmentation
      • traffic_light_ssd_fine_detector
      • cnn_classifier

Performance will be improved with more cores, RAM and a higher-spec graphics card.
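
The requirements above can be checked from a terminal. The snippet below is a minimal sketch for Ubuntu (not part of the installation itself): the thresholds are taken from the list above, and nvidia-smi is only present once an Nvidia driver is installed, so its absence simply means the GPU-dependent components listed above cannot be run.

```shell
#!/usr/bin/env bash
# Sketch: report CPU cores, RAM and GPU against the minimums above.
cores=$(nproc)
mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
echo "CPU cores: ${cores} (minimum 8)"
echo "RAM: ${mem_gb} GB (minimum 16)"
# nvidia-smi exists only if an Nvidia driver is installed.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "No Nvidia driver found (GPU-dependent components unavailable)"
fi
```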

Software

  • Ubuntu 18.04
  • Nvidia driver

Review licenses

The following software will be installed during the installation process, so please confirm their licenses before proceeding.

Installation steps

If the CUDA or TensorRT frameworks have already been installed, we strongly recommend uninstalling them first.
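
To see whether any such packages are present before running the setup script, a quick check along these lines can help. This is a sketch for apt-based systems only; the exact package names depend on how CUDA or TensorRT were originally installed, so verify them on your system before purging anything.

```shell
# List any apt-installed CUDA / cuDNN / TensorRT packages.
# An empty match means there is nothing to uninstall first.
dpkg -l | grep -Ei 'cuda|cudnn|tensorrt|libnvinfer' \
  || echo "No CUDA/cuDNN/TensorRT packages found"

# If packages are found, uninstalling would look roughly like this
# (left commented out; confirm the package names first):
# sudo apt purge 'cuda*' 'libcudnn*' 'libnvinfer*'
```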

  1. Set up the Autoware repository
sudo apt install -y python3-vcstool
mkdir -p ~/workspace
cd ~/workspace
git clone git@github.com:tier4/AutowareArchitectureProposal.proj.git
cd AutowareArchitectureProposal.proj
mkdir -p src
vcs import src < autoware.proj.repos
  2. Run the setup script to install CUDA, cuDNN 7, osqp, ROS and TensorRT 7, entering 'y' when prompted (this step will take around 45 minutes)
./setup_ubuntu18.04.sh

ROS installation alone takes around 20 minutes and may fail during this step. In that event, please follow steps 1.2 to 1.4 of the ROS Melodic installation guide and then re-run the script in step 2 above.

  3. Build the source code (this will take around 15 minutes)
source ~/.bashrc
colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release --catkin-skip-building-tests

Several modules will print stderr output during the build, but these messages are just warnings and can be safely ignored.

Sensor hardware configuration

Prepare launch and vehicle description files according to the sensor configuration of your hardware.
The following files are provided as samples:

Running Autoware

Quick Start

Rosbag simulation

  1. Download the sample pointcloud and vector maps, unpack the zip archive and copy the two map files to the same folder.
  2. Download the sample rosbag.
  3. Open a terminal and launch Autoware
cd ~/workspace/AutowareArchitectureProposal.proj
source install/setup.bash
roslaunch autoware_launch logging_simulator.launch map_path:=/path/to/map_folder vehicle_model:=lexus sensor_model:=aip_xx1 rosbag:=true
  4. Open a second terminal and play the sample rosbag file
cd ~/workspace/AutowareArchitectureProposal.proj
source install/setup.bash
rosbag play --clock -r 0.2 /path/to/sample.bag
  5. Focus the view on the ego vehicle by changing the Target Frame in the RViz Views panel from viewer to base_link.

Note

  • Sample map and rosbag: © 2020 Tier IV, Inc.
    • Due to privacy concerns, the rosbag does not contain image data, and so traffic light recognition functionality cannot be tested with this sample rosbag. As a further consequence, object detection accuracy is decreased.

Planning Simulator

  1. Download the sample pointcloud and vector maps, unpack the zip archive and copy the two map files to the same folder.
  2. Open a terminal and launch Autoware
cd ~/workspace/AutowareArchitectureProposal.proj
source install/setup.bash
roslaunch autoware_launch planning_simulator.launch map_path:=/path/to/map_folder vehicle_model:=lexus sensor_model:=aip_xx1
  3. Set an initial pose for the ego vehicle
    • a) Click the 2D Pose estimate button in the toolbar, or hit the P key
    • b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the initial pose.
  4. Set a goal pose for the ego vehicle
    • a) Click the 2D Nav Goal button in the toolbar, or hit the G key
    • b) In the 3D View pane, click and hold the left-mouse button, and then drag to set the direction for the goal pose.
  5. Engage the ego vehicle.

Note

  • Sample map: © 2020 Tier IV, Inc.

Tutorials

For more information about running the AutowareArchitectureProposal code, along with more verbose instructions and screenshots, please refer to the detailed tutorials here. These tutorials were originally created for a workshop given at the 2020 Arm DevSummit, and have been adapted for use here.

Running the AutowareArchitectureProposal source code with Autoware.Auto

For anyone who would like to use the features of this architecture proposal with existing Autoware.Auto modules right now, ros_bridge can be used.

Until the two architectures become more aligned, message type conversions are required to enable communication between the Autoware.Auto and AutowareArchitectureProposal modules, and these conversions will need to be added manually.

References

Autoware.IV demonstration videos

Credits