
The creation of autonomous racing vehicles built for intense head-to-head competition, serving as a dynamic demonstration of AI, sensor integration, and sensor fusion technologies.


Sensor Fusion And Autonomous Racing Cars



About

In this project, PID control and image processing methods are used to build an autonomous racing vehicle control system. The system reads real-time video input to identify track boundaries and dynamically compute steering adjustments, all within the ROS 2 Foxy framework. Key responsibilities include applying morphological operations to improve image quality for line recognition and adapting the HSV color range (green) to different light levels for reliable track detection.

Based on the perceived deviation from the track center, a PID controller determines the required steering changes, combining error integration and differentiation for smooth, responsive vehicle control. The system also computes the track's curvature from the detected lines in order to dynamically adjust the car's speed for the best possible racing performance.

Detailed Overview of Technical Implementations

Curvature Calculation

Accurately determining the curvature of the track is essential for efficient navigation. My methodology includes:


  • Line Detection: I start by finding edges with the Canny edge detector, then apply the Hough Transform to detect the lines that mark the track's borders.
  • Line Grouping: The detected lines are divided into left and right boundaries based on their slopes.
  • Circle Fitting: To estimate the track's curvature, I apply a least-squares circle-fitting method to these groups, computing the radius as the mean Euclidean distance of the waypoints from their centroid:

$$R = \frac{1}{n} \sum_{i=1}^{n} \sqrt{(x_i - x_{avg})^2 + (y_i - y_{avg})^2}$$

Here, n is the total number of waypoints, x_i and y_i are the coordinates of each waypoint, and x_avg and y_avg are the average coordinates of all waypoints, i.e., the centroid. The formula computes the average distance of each waypoint from this centroid, which helps determine the path curvature for steering adjustments. Testing showed occasional false positives.
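A minimal sketch of this computation (the function name and use of NumPy are illustrative assumptions, not the project's exact implementation):

```python
import numpy as np

def estimate_radius(waypoints):
    """Estimate the path radius as the mean Euclidean distance
    of each waypoint from the centroid of all waypoints."""
    pts = np.asarray(waypoints, dtype=float)  # shape (n, 2): rows of (x_i, y_i)
    centroid = pts.mean(axis=0)               # (x_avg, y_avg)
    return np.linalg.norm(pts - centroid, axis=1).mean()
```

Since a larger radius corresponds to a straighter path, this estimate can be used to raise or lower the car's target speed.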

Recovery Mechanism

I've set up a recovery mechanism in case the car loses sight of the lane lines, in order to guarantee more consistent behavior. The main causes of the car leaving the track are the camera angle, field of view (FOV), and poor camera quality (this will be fixed in a future release).

  • Loss of Line Detection: If the car does not detect any lines for more than thirty seconds, it reverses with neutral steering to reposition itself for a better view of the lines.
  • Extended Detection Failure: If lines still are not detected for an extended period, the vehicle keeps reversing with neutral steering, which stops it from deviating further until lines are detected again. A sketch of this logic appears below.

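A minimal sketch of this timer-based logic (the class name, constants, and reverse speed are illustrative assumptions, not the project's actual identifiers):

```python
import time

LOST_LINE_TIMEOUT = 30.0  # seconds without detected lines before reversing

class RecoveryMonitor:
    """Tracks when lines were last seen and decides when to back up."""

    def __init__(self):
        self.last_seen = time.monotonic()

    def update(self, lines_detected):
        """Return a (speed, steering_angle) override, or None to drive normally."""
        now = time.monotonic()
        if lines_detected:
            self.last_seen = now
            return None        # lines visible: normal driving
        if now - self.last_seen > LOST_LINE_TIMEOUT:
            return (-0.3, 0.0) # reverse with neutral steering (speed is illustrative)
        return None            # keep the last command until the timeout elapses
```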

PID Control

The PID controller is essential for swift and smooth vehicle control.

  • Proportional (P): Adjusts the steering angle in proportion to the deviation from the track center.
  • Integral (I): Corrects systematic errors and biases by accumulating the error over time.
  • Derivative (D): Moderates the steering response based on the rate of error change, minimizing overshoot and keeping the driving stable.

Computing Correction

The correction is computed from the current error and the delta time by applying the PID formula:

$$\text{correction} = K_p\,e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d\,\frac{de(t)}{dt}$$
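A minimal sketch of this controller (the gains and pixel-based error are illustrative assumptions and would need tuning on the car):

```python
class PIDController:
    """Computes correction = Kp*e + Ki*integral(e) + Kd*de/dt for steering."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def compute(self, error, dt):
        """Return the correction for the current error and elapsed time dt."""
        if dt <= 0.0:
            dt = 1e-3  # guard against a zero or negative time step
        self.integral += error * dt                  # I: accumulate error over time
        derivative = (error - self.prev_error) / dt  # D: rate of change of error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example use (illustrative values): error in pixels, dt in seconds
pid = PIDController(kp=0.5, ki=0.01, kd=0.1)
correction = pid.compute(error=12.0, dt=0.033)
```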

Enhancements in Image Processing

Image Processing Pipeline


The car's ability to navigate autonomously relies largely on the process_image function. First, the region of interest (ROI), covering the track ahead, is cropped from the bottom of the frame. This subset of the camera image is critical because it contains the lines that drive the steering logic.

The ROI is then converted to the HSV color space, which is better suited than the default BGR color space for recognizing colors under a range of lighting conditions. The transformation is shown in two sample images above: one shows a straight path, while the other shows a curve in the track.

Once the HSV conversion is finished, the track line color, typically green, is isolated using a color mask. To handle changing lighting, the precise green range is dynamically adjusted based on the overall brightness of the image. For debugging purposes, the resulting binary mask cleanly separates the track lines, as shown in 'mask.jpg'.

The mask is then refined with morphological operations to remove small noise, ensuring that the subsequent edge detection targets only the important features. 'edges.jpg' is the result of applying Canny edge detection to this cleaned mask; it shows the sharp transitions from the track line to the surrounding area.

Finally, these edges are converted into line segments using the HoughLinesP algorithm, which finds the linear patterns in the edge-detected image and converts them into sets of line coordinates. The car's steering logic depends on these coordinates to understand the path's structure and adjust the steering as needed. The processed images and detected lines are saved for later validation and debugging. A sketch of this pipeline is shown below.
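A minimal sketch of the pipeline (the ROI fraction, HSV thresholds, brightness rule, and Hough parameters are illustrative assumptions, not the project's tuned values):

```python
import cv2
import numpy as np

def process_image(frame):
    """ROI crop -> HSV -> green mask -> morphology -> Canny -> HoughLinesP."""
    h, w = frame.shape[:2]
    roi = frame[int(h * 0.6):, :]  # keep the bottom of the frame (track ahead)

    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)

    # Widen the green range in darker scenes (illustrative adaptation rule).
    brightness = hsv[:, :, 2].mean()
    lower = np.array([40, 40, 40] if brightness > 100 else [35, 30, 30])
    upper = np.array([85, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)

    # Morphological opening removes small noise before edge detection.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=10)
    return lines  # line segments (x1, y1, x2, y2), or None if nothing is found
```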

Architecture

ARC-1.0 Design


The ARC-1.0 system is an architecture designed specifically for autonomous RC cars that uses ROS 2 Foxy for communication and control. The design has two main levels: the Hardware Layer, which works directly with the physical components to carry out actions, and the System Layer, which handles control inputs and coordinates the car's navigation logic.

Control inputs at the System Layer can come either from an autonomous algorithm that chooses the vehicle's route and maneuvers or from manually published commands. The Ackermann Steering Controller receives these inputs, interprets them into directives, and uses them to calculate the proper wheel speeds and steering angles.

Steering commands are published to the /ackermann_cmd topic in a standardized ROS 2 message format that includes data such as the desired speed, steering angle, and acceleration. The Ackermann Steering Controller node receives these messages as they are published, analyzes the commands, and determines the output signals required to achieve the desired motion.
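For example, a drive command can be published manually from the command line. This is a sketch that assumes the standard ackermann_msgs/msg/AckermannDriveStamped message type; the field values are illustrative:

  • Command: ros2 topic pub --once /ackermann_cmd ackermann_msgs/msg/AckermannDriveStamped "{drive: {speed: 0.5, steering_angle: 0.2}}"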

The actual physical control of the car happens at the Hardware Layer, which is made up of motor drivers that communicate with the car's actuators. One essential element is the VESC driver, which manages the brushless motor and regulates its speed based on commands from the System Layer.

Once the VESC driver receives a steering instruction from the /ackermann_cmd topic, the servo motor driver adjusts the steering mechanism to the proper angle, while the VESC's electrical signals govern the wheel speed. This combination of steering and speed control lets the car precisely follow the intended trajectory.

The absence of LiDAR and a complete Navigation2 stack, which are typically found in autonomous cars for navigation and obstacle avoidance, significantly simplifies the system. Instead, the architecture is designed to work with other sensors or in controlled settings where such complex navigational tools are not needed.

Software

| Software | Purpose |
| --- | --- |
| Ubuntu 20.04 | Operating system for both the on-board (RPi) and off-board (laptop) machines |
| ROS 2 Foxy | Acts as middleware for communication and development |
| Gazebo (not implemented yet) | Realistic environment for testing and simulating the sensors used in the racing cars; also for testing racing strategies, decision-making, and algorithms |
| Python3 | Programming language used in this project |
| OpenCV | Open-source computer vision and machine learning software library |
| NumPy | Library for scientific computing with Python; supports large, multi-dimensional arrays and matrices |
| SciPy (not used) | Built on top of NumPy; functionality includes optimization, regression, interpolation, etc. |
| cv_bridge | ROS library that provides an interface between ROS and OpenCV |

Hardware

| Hardware | Purpose |
| --- | --- |
| Raspberry Pi 4B (8GB RAM) | Minicomputer to run nodes, scripts, etc. |
| 20kg Servo | For steering the car |
| TT-02 Type-S Chassis | The load-bearing framework of the car |
| HOBBYWING Sensored Brushless Motor | Sensored motor for the car; connects to the VESC |
| VESC 6 MkVI | Controls and regulates the speed of the electric motor; customizable firmware, regenerative braking, and real-time telemetry |
| Traxxas 4000mAh 11.1V 3-Cell 25C | Battery to power the VESC |
| Power Bank | Powers the RPi when mobile; stores up to 42800mAh |
| Logitech C270 | Captures images for obstacle detection, lane following, and AI |
| 2D LiDAR (not used) | Scans surroundings for obstacle detection, navigation support, and path planning |

Demos

| Clips | Description |
| --- | --- |
| Car Build Video | Car build (no upgrades) |
| ARC Drive On Test Track | Car performing two laps |

Track


Testing

A real-world verification test was conducted to make sure the movement commands given to the car were executed accurately. To verify the car's speed, a two-meter strip of tape was placed on the ground. By publishing data to the /ackermann_cmd topic, the car was driven across the two-meter distance and stopped its motor at the two-meter mark, indicating that the speed commands were transmitted correctly. The effectiveness of the steering system was also assessed by how closely the vehicle's trajectory followed the steering commands. This careful testing approach confirms that the vehicle's control system translates command signals into the intended physical actions.
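A minimal sketch of how such a timed speed test could be scripted (the message type, speed, and duration are assumptions based on the setup described above, not the exact test script):

```python
import time
import rclpy
from ackermann_msgs.msg import AckermannDriveStamped

# Drive forward at a fixed speed for a fixed time, then stop -- the same
# pattern as the tape test (speed and duration are illustrative).
rclpy.init()
node = rclpy.create_node('tape_test')
pub = node.create_publisher(AckermannDriveStamped, '/ackermann_cmd', 10)

msg = AckermannDriveStamped()
msg.drive.speed = 1.0    # m/s
end = time.time() + 2.0  # 1.0 m/s * 2.0 s = 2.0 m (ignoring ramp-up)
while time.time() < end:
    pub.publish(msg)
    time.sleep(0.05)     # republish at ~20 Hz

msg.drive.speed = 0.0    # stop at the two-meter mark
pub.publish(msg)
node.destroy_node()
rclpy.shutdown()
```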


How to Run

After ensuring that all components are set up and all prerequisites have been installed, follow these steps to run the system (run each command in a separate terminal):

  1. Launch the arc_startup file, which contains the startup configuration for the camera and the vesc_driver package.
    • Command: ros2 launch arc_startup startup.launch.py
  2. Launch the vesc_ackermann package for command translation.
    • Command: ros2 launch vesc_ackermann ackermann_to_vesc_node.launch.xml
  3. Finally, once these two launch files are successfully launched, start the autonomous algorithm package.
    • Command: ros2 launch arc_autonomous_ctrl autonomous_ctrl.launch.py