This repository contains our team project for the 'Deep Learning' class at Hanbat National University (advisor: Prof. KyunTae Lim).
The class was selected for the NVIDIA University Ambassador Program by the NVIDIA Deep Learning Institute (DLI).
I was the team leader, and the project ran for four weeks.
Interest in autopilot technology keeps growing.
Big companies such as Amazon, Tesla, and Uber are researching this area these days, they are looking for great developers, and the popularity looks likely to grow further.
However, it is not easy to practice or test your model with a real car: hacking your own car is hard (is it even possible?) and too dangerous.
With an NVIDIA Jetson Nano and a JetBot AI Kit, you can practice autopilot technology, and it is really helpful for getting a sense of how an autopilot works.
In this repository, we introduce our autopilot project.
Whether or not you are studying autopilot with a Jetson Nano and a Jetson robot, it might be helpful.
> NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. All in an easy-to-use platform that runs in as little as 5 watts.
As the quotation above says, you can practice AI models with the Jetson Nano.
The Jetson Nano is essentially a small GPU board, so you can train models on it (but only lightweight ones).
Models can be both trained and run for inference on the Jetson Nano itself.
Using the inference results, the Jetson robot can move according to what you trained. (We used the 'Waveshare JetBot AI Kit'.)
Our autopilot consists of road following, collision avoidance, and object detection. (You can see the details in the sections below.)
The flow chart is shown in the image below.
Green boxes are regression models; red boxes are classification models.
There are three models in this project and one XML file for OpenCV face detection:

- LR_best_model_trt.pth
- block_free_model_trt.pth
- road_following_model_trt.pth
- haarcascade_frontalface_default.xml
This model is based on ResNet-18 and converted to TensorRT.
It makes the left/right decision. We trained it with more than 400 images.
We took the pictures ourselves, including the ones shown here.
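The left/right decision itself reduces to taking the more probable of the model's two output classes. A minimal sketch of that step (the function names and class ordering here are our own illustration, not the notebook's code):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    logits = np.asarray(logits, dtype=np.float32)
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def decide_direction(logits, classes=("left", "right")):
    """Map a 2-class model output to a 'left' or 'right' decision.
    The ordering of `classes` is an assumption for this sketch."""
    probs = softmax(logits)
    return classes[int(np.argmax(probs))]
```

For example, `decide_direction([2.0, -1.0])` picks `"left"` because class 0 has the higher probability.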
This model is also based on ResNet-18 and converted to TensorRT.
It is used like an object detector.
It watches for objects in front of the JetBot; if it detects something, it classifies the frame as blocked.
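In a typical JetBot collision-avoidance loop, this two-class output is turned into a blocked/free decision by thresholding the 'blocked' probability. A sketch, assuming index 0 is the 'blocked' class (the real ordering depends on how the training folders were loaded):

```python
import numpy as np

def is_blocked(logits, threshold=0.5):
    """Return True when the 'blocked' class probability exceeds the threshold.

    Assumes index 0 = 'blocked', index 1 = 'free' (an assumption of this
    sketch, not a guarantee about the trained model)."""
    logits = np.asarray(logits, dtype=np.float32)
    e = np.exp(logits - np.max(logits))
    prob_blocked = float(e[0] / e.sum())
    return prob_blocked > threshold
```

When `is_blocked(...)` returns True, the robot stops road following and switches to the avoidance behavior.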
This model is based on ResNet-18 and converted to TensorRT.
It was trained on the track that comes with the JetBot.
Our JetBot settings are:
- speed_gain_slider = 0.20
- steering_gain_slider = 0.05
- steering_dgain_slider = 0.0
- steering_bias_slider = 0.0

(These parameters differ for every JetBot, so you need to find the values that fit yours. It is entirely empirical.)
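The standard JetBot road-following notebook combines these sliders into a PD-style steering value and derives the two motor speeds from it. A sketch of that control law (parameter names are shortened from the slider names above; defaults match our settings):

```python
def motor_speeds(angle, angle_last,
                 speed_gain=0.20, steering_gain=0.05,
                 steering_dgain=0.0, steering_bias=0.0):
    """PD-style steering as in the standard JetBot road-following demo.

    angle / angle_last: current and previous steering angle predicted by
    the road-following model. Returns (left, right) motor values in [0, 1].
    """
    # Proportional term on the angle, derivative term on its change.
    pid = angle * steering_gain + (angle - angle_last) * steering_dgain
    steering = pid + steering_bias
    # Opposite steering offsets on the two motors turn the robot.
    left = max(min(speed_gain + steering, 1.0), 0.0)
    right = max(min(speed_gain - steering, 1.0), 0.0)
    return left, right
```

With `steering_dgain` and `steering_bias` at 0.0 as above, the steering reduces to `angle * steering_gain`.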
This is OpenCV's Haar cascade face detection model.
We used it to mosaic people's faces in the recordings made while the JetBot is running.
In this project, we cared about personal privacy.
Many companies gather autopilot data without caring about people's privacy; that is why we use the cascade model.
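A minimal sketch of the mosaic step, assuming BGR camera frames and the cascade XML in the working directory (the function names are ours, not the notebook's):

```python
import numpy as np

def pixelate(region, block=10):
    """Mosaic an image region by sampling every `block`-th pixel and
    blowing each sample back up into a block x block tile."""
    h, w = region.shape[:2]
    small = region[::block, ::block]
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)[:h, :w]

def mosaic_faces(frame, cascade_path="haarcascade_frontalface_default.xml"):
    """Detect faces with OpenCV's Haar cascade and pixelate them in place."""
    import cv2  # imported here so pixelate() also works without OpenCV
    cascade = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
        frame[y:y + h, x:x + w] = pixelate(frame[y:y + h, x:x + w])
    return frame
```

Calling `mosaic_faces(frame)` on each recorded frame anonymizes every detected face before the video is saved.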
After the left/right decision, our JetBot calls an avoidance routine.
You can find these functions in Main.ipynb.

```python
def left_avoidance():
    ...

def right_avoidance():
    ...
```

If LR_best_model decides left, the JetBot calls left_avoidance(); if it decides the other side, it calls right_avoidance().
This is how our JetBot avoids obstructions on the road.
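The notebook bodies are elided above; below is a minimal sketch of what such routines could look like, assuming the jetbot `Robot` API (`left`/`right`/`forward`/`stop`). The speeds and timings are hypothetical placeholders, not the values from Main.ipynb:

```python
import time

def left_avoidance(robot, speed=0.3, turn_time=0.5, forward_time=1.0):
    """Swerve left around an obstacle, then turn back toward the lane.
    Speeds and timings here are illustrative, not the notebook's values."""
    robot.left(speed)      # turn away from the obstacle
    time.sleep(turn_time)
    robot.forward(speed)   # drive past it
    time.sleep(forward_time)
    robot.right(speed)     # turn back toward the lane
    time.sleep(turn_time)
    robot.stop()

def right_avoidance(robot, speed=0.3, turn_time=0.5, forward_time=1.0):
    """Mirror image of left_avoidance."""
    robot.right(speed)
    time.sleep(turn_time)
    robot.forward(speed)
    time.sleep(forward_time)
    robot.left(speed)
    time.sleep(turn_time)
    robot.stop()
```

Passing the robot object in makes the routines easy to test off-device with a stub that records the motor calls.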
This method converts the ndarray image to a tensor, normalizes the tensor with the mean and standard deviation, and returns the tensor in half precision.
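As a reference for the math, here is a numpy illustration of that preprocessing. The notebook does the same with torch tensors on the GPU; the mean/std below are the usual ImageNet statistics for a pretrained ResNet-18 (an assumption of this sketch):

```python
import numpy as np

# ImageNet per-channel statistics commonly used with pretrained ResNet-18.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image):
    """Normalize an HxWx3 uint8 camera frame and return it in half precision.
    (A numpy illustration of what the torch-based method does on-device.)"""
    x = image.astype(np.float32) / 255.0  # ndarray -> floats in [0, 1]
    x = (x - MEAN) / STD                  # per-channel normalization
    return x.astype(np.float16)           # "return the tensor half"
```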