Given the increased interest in autonomous vehicles, many companies are racing to deliver Level 5 autonomous vehicles. To do so, these vehicles need an array of sensors to perceive the surrounding environment, such as cameras, radar, and LiDAR. This project focuses on LiDAR, which represents the surrounding environment as point clouds. Specifically, the aim is to investigate the performance of LiDAR-only object detection methods in urban versus non-urban contexts.
- Download
- Split into train, test, and validation sets
- Preprocess point clouds
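The dataset-preparation steps above could be sketched as follows. The detection-range values are the ones commonly used with KITTI-style point clouds and the 10%/10% split fractions are illustrative assumptions, not taken from this project:

```python
import numpy as np

def load_kitti_bin(path):
    # KITTI stores each point as four float32 values: x, y, z, reflectance
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

def crop_to_range(points, x_range=(0, 70.4), y_range=(-40, 40), z_range=(-3, 1)):
    # Keep only points inside the detection range (assumed KITTI-style ROI)
    mask = (
        (points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1])
        & (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1])
        & (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1])
    )
    return points[mask]

def split_frames(frame_ids, val_frac=0.1, test_frac=0.1, seed=0):
    # Shuffle frame ids once, then carve off validation and test slices
    rng = np.random.default_rng(seed)
    ids = rng.permutation(frame_ids)
    n_val = int(len(ids) * val_frac)
    n_test = int(len(ids) * test_frac)
    return ids[n_val + n_test:], ids[:n_val], ids[n_val:n_val + n_test]
```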
- Visually classify subset of KITTI dataset to be used in training and testing.
- Run DeepLab V3 Model on KITTI Dataset
- Obtain semantic histograms
- Train image context classifier using semantic histograms
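One way the semantic-histogram pipeline could look, assuming DeepLab V3 has already produced a per-pixel class-label map for each image. The nearest-centroid classifier is an illustrative stand-in, not necessarily the context classifier used in this project:

```python
import numpy as np

def semantic_histogram(label_map, num_classes):
    # Fraction of pixels the segmenter assigned to each semantic class
    counts = np.bincount(label_map.ravel(), minlength=num_classes)
    return counts / counts.sum()

class NearestCentroidContext:
    # Minimal context classifier: one mean histogram per context label,
    # prediction by nearest centroid in Euclidean distance
    def fit(self, hists, labels):
        hists, labels = np.asarray(hists), np.asarray(labels)
        self.classes_ = np.unique(labels)
        self.centroids_ = np.stack(
            [hists[labels == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, hists):
        d = np.linalg.norm(
            np.asarray(hists)[:, None, :] - self.centroids_[None], axis=2
        )
        return self.classes_[d.argmin(axis=1)]
```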
- Explore classification models suitable to point clouds
- Train point cloud context classifier using features
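For the point-cloud branch, the features feeding the context classifier could be simple hand-crafted summaries of a sweep; the particular features below are illustrative assumptions (the intuition being that urban scenes contain more tall vertical structure), not the feature set fixed by this project:

```python
import numpy as np

def cloud_context_features(points):
    # points: (N, 4) array of x, y, z, intensity from one LiDAR sweep.
    # Returns a small feature vector for a downstream context classifier.
    z = points[:, 2]
    r = np.linalg.norm(points[:, :2], axis=1)
    return np.array([
        len(points),            # overall point count
        z.mean(), z.std(),      # height statistics
        (z > 0.5).mean(),       # fraction of points above sensor height
        points[:, 3].mean(),    # mean reflectance
        r.mean(),               # mean horizontal range
    ])
```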
- Obtain baseline results with pretrained model
- Evaluate original alpha and beta values
- Evaluate original alpha and beta values with SGD
- Notebooks for interactive training and testing
- Implement utility tools and model functions for the validation dataset.
- Implement GPU Monitor for inference code.
- Early stopping
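Early stopping can be sketched independently of any training framework; the patience-based formulation below is one standard variant, not necessarily the exact one implemented here:

```python
class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        # Call once per epoch; returns True when training should stop
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```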
- Explore kernel initialisation
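Two common kernel-initialisation schemes worth exploring are He (suited to ReLU networks) and Xavier/Glorot; a framework-free sketch of both, with the standard fan-in/fan-out formulas:

```python
import numpy as np

def he_normal(shape, fan_in, rng=None):
    # He/Kaiming init: zero-mean normal with std = sqrt(2 / fan_in)
    if rng is None:
        rng = np.random.default_rng()
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)

def xavier_uniform(shape, fan_in, fan_out, rng=None):
    # Glorot/Xavier init: uniform in [-limit, limit],
    # limit = sqrt(6 / (fan_in + fan_out))
    if rng is None:
        rng = np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=shape)
```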
- Implement different RPN architectures for pedestrian and cyclist models.
- Implement focal loss function.
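The binary focal loss (Lin et al.) down-weights easy examples via a `(1 - p_t)^gamma` factor, which helps with the foreground/background imbalance typical of anchor-based detectors. A minimal numpy version, assuming the model outputs foreground probabilities:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    # p: predicted foreground probabilities, y: binary labels (0 or 1).
    # With gamma=0 and alpha=0.5 this reduces to half the usual BCE.
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return (-alpha_t * (1 - p_t) ** gamma * np.log(p_t)).mean()
```

Note how a confident correct prediction contributes far less loss than a marginal one, which is the point of the modulating factor.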
- Obtain baseline results with pretrained model
- Notebooks for interactive training and testing
- Implement GPU Monitor for inference code.
- Convert .pcap to .bin
- Convert .pcap frames to .pcd for annotations
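Assuming the VLP-16 data packets in the .pcap have already been decoded into one (N, 4) array of x, y, z, intensity per frame (the packet decoding itself is omitted here), the two output formats could be written as below. The .bin layout matches KITTI's flat float32 convention; the .pcd writer emits a minimal ASCII PCD v0.7 file for annotation tools:

```python
import numpy as np

def write_kitti_bin(points, path):
    # KITTI .bin: flat float32 stream of x, y, z, intensity per point
    np.asarray(points, dtype=np.float32).tofile(path)

def write_ascii_pcd(points, path):
    # Minimal ASCII .pcd (v0.7) writer with x, y, z, intensity fields
    points = np.asarray(points, dtype=np.float32)
    header = "\n".join([
        "# .PCD v0.7 - Point Cloud Data file format",
        "VERSION 0.7",
        "FIELDS x y z intensity",
        "SIZE 4 4 4 4",
        "TYPE F F F F",
        "COUNT 1 1 1 1",
        f"WIDTH {len(points)}",
        "HEIGHT 1",
        "VIEWPOINT 0 0 0 1 0 0 0",
        f"POINTS {len(points)}",
        "DATA ascii",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z, i in points:
            f.write(f"{x} {y} {z} {i}\n")
```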
- Modify config files to include VLP-16 specifications.
- Run VoxelNet on dataset.
- Research GPU metrics to be logged during model inference
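A GPU monitor for inference could poll metrics on a background thread. The sketch below takes an injectable `query` callable so it runs without a GPU; in practice the query would read utilisation and memory through NVML (e.g. pynvml's `nvmlDeviceGetUtilizationRates` and `nvmlDeviceGetMemoryInfo`). The polling interval and metric names are illustrative assumptions:

```python
import threading
import time

class GpuMonitor:
    """Polls a metric-query function on a background thread during inference."""

    def __init__(self, query, interval=0.1):
        self.query = query          # callable returning a dict of metrics
        self.interval = interval    # seconds between samples
        self.samples = []
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        while not self._stop.is_set():
            self.samples.append(self.query())
            self._stop.wait(self.interval)

    def __enter__(self):
        self._thread.start()
        return self

    def __exit__(self, *exc):
        self._stop.set()
        self._thread.join()
```

Used as a context manager around the inference loop, `samples` then holds one metrics dict per polling tick for later aggregation.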
- Notebooks for interactive analysis
- Calculate FLOPs and number of parameters
- Calculate point cloud density
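Point cloud density could be computed as points per cubic metre inside a fixed region of interest; the ROI bounds below are an assumed KITTI-style detection range, not values fixed by this project:

```python
import numpy as np

def point_cloud_density(points, x_range=(0, 70.4), y_range=(-40, 40), z_range=(-3, 1)):
    # Density = points inside the ROI / ROI volume (points per m^3)
    inside = (
        (points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1])
        & (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1])
        & (points[:, 2] >= z_range[0]) & (points[:, 2] < z_range[1])
    )
    volume = (
        (x_range[1] - x_range[0])
        * (y_range[1] - y_range[0])
        * (z_range[1] - z_range[0])
    )
    return inside.sum() / volume
```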