Point cloud generation using mm-wave radar, classification using PointNet/PointNet++, etc.

3D Point Cloud Generation & Human Activity Recognition

Chinese Documentation

Introduction

This repository contains the experimental code for my undergraduate graduation thesis, "Millimeter Wave Radar Based Human Activity Recognition System".

PointCloudGeneration.ipynb implements an algorithm that generates 3D point clouds from the raw data collected by the radar, through steps such as the range FFT, range-azimuth heatmap computation, and the Doppler FFT. Each generated point carries range, angle, signal strength, and velocity information.
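To make the pipeline concrete, here is a minimal NumPy sketch of the range FFT, static object removal, and Doppler/azimuth FFT steps. The cube shape, window choice, and variable names are illustrative assumptions and do not mirror the notebook.

import numpy as np

# Hypothetical raw radar cube: (num_chirps, num_rx_antennas, num_adc_samples).
# The real shape depends on the radar configuration used for the thesis.
num_chirps, num_rx, num_samples = 128, 4, 256
adc = (np.random.randn(num_chirps, num_rx, num_samples)
       + 1j * np.random.randn(num_chirps, num_rx, num_samples))

# Range FFT: FFT along the ADC-sample (fast-time) axis yields range bins.
range_fft = np.fft.fft(adc * np.hanning(num_samples), axis=-1)

# Static object removal (one common variant): subtract the mean over chirps
# so that zero-Doppler returns are suppressed.
range_fft -= range_fft.mean(axis=0, keepdims=True)

# Doppler FFT: FFT along the chirp (slow-time) axis yields velocity bins.
doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

# Azimuth FFT across the RX antennas (zero-padded) yields angle bins, from which
# the range-azimuth heatmap and, after CFAR/DBSCAN, the point cloud are obtained.
azimuth_fft = np.fft.fftshift(np.fft.fft(doppler_fft, n=64, axis=1), axes=1)
print(azimuth_fft.shape)  # (128, 64, 256) = (velocity, angle, range) bins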

The PointCloudClassification folder implements a machine learning model consisting of a point cloud feature extraction layer, a time-series processing layer, and a fully connected classification layer. The model used in each layer can be chosen and combined independently. In the point cloud feature extraction layer, voxel-based 3D convolution and PCA methods, PointNet, and PointNet++ are used to extract features in the spatial domain. In the time-series processing layer, RNN, GRU, and LSTM are used to extract features in the time domain.
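The way these layers plug together can be sketched as follows; this is a minimal PyTorch sketch, and the class and argument names are assumptions rather than the actual code in PointCloudClassification.

import torch.nn as nn

class ActivityClassifier(nn.Module):
    # Hypothetical three-layer structure: per-frame point cloud feature
    # extractor, temporal model over the frame sequence, FC classifier.
    def __init__(self, point_feature_extractor, feature_dim, hidden_dim, num_classes):
        super().__init__()
        self.point_feature_extractor = point_feature_extractor  # e.g. a PointNet-style encoder
        self.temporal = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):
        # frames: (batch, num_frames, num_points, point_dim)
        b, t, n, d = frames.shape
        feats = self.point_feature_extractor(frames.reshape(b * t, n, d))
        feats = feats.reshape(b, t, -1)        # (batch, num_frames, feature_dim)
        _, (h, _) = self.temporal(feats)       # final hidden state summarizes the sequence
        return self.classifier(h[-1])          # (batch, num_classes)

An LSTM is hard-coded here for brevity; in the design described above, both the spatial extractor and the temporal module (RNN, GRU, or LSTM) are interchangeable.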

Environment Configuration

PyTorch3D provides high-performance CUDA implementations of the ball query algorithm, the farthest point sampling (FPS) algorithm, and point cloud voxelization, which can improve efficiency by dozens of times. However, it has very strict requirements on the Python and PyTorch versions. After testing, it can be installed successfully with the following command (with CUDA 11.4 on Linux):

conda create -n MachineLearning python=3.9 pytorch=1.9.1 cudatoolkit fvcore iopath pytorch3d -c pytorch3d -c pytorch -c fvcore -c iopath -c conda-forge
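For reference, this is roughly how the accelerated operators are called; a minimal sketch assuming a CUDA device, and the exact function names and signatures may differ between PyTorch3D versions.

import torch
from pytorch3d.ops import sample_farthest_points, ball_query

points = torch.rand(8, 1024, 3, device="cuda")   # batch of 8 clouds with 1024 points each

# Farthest point sampling: pick 128 well-spread centroids per cloud.
centroids, fps_idx = sample_farthest_points(points, K=128)

# Ball query: for each centroid, gather up to 32 neighbors within radius 0.2.
dists, nn_idx, nn_points = ball_query(centroids, points, K=32, radius=0.2, return_nn=True)
print(centroids.shape, nn_points.shape)          # (8, 128, 3) (8, 128, 32, 3)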

Install the other required libraries:

pip install matplotlib numpy scikit-learn scipy tqdm

Dataset

The datasets (Pantomime, PeopleWalking, and RadHAR) are placed in the data folder, and the file structure looks like this:

/data
  /Pantomime
    /primary_exp
      ...
    /supp_exp
      ...
    data.pickle
  /PeopleWalking
    1.mat
    2.mat
    ...
  /RadHAR
    /Test
      ...
      data.pickle
    /Train
      ...
      data.pickle
/PointCloudClassification
  ...
PointCloudGeneration.ipynb
README.md

When a dataset is read for the first time, it is preprocessed and the result is saved as data.pickle; subsequent runs read only data.pickle.
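The caching logic is roughly the following; a minimal sketch, not the actual dataset code, and the preprocess callable stands in for the per-dataset preprocessing.

import os
import pickle

def load_dataset(root, preprocess):
    # Load the cached pickle if it exists, otherwise preprocess and cache the result.
    cache = os.path.join(root, "data.pickle")
    if os.path.exists(cache):
        with open(cache, "rb") as f:
            return pickle.load(f)
    data = preprocess(root)          # dataset-specific preprocessing of the raw files
    with open(cache, "wb") as f:
        pickle.dump(data, f)
    return data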

You can therefore use the preprocessed data I provide directly, without downloading the original datasets:

https://1drv.ms/u/s!Ap0_tHPGTLjfhv1cZZpv_iQyyNnExA?e=aBzV1G

Training Log

The log folder records all the data generated during training (a minimal sketch of this logging pattern follows the list):

  • Accuracy and loss on the training and test sets for each epoch
  • Training parameters
  • The best model and the latest model
  • Confusion matrix
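A sketch of the logging pattern above; the file names and the metrics dictionary keys are assumptions, not the exact format used in the log folder.

import json
import torch
from sklearn.metrics import confusion_matrix

def log_epoch(log_dir, epoch, model, metrics, y_true, y_pred, best_acc):
    # Append this epoch's accuracy/loss values to a history file.
    with open(f"{log_dir}/history.jsonl", "a") as f:
        f.write(json.dumps({"epoch": epoch, **metrics}) + "\n")
    # Always keep the latest model; keep the best model so far separately.
    torch.save(model.state_dict(), f"{log_dir}/latest.pth")
    if metrics["test_acc"] > best_acc:
        best_acc = metrics["test_acc"]
        torch.save(model.state_dict(), f"{log_dir}/best.pth")
    # Confusion matrix on the test set predictions.
    cm = confusion_matrix(y_true, y_pred)
    return best_acc, cm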

Training logs for all 65 experiments in my graduation thesis:

https://1drv.ms/u/s!Ap0_tHPGTLjfhv1fUuyTU2nmQwvCQg?e=0VBFdz

Visualization Figures and Videos of Point Cloud Generation

The figures and videos generated by PointCloudGeneration.ipynb are saved in the fig folder for use in slides and thesis illustrations, including:

  • The result of the range FFT (figure)
  • The result after static object removal (figure)
  • The range-azimuth heatmap (video)
  • The result of the CFAR detection algorithm (video)
  • The result of the DBSCAN clustering algorithm (video; a minimal clustering sketch follows this list)
  • The range-azimuth heatmap annotated with velocity values (video)
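As an illustration of the clustering step referenced above, here is a minimal DBSCAN sketch on hypothetical CFAR detections; the eps and min_samples values are assumptions, not the parameters used in the notebook.

import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical CFAR detections: one row per detected point, columns x, y, z in meters.
detections = np.random.rand(200, 3) * np.array([4.0, 4.0, 2.0])

# Group detections into targets; points labelled -1 are treated as noise.
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(detections)
print("clusters:", labels.max() + 1, "noise points:", int((labels == -1).sum()))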

Sharing link for the sorted figures and videos:

https://1drv.ms/u/s!Ap0_tHPGTLjfh4A2tp6LUW7s-1fsVA?e=pShZbn

Experimental Code

The code for each experiment can be found in the git history. Commits whose messages end with a training config contain the code used for that training run.