
Keypoint detection: launching on RK3588 and training custom models.


Keypoints_HRNet_RK3588

Abstract

We provide a solution for running keypoint-detection (pose-estimation) neural networks on the RK3588.
The steps for preparing the edge device are described below.
We also provide a quick guide to converting models.

Example:
(example images: a human input photo and its keypoint-overlay output)
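The downloadable human-pose model is trained on COCO, whose annotation format defines 17 body keypoints. Assuming the standard COCO ordering (an assumption; check your model's config), the output indices map to body parts like this:

```python
# Standard COCO keypoint names, in annotation order.
# Index i in the model output corresponds to COCO_KEYPOINTS[i].
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

print(len(COCO_KEYPOINTS))  # 17
```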

1. Prerequisites

  • Ubuntu

    Install Ubuntu on your RK3588 device (tested with Ubuntu 20.04 on OrangePi 5 and Firefly ROC-RK3588S devices).

    To install Ubuntu on a Firefly device, you can use their manual[1][2].

    To install Ubuntu on an OrangePi device, you can use their manual.

    Or use our READMEs for them (select the one below).

    OrangePi Firefly

2. Installation and configuration

Install Miniconda:

wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh
bash Miniconda3-latest-Linux-aarch64.sh

Then reload your shell session:

source ~/.bashrc

Create a conda environment with Python 3.8:

conda create -n <env-name> python=3.8
conda activate <env-name>

Clone the repository:

git clone https://github.com/Applied-Deep-Learning-Lab/Keypoints_HRNet_RK3588 
cd Keypoints_HRNet_RK3588

Install RKNN Toolkit Lite2:

pip install install/rknn_toolkit_lite2-1.5.0-cp38-cp38-linux_aarch64.whl

In the created conda environment, also install the requirements from the same directory:

pip install -r install/requirements.txt

3. Running keypoint search

main.py runs inference as follows:

python3 main.py weights/human_pose.rknn \
                images/human.jpg

Inference results are saved to the ./results folder by default.
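Under the hood, HRNet-style models output one heatmap per keypoint, and the peak of each heatmap gives the keypoint location. A minimal NumPy sketch of this decoding step (a simplified, single-person case; the actual post-processing in this repo may add sub-pixel refinement and, for bottom-up models, tag-based grouping):

```python
import numpy as np

def decode_heatmaps(heatmaps, img_size):
    """Return a (K, 3) array of (x, y, score) from (K, H, W) heatmaps.

    Simplified argmax decoding for a single person; real pipelines
    typically refine the peak location to sub-pixel precision.
    """
    num_kpts, h, w = heatmaps.shape
    keypoints = np.zeros((num_kpts, 3), dtype=np.float32)
    for k in range(num_kpts):
        idx = np.argmax(heatmaps[k])
        y, x = divmod(idx, w)
        # Scale heatmap coordinates back to the input image size
        keypoints[k] = (x * img_size / w, y * img_size / h, heatmaps[k, y, x])
    return keypoints

# Toy example: one 4x4 heatmap with a peak at (x=2, y=1)
hm = np.zeros((1, 4, 4), dtype=np.float32)
hm[0, 1, 2] = 0.9
print(decode_heatmaps(hm, img_size=512))  # x=256.0, y=128.0, score~0.9
```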

4. Converting a PyTorch model to ONNX to RKNN

  • Preparing the host PC

    For model training, we use MMPose by OpenMMLab.

    Step 0. You will also need conda on the host PC.

    conda create -n openmmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
    conda activate openmmlab
    

    Step 1. Install MMCV using MIM.

    sudo apt-get update
    pip3 install -U openmim
    mim install mmcv-full==1.7.0
    

    Step 2. Install MMPose.

    git clone --depth 1 --branch v0.29.0 https://github.com/open-mmlab/mmpose.git
    cd mmpose
    pip install -r requirements.txt
    pip install -v -e .
    pip install numpy==1.23.5
    
  • Convert PyTorch to ONNX

    Inside the mmpose folder, with the 'openmmlab' conda environment activated:

    python tools/deployment/pytorch2onnx.py <path/to/config.py> \
      				<path/to/model.pth> \
      				--output-file <path/to/model.onnx> \
      				--shape 1 3 <model_size> <model_size>
    

    Example:

    mim download mmpose --config associative_embedding_hrnet_w32_coco_512x512  --dest .
    python tools/deployment/pytorch2onnx.py associative_embedding_hrnet_w32_coco_512x512.py \
      				hrnet_w32_coco_512x512-bcb8c247_20200816.pth \
      				--output-file human_pose.onnx \
      				--shape 1 3 512 512
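The `--shape 1 3 512 512` argument fixes the ONNX input to NCHW layout (batch, channels, height, width). Whatever feeds the converted model must produce the same layout. A minimal NumPy preprocessing sketch (illustrative only; the exact resize method and normalization must match your model's training config, which typically uses bilinear resize plus mean/std normalization):

```python
import numpy as np

def preprocess(image_hwc, size=512):
    """Convert an HxWx3 uint8 image to a 1x3xSxS float32 NCHW tensor.

    Nearest-neighbour resize for brevity; real pipelines typically use
    bilinear resize and the normalization from the training config.
    """
    h, w, _ = image_hwc.shape
    ys = np.arange(size) * h // size            # source rows for each output row
    xs = np.arange(size) * w // size            # source cols for each output col
    resized = image_hwc[ys][:, xs]              # (S, S, 3) nearest-neighbour resize
    chw = resized.astype(np.float32).transpose(2, 0, 1) / 255.0
    return chw[np.newaxis]                      # add batch dim -> (1, 3, S, S)

img = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(preprocess(img).shape)  # (1, 3, 512, 512)
```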
    
  • Convert ONNX to RKNN

    Step 1. Create a conda environment

    conda create -n rknn python=3.8
    conda activate rknn
    

    Step 2. Install RKNN-Toolkit2

    git clone https://github.com/Applied-Deep-Learning-Lab/Keypoints_HRNet_RK3588
    cd Keypoints_HRNet_RK3588
    pip install install/rknn_toolkit2-1.5.0+1fa95b5c-cp38-cp38-linux_x86_64.whl
    

    Step 3. To convert your .onnx model to .rknn, run onnx2rknn.py:

    python onnx2rknn.py <path/to/model.onnx>
    
    # For more precise conversion settings, 
    # check the additional options in the help:
    # python onnx2rknn.py -h
    

    Example:

    python onnx2rknn.py human_pose.onnx