Inference-Function-for-Lite-HRNet

For the requirements, installation, training, and testing of Lite-HRNet, please visit the official GitHub page (https://github.com/HRNet/Lite-HRNet). If the installation of mmcv-full fails, install mmcv and mmpose instead.

Unlike the other members of the HRNet family, Lite-HRNet relies on a dataloader to feed all images into the model. I tried to write my own dataloader, but the results were incorrect.


After reading the configuration file thoroughly, I realized that every image is processed by val_pipeline during validation.
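For reference, the COCO top-down configs define a val_pipeline along these lines; this is only a sketch, and the exact transforms, normalization values, and meta_keys may differ between configs, so check your own config file:

```python
val_pipeline = [
    dict(type='LoadImageFromFile'),   # read the image from disk
    dict(type='TopDownAffine'),       # warp the bbox crop to the network input size
    dict(type='ToTensor'),            # HWC uint8 -> CHW float tensor
    dict(
        type='NormalizeTensor',       # ImageNet mean/std normalization
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]),
    dict(
        type='Collect',               # pack 'img' together with the meta information
        keys=['img'],
        meta_keys=[
            'image_file', 'center', 'scale', 'rotation', 'bbox_score',
            'flip_pairs'
        ]),
]
```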


Therefore, I used OpenCV to load a single image and imitated the whole val_pipeline process.
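Below is a minimal sketch of that imitation, assuming a 256x192 COCO config and ImageNet normalization. The function name preprocess and the input_size argument are mine, and the plain cv2.resize is a simplification of TopDownAffine (reasonable when the whole image is treated as the bounding box):

```python
import cv2
import numpy as np
import torch


def preprocess(image_path, input_size=(192, 256)):
    """Load one image with OpenCV and roughly reproduce val_pipeline."""
    # LoadImageFromFile: OpenCV reads BGR, mmpose works with RGB.
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    # Stand-in for TopDownAffine: resize to the network input size
    # (cv2.resize expects (width, height)).
    img = cv2.resize(img, input_size)

    # ToTensor + NormalizeTensor with the ImageNet statistics from the config.
    img = img.astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    img = (img - mean) / std

    # NCHW tensor with batch size 1 -- this is the img_trans used below.
    img_trans = torch.from_numpy(img.transpose(2, 0, 1)).unsqueeze(0)
    return img_trans
```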


The input of the model needs to be a dictionary, so I created an empty one and used img_trans as the value for ‘img’. As for ‘img_metas’, it is originally used to draw the bounding box from the json annotation file on the image. Since we don’t need the ground truth when running inference, ‘image_file’, ‘bbox_score’, and ‘bbox_id’ are not important. I assumed ‘rotation’ rotates the bounding box, so I just used the default value. Hence, ‘center’ and ‘scale’ are the only two parameters that affect the inference result.
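A sketch of how that dictionary can be assembled. build_input is a hypothetical helper name; the values for ‘image_file’, ‘bbox_score’, and ‘bbox_id’ are dummies, and the division by 200 follows the mmpose convention that scale is expressed in units of 200 pixels:

```python
import numpy as np

# COCO left/right keypoint pairs; only consulted when flip_test is enabled.
COCO_FLIP_PAIRS = [[1, 2], [3, 4], [5, 6], [7, 8],
                   [9, 10], [11, 12], [13, 14], [15, 16]]


def build_input(img_trans, img_w, img_h):
    """Pack img_trans and the img_metas that the top-down model expects."""
    # Treat the whole image as the person bounding box.
    center = np.array([img_w / 2.0, img_h / 2.0], dtype=np.float32)
    scale = np.array([img_w / 200.0, img_h / 200.0], dtype=np.float32)

    img_metas = [dict(
        image_file='',               # not needed at inference time
        center=center,               # maps the heatmap back onto the image
        scale=scale,                 # maps the heatmap back onto the image
        rotation=0,                  # default: do not rotate the bounding box
        bbox_score=1.0,              # dummy confidence
        bbox_id=0,                   # dummy id
        flip_pairs=COCO_FLIP_PAIRS,
    )]
    return dict(img=img_trans, img_metas=img_metas)
```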


Since I was too lazy to write a function that draws the joints and limbs of a person, I borrowed the add_joints function from EfficientHRNet (https://github.com/TeCSAR-UNCC/EfficientHRNet/blob/main/lib/utils/vis.py).
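Putting the pieces together, a rough end-to-end sketch could look like the following. It assumes the script is run from the Lite-HRNet repository root (so its models package registers the backbone), uses mmpose’s init_pose_model, and assumes add_joints takes (image, joints, color) with joints as a (num_joints, 3) array of (x, y, score); check the borrowed vis.py for the exact signature. The config and checkpoint paths below are placeholders, and the return format of the forward call depends on the installed mmpose version:

```python
import cv2
import torch
from mmpose.apis import init_pose_model

import models  # noqa: F401 -- registers the LiteHRNet backbone from this repo
from vis import add_joints  # the function borrowed from EfficientHRNet

config_file = 'configs/top_down/lite_hrnet/coco/litehrnet_18_coco_256x192.py'  # adjust to your repo
checkpoint_file = 'litehrnet_18_coco_256x192.pth'                              # adjust to your checkpoint
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'

model = init_pose_model(config_file, checkpoint_file, device=device)

image_path = 'demo.jpg'
image_bgr = cv2.imread(image_path)
img_h, img_w = image_bgr.shape[:2]

# preprocess() and build_input() are the sketches shown earlier.
data = build_input(preprocess(image_path).to(device), img_w, img_h)

with torch.no_grad():
    result = model(return_loss=False, **data)

# With recent mmpose 0.x versions the model returns a dict whose 'preds'
# entry holds the decoded keypoints as an (N, K, 3) array of (x, y, score);
# older versions return a tuple instead, so adapt this line if needed.
keypoints = result['preds'][0]

add_joints(image_bgr, keypoints, color=(0, 255, 0))  # draws in place with OpenCV
cv2.imwrite('demo_pose.jpg', image_bgr)
```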
