stefanopini/simple-HRNet

Displaying joint keypoints on video input

timtensor opened this issue · 4 comments

Hi, I am trying to display the keypoint coordinates on top of the human body, but I seem to get an error in the visualization. I planned to display the coordinates just over the joint, but the text is drawn in a different location. I think the org parameter of cv2.putText has to be changed, but I am not sure how. I used the following code to extract the keypoints:

person_ID = 0
RIGHT_ANKLE = 1
right_ankle = [pts[person_ID, RIGHT_ANKLE][0], pts[person_ID, RIGHT_ANKLE][1]]
cv2.putText(frame, str(right_ankle),
            tuple(np.multiply(np.asarray(right_ankle), [384, 288]).astype(int)),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (52, 52, 232), 1, cv2.LINE_AA)

I multiplied by [384, 288], which is the image width and height. Do you know how one can solve this?

Hi @timtensor ,

As far as I remember, cv2.putText requires a point in (x, y) format, so you should use

right_ankle = [pts[person_ID, RIGHT_ANKLE][1], pts[person_ID, RIGHT_ANKLE][0]]

similarly to what is done here (btw you may directly change this function).

In addition, the joints should already be rescaled, so there is no need to multiply them by the image size.
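
For example, something like this should place the coordinates right at the joint (a minimal sketch, assuming pts comes from model.predict and already holds pixel coordinates in (y, x) order; RIGHT_ANKLE is 16 in the COCO joint order):

import cv2

person_ID = 0
RIGHT_ANKLE = 16  # COCO index of the right ankle

# joints are stored as (y, x); OpenCV drawing functions want (x, y)
y, x = pts[person_ID, RIGHT_ANKLE][:2]
cv2.putText(frame, str((int(x), int(y))), (int(x), int(y)),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (52, 52, 232), 1, cv2.LINE_AA)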

Thank you for the response. Yes, cv2.putText requires an origin point, which I wanted to be on top of the ankle, displaying the X and Y coordinates. However, I did not manage that: I could get the x, y coordinates, but I could not place the text on top of the ankle in the image, so I was wondering what I was doing wrong.

Also, isn't it RIGHT_ANKLE[0] = x coordinate and RIGHT_ANKLE[1] = y coordinate?

It's been a while, but according to this the points are saved in (y, x) coordinates (so that you can use them to access a pixel on the image with img[joint[0], joint[1]]).
Thus, I think you should use (joint[1], joint[0]) with the drawing functions of OpenCV.

According to the OpenCV docs, the point given to cv2.putText is used as the "bottom-left corner of the text string in the image", so the text should appear on the top-right of the joint.
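
In code, the two conventions look like this (just a sketch, using one joint from pts):

joint = pts[0, 16]  # right ankle in the COCO order, stored as (y, x, confidence)
pixel = frame[int(joint[0]), int(joint[1])]  # NumPy indexing is (row, col), i.e. (y, x)
cv2.circle(frame, (int(joint[1]), int(joint[0])), 3, (0, 255, 0), -1)  # OpenCV drawing wants (x, y)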

Let me know if it works! Otherwise, please share an example with the joint coordinates and the text printed in the wrong position.

Thank you, I think you are correct there. I added some visualization for angle calculations and extended the live-demo.py code. It looks a bit unorganized; could you suggest some improvements?
Angle code in utils.py:

def calculate_angle(a, b, c):
    a = np.array(a)  # first point
    b = np.array(b)  # mid point (the vertex of the angle)
    c = np.array(c)  # end point

    radians = np.arctan2(c[1] - b[1], c[0] - b[0]) - np.arctan2(a[1] - b[1], a[0] - b[0])
    angle = np.abs(radians * 180.0 / np.pi)

    if angle > 180.0:
        angle = 360 - angle

    return angle
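
As a quick sanity check with made-up points (not model output): swapping the two coordinates of every point just mirrors the plane, so the absolute angle is unchanged and the function works the same whether the joints are stored as (y, x) or (x, y), as long as all three points are consistent.

print(calculate_angle([0, 1], [0, 0], [1, 0]))  # 90.0, a right angle at the vertex b
print(calculate_angle([1, 0], [0, 0], [0, 1]))  # 90.0, same triple with coordinates swapped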

Modified live-demo.py:

import os
import sys
import argparse
import ast
import cv2
import time
import torch
from vidgear.gears import CamGear
import numpy as np

sys.path.insert(1, os.getcwd())
from SimpleHRNet import SimpleHRNet
from misc.visualization import draw_points, draw_skeleton, draw_points_and_skeleton, joints_dict, check_video_rotation
from misc.utils import find_person_id_associations, calculate_angle

def main(camera_id, filename, hrnet_m, hrnet_c, hrnet_j, hrnet_weights, hrnet_joints_set, image_resolution,
         single_person, use_tiny_yolo, disable_tracking, max_batch_size, disable_vidgear, save_video, video_format,
         video_framerate, device):
    if device is not None:
        device = torch.device(device)
    else:
        if torch.cuda.is_available():
            torch.backends.cudnn.deterministic = True
            device = torch.device('cuda')
        else:
            device = torch.device('cpu')

    # print(device)

    image_resolution = ast.literal_eval(image_resolution)
    has_display = 'DISPLAY' in os.environ.keys() or sys.platform == 'win32'
    video_writer = None

    if filename is not None:
        rotation_code = check_video_rotation(filename)
        video = cv2.VideoCapture(filename)
        assert video.isOpened()
    else:
        rotation_code = None
        if disable_vidgear:
            video = cv2.VideoCapture(camera_id)
            assert video.isOpened()
        else:
            video = CamGear(camera_id).start()

    if use_tiny_yolo:
        yolo_model_def = "./models/detectors/yolo/config/yolov3-tiny.cfg"
        yolo_class_path = "./models/detectors/yolo/data/coco.names"
        yolo_weights_path = "./models/detectors/yolo/weights/yolov3-tiny.weights"
    else:
        yolo_model_def = "./models/detectors/yolo/config/yolov3.cfg"
        yolo_class_path = "./models/detectors/yolo/data/coco.names"
        yolo_weights_path = "./models/detectors/yolo/weights/yolov3.weights"

    model = SimpleHRNet(
        hrnet_c,
        hrnet_j,
        hrnet_weights,
        model_name=hrnet_m,
        resolution=image_resolution,
        multiperson=not single_person,
        return_bounding_boxes=not disable_tracking,
        max_batch_size=max_batch_size,
        yolo_model_def=yolo_model_def,
        yolo_class_path=yolo_class_path,
        yolo_weights_path=yolo_weights_path,
        device=device
    )

    if not disable_tracking:
        prev_boxes = None
        prev_pts = None
        prev_person_ids = None
        next_person_id = 0

    while True:
        t = time.time()

        if filename is not None or disable_vidgear:
            ret, frame = video.read()
            if not ret:
                break
            if rotation_code is not None:
                frame = cv2.rotate(frame, rotation_code)
        else:
            frame = video.read()
            if frame is None:
                break

        pts = model.predict(frame)
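        # with tracking enabled, predict returns (boxes, pts); each joint in pts is (y, x, confidence) in pixels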

        if not disable_tracking:
            boxes, pts = pts

        if not disable_tracking:
            if len(pts) > 0:
                person_ID = 0
                # right side
                NOSE = 0
                RIGHT_EYE = 2
                RIGHT_EAR = 4
                RIGHT_ANKLE = 16
                RIGHT_KNEE = 14
                RIGHT_HIP = 12
                RIGHT_SHOULDER = 6
                RIGHT_ELBOW = 8
                RIGHT_WRIST = 10
                # left side
                LEFT_EYE = 1
                LEFT_EAR = 3
                LEFT_ANKLE = 15
                LEFT_KNEE = 13
                LEFT_HIP = 11
                LEFT_SHOULDER = 5
                LEFT_ELBOW = 7
                LEFT_WRIST = 9
                # convert each joint from (y, x) storage to (x, y) for the OpenCV drawing calls below
                nose = [pts[person_ID, NOSE][1], pts[person_ID, NOSE][0]]
                right_eye = [pts[person_ID, RIGHT_EYE][1], pts[person_ID, RIGHT_EYE][0]]
                right_ear = [pts[person_ID, RIGHT_EAR][1], pts[person_ID, RIGHT_EAR][0]]
                right_ankle = [pts[person_ID, RIGHT_ANKLE][1], pts[person_ID, RIGHT_ANKLE][0]]
                right_knee = [pts[person_ID, RIGHT_KNEE][1], pts[person_ID, RIGHT_KNEE][0]]
                right_shoulder = [pts[person_ID, RIGHT_SHOULDER][1], pts[person_ID, RIGHT_SHOULDER][0]]
                right_hip = [pts[person_ID, RIGHT_HIP][1], pts[person_ID, RIGHT_HIP][0]]
                right_elbow = [pts[person_ID, RIGHT_ELBOW][1], pts[person_ID, RIGHT_ELBOW][0]]
                right_wrist = [pts[person_ID, RIGHT_WRIST][1], pts[person_ID, RIGHT_WRIST][0]]

                left_eye = [pts[person_ID, LEFT_EYE][1], pts[person_ID, LEFT_EYE][0]]
                left_ear = [pts[person_ID, LEFT_EAR][1], pts[person_ID, LEFT_EAR][0]]
                left_ankle = [pts[person_ID, LEFT_ANKLE][1], pts[person_ID, LEFT_ANKLE][0]]
                left_knee = [pts[person_ID, LEFT_KNEE][1], pts[person_ID, LEFT_KNEE][0]]
                left_shoulder = [pts[person_ID, LEFT_SHOULDER][1], pts[person_ID, LEFT_SHOULDER][0]]
                left_hip = [pts[person_ID, LEFT_HIP][1], pts[person_ID, LEFT_HIP][0]]
                left_elbow = [pts[person_ID, LEFT_ELBOW][1], pts[person_ID, LEFT_ELBOW][0]]
                left_wrist = [pts[person_ID, LEFT_WRIST][1], pts[person_ID, LEFT_WRIST][0]]

                # calculate the joint angles (right side)
                rshoulderangle = round(calculate_angle(right_elbow, right_shoulder, right_hip), 3)
                rhipangle = round(calculate_angle(right_shoulder, right_hip, right_knee), 3)
                rkneeangle = round(calculate_angle(right_hip, right_knee, right_ankle), 3)
                reyeangle = round(calculate_angle(right_ear, right_eye, nose), 3)
                relbowangle = round(calculate_angle(right_shoulder, right_elbow, right_wrist), 3)

                # calculate the joint angles (left side)
                lshoulderangle = round(calculate_angle(left_elbow, left_shoulder, left_hip), 3)
                lhipangle = round(calculate_angle(left_shoulder, left_hip, left_knee), 3)
                lkneeangle = round(calculate_angle(left_hip, left_knee, left_ankle), 3)
                leyeangle = round(calculate_angle(left_ear, left_eye, nose), 3)
                lelbowangle = round(calculate_angle(left_shoulder, left_elbow, left_wrist), 3)



                # display each angle next to the corresponding joint
                angles_and_joints = [
                    (rshoulderangle, right_shoulder), (rhipangle, right_hip),
                    (rkneeangle, right_knee), (reyeangle, right_eye), (relbowangle, right_elbow),
                    (lshoulderangle, left_shoulder), (lhipangle, left_hip),
                    (lkneeangle, left_knee), (leyeangle, left_eye), (lelbowangle, left_elbow),
                ]
                for angle, joint in angles_and_joints:
                    org = (int(joint[0]), int(joint[1]))  # joint is already (x, y)
                    cv2.putText(frame, str(angle), org,
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1, cv2.LINE_AA)


                if prev_pts is None and prev_person_ids is None:
                    person_ids = np.arange(next_person_id, len(pts) + next_person_id, dtype=np.int32)
                    next_person_id = len(pts) + 1
                else:
                    boxes, pts, person_ids = find_person_id_associations(
                        boxes=boxes, pts=pts, prev_boxes=prev_boxes, prev_pts=prev_pts, prev_person_ids=prev_person_ids,
                        next_person_id=next_person_id, pose_alpha=0.2, similarity_threshold=0.4, smoothing_alpha=0.1,
                    )
                    next_person_id = max(next_person_id, np.max(person_ids) + 1)
            else:
                person_ids = np.array((), dtype=np.int32)

            prev_boxes = boxes.copy()
            prev_pts = pts.copy()
            prev_person_ids = person_ids

        else:
            person_ids = np.arange(len(pts), dtype=np.int32)

        for i, (pt, pid) in enumerate(zip(pts, person_ids)):
            frame = draw_points_and_skeleton(frame, pt, joints_dict()[hrnet_joints_set]['skeleton'], person_index=pid,
                                             points_color_palette='gist_rainbow', skeleton_color_palette='jet',
                                             points_palette_samples=10)

        fps = 1. / (time.time() - t)
        #print('\rframerate: %f fps' % fps, end='')

        if has_display:
            cv2.imshow('frame.png', frame)
            k = cv2.waitKey(1)
            if k == 27:  # Esc button
                if disable_vidgear:
                    video.release()
                else:
                    video.stop()
                break
        else:
            cv2.imwrite('frame.png', frame)

        if save_video:
            if video_writer is None:
                fourcc = cv2.VideoWriter_fourcc(*video_format)  # video format
                video_writer = cv2.VideoWriter('output.avi', fourcc, video_framerate, (frame.shape[1], frame.shape[0]))
            video_writer.write(frame)

    if save_video:
        video_writer.release()


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--camera_id", "-d", help="open the camera with the specified id", type=int, default=0)
    parser.add_argument("--filename", "-f", help="open the specified video (overrides the --camera_id option)",
                        type=str, default=None)
    parser.add_argument("--hrnet_m", "-m", help="network model - 'HRNet' or 'PoseResNet'", type=str, default='HRNet')
    parser.add_argument("--hrnet_c", "-c", help="hrnet parameters - number of channels (if model is HRNet), "
                                                "resnet size (if model is PoseResNet)", type=int, default=48)
    parser.add_argument("--hrnet_j", "-j", help="hrnet parameters - number of joints", type=int, default=17)
    parser.add_argument("--hrnet_weights", "-w", help="hrnet parameters - path to the pretrained weights",
                        type=str, default="./weights/pose_hrnet_w48_384x288.pth")
    parser.add_argument("--hrnet_joints_set",
                        help="use the specified set of joints ('coco' and 'mpii' are currently supported)",
                        type=str, default="coco")
    parser.add_argument("--image_resolution", "-r", help="image resolution", type=str, default='(384, 288)')
    parser.add_argument("--single_person",
                        help="disable the multiperson detection (YOLOv3 or an equivalen detector is required for"
                             "multiperson detection)",
                        action="store_true")
    parser.add_argument("--use_tiny_yolo",
                        help="Use YOLOv3-tiny in place of YOLOv3 (faster person detection). Ignored if --single_person",
                        action="store_true")
    parser.add_argument("--disable_tracking",
                        help="disable the skeleton tracking and temporal smoothing functionality",
                        action="store_true")
    parser.add_argument("--max_batch_size", help="maximum batch size used for inference", type=int, default=16)
    parser.add_argument("--disable_vidgear",
                        help="disable vidgear (which is used for slightly better realtime performance)",
                        action="store_true")  # see https://pypi.org/project/vidgear/
    parser.add_argument("--save_video", help="save output frames into a video.", action="store_true")
    parser.add_argument("--video_format", help="fourcc video format. Common formats: `MJPG`, `XVID`, `X264`."
                                                     "See http://www.fourcc.org/codecs.php", type=str, default='MJPG')
    parser.add_argument("--video_framerate", help="video framerate", type=float, default=30)
    parser.add_argument("--device", help="device to be used (default: cuda, if available)."
                                         "Set to `cuda` to use all available GPUs (default); "
                                         "set to `cuda:IDS` to use one or more specific GPUs "
                                         "(e.g. `cuda:0` `cuda:1,2`); "
                                         "set to `cpu` to run on cpu.", type=str, default=None)
    args = parser.parse_args()
    main(**args.__dict__)