dronefreak/human-action-classification

How to exclude Scene Classification?

karndeepsingh opened this issue · 11 comments

Hello!
Thanks for the work you have done.
I just want to get the sitting and standing poses from a live feed. How do I do that? At the same time, I don't want scene classification to run, since it makes processing an image take longer. How can I stop scene classification?

It would be helpful for my custom task. Thanks again. Looking for an answer ASAP.

Hello @karndeepsingh

Apologies for the late reply. You can simply comment out the scene class extraction part in the scene classification code. Please look for the following snippet in run_image.py:

# Classification
	pose_class = label_img.classify(image)
	scene_class = label_img_scene.classify(args.image)

Comment out the second line, as well as any other lines that use the scene_class variable, so that it does not cause issues later. Hope that helps!
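
For reference, a minimal sketch of what the edited snippet might look like (assuming scene_class is not used anywhere else in your copy of run_image.py):

# Classification
	pose_class = label_img.classify(image)
	# scene_class = label_img_scene.classify(args.image)  # commented out: scene classification disabled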

Thank you so much!!
Just one more question!!
I am trying to use this model on an embedded device. What device would you suggest deploying on?
I am trying to deploy it on a Raspberry Pi; I guess inference will take time, right?

Uhh, that is a tricky one. You see, the CMU-designed OpenPose that we use in this project for pose extraction is fairly lightweight, but probably not lightweight enough, so it might be a bit tricky to deploy on embedded devices. Furthermore, the MobileNetV2 that we used is also meant for real-time applications, but probably not on an RPi (MobileNetV3 was introduced after I completed this project).
Overall, I would say that this project might not currently be suitable for embedded devices. If, however, this is a requirement for you, I would recommend trying it out on NVIDIA devices such as the Jetson TX2, NX, Xavier, etc., because they have GPU options. The RPi 4 probably has a GPU option too, but I have never experimented with it, so I am not sure how useful it would be for your use case. If you want to try, I'd say the NVIDIA Jetson series would be a good starting point.

Thanks for your valuable answer. I was looking toward the NVIDIA series myself.

Can we connect on LinkedIn?
How can I find you over there? 😅

Sorry to put a question again!
I have had a doubt for many days. Please clear it up.
We ran OpenPose for pose estimation, so it was able to give the coordinates of the important points of the body. After this, we did specific pose classification, such as standing or sitting, with all these coordinate points. So how did we come to know that particular coordinates correspond to standing or sitting, and pass them to the classification algorithm to classify? In short, I just want to understand the classification part.

Please explain it. It would help me understand everything clearly, as nobody has explained pose classification.

Thanks again.

See, basically, for this project you need to have your poses as images, not as 2D skeletal structures. I apologize for this noob implementation; I will fix it soon. Once OpenPose outputs the poses, what I did was save them as images with white backgrounds and then use those pose images to train an image classifier. I know this sounds idiotic, but at the time of this project I was facing a deadline and had to submit it.

What really should have been done is to save the generated poses as keypoints (2D skeletal structures) and then calculate the angles between certain subsets of those keypoints. For example, if you are standing, your femur (thigh bone) is likely to be roughly at a right angle with respect to your chest. Another idea could be to calculate the distance between the highest head keypoint and the lowest foot keypoint: this distance will be short if you are sitting and large (above some threshold) if you are standing.
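
As a minimal sketch of that distance heuristic, assuming the keypoints come as a dict mapping joint names to (x, y) pixel coordinates (the joint names and the threshold are placeholders you would adapt to your pose estimator and image size):

def standing_or_sitting(keypoints, threshold=200.0):
    # Vertical span between head and lowest foot; image y grows downward,
    # so foot_y - head_y is the body's height in pixels.
    head_y = keypoints["head"][1]
    foot_y = max(keypoints["left_foot"][1], keypoints["right_foot"][1])
    span = foot_y - head_y
    # The threshold is in pixels and must be tuned for your camera setup.
    return "standing" if span > threshold else "sitting"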

Based on this angular or distance information, you could then train an ML classifier, such as an SVM or a decision tree, to classify the poses as sitting or standing. Hope this answer helps!
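
For concreteness, a rough sketch of that classifier idea with scikit-learn; the feature values, labels, and split below are placeholders, and in practice you would compute the angle and distance features from your own keypoint data:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per pose, e.g. [torso_femur_angle_deg, head_foot_distance_px]
# y: 0 = sitting, 1 = standing (labels assigned when building the dataset)
X = np.array([[170.0, 420.0], [95.0, 210.0], [165.0, 400.0], [100.0, 190.0]])
y = np.array([1, 0, 1, 0])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scaling matters for SVMs, since angles and distances live on different scales
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))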

Thanks again for your answer. This is exactly what my doubt was, as I was trying to figure out how to use those coordinates as input for classification in a DL algorithm. I had assumed you must have taken all those pose images and trained on them after separating them by class; I get it now.

How do I calculate the angles from those coordinates to prepare my own model?
Any reference or link would also help. 😅

Since you already have the coordinates, you just need to choose a set of points and calculate the angles between the corresponding lines. You can use the following piece of code to calculate these angles:

import math

def dot(vA, vB):
    # Dot product of two 2D vectors
    return vA[0]*vB[0] + vA[1]*vB[1]

def ang(lineA, lineB):
    # Each line is a pair of (x, y) points; turn them into direction vectors
    vA = [(lineA[0][0]-lineA[1][0]), (lineA[0][1]-lineA[1][1])]
    vB = [(lineB[0][0]-lineB[1][0]), (lineB[0][1]-lineB[1][1])]
    # Dot product and magnitudes
    dot_prod = dot(vA, vB)
    magA = dot(vA, vA)**0.5
    magB = dot(vB, vB)**0.5
    # Cosine of the angle, clamped to [-1, 1] to avoid math domain errors
    # caused by floating-point round-off
    cos_ = max(-1.0, min(1.0, dot_prod/(magA*magB)))
    # Angle in radians, converted to degrees; acos already returns a value
    # in [0, 180] degrees, so no further wrapping is needed
    ang_deg = math.degrees(math.acos(cos_))
    return ang_deg

Here, lineA and lineB are tuples containing the (x, y) coordinates of the chosen points. For instance, if you have three points (i.e. two lines intersecting at a point), you would call the function as angle = ang(((1, 2), (3, 4)), ((3, 4), (5, 6))), which returns the angle formed between the lines through (1, 2), (3, 4) and (3, 4), (5, 6), intersecting at (3, 4).
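As a quick sanity check of the snippet above, two perpendicular lines meeting at the origin should give 90 degrees:

print(ang(((0, 0), (1, 0)), ((0, 0), (0, 1))))  # prints 90.0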
I hope I have answered your queries!

Thank you so much for your answers!
I am building the model using PoseNet for standing and sitting pose classification, so it can be used as a lightweight model for the Raspberry Pi.
Thanks again.
If there is anything else, I will contact you personally on LinkedIn. 🙂

You are welcome. Wishing you and your family a happy Diwali and all the best for your project.
Closing this as it all seems resolved. Happy coding!

Wishing you a happy Diwali too!! Hope you have a safe year ahead. Enjoy life.