una-dinosauria/3d-pose-baseline

How to get the action identified from a video

Closed this issue · 4 comments

I got the JSON keypoints as guided by ArashHosseini's fork of this project, and I am able to get the output in 3D format too. Could you please guide me on how to get the action evaluated? When I run your code I can see the action on the console; I want a similar thing with this too.
Thanks in advance

Did you get the 3D points in a JSON file? If so, would you please tell me how you obtained that? Thanks

Hi @RaghunandanVenkatesh,

There is no easy answer to that -- action recognition is an open problem in machine learning and computer vision. 3d poses can help, but other deep learning approaches might be more appropriate. I suggest you browse the literature from the latest ECCV/ICCV/CVPR to learn more.
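As a very rough illustration only (this is not the method of this repository), one common baseline is to treat a sequence of 3D poses as a time series and feed it to a recurrent classifier. The joint count, sequence length, and number of action classes below are placeholders, and the data is random, just to show the expected shapes:

```python
# Hypothetical sketch: classifying an action from a sequence of 3D poses.
# The joint count, clip length, and class count are assumptions, not values
# produced by 3d-pose-baseline.
import numpy as np
import tensorflow as tf

NUM_JOINTS = 17   # assumed joint count per frame
SEQ_LEN = 60      # assumed clip length in frames
NUM_CLASSES = 5   # hypothetical number of actions

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, NUM_JOINTS * 3)),  # flattened (x, y, z) per joint
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data just to show shapes; real training needs labelled pose sequences,
# which this repository does not provide.
x = np.random.randn(8, SEQ_LEN, NUM_JOINTS * 3).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).argmax(axis=-1))  # predicted action index
```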

@RaghunandanVenkatesh, it would be great if you could share that. Were you also able to output it on a frame-to-frame basis?
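For anyone looking for a starting point, a minimal sketch of dumping per-frame 3D predictions to a JSON file might look like the following. The array shape, variable names, and file name are assumptions for illustration, not something this repository produces:

```python
# Hypothetical sketch of writing 3D predictions to JSON on a per-frame basis.
# "poses_3d" stands for whatever array of predictions you get back, assumed
# here to have shape (num_frames, num_joints, 3).
import json
import numpy as np

def save_poses_to_json(poses_3d, out_path="poses_3d.json"):
    """Write one entry per frame so downstream code can index by frame number."""
    frames = {
        str(frame_idx): frame.tolist()   # JSON cannot store numpy arrays directly
        for frame_idx, frame in enumerate(np.asarray(poses_3d))
    }
    with open(out_path, "w") as f:
        json.dump(frames, f)

# Example with dummy data (2 frames, 17 joints, xyz):
save_poses_to_json(np.zeros((2, 17, 3)))
```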

Closing for lack of activity. Please reopen if the issue is still ongoing.