pre-trained weights for fall detection
Opened this issue · 6 comments
Hi, @SeanChen0220
Could you share your weights and guide me on how to test my own videos with them? I plan to work further on the fall detection task, so I need a demo for my videos. Thanks!
Are you from Beijing Institute of Technology? If so, you can leave your phone number and we can talk on WeChat, which is more convenient.
No, I am based in Korea. I have a couple of videos to test. If you could share your weights and explain how you run inference, I'd appreciate it. Thanks!
The weights file is not on the computer I am currently using, for personal reasons. In any case, the weights were trained on NTU RGB+D joints, so they may not be very helpful for your own videos. Here are the steps to test on your videos:

1. Our method is based on human joints, so first apply a human pose detector to extract the joints of each frame. We chose a lightweight pose estimator named LPN and trained it on the COCO and NTU RGB+D datasets.
2. Our fall detection model takes 300 frames of joints as input and outputs the fall result. The input shape should be `(batch, frame, joints, 2 or 3)`.
3. If your video is shorter than 300 frames, pad the missing frames by repeating the previous ones. We evaluated our method on the NTU dataset, where all video sequences are shorter than 300 frames, so we pad every video to 300 frames during evaluation.
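The padding step above could be sketched as follows. This is a hypothetical illustration, not the authors' code: `pad_sequence`, the 25-joint count, and the cyclic-repeat padding strategy are my assumptions about one reasonable way to pad a short clip with its own earlier frames.

```python
import numpy as np

TARGET_FRAMES = 300  # fixed input length described in the comment above

def pad_sequence(joints: np.ndarray, target: int = TARGET_FRAMES) -> np.ndarray:
    """Pad/crop a joint sequence to `target` frames.

    joints: (frames, num_joints, coords) with coords = 2 or 3.
    Short clips are padded by repeating earlier frames (assumption).
    """
    frames = joints.shape[0]
    if frames >= target:
        return joints[:target]
    reps = -(-target // frames)  # ceiling division
    return np.concatenate([joints] * reps, axis=0)[:target]

# Example: a 120-frame clip with 25 joints (NTU skeleton) and 2-D coords.
clip = np.random.rand(120, 25, 2)
batch = pad_sequence(clip)[None]  # add batch dim -> (1, 300, 25, 2)
print(batch.shape)
```

The model would then consume `batch` directly, since it already matches the `(batch, frame, joints, 2 or 3)` layout.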
@bit-scientist https://github.com/kchengiva/DecoupleGCN-DropGraph This is the repo we refer to; if you want to run experiments on the NTU dataset, you can refer to their code. The difference is that when detecting fall events, the output dimension is 2 rather than M actions.
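A minimal sketch of that difference: instead of an M-way action classifier, the final layer maps pooled features to 2 logits (fall / not-fall). This is an illustration only; the feature dimension, the numpy linear layer, and the 0/1 label convention are assumptions, not taken from the repo.

```python
import numpy as np

np.random.seed(0)
feat_dim, num_classes = 256, 2  # 2 outputs instead of M NTU action classes

# A stand-in for the model's final fully connected layer (assumed shapes).
W = np.random.randn(feat_dim, num_classes) * 0.01
b = np.zeros(num_classes)

features = np.random.randn(4, feat_dim)  # batch of pooled GCN features
logits = features @ W + b                # (4, 2) fall-detection logits
pred = logits.argmax(axis=1)             # assumed convention: 1 = fall
print(logits.shape, pred.shape)
```

Everything upstream of this head can stay as in the referenced repo; only the output dimension changes.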
Are you from Beijing Institute of Technology? If so, you can leave your phone number and we can talk on WeChat, which is more convenient.
I graduated from UESTC, and I'm working on fall detection. Could you leave your WeChat ID or phone number? I'd like to learn some details of your work. Thanks in advance.
@dongxiaolong WeChat: SeanChen2823