hand only result
Closed this issue · 9 comments
Hi, there was a bug in the demo code. The image should be read with the load_img function, which converts BGR -> RGB, but the old code fed the BGR image directly to the network. I fixed it. Could you try again?
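For reference, the channel-order fix can be sketched like this. This is a minimal illustration of what a loader such as load_img needs to do, not the repo's actual implementation; `bgr_to_rgb` is my own helper name.

```python
import numpy as np

def bgr_to_rgb(img):
    """Reverse the channel axis of an H x W x 3 array (BGR -> RGB)."""
    return img[:, :, ::-1].copy()

# cv2.imread returns images in BGR channel order, while most pretrained
# networks expect RGB, so the conversion must happen before inference:
#   bgr = cv2.imread("hand.jpg")
#   rgb = bgr_to_rgb(bgr)
```

Feeding the network BGR instead of RGB swaps the red and blue channels, which is exactly the kind of silent input error that degrades results without raising an exception.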
Yes, the above result is from an image loaded with the load_img function. Could this be the cause of the bad result?
Did you use this code (https://github.com/mks0601/Hand4Whole_RELEASE/blob/Pose2Pose/demo/hand/demo_hand.py) for the result? If so, the code you used was not calling the load_img function, so could you try again with the updated one?
Thanks, I used MediaPipe to get the hand bbox, which solved the problem.
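A minimal sketch of that bbox step, in case it helps others. The MediaPipe call is shown in comments; `landmarks_to_bbox`, the margin value, and the (x, y, w, h) output format are my own illustrative choices, not part of this repo:

```python
def landmarks_to_bbox(landmarks, img_w, img_h, margin=0.25):
    """Convert normalized (x, y) hand landmarks to a pixel bbox.

    MediaPipe Hands returns 21 landmarks with x, y in [0, 1]; a margin
    is added so the crop covers the whole hand, then the box is clamped
    to the image bounds.
    """
    xs = [x * img_w for x, y in landmarks]
    ys = [y * img_h for x, y in landmarks]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    pad_x = (x_max - x_min) * margin
    pad_y = (y_max - y_min) * margin
    x_min = max(0.0, x_min - pad_x)
    y_min = max(0.0, y_min - pad_y)
    x_max = min(float(img_w), x_max + pad_x)
    y_max = min(float(img_h), y_max + pad_y)
    # A common (x, y, width, height) bbox convention
    return (x_min, y_min, x_max - x_min, y_max - y_min)

# Typical use with MediaPipe (requires the mediapipe package):
#   import mediapipe as mp
#   with mp.solutions.hands.Hands(static_image_mode=True) as hands:
#       res = hands.process(rgb_img)  # rgb_img: H x W x 3, RGB order
#       if res.multi_hand_landmarks:
#           lms = [(lm.x, lm.y)
#                  for lm in res.multi_hand_landmarks[0].landmark]
#           bbox = landmarks_to_bbox(lms, rgb_img.shape[1], rgb_img.shape[0])
```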
I would also like to know how to handle the unpredictable motion-blur artifacts caused by fast hand movement.
That has still not been explored much. We are pushing in that direction, and one of our previous works is here: https://arxiv.org/abs/2303.15417
Thanks a lot. If I add a few such training sets, can I improve hand accuracy under motion blur?
So far, the BlurHand dataset is the only one that provides 3D GT for blurry hand images.
Thanks for your reply! It is helpful.