Pinned Repositories
HandTracker
3D Hand Tracking using input from a depth sensor.
MocapNET
We present MocapNET, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) a novel and compact 2D pose NSRM representation; (b) a human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose, while also allowing the decomposition of the body into upper and lower kinematic hierarchies, which permits recovery of the human pose even under significant occlusions; (c) an efficient Inverse Kinematics solver that refines the neural-network-based solution, providing 3D human pose estimations consistent with the limb sizes of a target person (if known). Together, the above yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset compared to the baseline method (MocapNET) while maintaining real-time performance.
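The description above outlines a three-stage pipeline: classify the body orientation from 2D joints, regress a BVH pose with an orientation-tuned network split into upper and lower body hierarchies, then refine it with an IK solver. The sketch below illustrates only that data flow; every function body, joint index, and vector size here is a hypothetical stand-in, not MocapNET's actual code or network dimensions.

```python
import numpy as np

# Illustrative sketch of the MocapNET pipeline stages described above.
# All function bodies are hypothetical stand-ins; only the stage-by-stage
# data flow mirrors the repository description.

NUM_2D_JOINTS = 18   # assumption: a COCO-like 2D skeleton
NUM_BVH_DOF = 132    # assumption: placeholder size for a BVH motion vector

def classify_orientation(joints_2d):
    """Stage (b), step 1: pick a coarse body orientation class from the
    2D joints, so an orientation-tuned regressor can be selected."""
    # Stand-in heuristic: sign of the shoulder x-span (joint indices assumed).
    span = joints_2d[2, 0] - joints_2d[5, 0]
    return "front" if span >= 0 else "back"

def regress_pose(joints_2d, orientation):
    """Stage (b), step 2: an orientation-tuned network maps 2D joints to a
    BVH motion vector, decomposed into upper/lower body hierarchies."""
    upper = np.zeros(NUM_BVH_DOF // 2)  # stand-in for the upper-body network
    lower = np.zeros(NUM_BVH_DOF // 2)  # stand-in for the lower-body network
    return np.concatenate([upper, lower])

def refine_with_ik(bvh_pose, joints_2d, limb_lengths=None):
    """Stage (c): an IK-style refinement nudges the pose so that reprojected
    joints match the 2D observations and any known limb sizes."""
    return bvh_pose  # stand-in: the real solver optimizes iteratively

def mocapnet_pipeline(joints_2d):
    orientation = classify_orientation(joints_2d)
    pose = regress_pose(joints_2d, orientation)
    return refine_with_ik(pose, joints_2d)

pose = mocapnet_pipeline(np.random.rand(NUM_2D_JOINTS, 2))
print(pose.shape)  # one BVH motion vector per input frame
```

Keeping the orientation classifier separate from the regressors is what lets each network specialize on a narrower pose distribution, which is how the description motivates the ensemble.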
mocapnet_rosnode
A ROS node for the MocapNET 3D Pose Estimator
MonocularRGB_2D_Handjoints_MVA19
Accurate Hand Keypoint Localization on Mobile Devices
MonocularRGB_3D_Handpose_WACV18
Using a single RGB frame for real time 3D hand pose estimation in the wild
PyOpenPose
Python bindings for the OpenPose library
reading_group
Reading group material and links
wacv_docker
A Dockerfile for our WACV18 paper: Using a single RGB frame for real time 3D hand pose estimation in the wild.
FORTH Computational Vision and Robotics Laboratory's Repositories
FORTH-ModelBasedTracker/MocapNET
FORTH-ModelBasedTracker/HandTracker
FORTH-ModelBasedTracker/PyOpenPose
FORTH-ModelBasedTracker/MonocularRGB_3D_Handpose_WACV18
FORTH-ModelBasedTracker/MonocularRGB_2D_Handjoints_MVA19
FORTH-ModelBasedTracker/reading_group
FORTH-ModelBasedTracker/mocapnet_rosnode
FORTH-ModelBasedTracker/wacv_docker