Zhongan-Wang's Stars
GT-RIPL/Awesome-LLM-Robotics
A comprehensive list of papers using large language/multi-modal models for Robotics/RL, including papers, code, and related websites
Health-Devices-Research-Group/Posture-and-Fall-Detection-System-Using-3D-Motion-Sensors
This work presents a supervised learning approach for training a posture detection classifier and implements a fall detection system that uses the posture classification results as inputs, with a Microsoft Kinect v2 sensor. The Kinect v2 skeleton tracker provides 3D depth coordinates for 25 body joints. From these coordinates we extract seven features: the height of the subject and six angles between certain body parts. These features are fed into a fully connected neural network that outputs one of three postures for the subject: standing, sitting, or lying down. An average classification rate of over 99.30% across all three postures was achieved on test data from multiple subjects, even though the subjects were often not facing the Kinect depth camera and were located in different parts of the room. These results show the feasibility of classifying human postures with the proposed setup independently of the subject's location in the room and orientation to the 3D sensor.
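The pipeline described above can be sketched as follows. This is a minimal illustration, not the repository's code: the specific joint triples used for the six angles, the joint index mapping, and the hidden-layer size are assumptions for demonstration only (the description does not specify them).

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by 3D points a-b-c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

def extract_features(skeleton):
    """skeleton: (25, 3) array of Kinect v2 joint coordinates.
    Returns the 7 features: subject height plus six joint angles.
    The joint-index triples below are hypothetical placeholders,
    not the angles used in the paper."""
    height = skeleton[:, 1].max() - skeleton[:, 1].min()  # vertical extent
    triples = [(0, 1, 20), (16, 17, 18), (12, 13, 14),
               (4, 5, 6), (8, 9, 10), (1, 0, 16)]
    angles = [joint_angle(skeleton[a], skeleton[b], skeleton[c])
              for a, b, c in triples]
    return np.array([height] + angles)

def classify_posture(features, W1, b1, W2, b2):
    """One-hidden-layer fully connected network over the 7 features;
    returns class index 0=standing, 1=sitting, 2=lying down."""
    h = np.maximum(0.0, features @ W1 + b1)  # ReLU hidden layer
    logits = h @ W2 + b2
    return int(np.argmax(logits))
```

In practice the weights would come from supervised training on labeled skeleton frames; a fall detector can then monitor the sequence of predicted postures (e.g. a rapid transition to "lying down") as its input signal.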
Y-B-Class-Projects/Human-Fall-Detection
Human fall detection
cwlroda/falldetection_openpifpaf
Fall Detection using OpenPifPaf's Human Pose Estimation model
ChunyuanLI/Optimus
Optimus: the first large-scale pre-trained VAE language model
iluvrachel/Visualize-3D-skeleton
geaxgx/depthai_blazepose
brjathu/LART
Code repository for the paper "On the Benefits of 3D Pose and Tracking for Human Action Recognition", (CVPR 2023)
leolyliu/TACO-Instructions
Official repository of "TACO: Benchmarking Generalizable Bimanual Tool-ACtion-Object Understanding".
nv-tlabs/tesmo
Official implementation of TeSMo, a method for text-controlled scene-aware motion generation, from the ECCV 2024 paper: "Generating Human Interaction Motions in Scenes with Text Control".
zhtjtcz/Mine-Arxiv
sisidai/InterFusion
[ECCV 2024] InterFusion: Text-Driven Generation of 3D Human-Object Interaction
NextVisionLab/egoism-hoi
fedebotu/ICLR2023-OpenReviewData
Crawl & Visualize ICLR 2023 Data from OpenReview
tangjiapeng/DiffuScene
[CVPR 2024] DiffuScene: Denoising Diffusion Models for Generative Indoor Scene Synthesis
OpenRobotLab/UniHSI
[ICLR 2024 Spotlight] Unified Human-Scene Interaction via Prompted Chain-of-Contacts
sebastianstarke/AI4Animation
Bringing Characters to Life with Computer Brains in Unity
isaac-sim/IsaacLab
Unified framework for robot learning built on NVIDIA Isaac Sim
IDC-Flash/InterScene
[3DV 2024] Official repo of "Synthesizing Physically Plausible Human Motions in 3D Scenes"
mbreuss/diffusion-literature-for-robotics
A summary of key papers and blog posts for learning about diffusion models, plus a detailed list of published diffusion-based robotics papers.
Krasjet/quaternion
A brief introduction to the quaternions and their applications in 3D geometry.
DirtyHarryLYL/HOI-Learning-List
A list of Human-Object Interaction Learning.
nicolasugrinovic/multiphys
Code for the paper MultiPhys: Multi-Person Physics-aware 3D Motion Estimation (CVPR 2024)
jiawei-ren/insactor
[NeurIPS 2023] InsActor: Instruction-driven Physics-based Characters
visonpon/human-motion-capture
A collection of papers about human motion capture
snuvclab/ParaHome
Parameterizing Everyday Home Activities Towards 3D Generative Modeling of Human-Object Interactions
DaLi-Jack/SSR-code
Official implementation of 3DV24 paper "Single-view 3D Scene Reconstruction with High-fidelity Shape and Texture"
IIT-PAVIS/DiffAssemble
Official repository for "DiffAssemble: A Unified Graph-Diffusion Model for 2D and 3D Reassembly" accepted at CVPR2024
xiexh20/HDM
Official implementation of the Hierarchical Diffusion Model (HDM) for template-free reconstruction of human-object interaction (CVPR 2024)
liangxuy/Inter-X
[CVPR 2024] Official implementation of the paper "Inter-X: Towards Versatile Human-Human Interaction Analysis"