Future Interfaces Group (CMU)
The Future Interfaces Group is an interdisciplinary research lab within the Human-Computer Interaction Institute at Carnegie Mellon University.
Pittsburgh, PA
Pinned Repositories
DirectionOfVoice
Direction-of-Voice (DoV) Estimation for Intuitive Speech Interaction with Smart Devices Ecosystems
EyeMU
Gaze + IMU Gestures on Mobile Devices
FastAccel
Fast (4 kHz) accelerometer sampling for the LG G Watch.
hand-activities
Research repository for the CHI 2019 Paper on Sensing Fine-Grained Hand Activity with Smartwatches
IMUPoser
Code for IMUPoser: Full-Body Pose Estimation using IMUs in Phones, Watches, and Earbuds
PoseOnTheGo
Code and data for Pose-on-the-Go: Approximating User Pose with Smartphone Sensor Fusion and Inverse Kinematics
Sozu
Sozu: Self-Powered Radio Tags for Building-Scale Activity Sensing
ubicoustics
Accompanying repository for Ubicoustics: Plug-and-Play Acoustic Activity Recognition
vibrosight
Vibrosight open-source project
Vid2Doppler
This is the research repository for Vid2Doppler: Synthesizing Doppler Radar Data from Videos for Training Privacy-Preserving Activity Recognition.
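Several of the pinned projects (FastAccel, hand-activities, IMUPoser) work from raw accelerometer streams. As a minimal, illustrative sketch of the usual first preprocessing step, here is how such a stream can be segmented into overlapping windows in plain Python with synthetic data; the window and hop sizes are arbitrary and not taken from any of these repositories:

```python
import numpy as np

def sliding_windows(samples: np.ndarray, win: int, hop: int) -> np.ndarray:
    """Split an (N, 3) accelerometer stream into overlapping (win, 3) windows."""
    n = 1 + (len(samples) - win) // hop
    return np.stack([samples[i * hop : i * hop + win] for i in range(n)])

# Synthetic 1-second, 3-axis stream at 4 kHz (the rate FastAccel targets).
stream = np.zeros((4000, 3))
windows = sliding_windows(stream, win=256, hop=128)
print(windows.shape)  # (30, 256, 3)
```

Each window would then be fed to a featurizer or classifier; overlapping hops trade extra computation for lower detection latency.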
Future Interfaces Group (CMU)'s Repositories
FIGLAB/RGBDGaze
FIGLAB/High-Resolution-EIT
The high-resolution Electrical Impedance Tomography (EIT) project is provided for academic and non-commercial research. It is based on our publication at ACM UIST 2016: https://dl.acm.org/doi/abs/10.1145/2984511.2984574
FIGLAB/mouthhaptics
Open-source code for Mouth Haptics in VR using a Headset Ultrasound Phased Array
FIGLAB/FastAccel-kernel
Custom LG G Watch kernel supporting high-speed accelerometer sampling. See https://github.com/FIGLAB/FastAccel for more details.
FIGLAB/RetargetedSelfHaptics
Research Repository of "Retargeted Self-Haptics for Increased Immersion in VR without Hand Instrumentation"
FIGLAB/synjets
Code, demos, and instructions for Synjets
FIGLAB/constellations
Research Repository for "Exploring the Efficacy of Sparse, General-Purpose Sensor Constellations for Wide-Area Activity Sensing"
FIGLAB/EtherPose
EtherPose: Continuous Hand Pose Tracking with Wrist-Worn Antenna Impedance Characteristic Sensing. Daehwa Kim and Chris Harrison, UIST 2022.
FIGLAB/MylarFilmSimulator
A Mylar film simulator written in C++ with OpenMP.
FIGLAB/Super-Resolution-Dataset
FIGLAB/3DHandPose
FIGLAB/zensors
Research repository for the original Zensors Paper
FIGLAB/DynaTags
Low-Cost Fiducial Marker Mechanisms - coming soon
FIGLAB/pullgestures
Design and code from paper: "Pull Gestures with Coordinated Graphics on Dual-Screen Devices"
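Ubicoustics and the other acoustic-sensing projects in this list recognize activities from audio spectrograms. As a minimal sketch of that front end, here is a magnitude spectrogram computed with NumPy on synthetic audio; the frame parameters are illustrative and do not reproduce any repository's actual pipeline:

```python
import numpy as np

def spectrogram(audio: np.ndarray, n_fft: int = 512, hop: int = 256) -> np.ndarray:
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(n_fft)
    frames = [audio[i : i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

# One second of synthetic 16 kHz audio: a 440 Hz tone.
sr = 16000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # frames x (n_fft // 2 + 1)
```

A classifier then operates on stacks of these frames; log scaling and mel filtering are common follow-up steps before the model.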