Pinned Repositories
amago
A simple and scalable agent for training adaptive policies with sequence-based RL
Coopernaut
Coopernaut: End-to-End Driving with Cooperative Perception for Networked Vehicles
deoxys_control
A modular, real-time controller library for Franka Emika Panda robots
Ditto
Code for Ditto: Building Digital Twins of Articulated Objects from Interaction
FORGE
Code for Few-View Object Reconstruction with Unknown Categories and Camera Poses at 3DV 2024 (oral)
GIGA
Official PyTorch implementation of Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations
maple
Official codebase for Manipulation Primitive-augmented reinforcement Learning (MAPLE)
PRELUDE
Official codebase for Perceptive Locomotion Under Dynamic Environments (PRELUDE)
TRILL
Official codebase for Teleoperation and Imitation Learning for Loco-manipulation (TRILL)
VIOLA
Official implementation of VIOLA
UT Robot Perception and Learning Lab's Repositories
UT-Austin-RPL/deoxys_control
A modular, real-time controller library for Franka Emika Panda robots
UT-Austin-RPL/GIGA
Official PyTorch implementation of Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations
UT-Austin-RPL/Ditto
Code for Ditto: Building Digital Twins of Articulated Objects from Interaction
UT-Austin-RPL/VIOLA
Official implementation of VIOLA
UT-Austin-RPL/TRILL
Official codebase for Teleoperation and Imitation Learning for Loco-manipulation (TRILL)
UT-Austin-RPL/FORGE
Code for Few-View Object Reconstruction with Unknown Categories and Camera Poses at 3DV 2024 (oral)
UT-Austin-RPL/amago
A simple and scalable agent for training adaptive policies with sequence-based RL
UT-Austin-RPL/maple
Official codebase for Manipulation Primitive-augmented reinforcement Learning (MAPLE)
UT-Austin-RPL/Coopernaut
Coopernaut: End-to-End Driving with Cooperative Perception for Networked Vehicles
UT-Austin-RPL/PRELUDE
Official codebase for Perceptive Locomotion Under Dynamic Environments (PRELUDE)
UT-Austin-RPL/GROOT
Official implementation of GROOT, CoRL 2023
UT-Austin-RPL/BUDS
Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation (BUDS)
UT-Austin-RPL/Doduo
Official PyTorch implementation of Doduo: Dense Visual Correspondence from Unsupervised Semantic-Aware Flow
UT-Austin-RPL/MUTEX
UT-Austin-RPL/sirius
Official codebase for Sirius: Robot Learning on the Job
UT-Austin-RPL/Lotus
UT-Austin-RPL/deoxys_vision
Vision package for robot manipulation and learning research
UT-Austin-RPL/sailor
UT-Austin-RPL/HouseDitto
Code for Ditto in the House: Building Articulation Models of Indoor Scenes through Interactive Perception
UT-Austin-RPL/Articulated_object_simulation
Data generation code for Ditto
UT-Austin-RPL/openreview-to-pmlr
Making PMLR Proceedings from CoRL OpenReview Data
UT-Austin-RPL/sirius-runtime-monitor
UT-Austin-RPL/PRIME
UT-Austin-RPL/robosuite-project-template
UT-Austin-RPL/BUDS-website
UT-Austin-RPL/cs391r-fall20-website
Course Website of CS391R: Robot Learning
UT-Austin-RPL/OKAMI
UT-Austin-RPL/olaf
Olaf: Interactive Robot Learning from Verbal Correction
UT-Austin-RPL/ORION-release
UT-Austin-RPL/rss2022
Robotics: Science and Systems conference website