damianliumin
MSML at Carnegie Mellon University. Researcher in machine learning and robotics.
Carnegie Mellon University, Pittsburgh, Pennsylvania
Pinned Repositories
damianliumin.github.io
Min's Homepage
Depth-Anything
[CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation
IsaacGymEnvs
Isaac Gym Reinforcement Learning Environments
manimo
A modular interface for robotic manipulation.
monometis
non-adversarial_backdoor
Implementation of "Beating Backdoor Attack at Its Own Game" (ICCV-23).
robomimic
robomimic: A Modular Framework for Robot Learning from Demonstration
SoftMAC
Code repository for our paper SoftMAC: Differentiable Soft Body Simulation with Forecast-based Contact Model and Two-way Coupling with Articulated Rigid Bodies and Clothes
stable-baselines3
PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.
UCL-AffCorrs
Given one example of an annotated part, this model finds its semantic correspondences in a target image, yielding one-shot semantic part correspondence.