LostThinker's Stars
chernyadev/bigym
Demo-Driven Mobile Bi-Manual Manipulation Benchmark.
AlmondGod/aloha-bigym
Nintendo Switch Teleoperation and ACT autonomy on ALOHA for Bigym benchmark tasks
facebookresearch/metamotivo
The first behavioral foundation model to control a virtual physics-based humanoid agent for a wide range of whole-body tasks.
Genesis-Embodied-AI/Genesis
A generative world for general-purpose robotics & embodied AI learning.
Robot-VLAs/RoboVLMs
thuml/Time-Series-Library
A Library for Advanced Deep Time Series Models.
MaximeVandegar/Papers-in-100-Lines-of-Code
Implementation of papers in 100 lines of code.
TianxingChen/RoboTwin
RoboTwin: Dual-Arm Robot Benchmark with Generative Digital Twins
agilexrobotics/mobile_aloha_sim
AutonoBot-Lab/BestMan_Pybullet
Codebase for the BestMan Mobile Manipulator Platform
volcengine/verl
veRL: Volcano Engine Reinforcement Learning for LLMs
etched-ai/open-oasis
Inference script for Oasis 500M
adityabingi/Dreamer
Reproduction of DreamerV1 and DreamerV2 in PyTorch for the DeepMind Control Suite
rail-berkeley/hil-serl
Nightmare-n/DepthAnyVideo
Depth Any Video with Scalable Synthetic Data
genmoai/mochi
Open-source video generation models (Mochi 1)
microsoft/OmniParser
A simple screen parsing tool towards a pure vision-based GUI agent
yuanzhi-zhu/mini_edm
Minimal implementation of EDM (Elucidating the Design Space of Diffusion-Based Generative Models) on CIFAR-10 and MNIST
huggingface/lerobot
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
phy-q/benchmark
Phy-Q: A Testbed for Physical Reasoning
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT)
apple/ml-depth-pro
Depth Pro: Sharp Monocular Metric Depth in Less Than a Second.
UX-Decoder/Segment-Everything-Everywhere-All-At-Once
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
fudan-zvg/Semantic-Segment-Anything
Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B).
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
HarborYuan/ovsam
[ECCV 2024] The official code of paper "Open-Vocabulary SAM".
dunnolab/xland-minigrid-datasets
XLand-100B: A Large-Scale Multi-Task Dataset for In-Context Reinforcement Learning
google-deepmind/open_x_embodiment
LostXine/open_x_pytorch_dataloader
An unofficial PyTorch dataloader for the Open X-Embodiment datasets: https://github.com/google-deepmind/open_x_embodiment
UX-Decoder/Semantic-SAM
[ECCV 2024] Official implementation of the paper "Semantic-SAM: Segment and Recognize Anything at Any Granularity"