Pinned Repositories
act-plus-plus
Imitation learning algorithms with Co-training for Mobile ALOHA: ACT, Diffusion Policy, VINN
Algorithm_Interview_Notes-Chinese
2018/2019 campus recruitment (spring/autumn hiring) interview notes: algorithms, machine learning, deep learning, natural language processing (NLP), C/C++, Python
Depth-Anything
Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
IqiyiProject
Iqiyi movie dialog system
NAUM
Code for NAUM project paper
navchat
Code for ICRA24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation"
OLMo
Modeling, training, eval, and inference code for OLMo
pirlnav
Code for training embodied agents using IL and RL finetuning at scale for ObjectNav
Seq2Seq-Models
Basic Seq2Seq, Attention, CopyNet
YinpeiDai.github.io
personal webpage
YinpeiDai's Repositories
YinpeiDai/navchat
Code for ICRA24 paper "Think, Act, and Ask: Open-World Interactive Personalized Robot Navigation"
YinpeiDai/Depth-Anything
Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data
YinpeiDai/OLMo
Modeling, training, eval, and inference code for OLMo
YinpeiDai/pirlnav
Code for training embodied agents using IL and RL finetuning at scale for ObjectNav
YinpeiDai/YinpeiDai.github.io
personal webpage
YinpeiDai/act-plus-plus
Imitation learning algorithms with Co-training for Mobile ALOHA: ACT, Diffusion Policy, VINN
YinpeiDai/Grounded-Segment-Anything
Marrying Grounding DINO with Segment Anything & Stable Diffusion & Tag2Text & BLIP & Whisper & ChatBot - Automatically Detect, Segment, and Generate Anything with Image, Text, and Audio Inputs
YinpeiDai/MMCoref
Code for DSTC 10: SIMMC 2.0 track: Multimodal Coreference Resolution subtask.
YinpeiDai/habitat-lab
A modular high-level library to train embodied AI agents across a variety of tasks and environments.
YinpeiDai/home-robot
Mobile manipulation research tools for roboticists
YinpeiDai/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
YinpeiDai/LLaVA-NeXT
YinpeiDai/lseg-module
Adapted from Habitat home-robot code base
YinpeiDai/multiwoz
Source code for end-to-end dialogue model from the MultiWOZ paper (Budzianowski et al. 2018, EMNLP)
YinpeiDai/nerf-navigation
Code for the Nerf Navigation Paper. Implements a trajectory optimiser and state estimator which use NeRFs as an environment representation
YinpeiDai/ORB_SLAM2
Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities
YinpeiDai/ORB_SLAM3
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
YinpeiDai/Orion
Orion-14B is a family of models that includes a 14B multilingual foundation LLM and a series of derivative models: a chat model, a long-context model, a quantized model, a RAG fine-tuned model, and an Agent fine-tuned model.
YinpeiDai/peract
Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation
YinpeiDai/peract_colab
Annotated Tutorial for PerAct
YinpeiDai/RLBench
A large-scale benchmark and learning environment.
YinpeiDai/rlmmbp
Learning mobile manipulation behaviors through reinforcement learning
YinpeiDai/robot-collab
Codebase for paper: RoCo: Dialectic Multi-Robot Collaboration with Large Language Models
YinpeiDai/Semantic-Segment-Anything
Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B).
YinpeiDai/serl
SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning
YinpeiDai/serl_franka_controllers
Compliant Cartesian impedance controller for the Franka Emika Robot
YinpeiDai/simmc2
YinpeiDai/Team8-Kakao-SIMMC2
YinpeiDai/YARR
Yet Another Robotics and Reinforcement (YARR) learning framework for PyTorch.
YinpeiDai/YOLO-World
Real-Time Open-Vocabulary Object Detection