OpenGVLab/Instruct2Act
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
Language: Python

Stargazers:
- anran1231
- balsulami
- BOBrown (Shanghai AI Laboratory)
- chaytonmin (Institute of Computing Technology (ICT), Chinese Academy of Sciences)
- chenlliang (Peking University)
- DongzhuoranZhou
- DreamWaterFound (Shanghai Jiao Tong University)
- haoranD (UK)
- Hedlen (MEGVII)
- huangatlas
- jiangzhengkai (HKUST)
- kaitorecca
- liangchenyanghn
- lx704612715
- m9hCenter17
- MzXuan
- Nietism (Burning Legion)
- nikitavoloboev (Tbilisi)
- Rane2021
- rare
- Reagan1311 (University of Edinburgh)
- renqiux0302 (Shanghai Jiaotong University)
- ruilimit
- ShivamKumar2002 (Rein Games Private Limited)
- SiyuanHuang95 (Shanghai AI Lab)
- sky-fly97 (Shanghai AI Laboratory)
- smellslikeml (SmellsLikeML)
- sunhan1997
- tgyy1995
- TimothyCorreia-Paul (Melbourne, Australia)
- weixin00
- wuyuesong (SDUST)
- xin-li-67 (California, US)
- YESAndy
- Zed-Wu
- Zzzzzac (Toronto)