OpenGVLab/Instruct2Act
Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model
Python
Issues
- #26 Problems with "Install the required packages with the provided environment.yaml" (opened by wwwzxxzw, 1 comment)
- #24 No valid model configuration found (opened by Keson1111, 0 comments)
- #20 It seems almost technically impossible to solve novel_adj_and_noun, twist, stack_order, sweep_without_exceeding (opened by shure-dev, 1 comment)
- #15 Did anyone run it successfully? (opened by zxb-0, 1 comment)
- #18 How to perform sweep action? (opened by shure-dev, 1 comment)
- #16 Do I need a paid plan for OpenAI? (opened by ky0-999, 1 comment)
- #14 How to install checkpoints for SAM and CLIP (opened by asuzukosi, 2 comments)
- #12 Question about running the example (opened by MisterBrookT, 2 comments)
- #13 Question about the Rearrange task (opened by FinnJob, 6 comments)
- #11 Issue with GUI and Arm Movement (opened by jk188jk, 5 comments)
- #10 How to run (opened by nbbb24, 2 comments)
- #5 Where can I find the "openclip_tokenizer"? (opened by euminds, 3 comments)
- #7 How can I make the robotic arm move? (opened by VitaLemonTea1, 1 comment)
- #3 Incorrect coordinates detected for base_obj (opened by Breezewrf, 2 comments)
- #1 No module named 'easydict' (opened by VitaLemonTea1)
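Several of the issues above (e.g. #1, "No module named 'easydict'") are plain missing-dependency errors: a package listed in the project's environment.yaml was never installed. A minimal, hypothetical helper (not part of the Instruct2Act codebase) for checking which modules are importable before running the examples might look like this; `easydict` itself is available on PyPI via `pip install easydict`.

```python
import importlib.util
import sys


def missing_dependencies(names):
    """Return the subset of module names that cannot be imported
    in the current environment (find_spec returns None for them)."""
    return [n for n in names if importlib.util.find_spec(n) is None]


if __name__ == "__main__":
    # Hypothetical check list; extend with whatever the repo imports.
    # Any name printed here can typically be fixed with `pip install <name>`
    # (e.g. `pip install easydict` resolves issue #1).
    missing = missing_dependencies(["easydict", "numpy", "torch"])
    if missing:
        print("missing modules:", ", ".join(missing), file=sys.stderr)
```

Running such a check right after creating the conda environment catches partial installs early, before they surface as runtime `ModuleNotFoundError`s deep inside the demo scripts.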