Questions about using robotic_anything_offline.py
euminds opened this issue · 6 comments
Hi,
thanks for your excellent work.
- I tried to run robotic_anything_gpt_online.py. However, even with a proxy, I still get the error "openai.error.AuthenticationError". The network conditions cannot be improved at the moment.
- I would like to use the LLaMA-Adapter together with robotic_anything_offline.py to implement Instruct2Act. Could you explain how to do this, and will this part of the code be open-sourced later?
Any help would be much appreciated.
Bests
Hi,
thanks for your interest.
- Have you updated the required API key? The problem seems to be related to your API key.
- The offline version currently only fakes the generation; since the task is quite structured for robotics, it is closer to plain text completion.
- For generation with LLaMA-Adapter, it is an easy-to-use project. You just need to install the Adapter-related dependencies and change the line:
response = openai.Completion.create()
to the Adapter's generation call. Everything else should stay the same.
Hope that helps.
Bests.
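To make the swap described above concrete, here is a minimal, hypothetical sketch of a backend wrapper around the call site in robotic_exec_generation.py. The `llama_adapter` module name and its `generate()` signature are assumptions, not the actual Adapter API; adjust them to whatever the Adapter project exposes once installed.

```python
# Hypothetical sketch: a thin wrapper so the rest of the pipeline does not
# care which backend produces the completion. The llama_adapter import and
# its generate() call below are placeholders, not the real Adapter API.

def build_completion_fn(backend="openai"):
    """Return a callable prompt -> text for the chosen backend."""
    if backend == "openai":
        def complete(prompt):
            import openai  # imported lazily so the other backend needs no key
            # Original call site in robotic_exec_generation.py
            response = openai.Completion.create(
                model="text-davinci-003", prompt=prompt, max_tokens=512)
            return response["choices"][0]["text"]
    elif backend == "llama_adapter":
        def complete(prompt):
            import llama_adapter  # assumed package name; replace as needed
            return llama_adapter.generate(prompt)
    else:
        raise ValueError(f"unknown backend: {backend}")
    return complete
```

With this in place, switching to the Adapter is a one-line change at the call site: `complete = build_completion_fn("llama_adapter")`.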
I have tested my API on Colab, and it was working fine. I tried using a different proxy, but I still encountered the same error: "openai.error.AuthenticationError: empty message".
A few bugs? Could you point them out? I will fix them a little later.
The current instructions in the README state that the OpenAI API key should be set in visual_programming_prompt/prompt_generation.py. However, robotic_anything_gpt_online.py does not call prompt_generation.py; it uses robotic_exec_generation.py instead. Consequently, the correct place to set the OpenAI API key and proxy settings is robotic_exec_generation.py.
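For reference, a minimal sketch of the settings to edit near the top of robotic_exec_generation.py, assuming the legacy openai 0.x SDK (which exposes module-level `api_key` and `proxy` attributes); reading from environment variables keeps the key out of the repo:

```python
# Sketch of the API-key and proxy configuration in robotic_exec_generation.py,
# assuming the openai 0.x SDK. Values are read from the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
# If you are behind a proxy, point the SDK at it as well,
# e.g. "http://127.0.0.1:7890" (address is an example, not a recommendation):
if os.environ.get("OPENAI_PROXY"):
    openai.proxy = os.environ["OPENAI_PROXY"]
```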
More Implementation Steps:
Comment out the following lines in the environment.yaml file:
- torch==1.12.1+cu113
- torchaudio==0.12.1+cu113
- torchvision==0.13.1+cu113
- vima==0.1
Install PyTorch and related packages using the following command:
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
Note: This command installs the specific versions of PyTorch, torchvision, and torchaudio that are compatible with the project.
Install the Vima package by executing the following commands:
git clone https://github.com/vimalabs/VimaBench
cd VimaBench
pip install -e .
Install the SAM package by running the following commands:
git clone https://github.com/facebookresearch/segment-anything.git
cd segment-anything
pip install -e .
Install the open_clip package by executing the following commands:
git clone https://github.com/mlfoundations/open_clip.git
cd open_clip
pip install -e .
Download the required ViT-H models from the Hugging Face model repository.
Install the additional dependencies cchardet and chardet using the following commands:
pip install cchardet
pip install chardet
These packages are required for proper functionality.
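After the steps above, a quick sanity check inside the conda environment can confirm everything installed cleanly. The module names below follow the packages listed in the steps; "vima_bench" is assumed to be the module the VimaBench repo installs, so adjust it if your checkout differs.

```python
# Quick post-install sanity check: verify which required modules import.
import importlib

def check_imports(modules):
    """Return {module_name: bool} indicating which modules import cleanly."""
    status = {}
    for mod in modules:
        try:
            importlib.import_module(mod)
            status[mod] = True
        except ImportError:
            status[mod] = False
    return status

# Modules installed by the steps above ("vima_bench" is an assumption about
# the module name the VimaBench repo provides; adjust if needed).
REQUIRED = ["torch", "torchvision", "torchaudio", "vima_bench",
            "segment_anything", "open_clip", "cchardet", "chardet"]

if __name__ == "__main__":
    for mod, ok in check_imports(REQUIRED).items():
        print(f"{mod}: {'ok' if ok else 'MISSING'}")
```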
@euminds I have updated the readme. Thanks for your info!
Hope it works now. :-) Enjoy it.