OpenAI InvalidRequestError during Trajectory Generation
Closed this issue · 2 comments
Thanks for the great work and the thorough documentation! I have a question about generating trajectories for specific tasks.
When I try to generate trajectories with the following command, I hit an OpenAI inference error:
python scalingup/inference.py evaluation.num_episodes=1 policy=scalingup evaluation=table_top_bus_balance
The error is as follows:
openai.error.InvalidRequestError: You requested a length 0 completion, but you did not set the 'echo'
parameter. That means that we will return no data to you. (HINT: set 'echo' to true in order for the
API to echo back the prompt to you.)
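For context, the legacy OpenAI Completions endpoint rejects any request that asks for zero new tokens (max_tokens=0) unless echo is enabled, because it would otherwise have nothing to return. A minimal sketch that reproduces the same error with the pre-1.0 openai SDK (the API key and model name are placeholders, not values from this repo):

```python
import openai  # pre-1.0 SDK, matching the openai.error / openai.Completion calls in the traceback

openai.api_key = "sk-..."  # placeholder

# max_tokens=0 with echo unset -> InvalidRequestError:
# "You requested a length 0 completion, but you did not set the 'echo' parameter."
openai.Completion.create(
    model="text-davinci-003",  # placeholder; any legacy completions model behaves the same
    prompt="balance the bus on the block",
    max_tokens=0,
)
```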
The full error stack:
[08/07/23 22:34:16] INFO Dumping conf to scalingup/wandb/run-20230807_223415-4opuwt1u/files/conf.pkl        generic.py:174
                    INFO MUJOCO_GL is not set, so an OpenGL backend will be chosen automatically.           __init__.py:88
                    INFO Successfully imported OpenGL backend: glfw                                         __init__.py:96
                    INFO MuJoCo library version is: 2.3.6                                                   __init__.py:31
[08/07/23 22:34:17] INFO EnvConfig(obs_cameras=['front', 'top_down', 'ur5e/wsg50/d435i/rgb'], obs_dim=(160, 240), core.py:4309
ctrl=ControlConfig(frequency=4, dof=10, control_type=<ControlType.END_EFFECTOR: 1>,
rotation_type=<RotationType.UPPER_ROT_MAT: 2>, t_lookahead=0.0),
settle_time_after_dropped_obj=1.0, ee_action_num_grip_steps=25, num_action_candidates=500,
max_pushin_dist=0.05, min_pushin_dist=-0.01, num_steps_multiplier=40.0, min_steps=4,
rotate_gripper_threshold=0.0, solve_ee_inplace=True, pointing_up_normal_threshold=0.95,
place_height_min=0.02, place_height_max=0.15, preplace_dist_min=0.05, preplace_dist_max=0.35,
fallback_on_rrt_fail=False, end_on_failed_execution=True, grasp_primitive_z_pushin=0.01,
grasp_primitive_z_backup=0.2)
INFO SingleEnvSimEvaluation initialized with ControlConfig(frequency=4, dof=10, base.py:86
control_type=<ControlType.END_EFFECTOR: 1>, rotation_type=<RotationType.UPPER_ROT_MAT: 2>,
t_lookahead=0.0)
INFO TableTopPickAndPlace with time budget 100.0 seconds for tasks 'balance the bus on the block' base.py:309
INFO [0] Task(desc='balance the bus on the block') inference.py:30
INFO Dumping experience to scalingup/wandb/run-20230807_223415-4opuwt1u/files inference.py:40
2023-08-07 22:34:19,503 INFO worker.py:1621 -- Started a local Ray instance.
[08/07/23 22:34:28] ERROR ray::EnvSampler.sample() (pid=1346411, ip=10.18.218.105, actor_id=a49d4be51c804508306a7a5901000000, repr=<scalingup.utils.core.EnvSampler object at 0x7f2a557bbfd0>)   ray.py:56
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/utils/core.py", line 4081, in sample
    obs, done, trajectory = self.loop(
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/utils/core.py", line 3932, in loop
    action = policy(
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/utils/core.py", line 1843, in __call__
    return self._get_action(obs=obs, task=task)
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/policy/scalingup.py", line 44, in _get_action
    task_tree = self.task_tree_inference(
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/policy/llm_policy_utils.py", line 469, in __call__
    current_task_node = self.plan_grounder(
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/policy/llm_policy_utils.py", line 604, in __call__
    return self.link_path_to_action(
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/policy/llm_policy_utils.py", line 746, in link_path_to_action
    return self.handle_pick_and_place(
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/policy/llm_policy_utils.py", line 642, in handle_pick_and_place
    pick_obj, place_location = self.pick_and_place_parser(
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/policy/llm_policy_utils.py", line 287, in __call__
    multiple_choice_output = {
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/policy/llm_policy_utils.py", line 288, in <dictcomp>
    choice_return_value: GPT3Wrapper.complete(
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/utils/openai_api.py", line 404, in complete
    cls.add_request(key=key, prompt=prompt, api_config=api_config)
  File "/home/helen/Documents/projects/LLM_robotics/scalingup/scalingup/utils/openai_api.py", line 314, in add_request
    response: OpenAIObject = openai.Completion.create(
  File "/home/helen/anaconda3/envs/scalingup/lib/python3.10/site-packages/openai/api_resources/completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/helen/anaconda3/envs/scalingup/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 115, in create
    response, _, api_key = requestor.request(
  File "/home/helen/anaconda3/envs/scalingup/lib/python3.10/site-packages/openai/api_requestor.py", line 181, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "/home/helen/anaconda3/envs/scalingup/lib/python3.10/site-packages/openai/api_requestor.py", line 396, in _interpret_response
    self._interpret_response_line(
  File "/home/helen/anaconda3/envs/scalingup/lib/python3.10/site-packages/openai/api_requestor.py", line 429, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: You requested a length 0 completion, but you did not set the 'echo' parameter. That means that we will return no data to you. (HINT: set 'echo' to true in order for the API to echo back the prompt to you.)
Any insights on this will be greatly appreciated. Thanks!
I ran into the same issue: "openai.error.InvalidRequestError: You requested a length 0 completion, but you did not set the 'echo' parameter."
I solved it by setting 'api_config.echo=True' at line 395 in openai_api.py.
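For anyone else hitting this, the workaround boils down to making sure echo=True reaches the underlying request whenever a zero-length completion is made. Below is a minimal sketch of the corrected call, assuming (as the traceback suggests) that the policy scores each multiple-choice option via the prompt's token logprobs rather than by generating new tokens; the model name and the prompt variable are placeholders, not the repo's actual values:

```python
import openai  # pre-1.0 SDK

prompt = "...scoring prompt with one candidate choice appended..."  # placeholder

response = openai.Completion.create(
    model="text-davinci-003",  # placeholder model name
    prompt=prompt,
    max_tokens=0,   # no new tokens: we only want the prompt itself scored
    logprobs=1,     # return per-token log-probabilities
    echo=True,      # required for a length-0 completion; the API echoes the prompt back
)
# Per-token logprobs of the echoed prompt (the first entry is None).
token_logprobs = response["choices"][0]["logprobs"]["token_logprobs"]
```

In the repo, the equivalent one-line change is setting api_config.echo = True in GPT3Wrapper.complete before the request is built, as described above.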
I'll push an update soon to fix it! Thanks for pointing it out.