a2c_common.py: UnboundLocalError: local variable 'mean_rewards' referenced before assignment
Closed this issue · 3 comments
This is raised at line 1214 of the version that Isaac Gym is using (rl_games/rl_games/common/a2c_common.py, line 1207 at commit a33b6c4).
The only change I made in Isaac Gym is this function in isaacgymenvs/tasks/cartpole.py:
@torch.jit.script
def compute_cartpole_reward(pole_angle, pole_vel, cart_vel, cart_pos,
                            reset_dist, reset_buf, progress_buf, max_episode_length):
    # type: (Tensor, Tensor, Tensor, Tensor, float, Tensor, Tensor, float) -> Tuple[Tensor, Tensor]
    reward = 1 - torch.abs(pole_angle) - 0.01 * torch.abs(cart_vel)
    reset = reset_buf  # note: reset is never set, so no episode ever terminates
    return reward, reset
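For comparison, here is a sketch of the same function with the termination conditions restored, in the style of the stock IsaacGymEnvs Cartpole reward. The exact thresholds (cart position beyond reset_dist, pole angle beyond pi/2, episode length limit) are assumptions based on the usual Cartpole task, not necessarily the shipped version:

```python
from typing import Tuple

import torch


@torch.jit.script
def compute_cartpole_reward(pole_angle, pole_vel, cart_vel, cart_pos,
                            reset_dist, reset_buf, progress_buf, max_episode_length):
    # type: (Tensor, Tensor, Tensor, Tensor, float, Tensor, Tensor, float) -> Tuple[Tensor, Tensor]
    reward = 1.0 - torch.abs(pole_angle) - 0.01 * torch.abs(cart_vel)

    # Without reset conditions like these, no env ever sends done=True,
    # so rl_games never collects episode statistics (see the error below).
    reset = torch.where(torch.abs(cart_pos) > reset_dist,
                        torch.ones_like(reset_buf), reset_buf)
    reset = torch.where(torch.abs(pole_angle) > 3.14159 / 2,
                        torch.ones_like(reset_buf), reset)
    reset = torch.where(progress_buf >= max_episode_length - 1,
                        torch.ones_like(reset_buf), reset)
    return reward, reset
```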
error:
(rlgpu) stuart@hp:~/repos/IsaacGymEnvs/isaacgymenvs$ python train.py task=Cartpole
...
fps step: 229110.7 fps step and policy inference: 168550.7 fps total: 121436.5
Error executing job with overrides: ['task=Cartpole']
Traceback (most recent call last):
File "train.py", line 131, in <module>
launch_rlg_hydra()
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/main.py", line 52, in decorated_main
config_name=config_name,
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/utils.py", line 378, in _run_hydra
lambda: hydra.run(
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/utils.py", line 214, in run_and_report
raise ex
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
return func()
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/utils.py", line 381, in <lambda>
overrides=args.overrides,
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 111, in run
_ = ret.return_value
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/core/utils.py", line 233, in return_value
raise self._return_value
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/hydra/core/utils.py", line 160, in run_job
ret.return_value = task_function(task_cfg)
File "train.py", line 127, in launch_rlg_hydra
'play': cfg.test,
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/torch_runner.py", line 139, in run
self.run_train()
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/torch_runner.py", line 125, in run_train
agent.train()
File "/home/stuart/miniconda3/envs/rlgpu/lib/python3.7/site-packages/rl_games/common/a2c_common.py", line 1214, in train
self.save(os.path.join(self.nn_dir, 'last_' + self.config['name'] + 'ep' + str(epoch_num) + 'rew' + str(mean_rewards)))
UnboundLocalError: local variable 'mean_rewards' referenced before assignment
(rlgpu) stuart@hp:~/repos/IsaacGymEnvs/isaacgymenvs$
Hope this is helpful. I'm new to this stuff.
One possible explanation is that you never send done=True, so when rl_games tries to save the model there are no final rewards yet.
I'll fix it on my side by checking whether mean_rewards exists. But there is possibly an error in your env as well.
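The failure mode reduces to a plain-Python pattern. This is a simplified sketch with hypothetical names, not the actual rl_games code: a variable is assigned only inside a branch that runs when at least one episode has finished, but is read unconditionally afterwards:

```python
def train_epoch(finished_episode_rewards):
    """Simplified sketch of the control flow behind the traceback:
    mean_rewards is bound only when at least one episode has finished."""
    if finished_episode_rewards:  # only true once some env has sent done=True
        mean_rewards = sum(finished_episode_rewards) / len(finished_episode_rewards)
    # The checkpoint name is built unconditionally, like line 1214 above;
    # if the branch never ran, this read raises UnboundLocalError.
    return 'last_Cartpole_rew' + str(mean_rewards)


# With no finished episodes the variable was never bound:
try:
    train_epoch([])
except UnboundLocalError as err:
    print('reproduced:', err)
```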
Hello. I've run into the same issue, also working with Isaac Gym (IsaacGymEnvs specifically).
@Denys88 it is not clear to me what you mean by:
you never send done=True.
Can you please advise further on how this issue may be resolved?
Hi @yorgosk, here are two options:
- There is a 'save_best_after' parameter, which defaults to 100. If none of your envs sends done = True within the first 100 epochs, you will get exactly this error. If you set it to a very high value, the issue should disappear, provided everything else is fine with your env.
- You never send done = True because of errors in your code. If an episode never ends, rl_games cannot return statistics.
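The first option can be applied in the rl_games training config. The file path and surrounding keys below are assumptions based on the usual IsaacGymEnvs layout (e.g. isaacgymenvs/cfg/train/CartpolePPO.yaml); only the save_best_after key comes from the comment above:

```yaml
params:
  config:
    # Postpone checkpointing until episodes have had time to terminate;
    # a very high value effectively disables the early "best" save.
    save_best_after: 10000
```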