rstrivedi/Melting-Pot-Contest-2023

Training ended unexpectedly.

lidpeng opened this issue · 0 comments

Hi, @rstrivedi
Here are my arguments:
Running trails with the following arguments: Namespace(num_workers=2, num_gpus=0, local=False, no_tune=False, algo='ppo', framework='torch', exp='clean_up', seed=123, results_dir='./results', logging='INFO', wandb=False, downsample=True, as_test=False)

After training starts, the run ends on its own at around 400 steps (after nearly 2 minutes), and no errors seem to be thrown. Do you have any suggestions for what to change?
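
For context, since the final Result below ends with 'done': True and no traceback, I suspect a Ray Tune stop criterion is being satisfied rather than the run crashing. A minimal sketch of that pattern (CartPole-v1 as a stand-in environment and a made-up 400-step limit, not this repo's actual config):

```python
from ray import air, tune
from ray.rllib.algorithms.ppo import PPOConfig

# Stand-in config, NOT the repo's actual one: a Tune trial whose
# run_config carries a stop criterion terminates cleanly (done=True,
# no exception) as soon as one of the listed metrics is reached.
config = PPOConfig().environment("CartPole-v1").framework("torch")

tuner = tune.Tuner(
    "PPO",
    param_space=config.to_dict(),
    run_config=air.RunConfig(
        # Raising (or removing) this limit lets training run longer.
        stop={"timesteps_total": 400},
    ),
)
tuner.fit()  # exits without error once 400 env steps are sampled
```

If the experiment config sets a similarly small limit, that would explain a clean exit at exactly 400 env steps.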

(PPO pid=299168) 2023-10-19 14:23:40,547 INFO rollout_worker.py:786 -- Training on concatenated sample batches:
(PPO pid=299168)
(PPO pid=299168) { 'count': 32,
(PPO pid=299168) 'policy_batches': { 'agent_3': { 'action_dist_inputs': np.ndarray((32, 9), dtype=float32, min=-0.176, max=0.307, mean=0.013),

...

(PPO pid=299168)
(PPO pid=299168) 2023-10-19 14:23:40,553 INFO rnn_sequencing.py:178 -- Padded input for RNN/Attn.Nets/MA:
...
(RolloutWorker pid=302375) /home/ldp/anaconda3/envs/mpc_main/lib/python3.10/site-packages/gymnasium/spaces/box.py:227: UserWarning: WARN: Casting input x to numpy array.
(RolloutWorker pid=302375) logger.warn("Casting input x to numpy array.")
...
(RolloutWorker pid=302375) 2023-10-19 14:23:33,831 INFO policy.py:1294 -- Policy (worker=2) running on CPU. [repeated 7x across cluster]
(PPO pid=299168) 2023-10-19 14:23:34,325 INFO torch_policy_v2.py:113 -- Found 0 visible cuda devices. [repeated 14x across cluster]
...
(PPO pid=299168) 2023-10-19 14:23:34,339 INFO util.py:118 -- Using connectors: [repeated 14x across cluster]
(PPO pid=299168) 2023-10-19 14:23:34,339 INFO util.py:119 -- AgentConnectorPipeline [repeated 14x across cluster]
(PPO pid=299168) StateBufferConnector [repeated 14x across cluster]
(PPO pid=299168) ViewRequirementAgentConnector [repeated 14x across cluster]
(PPO pid=299168) 2023-10-19 14:23:34,339 INFO util.py:120 -- ActionConnectorPipeline [repeated 14x across cluster]
(PPO pid=299168) ConvertToNumpyConnector [repeated 14x across cluster]
(PPO pid=299168) NormalizeActionsConnector [repeated 14x across cluster]
(PPO pid=299168) ImmutableActionsConnector [repeated 14x across cluster]
(RolloutWorker pid=302374) 2023-10-19 14:23:40,526 INFO rollout_worker.py:732 -- Completed sample batch:
(RolloutWorker pid=302374) 'agent_1': { 'action_dist_inputs': np.ndarray((200, 9), dtype=float32, min=-0.0, max=0.0, mean=-0.0),
(RolloutWorker pid=302374) 'agent_2': { 'action_dist_inputs': np.ndarray((200, 9), dtype=float32, min=-0.43, max=0.916, mean=0.128),
(RolloutWorker pid=302374) 'agent_3': { 'action_dist_inputs': np.ndarray((200, 9), dtype=float32, min=-0.314, max=0.355, mean=0.004),
(RolloutWorker pid=302374) 'agent_4': { 'action_dist_inputs': np.ndarray((200, 9), dtype=float32, min=-0.433, max=0.446, mean=-0.019),
(RolloutWorker pid=302374) 'agent_5': { 'action_dist_inputs': np.ndarray((200, 9), dtype=float32, min=-0.573, max=1.049, mean=0.05),
(RolloutWorker pid=302374) 'agent_6': { 'action_dist_inputs': np.ndarray((200, 9), dtype=float32, min=-0.464, max=0.5, mean=-0.004).

Result(
metrics={'custom_metrics': {}, 'episode_media': {}, 'info': {'learner': {'agent_3': {'learner_stats': {'allreduce_latency': 0.0, 'grad_gnorm': 0.2938595721563488, 'cur_kl_coeff': 0.19999999999999998, 'cur_lr': 5.000000000000001e-05, 'total_loss': 0.01326687481046783, 'policy_loss': 0.011805192082956956, 'vf_loss': 0.0014429467907768848, 'vf_explained_var': -1.0, 'kl': 9.368463734633353e-05, 'entropy': 2.1968671936737865, 'entropy_coeff': 0.0}, 'model': {}, 'custom_metrics': {}, 'num_agent_steps_trained': 32.0, 'num_grad_updates_lifetime': 285.5, 'diff_num_grad_updates_vs_sampler_policy': 284.5}, 'agent_6': {'learner_stats': {'allreduce_latency': 0.0, 'grad_gnorm': 0.44844337551151975, 'cur_kl_coeff': 0.19999999999999998, 'cur_lr': 5.000000000000001e-05, 'total_loss': 0.063614134408795, 'policy_loss': 0.062265382422820516, 'vf_loss': 0.0006033147813166013, 'vf_explained_var': -1.0, 'kl': 0.0037272102948426424, 'entropy': 2.197162473829169, 'entropy_coeff': 0.0}, 'model': {}, 'custom_metrics': {}, 'num_agent_steps_trained': 32.0, 'num_grad_updates_lifetime': 285.5, 'diff_num_grad_updates_vs_sampler_policy': 284.5}, 'agent_2': {'learner_stats': {'allreduce_latency': 0.0, 'grad_gnorm': 0.33651486742391923, 'cur_kl_coeff': 0.19999999999999998, 'cur_lr': 5.000000000000001e-05, 'total_loss': 0.020297989991836643, 'policy_loss': 0.01227701953693963, 'vf_loss': 0.004015607813974049, 'vf_explained_var': -0.9549619204119633, 'kl': 0.020026806541649823, 'entropy': 2.196402873072708, 'entropy_coeff': 0.0}, 'model': {}, 'custom_metrics': {}, 'num_agent_steps_trained': 32.0, 'num_grad_updates_lifetime': 285.5, 'diff_num_grad_updates_vs_sampler_policy': 284.5}, 'agent_0': {'learner_stats': {'allreduce_latency': 0.0, 'grad_gnorm': 0.08610846985768723, 'cur_kl_coeff': 0.19999999999999998, 'cur_lr': 5.000000000000001e-05, 'total_loss': 0.00414218608486025, 'policy_loss': 0.002945413388181151, 'vf_loss': 0.0011898500088354863, 'vf_explained_var': -1.0, 'kl': 3.462271995048046e-05, 'entropy': 2.1969731778429265, 'entropy_coeff': 0.0}, 'model': {}, 'custom_metrics': {}, 'num_agent_steps_trained': 32.0, 'num_grad_updates_lifetime': 285.5, 'diff_num_grad_updates_vs_sampler_policy': 284.5}, 'agent_4': {'learner_stats': {'allreduce_latency': 0.0, 'grad_gnorm': 0.7889667204074692, 'cur_kl_coeff': 0.19999999999999998, 'cur_lr': 5.000000000000001e-05, 'total_loss': 0.04514940154264893, 'policy_loss': -0.005434698990562506, 'vf_loss': 0.04920632253490846, 'vf_explained_var': -1.0, 'kl': 0.006888877354785194, 'entropy': 2.195473243897421, 'entropy_coeff': 0.0}, 'model': {}, 'custom_metrics': {}, 'num_agent_steps_trained': 32.0, 'num_grad_updates_lifetime': 285.5, 'diff_num_grad_updates_vs_sampler_policy': 284.5}, 'agent_1': {'learner_stats': {'allreduce_latency': 0.0, 'grad_gnorm': 0.20199027515032836, 'cur_kl_coeff': 0.19999999999999998, 'cur_lr': 5.000000000000001e-05, 'total_loss': 0.021706062150106096, 'policy_loss': 0.01644499337202624, 'vf_loss': 0.005171904300403789, 'vf_explained_var': -1.0, 'kl': 0.00044581278011054644, 'entropy': 2.195561667074237, 'entropy_coeff': 0.0}, 'model': {}, 'custom_metrics': {}, 'num_agent_steps_trained': 32.0, 'num_grad_updates_lifetime': 285.5, 'diff_num_grad_updates_vs_sampler_policy': 284.5}, 'agent_5': {'learner_stats': {'allreduce_latency': 0.0, 'grad_gnorm': 2.785927937036021, 'cur_kl_coeff': 0.19999999999999998, 'cur_lr': 5.000000000000001e-05, 'total_loss': 0.356755890442761, 'policy_loss': 0.024646596205339096, 'vf_loss': 0.3239888458541317, 'vf_explained_var': -0.735680664945067, 'kl': 0.040602242583161224, 'entropy': 2.195885472130357, 'entropy_coeff': 0.0}, 'model': {}, 'custom_metrics': {}, 'num_agent_steps_trained': 32.0, 'num_grad_updates_lifetime': 285.5, 'diff_num_grad_updates_vs_sampler_policy': 284.5}}, 'num_env_steps_sampled': 400, 'num_env_steps_trained': 400, 'num_agent_steps_sampled': 2800, 'num_agent_steps_trained': 2800}, 'sampler_results': {'episode_reward_max': nan, 'episode_reward_min': nan, 'episode_reward_mean': nan, 'episode_len_mean': nan, 'episode_media': {}, 'episodes_this_iter': 0, 'policy_reward_min': {}, 'policy_reward_max': {}, 'policy_reward_mean': {}, 'custom_metrics': {}, 'hist_stats': {'episode_reward': [], 'episode_lengths': []}, 'sampler_perf': {}, 'num_faulty_episodes': 0, 'connector_metrics': {}}, 'episode_reward_max': nan, 'episode_reward_min': nan, 'episode_reward_mean': nan, 'episode_len_mean': nan, 'episodes_this_iter': 0, 'policy_reward_min': {}, 'policy_reward_max': {}, 'policy_reward_mean': {}, 'hist_stats': {'episode_reward': [], 'episode_lengths': []}, 'sampler_perf': {}, 'num_faulty_episodes': 0, 'connector_metrics': {}, 'num_healthy_workers': 2, 'num_in_flight_async_reqs': 0, 'num_remote_worker_restarts': 0, 'num_agent_steps_sampled': 2800, 'num_agent_steps_trained': 2800, 'num_env_steps_sampled': 400, 'num_env_steps_trained': 400, 'num_env_steps_sampled_this_iter': 400, 'num_env_steps_trained_this_iter': 400, 'num_env_steps_sampled_throughput_per_sec': 7.399711776182035, 'num_env_steps_trained_throughput_per_sec': 7.399711776182035, 'num_steps_trained_this_iter': 400, 'agent_timesteps_total': 2800, 'timers': {'training_iteration_time_ms': 54056.106, 'sample_time_ms': 6118.289, 'learn_time_ms': 47902.557, 'learn_throughput': 8.35, 'synch_weights_time_ms': 32.938}, 'counters': {'num_env_steps_sampled': 400, 'num_env_steps_trained': 400, 'num_agent_steps_sampled': 2800, 'num_agent_steps_trained': 2800}, 'done': True, 'trial_id': '00b18_00000', 'perf': {'cpu_util_percent': 1.6532467532467536, 'ram_util_percent': 8.0}, 'experiment_tag': '0'},
path='/home/ldp/competitions/meltingpot/Melting-Pot-Contest-2023/results/torch/clean_up/PPO_meltingpot_00b18_00000_0_2023-10-19_14-23-22',
checkpoint=Checkpoint(local_path=/home/ldp/competitions/meltingpot/Melting-Pot-Contest-2023/results/torch/clean_up/PPO_meltingpot_00b18_00000_0_2023-10-19_14-23-22/checkpoint_000001)
)
(RolloutWorker pid=302374) [repeated 30x across cluster]
(RolloutWorker pid=302374) { 'count': 200,
(RolloutWorker pid=302374) 'policy_batches': { 'agent_0': { 'action_dist_inputs': np.ndarray((200, 9), dtype=float32, min=-0.0, max=0.001, mean=0.0),
(RolloutWorker pid=302374) 'action_logp': np.ndarray((200,), dtype=float32, min=-2.615, max=-1.737, mean=-2.185), [repeated 7x across cluster]
(RolloutWorker pid=302374) 'actions': np.ndarray((200,), dtype=int64, min=0.0, max=8.0, mean=3.71), [repeated 7x across cluster]
...
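
In case it is just a configured step limit, my plan is to restore from checkpoint_000001 in the Result above and keep training. A sketch using RLlib's standard Algorithm.from_checkpoint API (Ray 2.x); I assume the meltingpot env creator has to be registered first, the way the training script does it:

```python
from ray.rllib.algorithms.algorithm import Algorithm

# Restore the finished trial's state from the checkpoint that Tune
# printed in the Result above, then keep training past the stop point.
# (Assumes the "meltingpot" env is registered before restoring.)
ckpt = ("/home/ldp/competitions/meltingpot/Melting-Pot-Contest-2023/results/"
        "torch/clean_up/PPO_meltingpot_00b18_00000_0_2023-10-19_14-23-22/"
        "checkpoint_000001")
algo = Algorithm.from_checkpoint(ckpt)

for _ in range(10):
    result = algo.train()  # one more training iteration per call
    print(result["num_env_steps_sampled"])
```

Does that look like the right way to continue, or is the early exit caused by something else?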