Training does not work properly and the agent does not go beyond the first traffic light in TOWN02
DeLeonOscar opened this issue · 4 comments
Hello Idree,
I was going through your code and had a question about a specific part of it. I was training a new agent from scratch in TOWN02. The training was going well (the average reward was increasing every timestep), but all of a sudden, at an arbitrary point, the training stopped and “exit” was printed in the command prompt. In other words, the training never finishes: it never reaches the end of the run, and the message “Terminating the run” is never printed. I ran the training several times and the same thing happens, just at different timesteps each time. I have verified that the while loop condition timestep < TOTAL_TIMESTEPS is not what breaks the loop. I have attached a screenshot of the moment the training stops and jumps straight to the printed “exit”. I don't know if you have run into a similar issue.
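For context, the loop I am referring to is structured roughly like the sketch below, and wrapping it like this is how I would try to surface whatever is triggering the silent exit (a minimal sketch with placeholder names, not the exact identifiers from your code):

```python
# Minimal sketch of the training loop I am describing. TOTAL_TIMESTEPS and
# run_one_iteration() are placeholders, not the actual names in the repository.
import traceback

TOTAL_TIMESTEPS = 2_000_000
timestep = 0

try:
    while timestep < TOTAL_TIMESTEPS:
        # one rollout / policy-update step; returns how many env steps it consumed
        timestep += run_one_iteration()
    print("Terminating the run")  # this line is never reached in my runs
except BaseException:
    # BaseException also catches SystemExit, so a bare exit() or sys.exit()
    # anywhere in the environment code would leave a traceback here
    traceback.print_exc()
    raise
```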
Another issue is that the agent does not go beyond the first traffic light corner. I ran the training for 2M timesteps, but every time the agent restarts and stops at that specific point. I don't know whether this issue is related to the training stopping unexpectedly.
Thank you, Oscar
I think you have the same problem as me. What is your CARLA version? I trained it in Town07 and it exits in fewer than 150 episodes.
I am using CARLA version 0.9.13. I was not able to train my agent in Town07; every time I tried to run it in Town07, a glitch appeared.
I'd suggest using the version I've used. If that works fine, then please review the changelog of version 0.9.13, because a couple of things might have changed.
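In case it helps while comparing versions, the client and server versions can be printed with the standard CARLA Python API (a minimal sketch; localhost and port 2000 are the usual defaults and may differ in your setup):

```python
# Print the CARLA client and server versions to confirm they match.
import carla

client = carla.Client("localhost", 2000)  # assumed default host/port
client.set_timeout(10.0)

print("Client version:", client.get_client_version())
print("Server version:", client.get_server_version())
```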
@DeLeonOscar Can I ask you something? With what parameters did you achieve those results? Because I have been running it for a few days now with different experiments, but my rewards are increasing at a much lower rate.