medipixel/rl_algorithms

Loaded pretrained model does not converge quickly

Closed this issue · 2 comments

I trained a SAC agent on a custom environment. The agent behaves well, and by the final episode it converges quickly (in a low number of steps) towards the target objectives.

However, when I save the trained agent, load it, and start training again (agent.train()), it takes a long time to converge towards the target objectives again, as if it were being trained from scratch.
Shouldn't the agent continue training as if it had simply stopped at the final episode?
I am using the LunarLanderContinuous-v2 main script and SACLearner as the RL agent.

Hello, thanks for using our repo.

In SAC, we use initial random actions for exploration (here and here).
That's why the pre-trained model looks as if it was not loaded successfully during the training phase.
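Roughly, the warm-up behaviour works like the sketch below; the function and parameter names (e.g. `total_step`, `initial_random_action`) are illustrative assumptions, not the repo's exact code.

```python
import gym
import numpy as np


def select_action(env: gym.Env, actor, state: np.ndarray,
                  total_step: int, initial_random_action: int) -> np.ndarray:
    """Sketch of the warm-up logic: for the first `initial_random_action` steps,
    actions are sampled uniformly at random, so a freshly loaded pretrained
    actor has no influence on the early part of a resumed run."""
    if total_step < initial_random_action:
        return env.action_space.sample()  # exploration phase: the policy is ignored
    return actor(state)  # after the warm-up, the (pretrained) policy is used
```

With the warm-up length set to 0, the random branch never triggers and the loaded policy is used from the very first step.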

It's better to set the initial random action config to 0 so your agent continues training without that initial exploration phase.
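For example, if your SAC config exposes this as a hyper-parameter (the key name `initial_random_action` below is an assumption and may differ in your version of the config files), the change would look like:

```python
# Hypothetical config fragment; the key name is an assumption and may differ.
hyper_params = dict(
    initial_random_action=0,  # was a large warm-up step count; 0 skips the random phase
)
```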

Hello, thanks a lot for your answer. That was indeed the issue; setting the initial random action to 0 in the config solved the problem.