vwxyzjn/cleanrl

get action in sac_continuous_action.py


Problem Description

Hi! Thanks for this clean script, which helped me understand SAC.

But I have a question about the implementation of SAC's get-action function, mainly the following code snippet:

# Enforcing Action Bound
log_prob -= torch.log(self.action_scale * (1 - y_t.pow(2)) + 1e-6)
log_prob = log_prob.sum(1, keepdim=True)

What is the purpose of this? Thanks!


Usually in SAC we use a Normal distribution coupled with tanh to bound the action space. However, after such a transformation the actual distribution is no longer a plain Normal, so we cannot use its log_prob to get the probabilities of actions. This formula accounts for the transformation and gives the correct probabilities for the TanhNormal distribution. See Appendix C in the original paper: https://arxiv.org/pdf/1801.01290.pdf
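To make this concrete, here is a minimal sketch (assuming PyTorch, with `action_scale = 1` and a fixed pre-squash sample for determinism) that checks the manual change-of-variables correction from the script against the same density computed via `torch.distributions.TransformedDistribution` with `TanhTransform`:

```python
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import TanhTransform

mean, std = torch.tensor([0.3]), torch.tensor([0.8])
normal = Normal(mean, std)

# Fixed pre-squash sample ("x_t" in the script) for reproducibility.
u = torch.tensor([0.5])
a = torch.tanh(u)  # squashed action ("y_t"); action_scale = 1 here

# Manual correction as in sac_continuous_action.py:
# log p(a) = log N(u) - log(1 - tanh(u)^2 + eps), eps = 1e-6 for stability.
log_prob_manual = normal.log_prob(u) - torch.log(1 - a.pow(2) + 1e-6)

# Reference: the same density via the built-in tanh-transformed distribution.
tanh_normal = TransformedDistribution(normal, [TanhTransform()])
log_prob_ref = tanh_normal.log_prob(a)

print(log_prob_manual.item(), log_prob_ref.item())
assert torch.allclose(log_prob_manual, log_prob_ref, atol=1e-4)
```

The two values agree (up to the tiny `1e-6` offset inside the log), which shows the subtracted term is exactly the log absolute determinant of the tanh Jacobian from the change-of-variables formula.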

Thanks for your generous help @Howuhh. Is the 1e-6 meant to keep the logarithm from approaching negative infinity?