/distributed-ppo

This is a PyTorch implementation of Distributed Proximal Policy Optimization (DPPO).
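At the core of PPO (and of its distributed variant) is the clipped surrogate objective. The sketch below is illustrative only and is not taken from this codebase; the function name and arguments are assumptions.

```python
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Illustrative PPO clipped surrogate loss (not the repo's own code)."""
    # Probability ratio between the updated policy and the behaviour policy.
    ratio = torch.exp(new_logp - old_logp)
    # Unclipped and clipped surrogate terms.
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Maximise the surrogate, i.e. minimise its negative mean.
    return -torch.min(unclipped, clipped).mean()
```

In the distributed setting, multiple workers collect trajectories in parallel and a central learner applies updates with a loss of this form; see the repository code for the actual implementation.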
