OPRL

A modular library for off-policy reinforcement learning with a focus on SafeRL and distributed computing. Benchmarking results are available at the associated homepage: Homepage

Code style: black

Disclaimer

The project is under active renovation. For the old code with the D4PG algorithm, which used multiprocessing queues and mujoco_py, please refer to the d4pg_legacy branch.

Roadmap 🏗

  • Switching to mujoco 3.1.1
  • Replacing multiprocessing queues with RabbitMQ for distributed RL
  • Baselines with DDPG and TQC on dm_control over 1M steps
  • Tests
  • Support for SafetyGymnasium
  • Style and readability improvements
  • Baselines with Distributed algorithms for dm_control
  • D4PG logic on top of TQC

Installation

```shell
pip install -r requirements.txt
cd src && pip install -e .
```

To work with SafetyGymnasium, install it manually:

```shell
git clone https://github.com/PKU-Alignment/safety-gymnasium
cd safety-gymnasium && pip install -e .
```

Usage

To run DDPG in a single process:

```shell
python src/oprl/configs/ddpg.py --env walker-walk
```
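DDPG-style algorithms maintain slowly moving target networks that are updated by Polyak averaging after each learner step. A minimal numpy sketch of that update rule (function and parameter names here are illustrative, not OPRL's API):

```python
import numpy as np

def polyak_update(target_params, online_params, tau=0.5):
    """Soft-update target parameters toward online parameters:
    target <- (1 - tau) * target + tau * online."""
    return [(1.0 - tau) * t + tau * o for t, o in zip(target_params, online_params)]

# Toy example: one weight matrix and one bias vector.
online = [np.ones((2, 2)), np.ones(2)]
target = [np.zeros((2, 2)), np.zeros(2)]

for _ in range(3):  # three learner steps
    target = polyak_update(target, online, tau=0.5)

print(target[1])  # entries approach 1.0: 0.5 -> 0.75 -> 0.875
```

In practice tau is much smaller (e.g. 0.005), so the targets trail the online networks and stabilize the bootstrapped critic targets.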

To run distributed DDPG, first start RabbitMQ:

```shell
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3.12-management
```

Then run training:

```shell
python src/oprl/configs/d3pg.py --env walker-walk
```
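The distributed setup follows the usual actor/learner split: actors publish transitions to a shared queue and the learner consumes them for gradient updates. A minimal in-process sketch of that message flow, using `queue.Queue` as a stand-in for a RabbitMQ queue (all names here are illustrative, not OPRL's API):

```python
import queue
import threading

transitions = queue.Queue(maxsize=1000)  # stand-in for a RabbitMQ queue

def actor(env_steps=10):
    """Collect dummy transitions and publish them to the queue."""
    for t in range(env_steps):
        transitions.put({"obs": t, "action": 0, "reward": 1.0, "next_obs": t + 1})
    transitions.put(None)  # sentinel: actor finished

def learner():
    """Consume transitions and accumulate a running reward sum
    (a real learner would add them to a replay buffer and train)."""
    total = 0.0
    while True:
        msg = transitions.get()
        if msg is None:
            break
        total += msg["reward"]
    return total

thread = threading.Thread(target=actor)
thread.start()
total_reward = learner()
thread.join()
print(total_reward)  # 10.0
```

With a real broker, the queue operations become publish/consume calls against RabbitMQ, which lets actors and the learner run in separate processes or on separate machines.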

Tests

```shell
cd src && pip install -e .
cd .. && pip install -r tests/functional/requirements.txt
python -m pytest tests
```

Results

Results for single-process DDPG and TQC: ddpg_tqc_eval

Acknowledgements