raylab

Reinforcement learning algorithms in RLlib and PyTorch.

Introduction

Raylab provides agents and environments to be used with a normal RLlib/Tune setup.

import ray
from ray import tune
import raylab

def main():
    raylab.register_all_agents()        # make raylab's trainers available to Tune by name
    raylab.register_all_environments()  # register raylab's environments for use with Tune
    ray.init()
    tune.run(
        "NAF",
        local_dir=...,
        stop={"timesteps_total": 100000},
        config={
            "env": "CartPoleSwingUp-v0",
            "exploration_config": {
                "type": tune.grid_search([
                    "raylab.utils.exploration.GaussianNoise",
                    "raylab.utils.exploration.ParameterNoise",
                ])
            },
            ...
        },
    )

if __name__ == "__main__":
    main()

One can then visualize the results using the raylab dashboard command.
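For example (the dashboard's exact arguments are not documented here; the path below assumes the command accepts the Tune local_dir used above):

raylab dashboard path/to/local_dir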

https://i.imgur.com/bVc6WC5.png

Installation

pip install raylab

Algorithms

Paper                                                 Agent Name
Actor Critic using Kronecker-factored Trust Region    ACKTR
Trust Region Policy Optimization                      TRPO
Normalized Advantage Function                         NAF
Stochastic Value Gradients                            SVG(inf)/SVG(1)/SoftSVG
Soft Actor-Critic                                     SoftAC
Streamlined Off-Policy (DDPG)                         SOP
Model-Based Policy Optimization                       MBPO
Model-based Action-Gradient-Estimator                 MAGE
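
Any of these agent names can be passed to tune.run once the registrations shown in the introduction have run. A minimal sketch (the environment, stop condition, and config below are illustrative choices, not agent defaults):

import ray
from ray import tune
import raylab

raylab.register_all_agents()        # makes SoftAC, MBPO, MAGE, etc. available by name
raylab.register_all_environments()
ray.init()
tune.run(
    "SoftAC",                        # any Agent Name from the table above
    stop={"timesteps_total": 100000},
    config={"env": "CartPoleSwingUp-v0"},
)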

Command-line interface

For a high-level description of the available utilities, run raylab --help:

Usage: raylab [OPTIONS] COMMAND [ARGS]...

  RayLab: Reinforcement learning algorithms in RLlib.

Options:
  --help  Show this message and exit.

Commands:
  dashboard    Launch the experiment dashboard to monitor training progress.
  episodes     Launch the episode dashboard to monitor state and action...
  experiment   Launch a Tune experiment from a config file.
  find-best    Find the best experiment checkpoint as measured by a metric.
  info         View information about an agent's config parameters.
  rollout      Wrap `rllib rollout` with customized options.
  test-module  Launch dashboard to test generative models from a checkpoint.
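
Each subcommand takes its own options and arguments; assuming the usual convention for click-style CLIs (not shown in the help text above), they can be inspected with --help on the subcommand itself, for example:

raylab dashboard --help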

Packages

The project is structured as follows:

raylab
|-- agents            # Trainer and Policy classes
|-- cli               # Command line utilities
|-- envs              # Gym environment registry and utilities
|-- logger            # Tune loggers
|-- policy            # Extensions and customizations of RLlib's policy API
|   |-- losses        # RL loss functions
|   |-- modules       # PyTorch neural network modules for TorchPolicy
|-- pytorch           # PyTorch extensions
|-- utils             # Miscellaneous utilities
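
The subpackages above map directly to importable Python modules; a minimal orientation sketch (specific class names are omitted since they are not listed here):

import raylab.agents          # Trainer and Policy classes
import raylab.envs            # environment registry and utilities
import raylab.policy.losses   # RL loss functions
import raylab.policy.modules  # PyTorch modules for TorchPolicy
import raylab.pytorch         # PyTorch extensions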