DI-engine

OpenDILab Decision AI Engine


Updated on 2022.09.23 DI-engine-v0.4.3

Introduction to DI-engine

DI-engine doc (English | Chinese)

DI-engine is a generalized decision intelligence engine. It supports various deep reinforcement learning algorithms:

  • Most basic DRL algorithms, such as DQN, PPO, SAC, R2D2, IMPALA
  • Multi-agent RL algorithms like QMIX, MAPPO
  • Imitation learning algorithms (BC/IRL/GAIL), such as GAIL, SQIL, Guided Cost Learning, Implicit Behavioral Cloning
  • Exploration algorithms like HER, RND, ICM, NGU
  • Offline RL algorithms: CQL, TD3BC, Decision Transformer
  • Model-based RL algorithms: SVG, MVE, STEVE / MBPO, DDPPO

DI-engine aims to standardize different Decision Intelligence environments and applications. Various training pipelines and customized decision AI applications are also supported.

DI-engine also provides system-level optimizations and designs for efficient and robust large-scale RL training.

Have fun with exploration and exploitation.


Installation

You can simply install DI-engine from PyPI with the following command:

pip install DI-engine

If you use Anaconda or Miniconda, you can install DI-engine from conda-forge through the following command:

conda install -c opendilab di-engine

For more information about installation, you can refer to installation.
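After installing, you can check that the package is importable from Python. This is a minimal sketch: DI-engine's top-level package is named `ding`, and the `__version__` attribute is assumed to follow the usual Python convention, so the code falls back gracefully if it is absent.

```python
import importlib.util


def is_installed(package: str) -> bool:
    """Return True if `package` can be imported in the current environment."""
    return importlib.util.find_spec(package) is not None


if is_installed("ding"):  # DI-engine's top-level package is named `ding`
    import ding
    # `__version__` is an assumption here; print a fallback if it is missing
    print("DI-engine version:", getattr(ding, "__version__", "unknown"))
else:
    print("DI-engine is not installed; run `pip install DI-engine` first.")
```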

Our DockerHub repository can be found here; we provide a base image and environment images with common RL environments.

  • base: opendilab/ding:nightly
  • atari: opendilab/ding:nightly-atari
  • mujoco: opendilab/ding:nightly-mujoco
  • dmc: opendilab/ding:nightly-dmc2gym
  • metaworld: opendilab/ding:nightly-metaworld
  • smac: opendilab/ding:nightly-smac
  • grf: opendilab/ding:nightly-grf

The detailed documentation is hosted at doc (English | Chinese).

Quick Start

3 Minutes Kickoff

3 Minutes Kickoff (colab)

How to migrate a new RL Env (English | Chinese)

How to customize the neural network model used by the policy (Chinese)

Bonus: Train an RL agent with one line of code:

ding -m serial -e cartpole -p dqn -s 0
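The same experiment can also be launched from Python. Below is a sketch, assuming the `serial_pipeline` entry point and the bundled cartpole DQN config keep the names used in the dizoo examples; call `train_cartpole_dqn()` to start training.

```python
def train_cartpole_dqn(seed: int = 0) -> None:
    """Rough Python equivalent of: ding -m serial -e cartpole -p dqn -s 0"""
    # Imports are deferred so the sketch can be read without DI-engine installed.
    from ding.entry import serial_pipeline
    from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
        cartpole_dqn_config,
        cartpole_dqn_create_config,
    )
    # serial_pipeline accepts a (config, create_config) pair plus a seed
    serial_pipeline((cartpole_dqn_config, cartpole_dqn_create_config), seed=seed)
```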

Feature

Algorithm Versatility

discrete  means discrete action space, the only action-space label for standard DRL algorithms (No. 1-18)

continuous  means continuous action space (No. 1-18)

hybrid  means hybrid (discrete + continuous) action space (No. 1-18)

dist  means distributed reinforcement learning

MARL  means multi-agent reinforcement learning

exp  means exploration mechanisms in reinforcement learning

IL  means imitation learning

offline  means offline reinforcement learning

mbrl  means model-based reinforcement learning

other  means algorithms from other sub-directions, usually used as plug-ins in the whole pipeline

P.S.: The .py files in the Runnable Demo column can be found in dizoo.

No. Algorithm Label Doc and Implementation Runnable Demo
1 DQN discrete DQN doc
DQN doc (Chinese)
policy/dqn
python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0
2 C51 discrete C51 doc
policy/c51
ding -m serial -c cartpole_c51_config.py -s 0
3 QRDQN discrete QRDQN doc
policy/qrdqn
ding -m serial -c cartpole_qrdqn_config.py -s 0
4 IQN discrete IQN doc
policy/iqn
ding -m serial -c cartpole_iqn_config.py -s 0
5 FQF discrete FQF doc
policy/fqf
ding -m serial -c cartpole_fqf_config.py -s 0
6 Rainbow discrete Rainbow doc
policy/rainbow
ding -m serial -c cartpole_rainbow_config.py -s 0
7 SQL discrete continuous SQL doc
policy/sql
ding -m serial -c cartpole_sql_config.py -s 0
8 R2D2 dist discrete R2D2 doc
policy/r2d2
ding -m serial -c cartpole_r2d2_config.py -s 0
9 A2C discrete A2C doc
policy/a2c
ding -m serial -c cartpole_a2c_config.py -s 0
10 PPO/MAPPO discrete continuous MARL PPO doc
policy/ppo
python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0
11 PPG discrete PPG doc
policy/ppg
python3 -u cartpole_ppg_main.py
12 ACER discrete continuous ACER doc
policy/acer
ding -m serial -c cartpole_acer_config.py -s 0
13 IMPALA dist discrete IMPALA doc
policy/impala
ding -m serial -c cartpole_impala_config.py -s 0
14 DDPG/PADDPG continuous hybrid DDPG doc
policy/ddpg
ding -m serial -c pendulum_ddpg_config.py -s 0
15 TD3 continuous hybrid TD3 doc
policy/td3
python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0
16 D4PG continuous D4PG doc
policy/d4pg
python3 -u pendulum_d4pg_config.py
17 SAC/[MASAC] discrete continuous MARL SAC doc
policy/sac
ding -m serial -c pendulum_sac_config.py -s 0
18 PDQN hybrid policy/pdqn ding -m serial -c gym_hybrid_pdqn_config.py -s 0
19 MPDQN hybrid policy/pdqn ding -m serial -c gym_hybrid_mpdqn_config.py -s 0
20 HPPO hybrid policy/ppo ding -m serial_onpolicy -c gym_hybrid_hppo_config.py -s 0
21 QMIX MARL QMIX doc
policy/qmix
ding -m serial -c smac_3s5z_qmix_config.py -s 0
22 COMA MARL COMA doc
policy/coma
ding -m serial -c smac_3s5z_coma_config.py -s 0
23 QTran MARL policy/qtran ding -m serial -c smac_3s5z_qtran_config.py -s 0
24 WQMIX MARL WQMIX doc
policy/wqmix
ding -m serial -c smac_3s5z_wqmix_config.py -s 0
25 CollaQ MARL CollaQ doc
policy/collaq
ding -m serial -c smac_3s5z_collaq_config.py -s 0
26 GAIL IL GAIL doc
reward_model/gail
ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0
27 SQIL IL SQIL doc
entry/sqil
ding -m serial_sqil -c cartpole_sqil_config.py -s 0
28 DQFD IL DQFD doc
policy/dqfd
ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0
29 R2D3 IL R2D3 doc
R2D3 doc (Chinese)
policy/r2d3
python3 -u pong_r2d3_r2d2expert_config.py
30 Guided Cost Learning IL Guided Cost Learning doc (Chinese)
reward_model/guided_cost
python3 lunarlander_gcl_config.py
31 TREX IL TREX doc
reward_model/trex
python3 mujoco_trex_main.py
32 Implicit Behavioral Cloning (DFO+MCMC) IL policy/ibc & model/template/ebm python3 d4rl_ibc_main.py -s 0 -c pen_human_ibc_mcmc_config.py
33 BCO IL entry/bco python3 -u cartpole_bco_config.py
34 HER exp HER doc
reward_model/her
python3 -u bitflip_her_dqn.py
35 RND exp RND doc
reward_model/rnd
python3 -u cartpole_rnd_onppo_config.py
36 ICM exp ICM doc
ICM doc (Chinese)
reward_model/icm
python3 -u cartpole_ppo_icm_config.py
37 CQL offline CQL doc
policy/cql
python3 -u d4rl_cql_main.py
38 TD3BC offline TD3BC doc
policy/td3_bc
python3 -u mujoco_td3_bc_main.py
39 MBSAC(SAC+MVE+SVG) continuous mbrl policy/mbpolicy/mbsac python3 -u pendulum_mbsac_mbpo_config.py / python3 -u pendulum_mbsac_ddppo_config.py
40 STEVESAC(SAC+STEVE+SVG) continuous mbrl policy/mbpolicy/mbsac python3 -u pendulum_stevesac_mbpo_config.py
41 MBPO mbrl MBPO doc
world_model/mbpo
python3 -u pendulum_sac_mbpo_config.py
42 DDPPO mbrl world_model/ddppo python3 -u pendulum_mbsac_ddppo_config.py
43 PER other worker/replay_buffer rainbow demo
44 GAE other rl_utils/gae ppo demo
45 ST-DIM other torch_utils/loss/contrastive_loss ding -m serial -c cartpole_dqn_stdim_config.py -s 0
46 PLR other PLR doc
data/level_replay/level_sampler
python3 -u bigfish_plr_config.py -s 0
47 PCGrad other torch_utils/optimizer_helper/PCGrad python3 -u multi_mnist_pcgrad_main.py -s 0

Environment Versatility

No. Environment Label Visualization Code and Doc Links
1 atari discrete original dizoo link
env tutorial
env guide (Chinese)
2 box2d/bipedalwalker continuous original dizoo link
env tutorial
env guide (Chinese)
3 box2d/lunarlander discrete original dizoo link
env tutorial
env guide (Chinese)
4 classic_control/cartpole discrete original dizoo link
env tutorial
env guide (Chinese)
5 classic_control/pendulum continuous original dizoo link
env tutorial
env guide (Chinese)
6 competitive_rl discrete selfplay original dizoo link
env guide (Chinese)
7 gfootball discrete sparse selfplay original dizoo link
env guide (Chinese)
8 minigrid discrete sparse original dizoo link
env tutorial
env guide (Chinese)
9 mujoco continuous original dizoo link
env tutorial
env guide (Chinese)
10 PettingZoo discrete continuous marl original dizoo link
env guide (Chinese)
11 overcooked discrete marl original dizoo link
env tutorial
12 procgen discrete original dizoo link
env tutorial
env guide (Chinese)
13 pybullet continuous original dizoo link
env guide (Chinese)
14 smac discrete marl selfplay sparse original dizoo link
env tutorial
env guide (Chinese)
15 d4rl offline original dizoo link
env guide (Chinese)
16 league_demo discrete selfplay original dizoo link
17 pomdp atari discrete dizoo link
18 bsuite discrete original dizoo link
env tutorial
19 ImageNet IL original dizoo link
env guide (Chinese)
20 slime_volleyball discrete selfplay original dizoo link
env tutorial
env guide (Chinese)
21 gym_hybrid hybrid original dizoo link
env tutorial
env guide (Chinese)
22 GoBigger hybrid marl selfplay original dizoo link
env tutorial
env guide (Chinese)
23 gym_soccer hybrid original dizoo link
env guide (Chinese)
24 multiagent_mujoco continuous marl original dizoo link
env guide (Chinese)
25 bitflip discrete sparse original dizoo link
env guide (Chinese)
26 sokoban discrete Game 2 dizoo link
env guide (Chinese)
27 gym_anytrading discrete original dizoo link
env guide (Chinese)
28 mario discrete original dizoo link
env guide (Chinese)
29 dmc2gym continuous original dizoo link
env tutorial
env guide (Chinese)
30 evogym continuous original dizoo link
env guide (Chinese)

discrete means discrete action space

continuous means continuous action space

hybrid means hybrid (discrete + continuous) action space

MARL means multi-agent RL environment

sparse means an environment with sparse rewards, where exploration matters

offline means offline RL environment

IL means Imitation Learning or Supervised Learning Dataset

selfplay means environment that allows agent VS agent battle

P.S. Some environments in Atari, such as MontezumaRevenge, are also of the sparse reward type.

Feedback and Contribution

We appreciate all feedback and contributions to improve DI-engine, in both algorithms and system design. CONTRIBUTING.md offers the necessary information.

Supporters

↳ Stargazers


↳ Forkers


Citation

@misc{ding,
    title={{DI-engine: OpenDILab} Decision Intelligence Engine},
    author={DI-engine Contributors},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}

License

DI-engine is released under the Apache 2.0 license.