
    Updated on 2021.12.03 DI-engine-v0.2.2 (beta)

## Introduction to DI-engine (beta)

DI-engine doc | Chinese documentation

DI-engine is a generalized decision intelligence engine. It supports various deep reinforcement learning algorithms:

    • Most basic DRL algorithms, such as DQN, PPO, SAC, R2D2
    • Multi-agent RL algorithms like QMIX, MAPPO
• Imitation learning algorithms (BC/IRL/GAIL), such as GAIL, SQIL, and Guided Cost Learning
    • Exploration algorithms like HER, RND, ICM
    • Offline RL algorithms: CQL, TD3BC
    • Model-based RL algorithms: MBPO

DI-engine aims to standardize different RL environments and applications. Various training pipelines and customized decision AI applications are also supported.

DI-engine also provides system optimizations and designs for efficient and robust large-scale RL training.

    Have fun with exploration and exploitation.

## Installation

You can simply install DI-engine from PyPI with the following command:

```bash
pip install DI-engine
```

If you use Anaconda or Miniconda, you can install DI-engine from the `opendilab` conda channel with the following command:

```bash
conda install -c opendilab di-engine
```

For more details, refer to the installation documentation.
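
After installing, a quick import check confirms the package is available. This is a minimal sketch; it assumes the import name is `ding` and that the package exposes `__version__`, so adjust if your version differs:

```python
# Minimal post-install sanity check.
# Assumption: DI-engine's import name is `ding` and it exposes `__version__`.
import ding

print(ding.__version__)
```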

Our Docker Hub repository can be found here; we provide a base image and env images bundled with common RL environments:

    • base: opendilab/ding:nightly
    • atari: opendilab/ding:nightly-atari
    • mujoco: opendilab/ding:nightly-mujoco
    • smac: opendilab/ding:nightly-smac

The detailed documentation is hosted at doc | Chinese documentation.

## Quick Start

    3 Minutes Kickoff

    3 Minutes Kickoff (colab)

3 Minutes Kickoff, Chinese version (kaggle)

How to migrate a new RL Env | Chinese version

Bonus: train an RL agent with one line of code:

```bash
ding -m serial -e cartpole -p dqn -s 0
```
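
The same experiment can also be launched from Python. The sketch below uses the serial-pipeline entry point as we understand it in this release; the exact dizoo config import path is an assumption, so verify it against your installed version:

```python
# A minimal sketch of the CartPole + DQN experiment via the Python API.
# Assumptions: `serial_pipeline` is exposed under ding.entry, and the DQN
# config pair lives at this dizoo path (check your installed version).
from ding.entry import serial_pipeline
from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
    cartpole_dqn_config,
    cartpole_dqn_create_config,
)

if __name__ == "__main__":
    serial_pipeline((cartpole_dqn_config, cartpole_dqn_create_config), seed=0)
```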

## Supporters

### ↳ Stargazers

Stargazers repo roster for @opendilab/DI-engine

### ↳ Forkers

Forkers repo roster for @opendilab/DI-engine

## Feature

### Algorithm Versatility

| No. | Algorithm | Label | Doc and Implementation | Runnable Demo |
| :-: | :-: | :-: | :-- | :-- |
| 1 | DQN | discrete | DQN doc, DQN doc (Chinese), policy/dqn | `python3 -u cartpole_dqn_main.py` / `ding -m serial -c cartpole_dqn_config.py -s 0` |
| 2 | C51 | discrete | policy/c51 | `ding -m serial -c cartpole_c51_config.py -s 0` |
| 3 | QRDQN | discrete | policy/qrdqn | `ding -m serial -c cartpole_qrdqn_config.py -s 0` |
| 4 | IQN | discrete | policy/iqn | `ding -m serial -c cartpole_iqn_config.py -s 0` |
| 5 | Rainbow | discrete | policy/rainbow | `ding -m serial -c cartpole_rainbow_config.py -s 0` |
| 6 | SQL | discrete, continuous | policy/sql | `ding -m serial -c cartpole_sql_config.py -s 0` |
| 7 | R2D2 | dist, discrete | policy/r2d2 | `ding -m serial -c cartpole_r2d2_config.py -s 0` |
| 8 | A2C | discrete | policy/a2c | `ding -m serial -c cartpole_a2c_config.py -s 0` |
| 9 | PPO/MAPPO | discrete, continuous, MARL | policy/ppo | `python3 -u cartpole_ppo_main.py` / `ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0` |
| 10 | PPG | discrete | policy/ppg | `python3 -u cartpole_ppg_main.py` |
| 11 | ACER | discrete, continuous | policy/acer | `ding -m serial -c cartpole_acer_config.py -s 0` |
| 12 | IMPALA | dist, discrete | policy/impala | `ding -m serial -c cartpole_impala_config.py -s 0` |
| 13 | DDPG/PADDPG | continuous, hybrid | policy/ddpg | `ding -m serial -c pendulum_ddpg_config.py -s 0` |
| 14 | TD3 | continuous, hybrid | policy/td3 | `python3 -u pendulum_td3_main.py` / `ding -m serial -c pendulum_td3_config.py -s 0` |
| 15 | D4PG | continuous | policy/d4pg | `python3 -u pendulum_d4pg_config.py` |
| 16 | SAC | continuous | policy/sac | `ding -m serial -c pendulum_sac_config.py -s 0` |
| 17 | PDQN | hybrid | policy/pdqn | `ding -m serial -c gym_hybrid_pdqn_config.py -s 0` |
| 18 | MPDQN | hybrid | policy/pdqn | `ding -m serial -c gym_hybrid_mpdqn_config.py -s 0` |
| 19 | QMIX | MARL | policy/qmix | `ding -m serial -c smac_3s5z_qmix_config.py -s 0` |
| 20 | COMA | MARL | policy/coma | `ding -m serial -c smac_3s5z_coma_config.py -s 0` |
| 21 | QTran | MARL | policy/qtran | `ding -m serial -c smac_3s5z_qtran_config.py -s 0` |
| 22 | WQMIX | MARL | policy/wqmix | `ding -m serial -c smac_3s5z_wqmix_config.py -s 0` |
| 23 | CollaQ | MARL | policy/collaq | `ding -m serial -c smac_3s5z_collaq_config.py -s 0` |
| 24 | GAIL | IL | reward_model/gail | `ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0` |
| 25 | SQIL | IL | entry/sqil | `ding -m serial_sqil -c cartpole_sqil_config.py -s 0` |
| 26 | DQFD | IL | policy/dqfd | `ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0` |
| 27 | R2D3 | IL | R2D3 doc (Chinese), policy/r2d3 | `python3 -u pong_r2d3_r2d2expert_config.py` |
| 28 | Guided Cost Learning | IL | reward_model/guided_cost | `python3 lunarlander_gcl_config.py` |
| 29 | TREX | IL | reward_model/trex | `python3 mujoco_trex_main.py` |
| 30 | HER | exp | reward_model/her | `python3 -u bitflip_her_dqn.py` |
| 31 | RND | exp | reward_model/rnd | `python3 -u cartpole_ppo_rnd_main.py` |
| 32 | ICM | exp | ICM doc (Chinese), reward_model/icm | `python3 -u cartpole_ppo_icm_config.py` |
| 33 | CQL | offline | policy/cql | `python3 -u d4rl_cql_main.py` |
| 34 | TD3BC | offline | policy/td3_bc | `python3 -u mujoco_td3_bc_main.py` |
| 35 | MBPO | mbrl | model/template/model_based/mbpo | `python3 -u sac_halfcheetah_mopo_default_config.py` |
| 36 | PER | other | worker/replay_buffer | rainbow demo |
| 37 | GAE | other | rl_utils/gae | ppo demo |

`discrete` means discrete action space, which is the only label used for the standard DRL algorithms (No. 1-18)

`continuous` means continuous action space, which is the only label used for the standard DRL algorithms (No. 1-18)

`hybrid` means hybrid (discrete + continuous) action space (No. 1-18)

`dist` means distributed training (collector-learner parallel) RL algorithm

`MARL` means multi-agent RL algorithm

`exp` means RL algorithm related to exploration and sparse reward

`IL` means Imitation Learning, including Behaviour Cloning, Inverse RL, and Adversarial Structured IL

`offline` means offline RL algorithm

`mbrl` means model-based RL algorithm

`other` means an algorithm from another sub-direction, usually used as a plug-in in the whole pipeline (see the GAE sketch after this list)

P.S.: The `.py` files listed under Runnable Demo can be found in dizoo
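
For a flavor of the plug-in utilities in rows 36-37, here is a minimal, framework-agnostic sketch of Generalized Advantage Estimation. It illustrates the technique itself, not DI-engine's actual `rl_utils/gae` implementation, and omits episode-termination masking for brevity:

```python
import torch


def gae(rewards: torch.Tensor, values: torch.Tensor, next_value: float,
        gamma: float = 0.99, lam: float = 0.95) -> torch.Tensor:
    """Generalized Advantage Estimation over a single trajectory.

    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    A_t     = delta_t + gamma * lam * A_{t+1}
    (Done-masking is omitted for brevity.)
    """
    advantages = torch.zeros_like(rewards)
    last_adv = 0.0
    # Walk the trajectory backwards, accumulating discounted TD errors.
    for t in reversed(range(len(rewards))):
        v_next = next_value if t == len(rewards) - 1 else values[t + 1]
        delta = rewards[t] + gamma * v_next - values[t]
        last_adv = delta + gamma * lam * last_adv
        advantages[t] = last_adv
    return advantages
```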

### Environment Versatility

| No. | Environment | Label | Visualization | Code and Doc Links |
| :-: | :-: | :-: | :-: | :-- |
| 1 | atari | discrete | original | code link, env tutorial, env guide (Chinese) |
| 2 | box2d/bipedalwalker | continuous | original | dizoo link, env guide (Chinese) |
| 3 | box2d/lunarlander | discrete | original | dizoo link, env guide (Chinese) |
| 4 | classic_control/cartpole | discrete | original | dizoo link, env guide (Chinese) |
| 5 | classic_control/pendulum | continuous | original | dizoo link, env guide (Chinese) |
| 6 | competitive_rl | discrete, selfplay | original | dizoo link, env guide (Chinese) |
| 7 | gfootball | discrete, sparse, selfplay | original | dizoo link, env guide (Chinese) |
| 8 | minigrid | discrete, sparse | original | dizoo link, env guide (Chinese) |
| 9 | mujoco | continuous | original | dizoo link, env guide (Chinese) |
| 10 | multiagent_particle | discrete, marl | original | dizoo link, env guide (Chinese) |
| 11 | overcooked | discrete, marl | original | dizoo link, env tutorial |
| 12 | procgen | discrete | original | dizoo link, env guide (Chinese) |
| 13 | pybullet | continuous | original | dizoo link, env guide (Chinese) |
| 14 | smac | discrete, marl, selfplay, sparse | original | dizoo link, env guide (Chinese) |
| 15 | d4rl | offline | original | dizoo link, env guide (Chinese) |
| 16 | league_demo | discrete, selfplay | original | dizoo link |
| 17 | pomdp atari | discrete | | dizoo link |
| 18 | bsuite | discrete | original | dizoo link, env tutorial |
| 19 | ImageNet | IL | original | dizoo link, env guide (Chinese) |
| 20 | slime_volleyball | discrete, selfplay | original | dizoo link, env guide (Chinese) |
| 21 | gym_hybrid | hybrid | original | dizoo link, env guide (Chinese) |
| 22 | GoBigger | hybrid, marl, selfplay | original | opendilab link, env tutorial, env guide (Chinese) |
| 23 | gym_soccer | hybrid | original | dizoo link, env guide (Chinese) |
| 24 | multiagent_mujoco | continuous, marl | original | dizoo link, env guide (Chinese) |

`discrete` means discrete action space

`continuous` means continuous action space

`hybrid` means hybrid (discrete + continuous) action space

`MARL` means multi-agent RL environment

`sparse` means environment related to exploration and sparse reward

`offline` means offline RL environment

`IL` means Imitation Learning or Supervised Learning dataset

`selfplay` means environment that allows agent-vs-agent battles

P.S. Some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type
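
To make the action-space labels above concrete, the snippet below inspects two of the listed environments with plain gym calls; no DI-engine API is involved, and the env IDs match the gym releases contemporary with this README:

```python
# Inspecting the action spaces behind the `discrete` and `continuous` labels.
# Plain gym only; env IDs (CartPole-v0, Pendulum-v0) follow the gym versions
# current at the time of this release.
import gym

discrete_env = gym.make('CartPole-v0')    # labelled `discrete` above
continuous_env = gym.make('Pendulum-v0')  # labelled `continuous` above
print(discrete_env.action_space)          # Discrete(2)
print(continuous_env.action_space)        # Box(-2.0, 2.0, (1,), float32)
```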

## Feedback and Contribution

We appreciate all feedback and contributions for improving DI-engine, in both algorithms and system design. CONTRIBUTING.md offers the necessary information.

## Citation

```latex
@misc{ding,
    title={{DI-engine: OpenDILab} Decision Intelligence Engine},
    author={DI-engine Contributors},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}
```

## License

DI-engine is released under the Apache 2.0 license.
