
SUMO-gym


OpenAI-gym like toolkit for developing and comparing reinforcement learning algorithms on SUMO.

Installation

Install SUMO, SUMO GUI, and XQuartz according to the official installation guide.

$ python3 -m venv .env
$ source .env/bin/activate
(.env)$ pip install -r requirements.txt
(.env)$ pip install sumo-gym
(.env)$ export SUMO_HOME=<your_path_to>/sumo SUMO_GUI_PATH=<your_path_to>/sumo-gui # add these exports to ~/.bashrc so they persist across sessions
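
It is easy to export a wrong path, so here is a minimal sanity check in plain Python (not part of sumo-gym; it only inspects the SUMO_HOME and SUMO_GUI_PATH variables exported above):

import os

# Both variables should be set and point at existing paths once the exports
# above have been picked up (open a new shell, or source ~/.bashrc).
for var in ("SUMO_HOME", "SUMO_GUI_PATH"):
    path = os.environ.get(var)
    if not path or not os.path.exists(path):
        raise RuntimeError(f"{var} is unset or points to a missing path: {path!r}")
print("SUMO paths look good.")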

If the installation succeeded, you can run the examples in the tutorials directory, for example:

(.env)$ python3 tutorials/fmp-jumbo.py --render 0

Features

SUMO-gym aims to build an interface between SUMO and reinforcement learning. With this toolkit, you can convert data generated by the SUMO simulator into an OpenAI-gym-style RL training setting.

Notable features include:

  1. OpenAI-gym RL training environment based on SUMO:
import gym
from sumo_gym.envs.fmp import FMP

# mode, n_vertex, n_edge, ..., charging_stations describe the FMP problem
# instance (network, demand, and fleet); define them from your own data
# before constructing the environment. gym.make forwards keyword arguments
# to the environment constructor; the keyword names below mirror the variable
# names -- check sumo_gym.envs.fmp.FMP for the exact signature.
env = gym.make(
    "FMP-v0",
    mode=mode, n_vertex=n_vertex, n_edge=n_edge, n_vehicle=n_vehicle,
    n_electric_vehicles=n_electric_vehicles, n_charging_station=n_charging_station,
    vertices=vertices, demand=demand, edges=edges,
    electric_vehicles=electric_vehicles, departures=departures,
    charging_stations=charging_stations,
)
for _ in range(n_episode):
    obs = env.reset()                       # start a new episode
    for t in range(n_timestamp):
        action = env.action_space.sample()  # random policy; plug in your agent here
        obs, reward, done, info = env.step(action)
        if done:
            break
env.close()
  2. Rendering tools based on matplotlib for urban mobility problems (see the rendering sketch after this list).

  3. Visualization tools that plot the statistics for each observation (see the plotting sketch after this list).
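
To illustrate item 2: the tutorial scripts toggle rendering from the command line (the --render flag above). The sketch below instead assumes the environment also exposes the standard gym render() hook and reuses env and n_timestamp from the snippet in item 1; that hook is an assumption, so check the tutorials for the exact rendering interface.

# Hypothetical sketch: step the environment and draw each state.
# env.render() as the entry point to the matplotlib-based renderer is an
# assumption, not something confirmed by this README.
obs = env.reset()
for t in range(n_timestamp):
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    env.render()  # draw the current state of the simulation
    if done:
        break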
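
And for item 3, a minimal matplotlib sketch of the kind of per-episode statistic such tools report. It does not use sumo-gym's bundled visualization utilities; it only aggregates the rewards returned by env.step() in a loop like the one in item 1 (env, n_episode, and n_timestamp are reused from that snippet).

import matplotlib.pyplot as plt

# Plain matplotlib, not sumo-gym's built-in plotting tools: record the total
# reward of each episode and plot the resulting curve for a random policy.
episode_rewards = []
for _ in range(n_episode):
    obs = env.reset()
    total_reward = 0.0
    for t in range(n_timestamp):
        obs, reward, done, info = env.step(env.action_space.sample())
        total_reward += reward
        if done:
            break
    episode_rewards.append(total_reward)

plt.plot(episode_rewards)
plt.xlabel("episode")
plt.ylabel("total reward")
plt.show()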

Contributors

We would like to acknowledge the contributors that made this project possible (emoji key):


N!no

💻 🐛 🤔

yunhaow

💻 🐛 🤔

Sam Fieldman

🐛 🤔

Lauren Hong

💻

nmauskar

💻

This project follows the all-contributors specification.