
https://raw.githubusercontent.com/crowdAI/crowdai/master/app/assets/images/misc/crowdai-logo-smile.svg?sanitize=true

MarLÖ : Reinforcement Learning + Minecraft = Awesomeness

https://readthedocs.org/projects/marlo/badge/

MarLÖ (short for Multi-Agent Reinforcement Learning in MalmÖ) is a high-level API built on top of Project MalmÖ to facilitate Reinforcement Learning experiments with a great degree of generalizability, capable of solving problems in pseudo-random, procedurally changing single- and multi-agent environments within the world of the game Minecraft.

The Malmo platform provides an API that gives access to actions, observations (e.g. location, surroundings, video frames, game statistics) and other general data that Minecraft provides. MarLo, on the other hand, wraps Malmo in a higher-level, more standardized, RL-friendly environment suited to scientific study.

The framework is written as an extension to OpenAI's Gym, a toolkit for developing and comparing reinforcement learning algorithms, and therefore offers an industry-standard, familiar interface to scientists, developers and popular RL frameworks.
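
Because every MarLo environment is a Gym environment, the usual Gym attributes and methods are available on it. The short sketch below only inspects those attributes; it assumes a Minecraft client is already running on port 10000 (as in the Simple Example further down), and the exact shape of the observation space may differ between environments:

#!/usr/bin/env python
import marlo

# Assumes a Minecraft client is listening on port 10000
# ($MALMO_MINECRAFT_ROOT/launchClient.sh -port 10000)
join_tokens = marlo.make('MarLo-FindTheGoal-v0',
                         params={"client_pool": [('127.0.0.1', 10000)]})
env = marlo.init(join_tokens[0])

# Standard Gym attributes
print("Action space:     ", env.action_space)       # the available Minecraft commands
print("Observation space:", env.observation_space)  # typically an image-shaped Box

obs = env.reset()
print("Initial observation shape:", getattr(obs, "shape", None))
env.close()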

MarLo-MazeRunner-v0
https://i.imgur.com/XpiVIoD.png
MarLo-CliffWalking-v0
https://i.imgur.com/cI1CgEQ.png
MarLo-CatchTheMob-v0
https://i.imgur.com/FtfKOzs.png
MarLo-FindTheGoal-v0
https://i.imgur.com/lpbQuty.png
MarLo-Attic-v0
https://imgur.com/fQVuOHD.png
MarLo-DefaultFlatWorld-v0
https://i.imgur.com/XQ7UxHP.png
MarLo-DefaultWorld-v0
https://i.imgur.com/bnpM9OX.png
MarLo-Eating-v0
https://i.imgur.com/kM5Y4pk.png
MarLo-Obstacles-v0
https://i.imgur.com/L53AlWG.png
MarLo-TrickyArena-v0
https://i.imgur.com/zfWeCnR.png
MarLo-Vertical-v0
https://i.imgur.com/jZC7buV.png
 


Simple Example

#!/usr/bin/env python
# Please ensure that you have a Minecraft client running on port 10000
# by running:
# $MALMO_MINECRAFT_ROOT/launchClient.sh -port 10000

import marlo
client_pool = [('127.0.0.1', 10000)]
join_tokens = marlo.make('MarLo-MazeRunner-v0',
                         params={
                             "client_pool": client_pool
                         })
# As this is a single agent scenario,
# there will just be a single token
assert len(join_tokens) == 1
join_token = join_tokens[0]

env = marlo.init(join_token)

observation = env.reset()

done = False
while not done:
    _action = env.action_space.sample()
    obs, reward, done, info = env.step(_action)
    print("reward:", reward)
    print("done:", done)
    print("info", info)
env.close()
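
For multi-agent missions the same pattern applies, except that marlo.make returns one join token per agent and each agent needs its own Minecraft client. The sketch below is only illustrative: 'MarLo-SomeMultiAgentEnv-v0' is a placeholder, not a real environment id, and the plain standard-library threading used here is just one way to run the agents concurrently; consult the MarLo documentation for the actual multi-agent environments and the recommended way to launch agents.

#!/usr/bin/env python
# Illustrative multi-agent sketch (placeholder env id; see the docs for real ones).
# Requires one running Minecraft client per agent:
#   $MALMO_MINECRAFT_ROOT/launchClient.sh -port 10000
#   $MALMO_MINECRAFT_ROOT/launchClient.sh -port 10001

import threading
import marlo

client_pool = [('127.0.0.1', 10000), ('127.0.0.1', 10001)]
join_tokens = marlo.make('MarLo-SomeMultiAgentEnv-v0',  # placeholder id
                         params={"client_pool": client_pool})

def run_agent(join_token):
    # Each agent builds its own environment handle from its join token.
    env = marlo.init(join_token)
    env.reset()
    done = False
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())
    env.close()

threads = [threading.Thread(target=run_agent, args=(token,))
           for token in join_tokens]
for t in threads:
    t.start()
for t in threads:
    t.join()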

Authors