Google Research Football
This repository contains an RL environment based on the open-source game Gameplay Football.
It was created by the Google Brain team for research purposes.
Useful links:
- (NEW!) GRF Kaggle competition - take part in the competition by playing games against others, win prizes and become the GRF Champion!
- GRF Game Server - challenge other researchers!
- Run in Colab - start training in less than 2 minutes.
- Google Research Football Paper
- GoogleAI blog post
- Google Research Football on Cloud
- Mailing List - please use it for communication with us (comments / suggestions / feature ideas)
For non-public matters that you'd like to discuss directly with the GRF team, please use google-research-football@google.com.
We'd like to thank Bastiaan Konings Schuiling, who authored and open-sourced the original version of this game.
Quick Start
In colab
Open our example Colab, which will allow you to start training your model in less than 2 minutes.
This method doesn't support game rendering on screen - if you want to see the game running, please use the method below.
Using Docker
This is the recommended way to avoid incompatible package versions. Instructions are available here.
On your computer
1. Install required packages
Linux
sudo apt-get install git cmake build-essential libgl1-mesa-dev libsdl2-dev \
libsdl2-image-dev libsdl2-ttf-dev libsdl2-gfx-dev libboost-all-dev \
libdirectfb-dev libst-dev mesa-utils xvfb x11vnc libsdl-sge-dev python3-pip
Mac OS X
First install brew. It should automatically install Command Line Tools. Next install required packages:
brew install git python3 cmake sdl2 sdl2_image sdl2_ttf sdl2_gfx boost boost-python3
To set up pygame, it is also required to install older versions of SDL:
brew install sdl sdl_image sdl_mixer sdl_ttf portmidi
2a. From PyPi package
pip3 install gfootball
2b. Installing from sources using GitHub repository
git clone https://github.com/google-research/football.git
cd football
Optionally you can use a virtual environment:
python3 -m venv football-env
source football-env/bin/activate
The last step is to build the environment:
pip3 install .
This command can run for a couple of minutes, as it compiles the C++ environment in the background.
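If the build succeeded, importing the package and constructing a scenario should work. A minimal smoke test (the scenario name is just one of the bundled levels):
import gfootball.env as football_env

env = football_env.create_environment(env_name='academy_empty_goal_close')
print(env.reset().shape)  # prints an observation shape if the build works
env.close()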
3. Time to play!
python3 -m gfootball.play_game --action_set=full
Make sure to check out the keyboard mappings. To quit the game press Ctrl+C in the terminal.
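The game is also exposed through a gym-style Python API. Below is a minimal sketch of a random-agent loop; the scenario and representation names are standard options, chosen here for illustration:
import gfootball.env as football_env

# Create a scenario with the compact 115-float observation vector.
env = football_env.create_environment(
    env_name='academy_empty_goal_close',
    representation='simple115',
    render=False)

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy as a placeholder
    obs, reward, done, info = env.step(action)
env.close()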
Contents
- Running training
- Playing the game
- Environment API
- Observations & Actions
- Scenarios
- Multi-agent support
- Running in docker
- Saving replays, logs, traces
Training agents to play GRF
Run training
In order to run TF training, install additional dependencies (or alternatively use the provided Docker image):
- Update PIP, so that tensorflow 1.15 is available: python3 -m pip install --upgrade pip setuptools
- TensorFlow: pip3 install tensorflow==1.15.* or pip3 install tensorflow-gpu==1.15.*, depending on whether you want the CPU or GPU version;
- Sonnet: pip3 install dm-sonnet==1.*;
- OpenAI Baselines: pip3 install git+https://github.com/openai/baselines.git@master.
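You can optionally verify that the expected TensorFlow build is picked up:
# Optional check: confirm TF 1.15 and (for tensorflow-gpu) GPU visibility.
import tensorflow as tf
print(tf.__version__)              # expect 1.15.x
print(tf.test.is_gpu_available())  # True only with the tensorflow-gpu build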
Then:
- To run an example PPO experiment on the academy_empty_goal scenario, run python3 -m gfootball.examples.run_ppo2 --level=academy_empty_goal_close
- To run on the academy_pass_and_shoot_with_keeper scenario, run python3 -m gfootball.examples.run_ppo2 --level=academy_pass_and_shoot_with_keeper
In order to train with nice replays being saved, run
python3 -m gfootball.examples.run_ppo2 --dump_full_episodes=True --render=True
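The same dump options are also available when creating the environment directly from Python. A sketch using the corresponding create_environment arguments (the logdir value is just an example):
import gfootball.env as football_env

# Sketch: save a full replay trace for every episode while training.
env = football_env.create_environment(
    env_name='academy_empty_goal_close',
    write_full_episode_dumps=True,   # dump every episode, not only goals
    logdir='/tmp/gfootball_dumps',   # where the dump files are written
    render=True)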
In order to reproduce PPO results from the paper, please refer to:
- gfootball/examples/repro_checkpoint_easy.sh
- gfootball/examples/repro_scoring_easy.sh
Playing the game
Please note that playing the game is implemented through an environment, so human-controlled players use the same interface as the agents. One important implication is that there is a single action per 100 ms reported to the environment, which might cause a lag effect when playing.
Keyboard mappings
The game defines the following keyboard mapping (for the keyboard player type):
- ARROW UP - run to the top.
- ARROW DOWN - run to the bottom.
- ARROW LEFT - run to the left.
- ARROW RIGHT - run to the right.
- S - short pass in the attack mode, pressure in the defense mode.
- A - high pass in the attack mode, sliding in the defense mode.
- D - shot in the attack mode, team pressure in the defense mode.
- W - long pass in the attack mode, goalkeeper pressure in the defense mode.
- Q - switch the active player in the defense mode.
- C - dribble in the attack mode.
- E - sprint.
Play vs built-in AI
Run python3 -m gfootball.play_game --action_set=full. By default, it starts the base scenario and the left player is controlled by the keyboard. Different types of players are supported (gamepad, external bots, agents...). For possible options run python3 -m gfootball.play_game -helpfull.
Play vs pre-trained agent
In particular, one can play against an agent trained with the run_ppo2 script with the following command (notice there is no action_set flag, as the PPO agent uses the default action set):
python3 -m gfootball.play_game --players "keyboard:left_players=1;ppo2_cnn:right_players=1,checkpoint=$YOUR_PATH"
Trained checkpoints
We provide trained PPO checkpoints for the 11_vs_11_easy_stochastic and academy_run_to_score_with_keeper scenarios.
In order to see the checkpoints playing, run
python3 -m gfootball.play_game --players "ppo2_cnn:left_players=1,policy=gfootball_impala_cnn,checkpoint=$CHECKPOINT" --level=$LEVEL
where $CHECKPOINT is the path to the downloaded checkpoint.
In order to train against a checkpoint, you can pass the extra_players argument to the create_environment function, for example extra_players='ppo2_cnn:right_players=1,policy=gfootball_impala_cnn,checkpoint=$CHECKPOINT'; see the sketch below.
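A corresponding Python sketch, assuming extra_players accepts a list of player definition strings (the scenario is illustrative and $CHECKPOINT stands for the downloaded checkpoint path):
import gfootball.env as football_env

# Train the left agent against a fixed PPO checkpoint on the right.
env = football_env.create_environment(
    env_name='11_vs_11_easy_stochastic',
    extra_players=['ppo2_cnn:right_players=1,policy=gfootball_impala_cnn,'
                   'checkpoint=$CHECKPOINT'])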
Frequent Problems & Solutions
Rendering off-screen (on a display-less server / without GPU)
It is possible to do software rendering with MESA. For that, before starting the environment you need to create a virtual display (assuming you use the default resolution):
Xvfb :1 -screen 0 1280x720x24+32 -fbdir /var/tmp &
export DISPLAY=:1
Note that software rendering significantly increases CPU usage and is slow.
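Alternatively, the standard xvfb-run wrapper combines both steps in a single command (a common equivalent; the flags shown are standard xvfb-run options):
xvfb-run -a -s "-screen 0 1280x720x24" python3 -m gfootball.play_game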