
ARAMARL: Adversarial Risk Analysis for Multi-Agent Reinforcement Learning.

This repository contains the code for the experiments in the papers "Reinforcement Learning under Threats" and "Opponent Aware Reinforcement Learning".

If you find this code useful for your research, please cite either of the following:

@inproceedings{gallego2019reinforcement,
  title={Reinforcement Learning under Threats},
  author={Gallego, Victor and Naveiro, Roi and Insua, David Rios},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={33},
  pages={9939--9940},
  year={2019}
}

or

@misc{gallego2019opponent,
  title={Opponent Aware Reinforcement Learning},
  author={Victor Gallego and Roi Naveiro and David Rios Insua and David Gomez-Ullate Oteiza},
  year={2019},
  eprint={1908.08773},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}