# Adversarial Reinforcement Learning Papers

This is a collection of adversarial reinforcement learning papers, covering both the single-agent and the multi-agent setting. Each category is a potential starting point for your research. Some papers are listed more than once because they belong to multiple categories.

Adversarial reinforcement learning is closely related to robust reinforcement learning and to attacks on reinforcement learning, so if you are looking for papers in adversarial reinforcement learning, papers in those two areas are also worth reviewing.

For MARL resources, please refer to Multi Agent Reinforcement Learning papers, MARL Papers with Code and MARL Resources Collection.

I will continually update this repository, and I welcome suggestions (missing important papers, missing categories, invalid links, etc.). This is only a first draft; I'll add more resources over the next few months.

This repository is not for commercial purposes.

My email: chenhao915@mails.ucas.ac.cn

## Overview

### Single-Agent

| Paper | Code | Accepted at | Year |
| --- | --- | --- | --- |
| Robust Adversarial Reinforcement Learning | Unofficial implementations on GitHub | ICML | 2017 |
| Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations | https://github.com/chenhongge/StateAdvDRL | NeurIPS | 2020 |
| Robust Reinforcement Learning as a Stackelberg Game via Adaptively-Regularized Adversarial Training | | | 2022 |
| Risk Averse Robust Adversarial Reinforcement Learning | | ICRA | 2019 |
| Robust Deep Reinforcement Learning with Adversarial Attacks | | | 2017 |
| Robust Reinforcement Learning on State Observations with Learned Optimal Adversary | https://github.com/huanzhang12/ATLA_robust_RL | ICLR | 2021 |
| Exploring the Training Robustness of Distributional Reinforcement Learning against Noisy State Observations | | | 2021 |
| RoMFAC: A Robust Mean-Field Actor-Critic Reinforcement Learning against Adversarial Perturbations on States | | | 2022 |
| Adversary Agnostic Robust Deep Reinforcement Learning | | TNNLS | 2021 |
| Learning to Cope with Adversarial Attacks | | | 2019 |
| Adversarial Attack on Graph Structured Data | | ICML | 2018 |
| Characterizing Attacks on Deep Reinforcement Learning | | AAMAS | 2022 |
| Adversarial Policies: Attacking Deep Reinforcement Learning | https://github.com/HumanCompatibleAI/adversarial-policies | ICLR | 2020 |
| Learning Robust Policy against Disturbance in Transition Dynamics via State-Conservative Policy Optimization | | AAAI | 2022 |
| On the Robustness of Safe Reinforcement Learning under Observational Perturbations | | | 2022 |
| Robust Reinforcement Learning using Adversarial Populations | | | 2020 |
| Robust Deep Reinforcement Learning through Adversarial Loss | https://github.com/tuomaso/radial_rl_v2 | NeurIPS | 2021 |
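
Several of the single-agent papers above (in particular the state-observation line of work) share one threat model: an adversary slightly perturbs the observation the agent sees before it acts. The sketch below is only an illustration of that idea, not code from any listed paper; the linear softmax policy, its random weights, and the epsilon value are all made up for the example.

```python
# Illustrative FGSM-style perturbation of an RL observation (toy example).
import numpy as np

rng = np.random.default_rng(0)
obs_dim, n_actions = 4, 2
W = rng.normal(size=(n_actions, obs_dim))  # hypothetical policy weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm_perturb(obs, eps=0.1):
    """Shift obs by eps in the sign-gradient direction that lowers the
    log-probability of the action the clean policy would take."""
    probs = softmax(W @ obs)
    a = int(np.argmax(probs))
    # For a linear softmax policy, d log pi(a|obs) / d obs = W[a] - probs @ W.
    grad = W[a] - probs @ W
    return obs - eps * np.sign(grad)  # step against the chosen action

obs = rng.normal(size=obs_dim)
adv_obs = fgsm_perturb(obs)
print("clean action:", np.argmax(softmax(W @ obs)),
      "| attacked action:", np.argmax(softmax(W @ adv_obs)))
```

The papers differ mainly in how the perturbation is found (a fixed gradient step as here, a learned adversary, a population of adversaries, etc.) and in how the defender trains against it.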

### Multi-Agent

| Paper | Code | Accepted at | Year |
| --- | --- | --- | --- |
| Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems | | | 2022 |
| Distributed Multi-Agent Deep Reinforcement Learning for Robust Coordination against Noise | | | 2022 |
| On the Robustness of Cooperative Multi-Agent Reinforcement Learning | | IEEE Security and Privacy Workshops | 2020 |
| Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning | | CVPR Workshop | 2022 |
| Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient | | AAAI | 2019 |
| Multi-agent Deep Reinforcement Learning with Extremely Noisy Observations | | NeurIPS Deep Reinforcement Learning Workshop | 2018 |
| Policy Regularization via Noisy Advantage Values for Cooperative Multi-agent Actor-Critic methods | | | 2021 |
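
A recurring evaluation in the multi-agent papers above is to corrupt each agent's observations with noise and measure how much coordination degrades. As a rough illustration only (the wrapper name, the env interface returning per-agent dicts, and the noise scale are assumptions for this sketch, not code from any listed paper):

```python
# Illustrative noise wrapper for a multi-agent environment (toy example).
import numpy as np

class NoisyObsWrapper:
    """Wraps a multi-agent env whose reset()/step() return a dict of
    per-agent numpy observations; adds i.i.d. Gaussian noise to each."""

    def __init__(self, env, sigma=0.5, seed=0):
        self.env, self.sigma = env, sigma
        self.rng = np.random.default_rng(seed)

    def _corrupt(self, obs_dict):
        # Independently corrupt every agent's observation.
        return {agent: obs + self.rng.normal(scale=self.sigma, size=obs.shape)
                for agent, obs in obs_dict.items()}

    def reset(self):
        return self._corrupt(self.env.reset())

    def step(self, actions):
        obs, rewards, dones, infos = self.env.step(actions)
        return self._corrupt(obs), rewards, dones, infos
```

Dropping a wrapper like this around a trained team before evaluation gives a quick, if crude, robustness probe; the papers above study more structured corruptions (adversarial agents, attacked communication channels, worst-case noise) and defenses against them.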

### Adversarial Communication

| Paper | Code | Accepted at | Year |
| --- | --- | --- | --- |
| Certifiably Robust Policy Learning against Adversarial Communication in Multi-agent Systems | | | 2022 |

### Benchmark

| Paper | Code | Accepted at | Year |
| --- | --- | --- | --- |
| Towards Comprehensive Testing on the Robustness of Cooperative Multi-agent Reinforcement Learning | | CVPR Workshop | 2022 |

## Citation

If you find this repository useful, please cite our repo:

```bibtex
@misc{chen2022adversarial,
  author       = {Chen, Hao},
  title        = {Adversarial Reinforcement Learning Papers},
  year         = {2022},
  publisher    = {GitHub},
  journal      = {GitHub Repository},
  howpublished = {\url{https://github.com/TimeBreaker/Adversarial-Reinforcement-Learning-Papers}}
}
```