Code for the paper "Learning to Explain: Towards Human-Aligned Interpretability in Deep Reinforcement Learning via Attention Guidance".
You can view the explanation results in ExplanationVisualization.ipynb.
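A minimal way to open the notebook locally (assuming Jupyter is installed in your environment; not specified in this repository):

```
jupyter notebook ExplanationVisualization.ipynb
```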
You can also run train.py to train the model from scratch, as shown below.
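A minimal sketch of the training invocation (assuming the repository's dependencies are installed and the default hyperparameters defined in train.py are used):

```
python train.py
```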
Additional explanation results are provided as PDFs in the results/ folder.