This repository maintains an ever-evolving list of papers on the interpretability and robustness of AI in EEG systems, with frequent updates to ensure its comprehensiveness and timeliness.
The contributions of this survey:
- This is the first comprehensive survey focusing on the interpretability and robustness of AI in EEG systems.
- We propose a novel taxonomy of interpretability and robustness for EEG systems.
- We summarize and highlight the most representative and emerging interpretable and robust AI works related to EEG systems.
- We discuss open problems and promising directions for future EEG systems.
A curated list of top-tier papers on the interpretability and robustness of AI in EEG systems is available here: Paper List
Summary of Interpretable AI in EEG Systems

| Interpretability Categories | Methods | Coverage | Explanation Type |
| --- | --- | --- | --- |
| Backpropagation-based Methods | LRP | Local/Global | Attribution |
| | DeepLIFT | Local/Global | Attribution |
| | CAM | Local | Attribution |
| | Grad-CAM | Local | Attribution |
| Perturbation-based Methods | LIME | Local | Attribution |
| | SHAP | Local | Attribution |
| Rule-based Methods | Random Forest | Global | Decision Rules |
| | Fuzzy Inference Systems | Global | Fuzzy Rules |
| | Bayesian Systems | Global | Bayesian Rules |
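To make the backpropagation-based family above concrete, here is a minimal sketch of vanilla gradient saliency on a toy two-layer network standing in for an EEG classifier. The network, its random weights, and the "flattened EEG segment" input are all hypothetical illustrations, not any specific method (LRP, DeepLIFT, etc.) from the table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> ReLU(W1 x) -> w2 . h (scalar score).
# Shapes are arbitrary; 16 inputs stand in for a flattened EEG window.
W1 = rng.normal(size=(8, 16))
w2 = rng.normal(size=8)

def forward(x):
    h = np.maximum(W1 @ x, 0.0)
    return w2 @ h

def input_gradient(x):
    """Backpropagate the scalar output to the input (vanilla saliency)."""
    z = W1 @ x
    relu_mask = (z > 0).astype(float)
    # d(out)/dx = W1^T (w2 * relu'(z))
    return W1.T @ (w2 * relu_mask)

x = rng.normal(size=16)                 # stand-in for one EEG segment
saliency = np.abs(input_gradient(x))    # per-input attribution magnitudes

# Sanity check against central finite differences.
eps = 1e-6
g_fd = np.array([
    (forward(x + eps * np.eye(16)[i]) - forward(x - eps * np.eye(16)[i])) / (2 * eps)
    for i in range(16)
])
assert np.allclose(g_fd, input_gradient(x), atol=1e-4)
```

In practice the same idea is applied through a framework's autograd (e.g. gradients of a class logit with respect to the EEG input), with LRP, DeepLIFT, and Grad-CAM refining how the relevance is propagated.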
Comparison of Different Interpretability Methods in EEG Systems

| | Backpropagation-based Methods | Perturbation-based Methods | Rule-based Methods |
| --- | --- | --- | --- |
| Mechanism | Analyze feature contributions by backpropagating gradients from the prediction to the input. | Explain the original model's behavior with local surrogate models. | |
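The perturbation-based mechanism (explaining a black-box model with a local surrogate) can be sketched in a few lines. This is a LIME-style illustration under stated assumptions: `black_box` is a made-up stand-in for a trained EEG classifier's score, the 6 inputs are hypothetical band-power features, and the kernel width is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    """Stand-in for a trained EEG classifier's score (nonlinear, opaque)."""
    return np.tanh(2.0 * x[0] - x[3]) + 0.1 * x[1] ** 2

x0 = rng.normal(size=6)                    # instance to explain
X = x0 + 0.1 * rng.normal(size=(500, 6))   # perturbed samples around x0
y = np.array([black_box(x) for x in X])    # black-box predictions

# Fit a proximity-weighted linear surrogate in the neighborhood of x0.
d = np.linalg.norm(X - x0, axis=1)
w = np.exp(-((d / 0.2) ** 2))              # closer samples weigh more
A = np.hstack([np.ones((len(X), 1)), X - x0])
# Multiplying rows by w yields weighted least squares (weights w^2).
coef, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
attributions = coef[1:]                    # local feature importances
```

Features the black box ignores (here the last one) receive near-zero attribution, while features driving the local behavior receive large coefficients, which is the core of how LIME explains individual EEG predictions.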