
XAI for Course Design

This repository is the official implementation of the LAK 2023 paper "Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design" by Vinitra Swamy, Skye (Sijia) Du, Mirko Marras, and Tanja Käser.

Experiments are located in scripts/, corresponding directly to the experimental methodology described in the paper. User study materials used to conduct the 26 semi-structured expert interviews with STEM professors are located in study/.

Project overview

Our goal is to validate explainers for student success prediction across controlled differences in online and blended learning course design. Our analyses cover five course pairs, each differing in one educationally relevant aspect, and two popular instance-based explainable AI methods (LIME and SHAP). We quantitatively compare the distances between explanations across courses and methods. We then validate the LIME and SHAP explanations through 26 semi-structured interviews with university-level educators, asking which features they believe contribute most to student success, which explanations they trust most, and how they would translate these insights into actionable course design decisions. Our results show that, quantitatively, the explainers significantly disagree with each other about what is important, and, qualitatively, the experts themselves do not agree on which explanations are most trustworthy.
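As a rough illustration of the quantitative comparison, the sketch below computes LIME and SHAP attributions for a single (synthetic) student and measures their agreement with a Spearman rank correlation over absolute feature importances. This is a minimal sketch, not the repository's code: the synthetic data, the random-forest model, and all names are placeholders standing in for the course-specific models and distance measures used in scripts/.

```python
# Minimal, illustrative sketch (not the repository's code): compare LIME and
# SHAP attributions for one instance. Assumes lime, shap, scikit-learn, and
# scipy are installed; data and model are placeholders.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Placeholder data: rows are students, columns are behavioral features.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
model = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]

# LIME: fit a local surrogate model around the instance.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="classification")
lime_exp = lime_explainer.explain_instance(instance, model.predict_proba, num_features=X.shape[1])
lime_weights = np.zeros(X.shape[1])
for idx, weight in lime_exp.as_map()[1]:
    lime_weights[idx] = weight

# SHAP: KernelExplainer on the positive-class probability, with a small background sample.
rng = np.random.default_rng(0)
background = X[rng.choice(len(X), size=50, replace=False)]
shap_explainer = shap.KernelExplainer(lambda data: model.predict_proba(data)[:, 1], background)
shap_values = shap_explainer.shap_values(instance.reshape(1, -1), nsamples=200)[0]

# Agreement between the two explanations: rank correlation of absolute importances.
rho, _ = spearmanr(np.abs(lime_weights), np.abs(shap_values))
print(f"Spearman rank correlation between |LIME| and |SHAP|: {rho:.2f}")
```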

This project started in the ML4ED laboratory at EPFL in February 2022 as a continuation of our EDM 2022 paper Evaluating the Explainers, and was nominated for Best Full Paper at LAK 2023.

Contributing

This code is provided for educational purposes and aims to facilitate reproduction of our results and further research in this direction. We have done our best to document, refactor, and test the code before publication.

If you find any bugs or would like to contribute new models, training protocols, etc., please let us know. Feel free to file issues and open pull requests on the repo, and we will address them as soon as we can.

Citations

If you find this code useful in your work, please cite our paper:

Swamy, V., Du, S., Marras, M., Käser, T. (2023). 
Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design. 
In: Proceedings of the 13th International Learning Analytics and Knowledge Conference (LAK 2023).

License

This code is free software: you can redistribute it and/or modify it under the terms of the MIT License.

This software is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose. See the MIT License for details.