
AIMEN

Use of What-if Scenarios to Help Explain Artificial Intelligence Models for Neonatal Health

Abdullah Mamun, Lawrence D. Devoe, Mark I. Evans, David W. Britt, Judith Klein-Seetharaman, Hassan Ghasemzadeh

AIMEN is an explainable machine learning system for predicting the risk of abnormal labor outcomes.

Read the full preprint here: https://arxiv.org/abs/2410.09635

BibTeX for citing this work:

@misc{mamun2024usewhatifscenarioshelp,
     title={Use of What-if Scenarios to Help Explain Artificial Intelligence Models for Neonatal Health},
     author={Abdullah Mamun and Lawrence D. Devoe and Mark I. Evans and David W. Britt and Judith Klein-Seetharaman and Hassan Ghasemzadeh},
     year={2024},
     eprint={2410.09635},
     archivePrefix={arXiv},
     primaryClass={cs.LG},
     url={https://arxiv.org/abs/2410.09635},
}

Method

The AIMEN system combines an ensemble of Multilayer Perceptrons (MLPs) with a Conditional Tabular GAN (CTGAN) for data augmentation to make better predictions. AIMEN also provides counterfactual explanations through the Nearest Instance Counterfactual Explanation (NICE) method.
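The pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses toy data, a simple Gaussian-jitter oversampler standing in for CTGAN (in AIMEN, CTGAN learns the joint distribution of the tabular features and samples realistic synthetic rows), a small scikit-learn MLP ensemble, and a simplified nearest-instance counterfactual in the spirit of NICE.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy tabular features standing in for labor-outcome predictors (hypothetical).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Placeholder augmenter: in AIMEN this role is played by CTGAN, which samples
# synthetic rows from a learned model of the tabular data distribution.
def augment(X, y, n=100):
    idx = rng.integers(0, len(X), size=n)
    X_syn = X[idx] + rng.normal(scale=0.1, size=(n, X.shape[1]))
    return np.vstack([X, X_syn]), np.concatenate([y, y[idx]])

X_aug, y_aug = augment(X, y)

# Ensemble of MLPs: several networks trained with different seeds; their
# predicted probabilities are averaged at inference time.
ensemble = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=s).fit(X_aug, y_aug)
    for s in range(3)
]

def predict(X_new):
    probs = np.mean([m.predict_proba(X_new) for m in ensemble], axis=0)
    return probs.argmax(axis=1)

# NICE-style counterfactual (simplified): return the nearest training instance
# whose predicted class differs from the query's predicted class.
def nearest_counterfactual(x):
    preds = predict(X)
    target = 1 - predict(x[None, :])[0]
    candidates = X[preds == target]
    return candidates[np.linalg.norm(candidates - x, axis=1).argmin()]
```

A returned counterfactual is an actual observed instance, which is what makes nearest-instance explanations plausible by construction; the real NICE method additionally searches for sparse feature changes between the query and that instance.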

Our paper investigates how different restrictions and flexibilities applied to the data augmentation method affect both prediction performance and the distribution gap between augmented and real data.
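One simple way to quantify such a distribution gap is a per-feature two-sample Kolmogorov-Smirnov statistic between real and synthetic columns. This is an illustrative proxy metric with toy data, not necessarily the measure used in the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Toy stand-ins for real and synthetic (augmented) feature matrices.
real = rng.normal(size=(300, 4))
synthetic = rng.normal(loc=0.2, size=(300, 4))  # slightly shifted distribution

# Per-feature KS statistic in [0, 1]; larger means a bigger gap between the
# real and synthetic marginal distributions of that feature.
gaps = [ks_2samp(real[:, j], synthetic[:, j]).statistic for j in range(real.shape[1])]
mean_gap = float(np.mean(gaps))
```

Tightening the restrictions on the augmenter should drive such a gap metric down, at the possible cost of less diverse synthetic samples.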

Dataset

The original dataset is not publicly available at this time. We are considering publicly releasing synthetic data in a similar format in the near future.

For questions or concerns, please contact Abdullah Mamun (a.mamun@asu.edu).