Explainability of Model-Based Feature Importance

Variable importance is central to scientific studies across domains including the social sciences, causal inference, and healthcare. However, the explainability of variable importance is often lacking. This is problematic: if multiple well-performing predictive models exist, a specific variable may be important to some of them and not to others. In that case, a single well-performing model cannot tell us whether a variable is always important in predicting the outcome. To circumvent this issue, the feature importance obtained from the trained model can be explained using a Bayesian linear model.
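One way to realize this idea is a minimal sketch (an assumption, not necessarily the repository's exact method): fit a flexible model, then fit a Bayesian linear model as a surrogate to the flexible model's predictions. The surrogate's posterior over coefficients then expresses each feature's importance together with its uncertainty. The sketch below uses scikit-learn's `RandomForestRegressor` and `BayesianRidge` on synthetic data; all dataset sizes and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.preprocessing import StandardScaler

# Synthetic regression data; standardize so coefficients are comparable.
X, y = make_regression(n_samples=300, n_features=5, n_informative=3,
                       random_state=0)
X = StandardScaler().fit_transform(X)

# Flexible "black-box" model whose importances we want to explain.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: Bayesian linear model fit to the forest's predictions.
surrogate = BayesianRidge().fit(X, forest.predict(X))

# Posterior mean coefficients act as importance scores; sigma_ is the
# posterior covariance, so its diagonal gives per-feature uncertainty.
coef_mean = surrogate.coef_
coef_std = np.sqrt(np.diag(surrogate.sigma_))
for i, (m, s) in enumerate(zip(coef_mean, coef_std)):
    print(f"feature {i}: importance {m:+.3f} +/- {s:.3f}")
```

A feature whose posterior credible interval excludes zero is plausibly important to the surrogate; wide intervals flag importances that a single point estimate would overstate.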
