Interpretable-Machine-Learning

Trying to make sense of black-box machine learning models such as ensemble models

This repository contains the data, code, and a presentation highlighting different ways to understand and interpret ensemble models. The resources cover the topics below; minimal R sketches illustrating the methods follow the list:

  1. Concepts behind different interpretable machine learning methods
  2. Pre-processing the data for ensemble and deep learning models
  3. Understanding variable importance with permutation-based methods
  4. Partial dependence plots (PDP)
  5. Individual conditional expectation (ICE)
  6. Global surrogate models
  7. Local interpretable model-agnostic explanations (LIME)
  8. Game-theoretic method: Shapley values
  9. The DALEX package
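
The R sketches below are illustrative only: the repository's own data and models may differ, so the Boston housing data from MASS and a randomForest model are used purely as stand-ins. For pre-processing (item 2), this sketch assumes the caret package for one-hot encoding and scaling; tree ensembles can use raw factors, while deep learning models typically need encoded, scaled numeric inputs:

```r
library(caret)

# Toy data with a categorical column (hypothetical; stands in for the repo's data)
df <- data.frame(
  price = c(200, 340, 150, 410),
  rooms = c(3, 4, 2, 5),
  city  = factor(c("A", "B", "A", "C"))
)

# One-hot encode the factor columns for models that require numeric inputs
dummies <- dummyVars(price ~ ., data = df)
x <- as.data.frame(predict(dummies, newdata = df))

# Centre and scale the features -- important for neural networks,
# largely unnecessary for tree-based ensembles
prep     <- preProcess(x, method = c("center", "scale"))
x_scaled <- predict(prep, x)
```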
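
For permutation-based variable importance (item 3), one readily available implementation is randomForest's out-of-bag permutation importance; this sketch assumes that route (the repository may instead use a model-agnostic implementation such as iml or DALEX):

```r
library(randomForest)
library(MASS)   # Boston housing data, used here as a stand-in

set.seed(42)
rf <- randomForest(medv ~ ., data = Boston, ntree = 300, importance = TRUE)

# type = 1 -> permutation importance (%IncMSE): how much the out-of-bag
# error increases when a predictor's values are randomly shuffled
importance(rf, type = 1)
varImpPlot(rf, type = 1)
```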
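
A partial dependence plot (item 4) shows the average prediction as one feature is varied while the others are marginalised out. A minimal sketch, assuming the pdp package and the same stand-in data and model:

```r
library(randomForest)
library(pdp)
library(MASS)

set.seed(42)
rf <- randomForest(medv ~ ., data = Boston, ntree = 300)

# Average predicted house value as a function of lstat,
# averaging over the other predictors
pd <- partial(rf, pred.var = "lstat", train = Boston)
plotPartial(pd)
```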
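
Individual conditional expectation curves (item 5) are the per-observation counterpart of a PDP: one curve per row instead of a single average, which can reveal heterogeneity and interactions that the average hides. A sketch, again assuming pdp:

```r
library(randomForest)
library(pdp)
library(MASS)

set.seed(42)
rf <- randomForest(medv ~ ., data = Boston, ntree = 300)

# ice = TRUE keeps one prediction curve per observation
ice <- partial(rf, pred.var = "lstat", train = Boston, ice = TRUE)
plotPartial(ice)
```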
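
A global surrogate model (item 6) is an interpretable model trained to mimic the black box's predictions rather than the true target. A sketch assuming rpart and rpart.plot:

```r
library(randomForest)
library(rpart)
library(rpart.plot)
library(MASS)

set.seed(42)
rf <- randomForest(medv ~ ., data = Boston, ntree = 300)

# Train a shallow decision tree on the forest's predictions
surrogate_df         <- Boston[, setdiff(names(Boston), "medv")]
surrogate_df$rf_pred <- predict(rf, Boston)
tree <- rpart(rf_pred ~ ., data = surrogate_df, maxdepth = 3)
rpart.plot(tree)

# R^2 between surrogate and black-box predictions measures how
# faithfully the simple tree approximates the forest
cor(predict(tree, surrogate_df), surrogate_df$rf_pred)^2
```

The fidelity check at the end matters: a surrogate explanation is only as trustworthy as how well the simple model tracks the black box.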
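
LIME (item 7) perturbs an observation, collects black-box predictions for the perturbed samples, and fits a sparse local linear model to them. A sketch assuming the lime package, with the forest fitted through caret so that lime recognises the model type out of the box:

```r
library(caret)
library(lime)
library(MASS)

set.seed(42)
x <- Boston[, setdiff(names(Boston), "medv")]
y <- Boston$medv

# Fit the black box via caret (lime supports caret `train` objects directly)
rf <- train(x, y, method = "rf",
            trControl = trainControl(method = "cv", number = 3))

# Build the explainer from the training data, then explain two observations
explainer   <- lime(x, rf)
explanation <- explain(x[1:2, ], explainer, n_features = 4)
plot_features(explanation)
```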
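
Shapley values (item 8) treat a single prediction as the payout of a cooperative game and distribute it fairly among the feature values. One possible implementation is the iml package; this sketch assumes it:

```r
library(randomForest)
library(iml)
library(MASS)

set.seed(42)
x  <- Boston[, setdiff(names(Boston), "medv")]
rf <- randomForest(x = x, y = Boston$medv, ntree = 300)

# Wrap model and data so iml can query predictions in a model-agnostic way
predictor <- Predictor$new(rf, data = x, y = Boston$medv)

# Approximate Shapley values for the first observation
shap <- Shapley$new(predictor, x.interest = x[1, ])
plot(shap)
```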
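
DALEX (item 9) wraps a fitted model in an explainer object and exposes many of the techniques above behind one interface. A minimal sketch with the same stand-in model:

```r
library(randomForest)
library(DALEX)
library(MASS)

set.seed(42)
x  <- Boston[, setdiff(names(Boston), "medv")]
rf <- randomForest(x = x, y = Boston$medv, ntree = 300)

# An explainer bundles the model, the data and the predict function
explainer <- explain(rf, data = x, y = Boston$medv, label = "random forest")

plot(model_parts(explainer))                              # permutation importance
plot(model_profile(explainer, variables = "lstat"))       # partial-dependence profile
plot(predict_parts(explainer, new_observation = x[1, ]))  # per-prediction break-down
```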