
Dockerized machine learning interpretation app (currently deployed on Heroku and slow to load; a hosting switch is pending)


ML interpreter

Blackbox ML classifiers visually explained

About

ML interpreter demonstrates auto-interpretability of machine learning models in a codeless environment.

Currently it focuses on high-performance blackbox tree ensemble models (random forest, XGBoost, and LightGBM) for binary/multi-class classification on tabular data, though the framework can be extended to other models, other prediction types (e.g. regression), and other data types such as text or images.

It provides interpretation at both global and local levels:

  • At the global level, it shows which features matter most to the model overall (feature importance)
  • At the local level, one can view how each feature contributed to an individual prediction
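The global/local distinction above can be sketched with plain scikit-learn. This is illustrative only, not the app's actual code (the app relies on dedicated explainers such as SHAP); the mean-substitution "local effect" below is a simple proxy for a proper local attribution.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Fit a blackbox tree ensemble on a toy tabular dataset.
data = load_iris()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global level: impurity-based importance of each feature across the dataset.
global_importance = dict(zip(data.feature_names, model.feature_importances_))

# Local level: how much each feature moves the prediction for ONE datapoint.
# Crude proxy: replace a feature with its column mean and observe the change
# in predicted probabilities (SHAP values are the principled version of this).
row = X[0:1]
base = model.predict_proba(row)[0]
local_effect = {}
for j, name in enumerate(data.feature_names):
    perturbed = row.copy()
    perturbed[0, j] = X[:, j].mean()
    local_effect[name] = base - model.predict_proba(perturbed)[0]
```

Global importances answer "what does the model care about in general"; the local effects answer "why this prediction for this row".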

How it works

Key features

  • use demo data or upload a small CSV (a demo CSV is included in the GitHub repo)
  • choose among algorithms
  • data preview and classification report
  • global/local interpretations
  • inspect misclassified data

To view how an individual classification decision is made, one can toggle which datapoint to inspect.
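Selecting a datapoint to inspect, including a misclassified one, boils down to comparing predictions against labels and picking a row index. A minimal sketch (illustrative, not the app's code; dataset and variable names are assumptions):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# "Inspect misclassified data": rows where prediction disagrees with label.
pred = model.predict(X_te)
misclassified = np.flatnonzero(pred != y_te)

# "Toggle which datapoint to view": pick an index (a misclassified one if any)
# and read off the model's confidence for that single row.
i = int(misclassified[0]) if len(misclassified) else 0
proba = model.predict_proba(X_te[i : i + 1])[0]
```

In the app the index choice is a UI widget; here it is just an integer, but the underlying lookup is the same.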


Note: If preprocessing is needed, it is recommended to clean the data before uploading, since the app does not perform automatic data cleaning.
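A typical pre-upload cleaning pass with pandas might look like the following. This is a hedged sketch: the column names are made up, and the assumption that the target sits in the last column is illustrative, not a documented requirement of the app.

```python
import pandas as pd

# Toy raw data with a missing value and a categorical column.
df = pd.DataFrame({
    "age": [25, None, 40],
    "city": ["NY", "SF", "NY"],
    "label": [0, 1, 0],
})

# Impute missing numeric values (here: column median).
df["age"] = df["age"].fillna(df["age"].median())

# One-hot encode categoricals so every feature is numeric.
df = pd.get_dummies(df, columns=["city"])

# Keep the target as the last column (assumed layout), then export.
df = df[[c for c in df.columns if c != "label"] + ["label"]]
df.to_csv("clean.csv", index=False)
```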

How to run this demo

  • Run locally
git clone git@github.com:yanhann10/ml_interpret.git
cd ml_interpret
make install
streamlit run app.py
  • Pull from Docker
docker pull yanhann10/ml-explained
docker run -p 8501:8501 yanhann10/ml-explained

Other resources

Tutorials:
  • ML Explainability by Kaggle
  • Interpretable ML book

Packages:
  • SHAP
  • ELI5
  • PDPplot

Feedback welcome!