tensorflow/decision-forests

is there any method for local interpretation?

jasonmsetiadi opened this issue · 1 comment

I am currently using SHAP explainers, such as TreeExplainer, to obtain local interpretations of gradient boosted models (XGBoost, LightGBM, etc.). I realize that tfdf offers methods for global interpretation, such as permutation variable importance, but what about local interpretation? I am unable to obtain local interpretations because tfdf models are not compatible with SHAP's TreeExplainer.
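For context, this is a minimal sketch of the workflow I mean, using XGBoost (the data and model here are purely illustrative):

```python
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Illustrative data and model.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

# TreeExplainer produces per-feature attributions for each individual
# prediction, i.e. local interpretation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution vector per row
```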

rstz commented

Unfortunately, tfdf does not currently support local interpretation methods like SHAP out of the box. If you or anyone else wants to push forward an integration of TF-DF or YDF with SHAP's TreeExplainer, we'd be happy to provide assistance with design and implementation.
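In the meantime, one possible workaround is SHAP's model-agnostic KernelExplainer, which only needs a prediction function and can therefore wrap a trained TF-DF model. Note this is just a sketch, not an official integration, and KernelExplainer is far slower than TreeExplainer, so explain only a few rows at a time:

```python
import numpy as np
import pandas as pd
import shap
import tensorflow_decision_forests as tfdf

# Illustrative data and model.
df = pd.DataFrame(np.random.rand(200, 4), columns=["f0", "f1", "f2", "f3"])
df["label"] = (df["f0"] + df["f1"] > 1.0).astype(int)
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="label")
model = tfdf.keras.GradientBoostedTreesModel()
model.fit(train_ds)

features = [c for c in df.columns if c != "label"]

def predict_fn(x: np.ndarray) -> np.ndarray:
    # Rebuild a DataFrame so the model sees the named features it was
    # trained on, then convert to the dataset format tfdf expects.
    batch = pd.DataFrame(x, columns=features)
    ds = tfdf.keras.pd_dataframe_to_tf_dataset(batch)
    return model.predict(ds).flatten()

# A small background sample keeps KernelExplainer's runtime manageable.
background = shap.sample(df[features].to_numpy(), 25)
explainer = shap.KernelExplainer(predict_fn, background)
shap_values = explainer.shap_values(df[features].to_numpy()[:3])  # explain 3 rows
```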