pytorch/captum

Integrated gradients on a function of models

MarcPoulin1 opened this issue · 0 comments

I have a prediction that is the result of a function of several models, such that pred = model1 + b * model2 + b^2 * model3 + b^3 * model4.
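
For concreteness, here is a minimal sketch of that prediction function; the model names, the inputs x1..x4, and the scalar b are placeholders for my actual setup:

```python
# Minimal sketch of the composite prediction described above (placeholder names).
# Each model receives its own preprocessed version of the input x.
def composite_pred(x1, x2, x3, x4, model1, model2, model3, model4, b):
    # pred = model1 + b * model2 + b^2 * model3 + b^3 * model4
    return (model1(x1)
            + b * model2(x2)
            + b ** 2 * model3(x3)
            + b ** 3 * model4(x4))
```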

Is it correct to compute the attribution of each model separately and then combine them into a total attribution using the same function (i.e., with the same coefficients 1, b, b^2, b^3)?

Each model has its own baseline, and the input x is not exactly the same for each model after its preprocessing steps. A sketch of what I mean is below.
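
Here is a minimal sketch of the approach I am asking about, using Captum's IntegratedGradients; the model, input, and baseline names are placeholders, and b is the scalar coefficient from the prediction above:

```python
from captum.attr import IntegratedGradients

def combined_attributions(models, inputs, baselines, b, target=None):
    # models    = [model1, model2, model3, model4]
    # inputs    = [x1, x2, x3, x4]       (each model's own preprocessed input)
    # baselines = [base1, base2, base3, base4]  (each model's own baseline)
    coeffs = [1.0, b, b ** 2, b ** 3]
    weighted_attrs = []
    for model, x, base, c in zip(models, inputs, baselines, coeffs):
        ig = IntegratedGradients(model)
        # Attribution of this model with its own baseline and input,
        # scaled by the same coefficient as in the prediction formula.
        weighted_attrs.append(c * ig.attribute(x, baselines=base, target=target))
    # Total attribution = sum of the weighted per-model attributions
    # (assumes the attributions have matching shapes so they can be added).
    return sum(weighted_attrs)
```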

Thank you!