hila-chefer/Transformer-Explainability

Couldn't find the code corresponding to "Transformer Interpretability Beyond Attention Visualization"

Yung-zi opened this issue · 2 comments

Hi,

Thanks for sharing the code.

I have been testing your implementation, but I couldn't find the code that computes the AUC curve.

Hi @Yung-zi, thanks for your interest in our work!
The perturbation tests report the accuracy at each step (i.e., after removing 0%, 10%, ..., 90% of the tokens).
To calculate the AUC from these results, simply apply np.trapz to them.
I hope this helps.
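
For concreteness, a minimal sketch of that calculation is below. The accuracy values are hypothetical placeholders; substitute the per-step accuracies produced by your own perturbation run. The removal fractions follow the 0%-90% steps described above:

```python
import numpy as np

# Fraction of tokens removed at each perturbation step: 0%, 10%, ..., 90%.
fractions = np.arange(0.0, 1.0, 0.1)

# Hypothetical accuracies at each step (replace with your actual results).
accuracies = np.array([0.85, 0.80, 0.74, 0.66, 0.57,
                       0.47, 0.36, 0.25, 0.15, 0.08])

# Area under the accuracy-vs-perturbation curve via the trapezoidal rule.
auc = np.trapz(accuracies, x=fractions)
print(f"AUC: {auc:.4f}")
```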

Closing due to inactivity