Couldn't find corresponding code for "Transformer Interpretability Beyond Attention Visualization"
Yung-zi opened this issue · 2 comments
Yung-zi commented
Hi,
Thanks for sharing the code.
I have been testing your implementation; however, I couldn't find the code that computes the AUC curve.
hila-chefer commented
Hi @Yung-zi, thanks for your interest in our work!
The perturbation tests will give you the accuracy at each step (i.e., removing 0%, 10%, ..., 90% of the tokens).
To calculate the AUC from these results, simply apply np.trapz to them.
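A minimal sketch of that calculation, using placeholder accuracy values (not actual results from the paper) for the ten perturbation steps:

```python
import numpy as np

# Fractions of tokens removed at each perturbation step: 0%, 10%, ..., 90%.
fractions = np.arange(0.0, 1.0, 0.1)

# Hypothetical per-step accuracies (placeholders for your own results).
accuracies = np.array([0.81, 0.79, 0.76, 0.71, 0.64,
                       0.55, 0.44, 0.32, 0.21, 0.12])

# Area under the perturbation curve via the trapezoidal rule.
auc = np.trapz(accuracies, x=fractions)
print(f"AUC: {auc:.4f}")
```

A lower AUC under positive perturbation (removing the most relevant tokens first) indicates a better explanation, since accuracy should drop quickly.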
I hope this helps.
hila-chefer commented
Closing due to inactivity