pytorch/captum

Using IntegratedGradients to explain LSTM model

qishubo opened this issue · 3 comments

Hi,
The TIME_STEPS of my LSTM model is 10. The code for the explanation section is:
import numpy as np
import matplotlib.pyplot as plt

input.requires_grad_()
ig = IntegratedGradients(model)
attr, delta = ig.attribute(input, target=0, return_convergence_delta=True)
attr = attr.detach().numpy()

def visualize_importances(feature_names, importances, title="Average Feature Importances", plot=True, axis_title="Features"):
    print(title)
    for i in range(len(feature_names)):
        print(feature_names[i], ": ", '%.3f' % (importances[i]))
    x_pos = np.arange(len(feature_names))
    if plot:
        plt.figure(figsize=(12, 6))
        plt.bar(x_pos, importances, align='center')
        plt.xticks(x_pos, feature_names, wrap=True)
        plt.xlabel(axis_title)
        plt.title(title)

visualize_importances(feature_names, np.mean(attr, axis=0))

One question: print(np.mean(attr, axis=0).shape) gives (10, 7), where 10 is TIME_STEPS and 7 is the number of features. print(np.mean(attr, axis=0)) prints:
[[-6.14843489e-03  1.23128297e-02  0.00000000e+00  0.00000000e+00 -2.44566928e-03  9.71663677e-03  0.00000000e+00]
 [ 1.01105891e-02 -1.33561019e-02  0.00000000e+00  0.00000000e+00 -1.08681414e-02 -8.36898667e-03  0.00000000e+00]
 [-3.01603199e-03  2.83478058e-05  0.00000000e+00  0.00000000e+00  5.63220110e-03  3.27828258e-03  0.00000000e+00]
 [ 6.62596261e-04 -4.36685487e-03  0.00000000e+00  0.00000000e+00 -5.13588440e-04  2.89923891e-06  0.00000000e+00]
 [-1.46707092e-03  1.61141641e-03  0.00000000e+00  0.00000000e+00 -4.48725778e-03  4.45105709e-03  0.00000000e+00]
 [-4.85951945e-04  2.61684348e-03  0.00000000e+00  0.00000000e+00 -8.63782548e-03  6.60564365e-03  0.00000000e+00]
 [ 1.44520150e-03 -4.12249724e-03  0.00000000e+00  0.00000000e+00 -3.75068304e-03  6.29765388e-03  0.00000000e+00]
 [ 2.51529449e-03 -1.04613991e-02  0.00000000e+00  0.00000000e+00 -5.62792612e-03  8.99994289e-03  0.00000000e+00]
 [-2.50925849e-02  3.11531167e-02  0.00000000e+00  0.00000000e+00  1.79945677e-02  5.28183730e-02  0.00000000e+00]
 [ 5.15326855e-02 -5.28545470e-02  0.00000000e+00  0.00000000e+00 -7.92302065e-02 -1.16893978e-02  0.00000000e+00]]
What I expect is a single importance value per feature, but here each feature has 10 values (one per time step). How should I handle this?

Hi @qishubo, the correct attr shape for each target is meant to be [batch_size, time_steps, num_features], because the importance can change across time steps. However, if you want only one importance value per feature, you can aggregate along the time dimension.
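
For example, one way this could look in numpy (a minimal sketch; the batch size of 32 is a placeholder, and summing over time is just one possible aggregation, a mean or a sum of absolute values would also work):

import numpy as np

# Placeholder attributions for illustration: batch of 32, 10 time steps, 7 features.
attr = np.random.randn(32, 10, 7)

# Aggregate over the time axis so each sample has one value per feature.
per_sample = attr.sum(axis=1)             # shape (32, 7)

# Average over the batch for a single importance value per feature.
avg_importance = per_sample.mean(axis=0)  # shape (7,)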

Thank you for your answer. There is still something I don't understand. My training set has 11771 samples, time_steps=5, and num_features=6, so trainX.shape is (11771, 5, 6). I'll explain with the following code:
dl = DeepLift(model)
attributions, delta = dl.attribute(trainX, baseline, target=0, return_convergence_delta=True)
attributions = attributions.detach().numpy()
attributions.shape is (11771, 5, 6). To understand these attributions, we can first average them across all the inputs and print the average attribution for each feature, so I use np.mean(attributions, axis=0), which yields:
[[ 3.0981158e-03 -1.3887375e-03 -7.3225674e-04  1.9297500e-04 -1.9012614e-03 -3.8923218e-04]
 [-1.0495144e-01  2.9637243e-03  3.0928187e-04  8.9901492e-02 -3.5629296e-03  8.4021456e-05]
 [ 2.9867730e-01 -1.6294135e-02  8.4656780e-04 -2.4087039e-01  2.2399269e-02  4.9659691e-04]
 [-7.5518560e-01  3.1636413e-02 -2.1290171e-03  6.3413280e-01 -6.5914929e-02 -1.1652018e-03]
 [ 6.0248595e-01 -3.8086083e-02  1.8795867e-03 -4.8950049e-01  6.4727701e-02  9.7519101e-04]]
np.mean(attributions, axis=0).shape is (5, 6). Each feature has a contribution value at each time step, with positive numbers (positive contributions) and negative numbers (negative contributions). As you said, if I want only one importance value I can aggregate along the time steps, but summing directly would mix positive and negative numbers. Is the result still correct? Would it be better to take the absolute value first and then sum?
That is, the calculation would be as follows (a code sketch of it appears after this list):

  1. Find the absolute value
    [[3.0981158e-03 1.3887375e-03 7.3225674e-04 1.9297500e-04 1.9012614e-03 3.8923218e-04]
    [1.0495144e-01 2.9637243e-03 3.0928187e-04 8.9901492e-02 3.5629296e-03 8.4021456e-05]
    [2.9867730e-01 1.6294135e-02 8.4656780e-04 2.4087039e-01 2.2399269e-02 4.9659691e-04]
    [7.5518560e-01 3.1636413e-02 2.1290171e-03 6.3413280e-01 6.5914929e-02 1.1652018e-03]
    [6.0248595e-01 3.8086083e-02 1.8795867e-03 4.8950049e-01 6.4727701e-02 9.7519101e-04]]
  2. Sum along the time steps: [1.7643983 0.09036909 0.00589671 1.4545982 0.1585061 0.00311024]. This is the end result.
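
In numpy, the computation described above would look like this (just a sketch; it assumes attributions is the (11771, 5, 6) array from the DeepLift call above):

import numpy as np

# Mean attribution over all samples: shape (time_steps, num_features) = (5, 6)
avg_attr = np.mean(attributions, axis=0)

# Step 1: absolute value; step 2: sum over the time-step axis -> shape (6,)
importance = np.abs(avg_attr).sum(axis=0)
print(importance)  # one importance value per feature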

Hi @qishubo,
Some attribution methods return signed values, which indicate a positive or negative correlation/relevance between the input and the output. In your case it is more accurate to use sum(abs(attr)), because I think you only want the magnitude of importance. Hence, feel free to take the absolute value.
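
One possible reading of sum(abs(attr)) is to apply the absolute value to the raw per-sample attributions before averaging over the batch, so that positive and negative contributions from different samples do not cancel each other out. A sketch of that variant (my reading, not confirmed above; the (11771, 5, 6) shape is taken from the discussion):

import numpy as np

# attributions: (num_samples, time_steps, num_features) = (11771, 5, 6)
# Absolute value first, then average over samples and sum over time steps.
importance = np.abs(attributions).mean(axis=0).sum(axis=0)  # shape (num_features,)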