philipperemy/keras-tcn

How the receptive field affects the output result

AlbertoSabater opened this issue · 4 comments

Hi! I am running some tests to check how the receptive field affects the output result.
Maybe I am misunderstanding the TCN architecture, but theoretically, the beginning of a sequence longer than the receptive field should be ignored by the model, because its memory does not reach that far back.
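Concretely, my understanding is that each dilated causal convolution with kernel size k and dilation d extends the memory by (k - 1) * d, so the last output should only see the last R timesteps, with

R = 1 + sum over all conv layers of (k - 1) * d

and anything older than R steps should leave the prediction unchanged.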

This is the test I have run. It shows that extending the sequence at its beginning (beyond the receptive field) does affect the final output.

import numpy as np
from tensorflow.keras import Input, Model
from tcn import TCN

num_feats = 300
i = Input(batch_shape=(None, None, num_feats))
tcn_layer = TCN(kernel_size=2, return_sequences=False, padding='causal')
o = tcn_layer(i)  # The TCN layer is here.
m = Model(inputs=[i], outputs=[o])
m.compile(optimizer='adam', loss='mse')

print('receptive_field: ', tcn_layer.receptive_field)
sequence = np.random.rand(1, 128, num_feats)
sequence_long = np.concatenate([np.random.rand(1, 128, num_feats), sequence], axis=1)
print('sequence.shape: {} | sequence_long.shape: {}'.format(sequence.shape, sequence_long.shape))
pred = m(sequence)
pred_long = m(sequence_long)
print(np.array_equal(pred, pred_long))

This is the output:

receptive_field:  64
sequence.shape: (1, 128, 300) | sequence_long.shape: (1, 256, 300)
False

Does anyone have any idea why samples outside the receptive field still affect the final result? Is this the desired behavior?

They do seem to be roughly equal, save for some precision errors visible if you look at the output tensors.

np.allclose(pred, pred_long) # True
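For a stricter check (just a sketch, reusing m, tcn_layer and num_feats from your snippet above), you can perturb only the timesteps older than the receptive field and confirm that the last-step output is unchanged:

rf = tcn_layer.receptive_field
seq_a = np.random.rand(1, 256, num_feats)
seq_b = seq_a.copy()
# Overwrite only the timesteps that lie outside the memory window.
seq_b[0, :-rf, :] = np.random.rand(256 - rf, num_feats)
print(np.allclose(m(seq_a), m(seq_b)))  # expected True if rf is correct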

@AlbertoSabater I updated the formula for the receptive field; there was an error in it. In your case, the receptive field is 127, not 64. Since 127 still fits within your 128-step sequence, the prepended timesteps fall outside the model's memory, which is why your outputs match up to float precision. Refer to the formula for more details: https://github.com/philipperemy/keras-tcn#receptive-field.
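For reference, my reading of the updated formula (assuming one stack, the default dilations, and two dilated convolutions per residual block, hence the factor 2):

kernel_size = 2
nb_stacks = 1
dilations = [1, 2, 4, 8, 16, 32]  # keras-tcn defaults

receptive_field = 1 + 2 * (kernel_size - 1) * nb_stacks * sum(dilations)
print(receptive_field)  # 127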

Fix pushed in 3.3.0.