mila-iqia/blocks

What if I want to implement an encoder with attention?

ayanamongol opened this issue · 2 comments

So, what should I do if I want to implement a bidirectional GRU encoder with attention?
I have done something like this:

```python
self.bidir = BidirectionalWrap(GatedRecurrent(activation=Tanh(), dim=state_dim))
self.fwd_fork = Fork([name for name in self.bidir.prototype.apply.sequences if name != 'mask'],
                     prototype=Linear(), name='fwd_fork')
self.back_fork = Fork([name for name in self.bidir.prototype.apply.sequences if name != 'mask'],
                      prototype=Linear(), name='back_fork')

self.attention = SequenceContentAttention(state_names=self.bidir.apply.states,
                                           attended_dim=embedding_dim,
                                           match_dim=state_dim, name='attention')
self.testingone = AttentionRecurrent(transition=self.bidir, attention=self.attention)

self.children = [self.bidir, self.fwd_fork, self.back_fork, self.testingone]
```

but it feels wrong and isn't working.
Any suggestions?
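
For comparison, here is a rough sketch of how the blocks-examples machine-translation model wires attention: the `SequenceContentAttention` is attached to a plain decoder transition (not to the bidirectional encoder itself), and it attends to the concatenated forward/backward encoder states. The `state_dim` value and brick names below are illustrative placeholders, not the code from this issue:

```python
from blocks.bricks import Tanh
from blocks.bricks.recurrent import GatedRecurrent, Bidirectional
from blocks.bricks.attention import SequenceContentAttention, AttentionRecurrent

state_dim = 100  # placeholder size

# Encoder: bidirectional GRU; its forward and backward states, concatenated
# along the feature axis, form the sequence that will be attended to.
encoder = Bidirectional(GatedRecurrent(activation=Tanh(), dim=state_dim))

# Decoder transition: a plain GRU; this is what the attention is attached to.
transition = GatedRecurrent(activation=Tanh(), dim=state_dim, name='decoder')

attention = SequenceContentAttention(
    state_names=transition.apply.states,  # decoder states drive the attention
    attended_dim=2 * state_dim,           # forward + backward encoder states
    match_dim=state_dim,
    name='attention')

# AttentionRecurrent combines the decoder transition with the attention; in the
# machine-translation example this combination lives inside a SequenceGenerator.
attended_transition = AttentionRecurrent(transition=transition, attention=attention)
```

In that setup the encoder's output sequence is then passed in as the `attended` input when the generator is applied, so the encoder itself stays attention-free.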

What do you mean by "encoder with attention"? What do you want to attend to?

rizar commented

This is a question, not a bug report. The mailing list is the proper place for questions.