CyberZHG/keras-self-attention
Attention mechanism for processing sequential data that considers the context for each time step.
Python · MIT license
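As a quick orientation before the issue list, here is a minimal usage sketch of the SeqSelfAttention layer that most of the issues below reference. This is a sketch under assumptions, not the library's prescribed recipe: it assumes a tf.keras setup, and the vocabulary size, embedding width, LSTM units, and output classes are illustrative placeholders.

```python
from tensorflow import keras
from keras_self_attention import SeqSelfAttention

# Minimal sketch: a Bi-LSTM sequence model with self-attention over the time axis.
# All hyperparameters below are illustrative placeholders.
model = keras.models.Sequential()
model.add(keras.layers.Embedding(input_dim=10000, output_dim=128, mask_zero=True))
# SeqSelfAttention consumes the full sequence, so return_sequences=True is required.
model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=64, return_sequences=True)))
model.add(SeqSelfAttention(attention_activation='sigmoid'))
model.add(keras.layers.Dense(units=5, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
model.summary()
```

Several issues below (loading a saved model, tf.keras / TensorFlow 2.0 compatibility) hinge on the layer being a custom object; when reloading, the standard Keras approach is to pass it explicitly, e.g. `keras.models.load_model(path, custom_objects={'SeqSelfAttention': SeqSelfAttention})`.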
Issues
Which paper does Local Attention refer to?
#62 opened by Fatigerrr - 1
Issue with tensorflow-gpu
#65 opened by kerighan - 0
Self-attention before BiLSTM
#64 opened by katekats - 0
Question about the SeqSelfAttention.
#63 opened by katekats - 2
load attention model
#55 opened by jalalmzh - 1
Issue with importing SeqSelfAttention
#61 opened by katekats - 6
Error with flatten
#58 opened by sandeepbhutani304 - 3
TestResidualScaledDotProductAttention error
#59 opened by Yesgo1220 - 3
Visualize attention results
#57 opened by sandeepbhutani304 - 2
local attention parameter
#54 opened by farhantandia - 1
Error when loading the model
#52 opened by raspatiocan - 1
visualizing attention weights
#50 opened by henokDES - 3
Attention to 2D input
#45 opened by raghavgurbaxani - 1
keras-self-attention paper?
#44 opened by mohamedScikitLearn - 1
reference for multiplicative attention
#43 opened by indi297 - 10
Compatibility with Tensorflow 2.0
#38 opened by mohamedScikitLearn - 1
Masking implementation
#41 opened by tomasmenezes - 1
Supporting convolutional LSTM?
#40 opened by adanacademic - 1
tf.keras.layers.Attention?
#39 opened by yuanjie-ai - 2
Tensorflow 2.0 Compatibility
#31 opened by SamanehSaadat - 1
AttributeError: module 'tensorflow' has no attribute 'get_default_graph' while using 'SeqSelfAttention'
#33 opened by octolis - 1
IndexError, tuple index out of range
#36 opened by elsheikh21 - 4
Attention Weights
#32 opened by BigMasonFang - 1
Examples for a basic NMT model?
#35 opened by user06039 - 1
Is it only for sequential data?
#34 opened by xuzhang5788 - 3
Error building my Bi-LSTM with attention, help
#22 opened by denglizong - 0
__init__() missing 3 required positional arguments: 'node_def', 'op', and 'message'
#29 opened by dingtine - 1
In additive mode and multiplicative mode, one adds ba and the other adds ba[0]
#28 opened by xugaoliang - 2
The initializer is written as a regularizer
#27 opened by xugaoliang - 2
results[2] should be changed to results[0]
#26 opened by xugaoliang - 2
Hello, which paper does this correspond to?
#23 opened by dashujuzha - 2
The gradient is missing sometimes
#20 opened by SUNBERG010 - 0
T
#19 opened by SUNBERG010 - 2
Compatibility with `tf.keras`
#18 opened by nshaud - 1
Error (with multiplication?)
#16 opened by GadL - 3
Scaled Dot Product attention error
#14 opened by AliOsm - 2
Can this be used in a seq2seq task?
#15 opened by cristianmtr