guillaume-chevalier/Linear-Attention-Recurrent-Neural-Network
A recurrent attention module (LARNN) consisting of an LSTM cell that can query its own past cell states by means of windowed multi-head attention. The formulas are derived from the BN-LSTM and the Transformer Network. The LARNN cell with attention can easily be used inside a loop on the cell state, just like any other RNN cell (see the sketch below).
Jupyter Notebook · MIT License
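The sketch below illustrates the usage pattern described above: an LSTM-like cell that attends over a window of its own past cell states, driven step by step in an ordinary Python loop. It is a minimal, hypothetical example, not the repository's actual class or API; the name `ToyLARNNCell`, the `window` parameter, and the single-head dot-product attention (simplified from the multi-head attention mentioned in the description) are all assumptions made for illustration.

```python
# Hypothetical sketch of a LARNN-style cell: an nn.LSTMCell whose cell state is
# refined by dot-product attention over a window of its own past cell states.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyLARNNCell(nn.Module):
    def __init__(self, input_size, hidden_size, window=5):
        super().__init__()
        self.lstm_cell = nn.LSTMCell(input_size, hidden_size)
        self.query = nn.Linear(hidden_size, hidden_size)
        self.window = window
        self.hidden_size = hidden_size

    def forward(self, x, state, past_cells):
        # Ordinary LSTM step on the current input and previous (h, c) state.
        h, c = self.lstm_cell(x, state)
        if past_cells:
            # Stack the windowed history of cell states: (batch, window, hidden).
            memory = torch.stack(past_cells[-self.window:], dim=1)
            # Scaled dot-product attention of the new cell state over past cell states.
            q = self.query(c).unsqueeze(1)                            # (batch, 1, hidden)
            scores = q @ memory.transpose(1, 2) / self.hidden_size ** 0.5
            context = (F.softmax(scores, dim=-1) @ memory).squeeze(1)  # (batch, hidden)
            # Mix the attended context back into the cell state.
            c = c + context
        return h, c


# Usage: loop over time steps, just like any other RNN cell.
batch, steps, input_size, hidden_size = 4, 10, 8, 16
cell = ToyLARNNCell(input_size, hidden_size)
x = torch.randn(batch, steps, input_size)
h = torch.zeros(batch, hidden_size)
c = torch.zeros(batch, hidden_size)
past_cells = []
for t in range(steps):
    h, c = cell(x[:, t], (h, c), past_cells)
    past_cells.append(c)
print(h.shape)  # torch.Size([4, 16])
```

The only difference from a plain LSTM loop is the list of past cell states passed back into the cell, which is what lets the cell query its own history.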
Stargazers
- akansal1
- alexeyegorov (Dortmund)
- andrewcz
- BSeboo (@GlobeCog)
- ByteSumoLtd (ByteSumo Ltd.)
- codealphago
- costypetrisor
- Diego999 (Google DeepMind)
- EdisonModdy
- etschneider (Closed Loop Design, Inc.)
- fly51fly (PRIS)
- hellcoderz (Comcast - Voice Search for Xfinity - Natural Language Processing)
- Kunya
- linkazoo
- locosoft1986
- mimbres (Centre for Digital Music, QMUL)
- oudommeas
- pcdinh (CodeStringers)
- pcy1302 (KAIST)
- qzhang95 (Dalian Maritime University)
- RaviVijay (California)
- rongzhou
- RunzeJustin (Hangzhou, China)
- samithaj
- sangjeedondrub (Amdo)
- smrjans (Talentica)
- thinline72 (@lucidworks)
- trebuchet90
- usccolumbia
- volcacius (@Apple)
- wangii
- xiangjjj (Amazon.com)
- xuanhan863 (Los Angeles, USA)
- yaoxy2010
- zhaoyu611 (Xi'an)
- zxt881108 (Qiniu Atlab)