memory-efficient-attention-pytorch

Implementation of memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory".

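Below is a minimal sketch of the chunked-attention idea from the paper, not this repository's actual API: keys and values are processed in chunks while running softmax statistics (a running max and denominator) are maintained, so the full n × n attention matrix is never materialized at once. The function and argument names (`chunked_attention`, `kv_chunk_size`) are illustrative assumptions.

```python
import torch

def chunked_attention(q, k, v, kv_chunk_size=1024):
    # q, k, v: (batch, heads, seq_len, dim_head); hypothetical shapes/names
    scale = q.shape[-1] ** -0.5
    q = q * scale

    # running accumulators for the weighted values, the softmax denominator,
    # and the running max, updated one key/value chunk at a time
    out = torch.zeros_like(q)
    denom = torch.zeros(*q.shape[:-1], 1, device=q.device, dtype=q.dtype)
    running_max = torch.full((*q.shape[:-1], 1), float('-inf'),
                             device=q.device, dtype=q.dtype)

    for k_chunk, v_chunk in zip(k.split(kv_chunk_size, dim=-2),
                                v.split(kv_chunk_size, dim=-2)):
        # attention logits for this chunk only: (batch, heads, q_len, chunk_len)
        logits = torch.einsum('bhid,bhjd->bhij', q, k_chunk)

        chunk_max = logits.amax(dim=-1, keepdim=True)
        new_max = torch.maximum(running_max, chunk_max)

        # rescale previous accumulators to the new running max (log-sum-exp trick)
        correction = torch.exp(running_max - new_max)
        out = out * correction
        denom = denom * correction

        exp_logits = torch.exp(logits - new_max)
        out = out + torch.einsum('bhij,bhjd->bhid', exp_logits, v_chunk)
        denom = denom + exp_logits.sum(dim=-1, keepdim=True)
        running_max = new_max

    return out / denom
```

With small chunk sizes, peak memory for the logits scales with `seq_len * kv_chunk_size` instead of `seq_len²`, at the cost of a Python-level loop over chunks; the result matches standard softmax attention up to floating-point error.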
Primary Language: Python · License: MIT
