
Pay-Attention-to-MLPs

Implementation of the gMLP model introduced in "Pay Attention to MLPs".

The authors of the paper propose a simple attention-free network architecture, gMLP, based solely on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications.
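For orientation, below is a minimal PyTorch sketch of one gMLP block. The structure (channel projections around a Spatial Gating Unit that splits the channels in half, normalizes one half, mixes it across the token dimension, and uses it to gate the other half, with the spatial weights initialized near zero and bias one so the block starts close to identity) follows the paper's description; the class and argument names here are illustrative and not necessarily those used in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialGatingUnit(nn.Module):
    """Gates one half of the channels with a learned projection
    over the token (sequence) dimension of the other half."""

    def __init__(self, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        # Projection across tokens; initialized near zero with bias 1
        # so the unit behaves like identity early in training.
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        nn.init.constant_(self.spatial_proj.weight, 0.0)
        nn.init.constant_(self.spatial_proj.bias, 1.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u, v = x.chunk(2, dim=-1)  # split channels into two halves
        v = self.norm(v)
        # Mix information across tokens, then gate element-wise.
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v


class gMLPBlock(nn.Module):
    def __init__(self, d_model: int, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.channel_proj_in = nn.Linear(d_model, d_ffn)
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.channel_proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.norm(x)
        x = F.gelu(self.channel_proj_in(x))
        x = self.sgu(x)
        x = self.channel_proj_out(x)
        return x + residual  # residual connection around the block


if __name__ == "__main__":
    block = gMLPBlock(d_model=128, d_ffn=512, seq_len=64)
    x = torch.randn(8, 64, 128)   # (batch, tokens, channels)
    print(block(x).shape)         # torch.Size([8, 64, 128])
```

Note that, unlike self-attention, the spatial projection has a fixed `seq_len`, so a model built from these blocks operates on sequences of a predetermined length.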