godatta's Stars
zeyuliu1037/LMUFormer
[ICLR 2024] LMUFormer: Low Complexity Yet Powerful Spiking Model With Legendre Memory Units
godatta/Ultra-Low-Latency-SNN
godatta/godatta.github.io
GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
bic-L/Masked-Spiking-Transformer
[ICCV-23] Masked Spiking Transformer
godatta/ISP-less-CV
mit-han-lab/tinyml
mtancak/PyTorch-ViT-Vision-Transformer
PyTorch implementation of the Vision Transformer architecture
jeonsworld/ViT-pytorch
PyTorch reimplementation of the Vision Transformer ("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale")
ksouvik52/hiresnn2021
gordicaleksa/pytorch-original-transformer
My implementation of the original Transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing otherwise seemingly hard concepts. IWSLT pretrained models are currently included.
usarawgi911/machine-learning-interview-questions
nitin-rathi/hybrid-snn-conversion
Training spiking networks with hybrid ANN-SNN conversion and spike-based backpropagation
TaoRuijie/TalkNet-ASD
[ACM MM 2021] "Is Someone Speaking? Exploring Long-term Temporal Features for Audio-visual Active Speaker Detection"
NeuroCompLab-psu/SNN-Conversion