STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking [NeurIPS 2025]
For the BrainCog framework, we recommend installing it directly from GitHub with the following command:
pip install git+https://github.com/braincog-X/Brain-Cog.git
git clone https://github.com/Fancyssc/STEP.git
Start Spikformer Training on CIFAR10 as the "Hello-world" Demo.
conda activate [your_env]
python train.py --config configs/spikformer/cifar10.yml
For specific tasks, implemented models, and supported datasets, please refer to the corresponding submodule guides:
Spiking-Transformer-Benchmark/
├── cls/   # Classification submodule
│   ├── README.md
│   └── ...
├── seg/   # Segmentation submodule
│   ├── README.md
│   └── ...
├── det/   # Object detection submodule
│   ├── README.md
│   └── ...
└── README.md
One-stop benchmark for Spiking-Transformer research: classification, segmentation, and detection share the same training & evaluation pipeline.
- Plug-and-play modules (neurons, encodings, attention, surrogate gradients, heads) let you prototype new ideas without touching the core loop (see the sketch after this list).
- Ready-made loaders cover ImageNet, CIFAR, DVS-CIFAR10, N-Caltech101, and more.
- Task adapters integrate with MMSeg and MMDet, so dense prediction experiments need only a config tweak.
- Backend-agnostic code runs on SpikingJelly, BrainCog, or BrainPy, and every config is version-locked for full reproducibility.
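To make the plug-and-play claim concrete, here is a minimal sketch of the underlying idea, with hypothetical class and argument names that do not reflect the actual STEP API: the spiking neuron is an ordinary `nn.Module` handed to the block, so trying a new neuron model never requires touching the core training loop.

```python
# Illustrative sketch only; names are hypothetical and not the STEP API.
import torch
import torch.nn as nn


class BinaryStep(nn.Module):
    """Placeholder neuron: threshold the input at 1.0 (stand-in for IF/LIF/PLIF/...)."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x >= 1.0).float()


class SpikingBlock(nn.Module):
    """A feed-forward block that accepts any neuron exposing the same interface."""

    def __init__(self, dim: int, neuron: nn.Module):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.neuron = neuron

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.neuron(self.fc(x))


# Swapping BinaryStep for another neuron class is a one-line change.
block = SpikingBlock(dim=64, neuron=BinaryStep())
out = block(torch.randn(8, 64))  # batch of 8, feature dim 64
```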
Static (e.g., CIFAR, ImageNet), neuromorphic (e.g., CIFAR10-DVS), and 3D point cloud (e.g., ModelNet40) classification datasets are supported.
See cls/README.md for details.
Frequently used datasets for both tasks, assembled via MMSeg and MMDet, are supported.
See MMSegmentation and MMDetection for details.
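Because the segmentation and detection submodules build on MMSegmentation and MMDetection, a dense-prediction experiment is typically described by an MM-style Python config. The snippet below is a hypothetical sketch of what such a config tweak could look like; the backbone name, base-config paths, and field values are assumptions for illustration, not the actual STEP registry entries.

```python
# Hypothetical MMSegmentation-style config (illustrative values only).
_base_ = [
    './_base_/datasets/ade20k.py',        # dataset pipeline assembled by MMSeg
    './_base_/default_runtime.py',
    './_base_/schedules/schedule_160k.py',
]

model = dict(
    type='EncoderDecoder',
    backbone=dict(
        type='Spikformer',   # assumed to be registered by the seg submodule
        embed_dims=384,      # illustrative
        timestep=4,          # number of simulation steps (illustrative)
    ),
    decode_head=dict(
        type='FCNHead',
        in_channels=384,
        channels=256,
        num_classes=150,     # ADE20K classes
    ),
)
```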
A brief tutorial for STEP can be found here.
The main experimental results, including the corresponding log files, configuration files, and checkpoints, can be downloaded here.
| Neuron Node | Abbreviation | Neuron Node | Abbreviation |
|---|---|---|---|
| Integrate-and-Fire Neuron | IF | Leaky Integrate-and-Fire Neuron | LIF |
| Parametric Leaky Integrate-and-Fire Neuron | PLIF | Exponential Integrate-and-Fire Neuron | EIF |
| Integer LIF Neuron | I-LIF | Normalized Integer LIF Neuron | NI-LIF |
| Hybrid Dynamics LIF Neuron | HD-LIF | Gated LIF Neuron | GLIF |
| k-based LIF Neuron | KLIF | Complementary LIF Neuron | CLIF |
| Parallel Spiking Neuron | PSN | Hodgkin-Huxley Neuron | HHNode |
| Izhikevich Neuron | IzhNode | | |
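Most entries in this table are variants of leaky integrate-and-fire dynamics. As a point of reference, here is a minimal sketch of a discrete-time LIF update with a straight-through surrogate gradient; it is illustrative only and does not reproduce the exact implementations shipped with the supported backends.

```python
# Minimal LIF dynamics with a rectangular surrogate gradient (illustrative only).
import torch


class SpikeFunction(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        # Pass gradients only near the threshold.
        return grad_output * (v_minus_thresh.abs() < 0.5).float()


def lif_forward(x: torch.Tensor, tau: float = 2.0, v_threshold: float = 1.0) -> torch.Tensor:
    """x: [T, B, D] input current; returns a spike train of the same shape."""
    v = torch.zeros_like(x[0])
    spikes = []
    for t in range(x.shape[0]):
        v = v + (x[t] - v) / tau                  # leaky integration
        s = SpikeFunction.apply(v - v_threshold)  # fire (surrogate gradient)
        v = v * (1.0 - s)                         # hard reset
        spikes.append(s)
    return torch.stack(spikes)


spike_train = lif_forward(torch.randn(4, 8, 64, requires_grad=True))
spike_train.sum().backward()  # gradients flow through the surrogate
```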
When working with static datasets, SNNs typically require encoding static images into spike trains over the simulation timesteps. Our framework supports various encoding methods, including direct, rate, TTFS, and phase encoding.
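For intuition, here is a minimal sketch of two of these schemes, direct and rate encoding; the function names and the fixed number of timesteps are assumptions for illustration, not STEP's encoder interface.

```python
# Illustrative direct and rate encoders for static images (hypothetical names).
import torch


def direct_encode(img: torch.Tensor, timesteps: int = 4) -> torch.Tensor:
    """Repeat the analog image at every timestep; the first spiking layer converts it to spikes."""
    # img: [B, C, H, W] -> [T, B, C, H, W]
    return img.unsqueeze(0).repeat(timesteps, 1, 1, 1, 1)


def rate_encode(img: torch.Tensor, timesteps: int = 4) -> torch.Tensor:
    """Treat each normalized pixel intensity as a per-timestep firing probability."""
    # img values are assumed to lie in [0, 1]
    probs = img.unsqueeze(0).expand(timesteps, *img.shape)
    return torch.bernoulli(probs)


x = torch.rand(2, 3, 32, 32)          # a batch of CIFAR-sized images in [0, 1]
print(direct_encode(x).shape)         # torch.Size([4, 2, 3, 32, 32])
print(rate_encode(x).mean().item())   # mean spike rate is close to mean pixel intensity
```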
@misc{shen2025stepunifiedspikingtransformer,
title={STEP: A Unified Spiking Transformer Evaluation Platform for Fair and Reproducible Benchmarking},
author={Sicheng Shen and Dongcheng Zhao and Linghao Feng and Zeyang Yue and Jindong Li and Tenglong Li and Guobin Shen and Yi Zeng},
year={2025},
eprint={2505.11151},
archivePrefix={arXiv},
primaryClass={cs.NE},
url={https://arxiv.org/abs/2505.11151},
}
- Support for 3D point cloud classification added.
- Some known bugs fixed.
- Initial version released.
Thanks to BrainCog for providing the core ideas and components for this repository.
A full list of contributors can be found here.

