Towards Energy Efficient Spiking Neural Networks: An Unstructured Pruning Framework

Code for the paper "Towards Energy Efficient Spiking Neural Networks: An Unstructured Pruning Framework" (ICLR 2024).

Installing Dependencies

pip install torch torchvision
pip install tensorboard thop spikingjelly==0.0.0.0.12
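
To check that the pinned versions installed correctly, a minimal smoke test such as the one below should run. In spikingjelly 0.0.0.0.12 the SNN modules live under spikingjelly.clock_driven (newer versions renamed this package).

# Smoke test for the pinned dependencies.
import torch
from spikingjelly.clock_driven import neuron, functional

lif = neuron.LIFNode()           # leaky integrate-and-fire neuron
out = lif(torch.rand(8, 10))     # one forward step on a random batch
functional.reset_net(lif)        # clear membrane state between samples
print(out.shape)                 # torch.Size([8, 10])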

Usage

To reproduce the CIFAR10 experiments from the paper, run with the default settings:

python main.py
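
Since tensorboard is listed among the dependencies, the training script presumably writes TensorBoard logs; if so, a run can be monitored with the standard viewer (the log directory below is an assumption, substitute your actual output path):

tensorboard --logdir <output-dir>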

You can specify the output path and the weight $\lambda$ of the penalty term with:

python main.py --penalty-lmbda <lambda> --output-dir <path>
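
The exact form of the penalty is defined in this repository's training code; as a rough, hypothetical sketch of how a $\lambda$-weighted sparsity penalty typically enters the objective (not necessarily the paper's formulation), an L1 term over the weights looks like:

# Hypothetical sketch only: the actual penalty used in the paper is
# implemented in main.py and may differ from this plain L1 term.
import torch
import torch.nn.functional as F

def loss_with_penalty(model, logits, targets, lmbda):
    task_loss = F.cross_entropy(logits, targets)
    # L1 norm of the weights encourages unstructured (per-weight) sparsity
    penalty = sum(p.abs().sum() for p in model.parameters())
    return task_loss + lmbda * penalty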

To reproduce the experiments on other datasets, follow the settings in the paper's appendix.

Citation

@inproceedings{shi2024towards,
  title={Towards Energy Efficient Spiking Neural Networks: An Unstructured Pruning Framework},
  author={Shi, Xinyu and Ding, Jianhao and Hao, Zecheng and Yu, Zhaofei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024}
}