Sparsity-indexed-ODE

[ICML 2023] Neural Pruning via Sparsity-indexed ODE: A Continuous Sparsity Viewpoint
Zhanfeng Mo (1), Haosen Shi (1,2), Sinno Jialin Pan (1,3)
(1) School of Computer Science and Engineering, Nanyang Technological University
(2) Continental-NTU Corporate Lab, Nanyang Technological University
(3) Department of Computer Science and Engineering, Chinese University of Hong Kong

Official implementation of "Neural Pruning via Sparsity-indexed ODE: A Continuous Sparsity Viewpoint" (ICML 2023).

Environment

We provide an NVIDIA-Docker image so that researchers can reproduce the results reported in our paper and conduct further studies with our code.

  1. Build the docker image from our Dockerfile:
bash ./build_docker_com.sh
  2. Two scripts, run_docker_(summary/tem).sh, are provided for running the experiments and summarizing the results. Change the dirname in both scripts to your local directory path (a hedged sketch of what this typically looks like is given after this list), then run, e.g.:
bash ./run_docker_tem.sh
  3. More scripts are provided in Scripts/(datasetname)/.
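The snippet below is a minimal sketch of the kind of docker run call such a launcher script usually wraps, meant only to illustrate where the local dirname goes; the variable names LOCAL_DIR and IMAGE, the container path /workspace, and the trailing bash command are placeholders rather than names taken from our scripts.

    # Hypothetical excerpt of a run_docker_*.sh launcher (a sketch, not the shipped script).
    LOCAL_DIR=/path/to/sparsity-indexed-ode   # change this to your local dirname
    IMAGE=sparsity-indexed-ode:latest         # placeholder tag for the image built in step 1
    docker run --gpus all -it --rm \
        -v "${LOCAL_DIR}":/workspace \
        -w /workspace \
        "${IMAGE}" \
        bash                                  # or the experiment/summary command of your choice

Running with --gpus all assumes the NVIDIA Container Toolkit is installed on the host.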

Update

  • Upload the link to the paper.
  • 15/06/2023: initial release of the README and code.

Contact

If you have any questions about this work, please feel free to contact us (ZHANFENG001 (AT) ntu.edu.sg).

Thanks

This code borrows heavily from [SynFlow].

Citation

If you use this code for your research, please cite our paper, "Neural Pruning via Sparsity-indexed ODE: A Continuous Sparsity Viewpoint".
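For convenience, a BibTeX entry assembled from the title, authors, and venue listed above is sketched below; the entry key is a placeholder, and the final proceedings fields should be checked against the official ICML 2023 listing.

    @inproceedings{mo2023sparsityode,   % placeholder key
      title     = {Neural Pruning via Sparsity-indexed ODE: A Continuous Sparsity Viewpoint},
      author    = {Mo, Zhanfeng and Shi, Haosen and Pan, Sinno Jialin},
      booktitle = {International Conference on Machine Learning (ICML)},
      year      = {2023}
    }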