We apply several non-deep-learning classification algorithms, as well as a number of deep learning models with and without group sparsity, to several real datasets. Our goal is to demonstrate the benefit of group sparsity in deep learning: it significantly reduces the number of parameters while preserving good accuracy. We also include a brief discussion of the theoretical properties of group sparsity.
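A minimal sketch of the penalty behind group sparsity, assuming the common group-lasso form in which each row of a weight matrix is one group (so whole rows, e.g. all outgoing weights of a neuron, are driven to zero together). The function name and `lam` parameter are illustrative, not the notebook's actual API:

```python
import numpy as np

def group_lasso_penalty(weight, lam=0.01):
    # weight: 2-D array whose rows are treated as groups.
    # Penalty: lam * sqrt(|g|) * sum_g ||w_g||_2, which zeroes out
    # entire rows rather than individual entries (unlike plain L1).
    group_size = weight.shape[1]
    group_norms = np.linalg.norm(weight, axis=1)  # one L2 norm per group
    return lam * np.sqrt(group_size) * group_norms.sum()
```

Because the L2 norm is not squared, the penalty is non-differentiable at zero for each group, which is what makes exact group-level zeros attainable during training.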
Kan Chen, Zongyu Dai, Hanxiang (Henry) Pan, Yue Sheng
Please find the implementation and experiments in `Project Experiments.ipynb`.
Please note that the notebook is run with a freshly installed Python 3.6.9 environment with the dependencies defined in `requirements.txt`.
These can be found in `Experiment Result/{Regularization Type}/{Dataset}`:

- `Regularization Type` can be one of [`L1`, `L2`, `Group Sparsity`]
- `Dataset` can be one of [`Cover`, `MNIST`, `SDD`]
The results include plots of accuracy vs. different types of hyperparameters, as well as raw results stored as `{Dataset}_result.json`.
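The raw result files can be inspected with the standard library. A sketch, assuming a hypothetical key layout (the actual schema is defined by the notebook; the file is written here only so the example is self-contained):

```python
import json
import os
import tempfile

# Stand-in for e.g. "Experiment Result/Group Sparsity/MNIST/MNIST_result.json";
# the keys below are assumptions, not the notebook's actual schema.
sample = {"lambda": 0.01, "test_accuracy": 0.97}
path = os.path.join(tempfile.mkdtemp(), "MNIST_result.json")
with open(path, "w") as f:
    json.dump(sample, f)

# Reading a result file back for inspection or re-plotting.
with open(path) as f:
    result = json.load(f)
```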
- MNIST is downloaded automatically through PyTorch
- Cover: download https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz and unzip it into the same folder
- SDD: download https://archive.ics.uci.edu/ml/machine-learning-databases/00325/Sensorless_drive_diagnosis.txt and rename it to `sdd.txt`
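Once downloaded, both UCI files are plain numeric tables with the class label in the last column, so they can be loaded uniformly. A sketch, assuming that layout (the helper name and delimiters are illustrative; the notebook may parse them differently):

```python
import numpy as np

def load_uci_table(path, delimiter=","):
    # Load a numeric table whose final column is the class label.
    data = np.loadtxt(path, delimiter=delimiter)
    X, y = data[:, :-1], data[:, -1].astype(int)
    return X, y

# covtype.data is comma-separated; sdd.txt is whitespace-separated:
# X, y = load_uci_table("covtype.data")
# X, y = load_uci_table("sdd.txt", delimiter=None)
```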