Reproducing CIFAR-10, ResNet-18, PyTorch
jgamper opened this issue · 0 comments
jgamper commented
Hi @yosinski, @rquber, @mimosavvy
I've attempted to reproduce Figure S14 (see the figure below) from the arXiv version of the paper (https://arxiv.org/pdf/1804.08838.pdf), where you estimate the intrinsic dimension on CIFAR-10 using a ResNet.
I used ResNet-18 from torchvision.models, the Fastfood transform, lr=0.0003, batch_size=32, the Adam optimizer, no regularisation, and no learning rate schedule. The results I achieved are below.
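For context, my setup follows the paper's subspace-training idea: all weights are expressed as `theta = theta0 + P @ theta_d`, and only the d-dimensional `theta_d` is trained. A minimal sketch of what I mean (using a dense random projection for clarity instead of the Fastfood transform I actually used; `SubspaceWrapper` is my own illustrative name, not code from the paper):

```python
import torch
import torch.nn as nn

class SubspaceWrapper(nn.Module):
    """Sketch of subspace training: theta = theta0 + P @ theta_d,
    where only the d-dimensional theta_d is optimized. Dense random
    projection shown here; the paper uses a Fastfood transform."""
    def __init__(self, model, d):
        super().__init__()
        self.model = model
        self.meta = []                              # (name, shape, numel)
        flat = []
        for name, p in model.named_parameters():
            p.requires_grad_(False)                 # base weights frozen
            self.meta.append((name, p.shape, p.numel()))
            flat.append(p.detach().reshape(-1))
        theta0 = torch.cat(flat)                    # initial weights, fixed
        self.register_buffer("theta0", theta0)
        P = torch.randn(theta0.numel(), d)
        P /= P.norm(dim=0, keepdim=True)            # unit-norm columns
        self.register_buffer("P", P)                # fixed projection
        self.theta_d = nn.Parameter(torch.zeros(d)) # the only trained vector

    def forward(self, x):
        theta = self.theta0 + self.P @ self.theta_d
        # Rebuild per-parameter tensors from the flat vector and run a
        # functional forward pass so gradients flow into theta_d.
        params, offset = {}, 0
        for name, shape, n in self.meta:
            params[name] = theta[offset:offset + n].reshape(shape)
            offset += n
        return torch.func.functional_call(self.model, params, (x,))
```

With this wrapper, the intrinsic dimension is estimated by sweeping d and finding the smallest value that reaches 90% of the baseline accuracy, as defined in the paper.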
I would appreciate it if you could answer the following questions:
- What was the learning rate used for your experiment?
- Did you use a learning rate schedule?
- What optimizer was used for the CIFAR-10 + ResNet experiment? (Adam or SGD? You do mention in the Figure S10 caption that Adam produces a higher intrinsic dimension.)
- Generally, for how many epochs/iterations do you train when estimating intrinsic dimension? Do you use a stopping criterion?
- The ResNet-18 I used differs from the ResNet in your experiment: your model was a "20-layer structure of ResNet with 280k parameters". I expected the larger ResNet-18 to actually have a lower intrinsic dimension. Any comments on this?