uber-research/intrinsic-dimension

Reproducing CIFAR-10, Resnet-18, Pytorch

jgamper opened this issue · 0 comments

Hi @yosinski, @rquber, @mimosavvy

I've attempted to reproduce Figure S14 (see figure below) from the arXiv version of the paper (https://arxiv.org/pdf/1804.08838.pdf), where you estimate the intrinsic dimension of CIFAR-10 training using a ResNet.

resnet_paper

I used ResNet-18 from torchvision.models with the Fastfood transform, lr=0.0003, batch_size=32, the Adam optimizer, no regularisation, and no learning-rate schedule. The results I achieved are below.

cifar10
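For reference, here is a minimal NumPy sketch of the subspace-training idea being reproduced. It uses a dense random projection rather than the Fastfood transform, and a toy quadratic objective rather than a network; the dimensions, learning rate, and target here are purely illustrative, not the paper's settings.

```python
import numpy as np

# Random-subspace training: instead of optimizing all D native parameters
# theta, optimize only a d-dimensional vector theta_d and map it through a
# fixed random matrix P:
#     theta = theta_0 + P @ theta_d
# (The Fastfood transform replaces the dense P with an implicit structured
# projection to save memory; a dense P is used here for clarity.)

rng = np.random.default_rng(0)

D, d = 1000, 20                      # native and subspace dimensions (illustrative)
theta_0 = rng.normal(size=D)         # fixed random initialization in native space
P = rng.normal(size=(D, d))
P /= np.linalg.norm(P, axis=0)       # normalize the columns of the projection

# Toy objective: squared distance to a random target in the native space.
target = rng.normal(size=D)

def loss_and_grad(theta_d):
    theta = theta_0 + P @ theta_d    # only theta_d is trainable
    residual = theta - target
    return residual @ residual, 2.0 * (P.T @ residual)

theta_d = np.zeros(d)                # subspace parameters start at zero
initial_loss, _ = loss_and_grad(theta_d)

lr = 0.1
for _ in range(200):                 # plain gradient descent in the subspace
    _, grad = loss_and_grad(theta_d)
    theta_d -= lr * grad

final_loss, _ = loss_and_grad(theta_d)
print(final_loss < initial_loss)     # training in the d-dim subspace reduces the loss
```

In the paper's procedure, d is swept upward and the intrinsic dimension is read off as the smallest d reaching 90% of the baseline (full-parameter) performance, so any hyperparameter that changes the baseline or the per-d training outcome (optimizer, schedule, stopping point) shifts the estimate.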

I would appreciate it if you could answer the following questions:

  1. What was the learning rate used for your experiment?
  2. Did you use a learning rate schedule?
  3. What optimizer did you use for the CIFAR-10 + ResNet experiment? (Adam or SGD? You mention in the Figure S10 caption that Adam produces a higher intrinsic dimension.)
  4. Generally, for how many epochs/iterations do you train when estimating intrinsic dimension? Do you use a stopping criterion?
  5. The ResNet-18 I used differs from the ResNet used in your experiment, which was a "20-layer structure of ResNet with 280k parameters". I expected the larger ResNet-18 to have a lower intrinsic dimension. Any comments on this?