yeephycho/nasnet-tensorflow

Image size

YusukeO opened this issue · 1 comment

Thank you for your great work!
I want to train on my 64*64 images.
Can I just feed these images into your code?

When I add the following argument:
--train_image_size=64
I get this error:

Traceback (most recent call last):
  File "/home/yusuke/.pyenv/versions/anaconda3-5.1.0/envs/dagan/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py", line 686, in _call_cpp_shape_fn_impl
    input_tensors_as_shapes, status)
  File "/home/yusuke/.pyenv/versions/anaconda3-5.1.0/envs/dagan/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Negative dimension size caused by subtracting 5 from 4 for 'aux_7/aux_logits/AvgPool2D/AvgPool' (op: 'AvgPool') with input shapes: [32,4,4,528].

@YusukeO I am reviewing this code to solve a classification problem of my own. It may be a bit late, but since no one answered your question, I thought I'd give you some insights.

As per the code in nasnet.py, if you're using the CIFAR dataset, the image size will be 32:
build_nasnet_cifar.default_image_size = 32

For the mobile model for the ImageNet dataset, the image size is 224:
build_nasnet_mobile.default_image_size = 224

For the large model for the ImageNet dataset, the image size is 331:
build_nasnet_large.default_image_size = 331
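
If you want to check these defaults yourself, here is a minimal sketch. I'm assuming the nets/nasnet/nasnet.py package layout used by the TF-slim models repo that this project builds on; adjust the import path to wherever nasnet.py lives in your checkout.

# Inspect the default input sizes attached to each NASNet builder.
# Assumes the TF-slim style package layout (nets/nasnet/nasnet.py).
from nets.nasnet import nasnet

print(nasnet.build_nasnet_cifar.default_image_size)   # 32
print(nasnet.build_nasnet_mobile.default_image_size)  # 224
print(nasnet.build_nasnet_large.default_image_size)   # 331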

You either have to resize your data to match the existing model's requirements, or tweak the model architecture itself.
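
For reference, the error in your traceback is the architecture complaining about exactly this mismatch: with 64*64 inputs, the feature map reaching the auxiliary head is only 4x4, so its 5x5 average pool has a negative output dimension. The simplest fix is to resize the images to the model's expected size before they enter the network. Below is a minimal sketch of my own, assuming a TF 1.x pipeline and the mobile model's 224x224 input; where you hook it in depends on your preprocessing code.

import tensorflow as tf

# Batch of 64x64 RGB images (hypothetical placeholder, not from this repo).
images = tf.placeholder(tf.float32, [None, 64, 64, 3])

# Bilinearly upsample to the size the chosen NASNet variant expects,
# e.g. 32 for build_nasnet_cifar or 224 for build_nasnet_mobile.
resized = tf.image.resize_images(images, [224, 224])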