sampepose/flownet2-tf

unmatched shapes in tf.concat()

Opened this issue · 4 comments

In my environment, the provided example works well:

python -m src.flownet2.test --input_a data/samples/0img0.ppm --input_b data/samples/0img1.ppm --out ./

But when I tried to use my own data, there was a shape mismatch in tf.concat() between conv5_1 and deconv5. I pasted the error log below:

> $ python -m src.flownet2.test --input_a data/examples/00000.jpg --input_b data/examples/00001.jpg --out ./ 

Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/workspace/playground/flownet2-tf/src/flownet2/test.py", line 78, in <module>
    main()
  File "/home/workspace/playground/flownet2-tf/src/flownet2/test.py", line 39, in main
    out_path=FLAGS.out,
  File "src/net.py", line 63, in test
    predictions = self.model(inputs, training_schedule)
  File "src/flownet2/flownet2.py", line 22, in model
    net_css_predictions = self.net_css.model(inputs, training_schedule, trainable=False)
  File "src/flownet_css/flownet_css.py", line 18, in model
    net_cs_predictions = self.net_cs.model(inputs, training_schedule, trainable=False)
  File "src/flownet_cs/flownet_cs.py", line 18, in model
    net_c_predictions = self.net_c.model(inputs, training_schedule, trainable=False)
  File "src/flownet_c/flownet_c.py", line 70, in model
    concat5 = tf.concat([conv5_1, deconv5, upsample_flow6to5], axis=3)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1048, in concat
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 495, in _concat_v2
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2508, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1873, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1823, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
    debug_python_shape_fn, require_shape_fn)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 676, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Dimension 1 in both shapes must be equal, but are 15 and 16 for 'FlowNet2/FlowNetCSS/FlowNetCS/FlowNetC/concat_1' (op: 'ConcatV2') with input shapes: [1,15,27,512], [1,16,28,512], [1,16,28,2], [] and with computed input tensors: input[3] = <3>.

The shapes of the two input images are the same, and I pasted the output of the file command below:

> $ file data/examples/*  
data/examples/00000.jpg: JPEG image data, JFIF standard 1.01, resolution (DPCM), density 94x94, segment length 16, baseline, precision 8, 854x480, frames 3
data/examples/00001.jpg: JPEG image data, JFIF standard 1.01, resolution (DPCM), density 94x94, segment length 16, baseline, precision 8, 854x480, frames 3

What's more, even when I use the same image for both --input_a and --input_b, the error still occurs.

I have the same problem.

So how did you fix the problem? I also met the same problem. Hoping for your reply~
@sampepose @Beanocean @Co1dAt0m

@Zealoe
Each convolution with stride 2 halves the size of the feature map, and FlowNetC downsamples six times before the deconvolutions begin (2^6 = 64). For the feature maps of corresponding convolution and deconvolution layers to match, the input image dimensions must therefore be an integral multiple of 64. You need to rescale your images to meet this requirement before feeding them into the model. Please refer to the corresponding code in the original FlowNet2.
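
For instance, here is a quick sketch (my own illustration, not code from this repo) of how the 15 vs 16 mismatch in your traceback arises for the 480-pixel input height, assuming SAME padding so that each stride-2 layer outputs ceil(input / 2):

h = 480                    # input height from your report above
for _ in range(5):         # conv1..conv5 each halve the size (rounding up)
    h = (h + 1) // 2       # 480 -> 240 -> 120 -> 60 -> 30 -> 15
conv5_h = h                # 15
conv6_h = (h + 1) // 2     # conv6: ceil(15 / 2) = 8
deconv5_h = conv6_h * 2    # deconv5 doubles it back: 16
print(conv5_h, deconv5_h)  # 15 vs 16 -> tf.concat() raises the ValueError above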

Here is an example:

import math
import tensorflow as tf

_, height, width, _ = frame_batch.shape.as_list()  # batchsize x height x width x channels
divisor = 64
# round each spatial dimension up to the nearest multiple of 64
adapted_w = int(math.ceil(width / float(divisor)) * divisor)
adapted_h = int(math.ceil(height / float(divisor)) * divisor)
inputs = tf.image.resize_images(frame_batch, [adapted_h, adapted_w])
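
For the 854x480 images above, this gives adapted_h = 512 and adapted_w = 896; with a 512x896 input, conv5_1 comes out at 16x28, matching the [1,16,28,512] deconv5 shape in your traceback.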

@Beanocean thanks very much!