uber-research/sbnet

How can I prevent the graph from growing when using sbnet_module.reduce_mask in a loop?

JunhyeonPark opened this issue · 1 comment

The main code creates a randomly changing mask and its block indices on every loop iteration using sbnet_module.

import tensorflow as tf

# calc_block_params, generate_random_mask, Timer, and sbnet_module come from
# my setup (sbnet_module is the loaded sbnet op library).

batch_size = 50
block_params_conv1 = calc_block_params([batch_size, 28, 28, 1],  # input size
                                       [1, 5, 5, 1],             # block size
                                       [5, 5, 1, 1],             # kernel size
                                       [1, 1, 1, 1],             # strides
                                       padding='VALID')
t_check = Timer()

print("Starting 1st session...")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(2000):
        t_check.tic()

        # New constant and reduce_mask ops are constructed on every iteration.
        mask_conv1 = generate_random_mask([batch_size, 28, 28, 1], 0.90)
        a_conv1 = tf.constant(mask_conv1, dtype=tf.float32)
        b_conv1 = sbnet_module.reduce_mask(a_conv1,
                      tf.constant(block_params_conv1.bcount, dtype=tf.int32),
                      bsize=block_params_conv1.bsize,
                      boffset=block_params_conv1.boffset,
                      bstride=block_params_conv1.bstrides,
                      tol=0.0,
                      avgpool=True)
        ind_val_conv1, bin_val_conv1 = sess.run([b_conv1.active_block_indices,
                                                 b_conv1.bin_counts])

        if i % 100 == 0:
            time_check = t_check.toc()
            print('step %d  \t= %f (sec)' % (i, time_check))

And the result got slower the more times the session ran:

Starting 1st session...
step    0  = 0.006941 (sec)
step  100  = 0.008982 (sec)
step  200  = 0.012545 (sec)
step  300  = 0.016614 (sec)
step  400  = 0.018152 (sec)
step  500  = 0.023373 (sec)
step  600  = 0.026576 (sec)
step  700  = 0.028291 (sec)
step  800  = 0.031587 (sec)
step  900  = 0.037221 (sec)
step 1000  = 0.043062 (sec)
step 1100  = 0.048337 (sec)
step 1200  = 0.055366 (sec)
step 1300  = 0.060677 (sec)
step 1400  = 0.058936 (sec)
step 1500  = 0.072439 (sec)
step 1600  = 0.068025 (sec)
step 1700  = 0.073672 (sec)
step 1800  = 0.077006 (sec)
step 1900  = 0.083827 (sec)

So, my question is:

I want to run sbnet_module with a randomly changing mask at every detection time, just like the SBNet + Predicted Mask experiment, which appears to run sbnet_module.reduce_mask on every detection. However, my result suggests that every use of sbnet_module.reduce_mask grows the TensorFlow graph, so detection keeps slowing down. How can I use sbnet_module.reduce_mask without this loss of time?

In this example the code keeps adding ops to the default graph (see mrry's answer in the Stack Overflow question below), so the graph keeps growing. This can be verified by adding this statement inside the loop:

print(len(tf.get_default_graph().get_operations()))

https://stackoverflow.com/questions/34235225/is-it-possible-to-modify-an-existing-tensorflow-computation-graph
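
As a minimal illustration (plain TF 1.x, no sbnet needed), any op constructed inside a loop body lands in the default graph, so the op count climbs every iteration:

import tensorflow as tf

with tf.Session() as sess:
    for i in range(5):
        # Each iteration adds two new nodes (a Const and a Square)
        # to the default graph instead of reusing existing ones.
        a = tf.constant(float(i))
        b = tf.square(a)
        sess.run(b)
        print(len(tf.get_default_graph().get_operations()))
        # prints 2, 4, 6, 8, 10 -- the graph never shrinks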

As to why the compute time keeps going up: this is likely internal TensorFlow session.run() overhead accumulating as the graph grows, not sbnet's reduce_mask op getting slower. The time will keep going up if you substitute reduce_mask with any other TensorFlow op.
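
For example, a sketch of the same loop with a generic op in place of reduce_mask shows the same trend (timing via the standard library; exact numbers will vary):

import tensorflow as tf
import time

# Same slowdown with a generic op (no sbnet involved): new nodes are added
# to the default graph every iteration, and per-call session.run() overhead
# grows with graph size.
with tf.Session() as sess:
    for i in range(2000):
        start = time.time()
        x = tf.reduce_sum(tf.random_normal([28, 28]))  # new ops each iteration
        sess.run(x)
        if i % 100 == 0:
            print('step %d \t= %f (sec)' % (i, time.time() - start))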

To avoid accumulating this overhead you'd have to create a new Session every time you run, i.e. do a with tf.Session() per run.
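
A minimal sketch of that suggestion, with the added assumption that each run also uses a fresh tf.Graph() so the default graph does not keep growing (generate_random_mask, block_params_conv1, batch_size, and sbnet_module are the questioner's names from the code above):

import tensorflow as tf

# Fresh Graph and Session per run, so neither the default graph nor
# per-session run overhead accumulates across runs.
for i in range(2000):
    with tf.Graph().as_default(), tf.Session() as sess:
        mask = generate_random_mask([batch_size, 28, 28, 1], 0.90)
        a = tf.constant(mask, dtype=tf.float32)
        b = sbnet_module.reduce_mask(a,
                tf.constant(block_params_conv1.bcount, dtype=tf.int32),
                bsize=block_params_conv1.bsize,
                boffset=block_params_conv1.boffset,
                bstride=block_params_conv1.bstrides,
                tol=0.0,
                avgpool=True)
        ind_val, bin_val = sess.run([b.active_block_indices, b.bin_counts])

Note that building a graph and session per run has a fixed setup cost of its own; since the mask shape is constant here, another common TF 1.x pattern is to build reduce_mask once against a tf.placeholder outside the loop and feed each new mask through feed_dict, so the graph never changes.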